NVIDIA is looking for engineers for our core AI frameworks team (Megatron Core and NeMo Framework) to design, develop, and optimize diverse real-world workloads. Megatron Core and NeMo Framework are open-source, scalable, cloud-native frameworks built for researchers and developers working on Large Language Model (LLM) and Multimodal (MM) foundation model pretraining and post-training. Our GenAI frameworks provide end-to-end model training, including pretraining, reasoning, alignment, customization, evaluation, and deployment, along with tooling to optimize performance and user experience.

In this critical role, you will expand the capabilities of Megatron Core and NeMo Framework, enabling users to develop, train, and optimize models. You will design and implement the latest distributed training algorithms, model parallel paradigms, and model optimizations; define robust APIs; meticulously analyze and tune performance; and expand our toolkits and libraries to be more comprehensive and coherent. You will collaborate with internal partners, users, and members of the open source community to analyze, design, and implement highly optimized solutions.

What you'll be doing:
- Develop algorithms for AI/DL, data analytics, machine learning, or scientific computing.
- Contribute to and advance the open-source NeMo-RL, Megatron Core, and NeMo Framework projects.
- Solve large-scale, end-to-end AI training and inference challenges spanning the full model lifecycle: orchestration, data pre-processing, model training and tuning, and model deployment.
- Work at the intersection of computer architecture, libraries, frameworks, AI applications, and the entire software stack.
- Innovate on and improve model architectures, distributed training algorithms, and model parallel paradigms.
- Tune and optimize performance, and train and finetune models with mixed-precision recipes on next-generation NVIDIA GPU architectures.
- Research, prototype, and develop robust and scalable AI tools and pipelines.