Principal Full Stack Software Engineer

NVIDIA | Durham, NC
$272,000 - $425,500

About The Position

NVIDIA is at the forefront of innovations in Artificial Intelligence, High-Performance Computing, and Visualization. Our invention, the GPU, functions as the visual cortex of modern computing and is central to groundbreaking applications from generative AI to autonomous vehicles. We are now looking for a Principal Full Stack Software Engineer to help accelerate the next era of machine learning innovation. In this role, you will propose and implement engineering solutions that deliver functional, reliable, secure, and high-performance GPU clusters to internal researchers, reducing operational disruption and overhead so they can focus on training and development, and enabling self-service, continuous improvement in reliability, operational excellence, and performance. Your work will empower scientists and engineers to train, fine-tune, and deploy the most advanced ML models on some of the world's most powerful GPU systems.

Requirements

  • BS/MS in Computer Science, Engineering, or equivalent experience.
  • 15+ years in software/platform engineering, including 3+ years in ML infrastructure or distributed systems.
  • Proficiency with full-stack development: relational data modeling, database optimization, REST API semantics, JavaScript, CSS, and providing APIs as a service.
  • Experience with the software development lifecycle on Linux-based platforms.
  • Strong coding skills in languages such as Python, C++, or Rust.
  • Experience with AIOps or agentic AI, applied successfully in production environments.
  • Experience with Docker, Kubernetes, GitLab CI, automated deployments.

Nice To Haves

  • Familiarity with GPU computing, Linux systems internals, and performance tuning at scale.
  • Experience running Slurm or custom scheduling frameworks in production ML environments.
  • Experience with ML orchestration tools such as Kubeflow, Flyte, Airflow, or Ray.

Responsibilities

  • Work with coworkers across the Managed AI Research Supercluster organization to understand the pain points of validating, monitoring, and operating GPU clusters at scale.
  • Design, develop, and maintain engineering solutions that address those pain points systematically.
  • Research traditional AIOps and emerging agentic AI techniques, and apply them to further reduce operational toil.
  • Participate in on-call support for systems and platforms built and owned by the team.

Benefits

  • Equity and benefits.