About The Position

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation fueled by great technology and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing, an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

We are seeking a Principal Software Engineer to join our Software Infrastructure Team in Santa Clara, CA. This team is at the heart of the NVIDIA AI Factory initiative, building and maintaining the core infrastructure that powers our closed and open-source AI models. In this role, you will be a key leader in designing and developing our Inference as a Service platform, creating the systems that manage GPU resources, ensure service stability, and deliver high-performance, low-latency inference at massive scale.

Requirements

  • 15+ years of software engineering experience with deep expertise in distributed systems or large-scale backend infrastructure.
  • BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, another engineering discipline, or a related field (or equivalent experience).
  • Strong programming skills in Python, Go, or C++ with a track record of building production-grade, highly available systems.
  • Proven experience with container orchestration technologies like Kubernetes.
  • A deep understanding of system architecture for high-performance, low-latency API services.
  • Experience in designing, implementing, and optimizing systems for GPU resource management.
  • Familiarity with modern observability tools (e.g., DataDog, Prometheus, Grafana, OpenTelemetry).
  • Demonstrated experience with deployment strategies and CI/CD pipelines.
  • Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.

Nice To Haves

  • Experience with specialized inference serving frameworks.
  • Open-source contributions to projects in the AI/ML, distributed systems, or infrastructure space.
  • Hands-on experience with performance optimization techniques for AI models, such as quantization or model compression.
  • Expertise in building platforms that support a wide variety of AI model architectures.
  • Strong understanding of the full lifecycle of an AI model, from training to deployment and serving.

Responsibilities

  • Lead the design and development of a scalable, robust, and reliable platform for serving AI models for inference as a service.
  • Architect and implement systems for dynamic GPU resource management, autoscaling, and efficient scheduling of inference workloads.
  • Build and maintain the core infrastructure, including load balancing and rate limiting, to ensure the stability and high availability of inference services.
  • Optimize system performance and latency for various model types, from large language models (LLMs) to computer vision models, ensuring high throughput and responsiveness.
  • Develop tools and frameworks for real-time observability, performance profiling, and debugging of inference services.

Benefits

  • You will be eligible for equity and benefits.