About The Position

NVIDIA is looking for an outstanding, passionate, and talented Senior AI Infrastructure Engineer to join our DGX Cloud group. In this engineering role, you will design, build, and maintain large-scale production systems with high efficiency and availability, applying a combination of software and systems engineering practices. The role demands knowledge across systems, networking, coding, databases, capacity management, continuous delivery and deployment, and open-source cloud-enabling technologies such as Kubernetes and OpenStack. The DGX Cloud SRE team at NVIDIA ensures that our internal- and external-facing GPU cloud services run with maximum reliability and uptime, while making changes to existing systems through careful preparation and planning and managing capacity and performance.

NVIDIA's culture of diversity, intellectual curiosity, problem solving, and openness is important to our success. Our organization brings together people with a wide variety of backgrounds, experiences, and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while also striving to build an environment that provides the support and mentorship needed to learn and grow.

Requirements

  • BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent experience.
  • 6+ years of experience.
  • A track record showing a healthy balance between initiating your own projects, persuading others to collaborate with you, and collaborating well on projects initiated by others.
  • Experience with infrastructure automation and distributed systems design, including developing tools for running large-scale private or public cloud systems in production.
  • Experience in one or more of the following: Python, Go, C/C++, Java.
  • In-depth knowledge of one or more of: Linux, networking, storage, and container technologies.
  • Experience with public cloud platforms and Infrastructure as Code (IaC) tools such as Terraform.
  • Experience with distributed systems.

Nice To Haves

  • Interest in designing, analyzing, and troubleshooting large-scale distributed systems.
  • Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
  • Ability to debug and optimize code and automate routine tasks.
  • Experience in using or running large private and public cloud systems based on Kubernetes or Slurm.

Responsibilities

  • Design, build, deploy, and run internal tooling for a large-scale AI training and inference platform built on top of cloud infrastructure.
  • Conduct in-depth performance characterization and analysis on large multi-GPU and multi-node clusters.
  • Engage in and improve the whole lifecycle of services—from inception and design through deployment, operation and refinement.
  • Support services before they go live through activities such as system design consulting; developing software tools, platforms, and frameworks; capacity management; and launch reviews.
  • Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
  • Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity.
  • Practice sustainable incident response and blameless postmortems.
  • Be part of an on-call rotation to support production systems.

Benefits

  • You will also be eligible for equity and benefits.