NVIDIA • Posted 3 months ago
$104,000 - $172,500/Yr
Full-time • Entry Level
Remote • Santa Clara, CA
Computer and Electronic Product Manufacturing

NVIDIA is hiring engineers to scale up its AI infrastructure. We expect you to have a strong programming background; knowledge of datacenter hardware, operations, and networking; familiarity with software testing, deployment, and distributed systems; and excellent communication and planning abilities. Experience with High Performance Computing (HPC), GPUs, and high-performance networking (RDMA, InfiniBand, RoCE) is strongly preferred. We also welcome out-of-the-box thinkers who can contribute new ideas with a strong execution bias. Expect to be constantly challenged, improving, and evolving for the better. You and the other engineers on this team will help advance NVIDIA's capacity to build and deploy leading infrastructure solutions for a broad range of AI-based applications that affect core data science.

For two decades, we have pioneered visual computing, the art and science of computer graphics. With the invention of the GPU - the engine of modern visual computing - the field has expanded to encompass video games, movie production, product design, medical diagnosis, and scientific research. Today, we stand at the beginning of the next era, the AI computing era, ignited by a new computing model: GPU deep learning.

  • Contribute to a platform that automates GPU asset provisioning, configuration, and lifecycle management across cloud providers.
  • Build end-to-end automation of datacenter operations, break/fix, and lifecycle management for large-scale Machine Learning systems.
  • Implement monitoring and health management capabilities that enable industry-leading reliability, availability, and scalability of GPU assets.
  • Harness multiple data streams, ranging from GPU hardware diagnostics to cluster and network telemetry.
  • Work on software that manages NVLink topology across GPU clusters.
  • Build automated test infrastructure to qualify distributed systems for operation.
  • Ensure software integrates seamlessly from the hardware up to the AI training applications.
  • Pursuing or recently completed a BS or MS in Computer Science/Engineering/Physics/Mathematics or another comparable degree, or equivalent experience.
  • Software engineering experience on large-scale production systems.
  • Experience working successfully with multi-functional teams, principal engineers, and architects, and coordinating effectively across organizational boundaries and geographies.
  • Strong knowledge of a systems programming language (Go, Python) and a solid understanding of data structures and algorithms.
  • In-depth knowledge of Linux system administration and management.
  • Understanding of cluster management systems (Kubernetes, SLURM).
  • Understanding of performance, security and reliability in complex distributed systems.
  • Familiarity with system level architecture, data synchronization, fault tolerance and state management.
  • Proficiency in architecting and managing large-scale distributed systems, independent of cloud providers.
  • Deep knowledge of datacenter operations and GPU hardware.
  • Hands-on experience working with RDMA networking.
  • Advanced hands-on experience and deep understanding of cluster management systems (Kubernetes, SLURM).
  • Hands-on experience in Machine Learning Operations.
  • Hands-on experience with Bright Cluster Manager.
  • Hands-on experience developing and/or operating hardware fleet management systems.
  • Proven operational excellence in designing and maintaining AI infrastructure.
  • Equity and benefits.