About The Position

NVIDIA is seeking a Forward Deployed Engineer to join our AI Accelerator team, working directly with strategic customers to implement and optimize pioneering AI workloads. You will provide hands-on technical support for advanced AI implementations and complex distributed systems, and ensure customers achieve optimal performance from NVIDIA's AI platform across diverse environments. We work directly with the world's most innovative AI companies to solve their toughest technical challenges.

In this role, you will implement innovative solutions that push the boundaries of what is possible with AI infrastructure while directly impacting customer success on breakthrough AI initiatives. The specific duties are detailed under Responsibilities below.

Requirements

  • 8+ years of experience in customer-facing technical roles (Solutions Engineering, DevOps, ML Infrastructure Engineering)
  • BS, MS, or Ph.D. in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field, or equivalent experience
  • Strong proficiency with Linux systems, distributed computing, Kubernetes, and GPU scheduling
  • AI/ML experience supporting inference workloads and training at large scale (a distributed-training sketch follows this list)
  • Programming skills in Python, with experience in PyTorch, TensorFlow, or similar AI frameworks
  • Customer engagement skills, with the ability to work effectively with technical teams in high-pressure situations
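
As a rough illustration of the large-scale training experience described above, here is a minimal PyTorch DistributedDataParallel sketch; the model, batch size, and training loop are placeholders, and it assumes a torchrun launch, not any particular customer setup:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
        model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for _ in range(10):  # placeholder training loop
            x = torch.randn(32, 1024, device=local_rank)
            loss = model(x).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()  # DDP all-reduces gradients across ranks here
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with, for example, torchrun --nproc_per_node=8 train.py, each process drives one GPU and gradient synchronization happens automatically inside backward().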

Nice To Haves

  • NVIDIA ecosystem experience with DGX systems, CUDA, NeMo, Triton, or NIM
  • Hands-on cloud platform experience with AWS, Azure, or GCP AI services
  • MLOps expertise with containerization, CI/CD pipelines, and observability tooling (a metrics-exporter sketch follows this list)
  • Infrastructure as code experience with Terraform, Ansible, or similar automation tools
  • Enterprise software integration experience with Salesforce, ServiceNow, or similar platforms
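
As one hedged example of the observability tooling mentioned above, a minimal GPU-metrics exporter could be sketched with the prometheus_client Python library; the metric name, port, GPU count, and the random sampling stand-in are all hypothetical, and a real exporter would query NVML or DCGM instead:

    import random
    import time
    from prometheus_client import Gauge, start_http_server

    # Hypothetical metric; a real exporter would read pynvml / DCGM here.
    GPU_UTIL = Gauge("gpu_utilization_percent", "GPU utilization", ["gpu"])

    def sample_utilization(gpu_id: int) -> float:
        # Placeholder stand-in for an NVML utilization query.
        return random.uniform(0.0, 100.0)

    if __name__ == "__main__":
        start_http_server(9400)  # Prometheus scrapes http://<host>:9400/metrics
        while True:
            for gpu in range(8):  # assumed 8-GPU node
                GPU_UTIL.labels(gpu=str(gpu)).set(sample_utilization(gpu))
            time.sleep(15)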

Responsibilities

  • Technical Implementation: Design and deploy custom AI solutions, including distributed training, inference optimization, and MLOps pipelines, across customer environments
  • Customer Support: Provide remote technical support to strategic customers; optimize AI workloads, diagnose and resolve performance issues, and guide technical implementations through virtual collaboration
  • Infrastructure Management: Deploy and manage AI workloads across DGX Cloud, customer data centers, and CSP environments using Kubernetes, Docker, and GPU scheduling systems (a scheduling sketch follows this list)
  • Performance Optimization: Profile and optimize large-scale model training and inference workloads, implement monitoring solutions, and resolve scaling challenges (a profiling sketch follows this list)
  • Integration Development: Build custom integrations with customer systems, develop APIs and data pipelines, and implement enterprise software connections
  • End-user Documentation: Create implementation guides, document resolution approaches, and establish standard methodologies for complex AI deployments
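
To make the Kubernetes GPU-scheduling duty above concrete, here is a minimal sketch using the official kubernetes Python client to request a GPU through the nvidia.com/gpu extended resource; the pod name, image, and namespace are placeholders, not a prescribed setup:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() in-cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-smoke-test"),  # placeholder name
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda",
                    image="nvidia/cuda:12.4.1-base-ubuntu22.04",  # placeholder
                    command=["nvidia-smi"],
                    # Requesting the extended resource is what steers the
                    # scheduler to a node with free GPUs.
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

The nvidia.com/gpu limit is only schedulable on nodes where the NVIDIA device plugin has advertised GPUs, which is what ties this request to actual hardware.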
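And as a sketch of the profiling work described above, torch.profiler can rank operators by GPU time to surface optimization targets; the model and input shapes are placeholders:

    import torch
    from torch.profiler import profile, ProfilerActivity

    model = torch.nn.Linear(4096, 4096).cuda()  # placeholder model
    x = torch.randn(64, 4096, device="cuda")

    # Warm up so CUDA context creation does not dominate the trace.
    for _ in range(3):
        model(x)
    torch.cuda.synchronize()

    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        with torch.no_grad():
            model(x)
    torch.cuda.synchronize()

    # Rank ops by GPU time to find optimization targets.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))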

Benefits

  • Competitive salaries
  • Generous benefits package
  • Equity