NVIDIA is seeking a Forward Deployed Engineer to join our AI Accelerator team, working directly with strategic customers to implement and optimize pioneering AI workloads! You will provide hands-on technical support for advanced AI implementations and complex distributed systems, and ensure customers achieve optimal performance from NVIDIA's AI platform across diverse environments. We work directly with the world's most innovative AI companies to solve their toughest technical challenges.

What you will be doing:
In this role, you will implement innovative solutions that push the boundaries of what's possible with AI infrastructure while directly impacting customer success with breakthrough AI initiatives!

Technical Implementation: Design and deploy custom AI solutions, including distributed training, inference optimization, and MLOps pipelines, across customer environments (see the illustrative sketch after this list)
Customer Support: Provide remote technical support to strategic customers, optimize AI workloads, diagnose and resolve performance issues, and guide technical implementations through virtual collaboration
Infrastructure Management: Deploy and manage AI workloads across DGX Cloud, customer data centers, and CSP environments using Kubernetes, Docker, and GPU scheduling systems
Performance Optimization: Profile and optimize large-scale model training and inference workloads, implement monitoring solutions, and resolve scaling challenges
Integration Development: Build custom integrations with customer systems, develop APIs and data pipelines, and implement enterprise software connections
End-user Documentation: Create implementation guides and documentation for resolution approaches and standard methodologies for complex AI deployments
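To give a concrete flavor of the distributed-training work referenced above, here is a minimal, purely illustrative sketch: a PyTorch DistributedDataParallel training loop of the kind this role would deploy, profile, and tune on multi-GPU systems. It assumes a launch via torchrun (which sets LOCAL_RANK and the process-group environment variables); the model, batch data, and hyperparameters are placeholders rather than anything specific to this role.

```python
# Illustrative sketch only: a minimal multi-GPU training loop with PyTorch DDP.
# Assumes launch via torchrun, e.g. `torchrun --nproc_per_node=8 train.py`,
# which sets RANK / LOCAL_RANK / WORLD_SIZE for each GPU process.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # One process per GPU; NCCL is the standard backend for GPU collectives.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device(f"cuda:{local_rank}")

    # Placeholder model and optimizer; a real workload would load its own.
    model = torch.nn.Linear(1024, 1024).to(device)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):  # placeholder training loop with synthetic data
        batch = torch.randn(32, 1024, device=device)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()      # gradients are all-reduced across ranks here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Multi-node runs on DGX Cloud or customer clusters would add torchrun's --nnodes and rendezvous flags (or a scheduler such as Slurm or Kubernetes to launch the processes), but the training loop itself stays the same.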
Job Type
Full-time
Career Level
Mid Level
Number of Employees
5,001-10,000 employees