About The Position

NVIDIA is seeking an early-in-career Solution Architect to join its growing AI Infrastructure team. This role involves helping customers design, architect, and implement accelerated computing data center solutions that power workflows such as AI inference at scale and physical AI through digital simulation. The position requires full-stack design expertise, including hardware architecture, workload orchestration, and application performance tuning, leveraging NVIDIA's industry-leading AI software tools. The successful candidate will work in a diverse and encouraging environment, contributing to significant advancements in AI development.

Requirements

  • Bachelor's degree or equivalent experience in Engineering or Computer Science
  • 3+ years of meaningful work experience, ideally in IT infrastructure or a related field
  • A passion for building groundbreaking IT infrastructure that optimizes AI workloads
  • Knowledge of infrastructure management, including Linux, Kubernetes, Ethernet networking, and cloud-native tooling
  • Familiarity with Linux environments, including systems administration
  • Comfortable with Python programming
  • Understanding of how software and hardware work together to optimize applications
  • Ability to work independently on a remote team with minimal direction
  • Outstanding communication and interpersonal skills; an excellent teammate!

Nice To Haves

  • Experience with on-prem infrastructure architecture and large-scale cloud deployments
  • Experience with cloud native tooling including Terraform, Kubernetes, Helm
  • Background in building large-scale infrastructure that delivers workloads via containers
  • Experience developing and deploying solutions in hybrid and/or cloud environments
  • Critical thinking capabilities that leverage fundamentals to deduce solutions to unforeseen problems

Responsibilities

  • Help customers with their AI factory journey, including workflow pipelines and performance optimization
  • Focus on data center implementations for inference use cases, including distributed, disaggregated, and scaled-out workflows
  • Scope physical AI journeys on Omniverse, including synthetic data generation, data aggregation, application development and simulation pipelines
  • Lead technical sales activities for AI factories with focus on hybrid deployments between cloud and on-prem
  • Provide expertise in infrastructure workflows, including hardware, task coordination, and application tuning
  • Understand the trade-offs between different solutions and propose the best architecture and technical execution to enterprise customers
  • Work directly with key customers to understand workflows and share feedback with internal product and engineering teams

Benefits

  • Equity and benefits