About The Position

NVIDIA is seeking an early-in-career Solution Architect to join the growing NVIDIA AI Infrastructure team. This role involves helping customers design, architect, and implement accelerated computing data center solutions that will power workflows including AI inference at scale and physical AI through digital simulation. NVIDIA solutions are built around industry-leading AI software tools, and this role requires full-stack design including hardware architecture, workload orchestration, and application performance tuning. The successful candidate will be immersed in a diverse, encouraging environment and will have the opportunity to make a lasting impact on the world.

Requirements

  • Bachelor's degree or equivalent experience in Engineering or Computer Science
  • 3+ years of meaningful work experience, ideally in an IT infrastructure or related field of expertise
  • An outstanding passion for building groundbreaking IT infrastructure that optimizes AI workloads
  • Knowledge of infrastructure management including Linux, Kubernetes, Ethernet networking, cloud native tooling
  • Familiarity with Linux system environments including using Linux for systems administration
  • Comfortable with Python programming
  • Understanding how software and hardware work together to optimize applications
  • Ability to work independently on a remote team with minimal direction
  • Outstanding communication and interpersonal skills, and an excellent teammate!

Nice To Haves

  • Experience with on-prem infrastructure architecture and large-scale cloud deployments
  • Experience with cloud native tooling including Terraform, Kubernetes, Helm
  • Background in building large-scale infrastructure that delivers workloads via containers
  • Experience developing and deploying solutions in hybrid and/or cloud environments
  • Critical thinking capabilities that leverage fundamentals to deduce solutions to unforeseen problems

Responsibilities

  • Help customers with their AI factory journey, including workflow pipelines and performance optimization
  • Focus on data center implementations for inference use cases, including distributed, disaggregated, and scaled-out workflows
  • Scope physical AI journeys on Omniverse, including synthetic data generation, data aggregation, application development and simulation pipelines
  • Lead technical sales activities for AI factories with focus on hybrid deployments between cloud and on-prem
  • Deliver hybrid cloud architectures for data pipelines, storage, security and user streaming connectivity
  • Provide expertise in infrastructure workflows, including hardware, task coordination, and application tuning
  • Understand the trade-offs between solutions and recommend the best architecture and technical execution to enterprise customers
  • Work directly with key customers to understand workflows and share feedback with internal product and engineering teams