About The Position

NVIDIA is seeking an outstanding Solutions Architect, Foundation Models to join our growing team focused on partner enablement for reasoning models, multimodal models, and production inference. In this role, you will act as both a strategic technical expert and a hands-on advisor, helping partners build, benchmark, fine-tune, optimize, and deploy foundation model solutions for customer workloads.

The Partner Solutions Architecture team acts as a trusted advisor to the ecosystem. We enable partners to translate customer requirements into architectures, benchmark recipes, cluster test plans, compute sizing, and production readiness, accelerating time to value through the full-stack accelerated computing platform. See the Responsibilities section below for what you'll be doing.

Requirements

  • MSc or PhD in Computer Science, Electrical Engineering, or a related field, or equivalent experience as a Software Engineer, ML Engineer, or similar.
  • 5+ years of relevant experience working with LLMs, VLMs, and large-scale inference systems, with hands-on expertise in fine-tuning, benchmarking, evaluation, optimization, and production deployment as a Research Engineer, Deep Learning Engineer, or equivalent.
  • Strong understanding of foundation models across data preparation, fine-tuning, post-training, evaluation, and inference.
  • Familiarity with reasoning models, reinforcement learning, and synthetic data generation and evaluation workflows.
  • Strong programming skills in Python and hands-on experience with PyTorch, JAX, or TensorFlow.
  • Familiarity with Nemotron, NeMo, Dynamo, TensorRT-LLM, Triton, vLLM, and similar inference and optimization stacks.
  • Strong communication and presentation skills, with the ability to advise both technical teams and executives.

Nice To Haves

  • Experience helping partners or customers deploy large-scale AI systems in production.
  • Experience building benchmark suites, fine-tuning recipes, sizing calculators, or TCO models for AI workloads.
  • Strong knowledge of GPU infrastructure, including NVLink, InfiniBand, MPI, NCCL, or adjacent cluster technologies.
  • Active OSS contributions in model tooling, inference, evaluation, or performance optimization.
  • Comfortable moving between deep technical reviews, architecture guidance, benchmarking, and partner enablement.

Responsibilities

  • Serve as the lead technical advisor for partners delivering reasoning, multimodal, fine-tuning, and model-serving solutions.
  • Guide partners to the right approach for customer workloads across fine-tuning, distillation, quantization, compression, benchmarking, and evaluation.
  • Define benchmark plans, synthetic data and evaluation workflows, and repeatable validation recipes.
  • Advise on compute planning, including cluster sizing, GPU and network selection, storage, memory tradeoffs, latency and throughput targets, and production-readiness testing.
  • Guide inference architecture across prefill and decode tradeoffs, batching, routing, disaggregated inference, and serving efficiency.
  • Develop reference architectures, playbooks, benchmark recipes, TCO calculators, and sizing models across CUDA, NeMo, Nemotron, Dynamo, TensorRT-LLM, Triton, NIMs, and related tooling.
  • Support pre- and post-sales engagements by translating complex model and infrastructure topics for partner and customer teams.
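To illustrate the "sizing models" mentioned above, here is a deliberately minimal sketch of a throughput-based GPU-count estimator. Everything in it (the function name, the utilization discount, and the example numbers) is a hypothetical placeholder for illustration, not NVIDIA guidance; a real sizing model would draw per-GPU throughput from measured benchmarks and account for latency targets, memory capacity, and networking.

```python
import math

def gpus_needed(target_tokens_per_sec: float,
                tokens_per_sec_per_gpu: float,
                utilization: float = 0.7) -> int:
    """Estimate GPU count from an aggregate throughput target.

    tokens_per_sec_per_gpu would come from a measured benchmark run;
    utilization discounts for batching, routing, and serving overhead.
    """
    effective = tokens_per_sec_per_gpu * utilization
    return math.ceil(target_tokens_per_sec / effective)

# Example: serving 100,000 tokens/s when one GPU sustains 2,500 tokens/s
print(gpus_needed(100_000, 2_500))  # -> 58
```

In practice such a calculator is the entry point to a fuller TCO model: once GPU count is fixed, node count, network fabric, storage, and power follow from it.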

Benefits

  • You will also be eligible for equity and benefits.