About The Position

NVIDIA is building the next generation of AI systems that can perceive, reason about, and generate dynamic worlds. Our team advances world foundation models to enable high-fidelity, temporally stable video and world generation for Physical AI, simulation, and interactive experiences. This role operates at the applied-research boundary: developing and validating model improvements, then hardening them into production-grade checkpoints and recipes that teams can reliably build on. The technical focus is on human appearance, motion, and action understanding. Progress is measured through disciplined experimentation, robust diagnostics, and repeatable side-by-side evaluation. Work is delivered in close partnership with data, platform, and product engineering to ensure improvements translate into real performance and quality.

Requirements

  • PhD in Computer Science, Graphics, Computer Engineering, or a closely related field (or equivalent experience).
  • 8+ years of applied research and/or industry experience in vision, graphics, or an adjacent ML domain.
  • 3+ years of direct experience designing, training, and evaluating generative models for image/video/audio, with strong fundamentals in modern deep learning.
  • Hands-on experience improving generative models with a focus on perceptual quality and temporal stability, especially for generating humans.
  • Advanced proficiency in Python, PyTorch, C++, and CUDA with strong research-engineering practices (reproducibility, testing, profiling, experiment tracking).
  • Experience training and debugging large models in multi-GPU and/or multi-node environments, including distributed training workflows.
  • Practical knowledge of inference/runtime bottlenecks and optimization techniques.
  • Strong “eye for quality” and interest in diagnosing visual artifacts (sharpness, texture detail, temporal stability, etc.) using perceptual metrics, human preference signals, or learned evaluators.

Nice To Haves

  • Proven track record in related research, including publications in top conferences (e.g., NeurIPS, CVPR, ICLR), with clear evidence of impact on model quality or robustness.
  • Experience using agentic workflows and AI coding companions to accelerate research and production development, including code generation, debugging, test creation, experiment automation, benchmark development, documentation, and large-codebase navigation.

Responsibilities

  • Research, implement, and validate model architecture and algorithm changes that improve video generation fidelity, with emphasis on human-centric quality (identity preservation, anatomy, motion coherence, and interaction realism).
  • Explore and prototype improvements across spatial multimodal modeling, modality alignment, flow-based or diffusion-based video generation, and neural rendering-inspired representations to improve controllability and long-horizon consistency.
  • Improve training and inference efficiency through architectural and post-training techniques (compute/memory optimizations, distillation, pruning, and compression).
  • Define model training objectives that improve sim-to-real and real-to-sim generalization, especially for human motion, contact, and interaction dynamics across real-world and synthetic/simulation data.
  • Develop detailed, domain-specific benchmarks for evaluating world foundation models, especially generative and understanding models that reason about video, simulation, and physical environments.
  • Translate research results into robust implementations, such as training code, production-grade checkpoints, model integrations, and demos that clearly showcase capability gains across teams.

Benefits

  • Highly competitive salaries
  • Comprehensive benefits package
  • Equity

What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

Ph.D. or professional degree
