About The Position

NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI, the next era of computing, with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as "the AI computing company". In this role, you will work closely with deep learning compiler engineers to build the infrastructure and automation that power day-to-day development and releases. Responsibilities include designing and maintaining sophisticated CI/CD systems that run ML workloads at scale across diverse GPU environments, produce actionable signals for compiler developers, testers, and release engineers, and continuously improve stability and turnaround time. This includes building performance-aware pipelines and workload harnesses that support release confidence and the long-term quality of deep learning compiler stacks.

Requirements

  • BS, MS, or PhD (or equivalent experience) in Computer Science, Computer/Electrical Engineering, Mathematics, or related field
  • 3+ years of professional experience designing and scaling CI/CD, build/release, or developer productivity infrastructure for DL/GPU software environments
  • Strong software engineering skills (Python required) with ability to architect, implement, and debug complex systems end-to-end
  • Hands-on experience building CI/MLOps platform capabilities—pipeline orchestration, artifact/package management, and production-grade observability (logs/metrics/dashboards)—with strong reliability and maintainability
  • Experience with deep learning frameworks/runtime stacks (e.g., PyTorch, JAX, vLLM, SGLang, TensorRT, NeMo) and running real workloads in production-like environments
  • Working knowledge of Linux-based development and debugging across complex software/hardware stacks (drivers, CUDA libraries, containers, cluster schedulers, etc.)

Nice To Haves

  • Experience applying AI/LLMs and agent-based workflows to improve CI and infrastructure (e.g., smarter triage/routing, automated failure summarization, intelligent test selection, regression isolation, or developer-assist tooling)
  • Experience with compiler-focused verification techniques (e.g., differential testing across backends/versions, IR-level checks, automated reduction/minimization, fuzzing/property-based testing, or translation-validation style approaches)
  • Compiler-adjacent knowledge, including familiarity with LLVM/MLIR-based toolchains and the ability to debug issues that span compilation/codegen, runtime execution, and hardware/software boundaries

Responsibilities

  • Drive CI and infrastructure capabilities that make deep learning compiler development fast, reliable, and scalable. This includes improving signal-to-noise (flake reduction, reproducibility, and richer diagnostics), accelerating iteration cycles, scaling capacity and coverage across models/hardware/software configurations, and building strong observability (metrics, logging, tracing, dashboards) so failures are easy to understand and fix.
  • Explore practical uses of AI to enhance CI workflows—such as smarter test selection, automated triage/summarization, and faster issue isolation—ultimately increasing the quality and speed of deep learning compiler development, testing, and release.

Benefits

  • Competitive salaries
  • Generous benefits package
  • Equity