
Senior ML Engineer - Model Compression

GM
Sunnyvale, CA
Remote

About The Position

The Compression and Parity team in GM’s Autonomous Vehicle (AV) organization enables repeatable, high-velocity model deployments through principled, automated model compression under strict safety guarantees. We partner closely with model developers and with deployment and infrastructure engineers to ship numerically robust, low-latency models to the car, blending rigorous analysis with state-of-the-art methods and our own innovations.

Over time, you will help grow and evolve the Compression and Parity function: developing and iterating on quantization and compression strategies for AV models that account for model numerical properties, safety and latency constraints, and hardware performance; partnering with deployment, compiler, and kernel teams to ship quantized models to NVIDIA-based AV hardware; advancing numerical sensitivity analyses that recommend safe compression policies per op, layer, or block; evaluating compressed models with AV-relevant metrics; and collaborating with Embodied AI to support compression-aware modeling.

You will also evolve sensitivity analysis, compression, and parity tooling into a connected, automated flow that makes low-precision deployments repeatable, reliable, and low-touch, with an emphasis on robust execution and maintainability. You will bridge the gap between state-of-the-art model compression research and safety-constrained deployment while making strong technical contributions to cross-functional projects and educating others on best practices.

Requirements

  • Bachelor's degree in Computer Science, Electrical Engineering, Physics, Mathematics, Data Science / ML, or a closely related quantitative field (or equivalent experience)
  • 3+ years of industry experience focused on model optimization and deployment, with significant hands-on work in neural network quantization, model compression, or efficient inference (or equivalent relevant experience)
  • Strong proficiency in PyTorch and experience with graph-level representations (e.g., PyTorch FX, ONNX) for capture and manipulation
  • Background in numerical linear algebra and optimization (conditioning, spectral properties, Jacobians, Hessians) and how they relate to quantization robustness
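
As an illustrative aside (not part of the posting), the "quantization robustness" mentioned above comes down to how much error rounding and clipping introduce. A minimal sketch of symmetric per-tensor int8 quantization, where the round-trip error is bounded by half the scale; all names and values here are hypothetical:

```python
def quantize_int8(xs):
    # Symmetric per-tensor int8: one scale derived from the largest
    # magnitude, so the extreme value maps exactly to +/-127.
    scale = max(abs(x) for x in xs) / 127.0
    q = [max(-128, min(127, round(x / scale))) for x in xs]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

xs = [0.1, -0.5, 2.0, 0.03]
q, scale = quantize_int8(xs)
xhat = dequantize(q, scale)
# Round-trip error per element is at most scale / 2 (absent clipping).
err = max(abs(a - b) for a, b in zip(xs, xhat))
```

Note the trade-off this sketch exposes: a single outlier inflates the scale and wastes resolution on every other value, which is why per-channel scales and outlier-aware methods (e.g., SmoothQuant) exist.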

Nice To Haves

  • Master's or PhD degree in related quantitative fields
  • Deep experience with PTQ and QAT, compression frameworks (e.g., PT2E, ModelOpt, torchao) and advanced quantization algorithms (e.g., GPTQ, AWQ, SmoothQuant, QuIP, SparseGPT), as well as with building or extending quantization toolchains
  • Hands-on experience designing numerics observability and sensitivity tooling integrated into training or evaluation pipelines (logging ranges, saturation, quant noise, etc.)
  • A track record of collaboration, including leading cross-functional initiatives and mentoring others
  • Experience with additional compression techniques such as structured/unstructured pruning, low-rank decomposition, or knowledge distillation
  • Experience with perception and/or transformer-based models (e.g., multi-view encoders, BEV backbones, detection/segmentation heads, trajectory or planning networks), ideally in AV / ADAS
  • General understanding of kernel performance and optimization for reduced precision formats
  • Direct experience with specialized hardware accelerators for edge deployment on tight latency and memory budgets (automotive SoCs, robotics platforms, or similar)
  • Published research, open-source contributions, or other notable, intellectually curious work in quantization, compression, or efficient inference
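
The "numerics observability" item above (logging ranges, saturation, quant noise) can be sketched as a small stats hook attached to intermediate tensors; `tensor_stats` and its arguments are hypothetical names for illustration, not an API from the posting:

```python
def tensor_stats(name, xs, scale, qmax=127):
    # Record dynamic range plus the fraction of values that would
    # clip (saturate) at int8 under the given quantization scale;
    # a high saturation fraction flags an unsafe scale choice.
    saturated = sum(1 for x in xs if abs(round(x / scale)) > qmax)
    return {"name": name, "min": min(xs), "max": max(xs),
            "saturation": saturated / len(xs)}

# One value (12.0) exceeds the representable range 127 * 0.05 = 6.35.
stats = tensor_stats("conv1.out", [0.5, -3.0, 12.0, 0.01], scale=0.05)
```

In a real pipeline such a hook would stream into training or evaluation logging rather than return a dict, but the signal (range and clip rate per tensor) is the same.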

Responsibilities

  • Developing and iterating on quantization and compression strategies for our AV models, considering model numerical properties, safety and latency constraints, and hardware performance
  • Partnering on deployment of quantized models to NVIDIA-based AV hardware with our deployment, compiler, and kernel teams
  • Advancing our numerical sensitivity analyses to recommend safe compression policies per op/layer/block
  • Using AV-relevant metrics (perception, trajectory, etc.) to evaluate compressed models
  • Collaborating with Embodied AI to support compression-aware modeling
  • Evolving sensitivity analysis, compression, and parity tooling into a connected, automated flow that makes low-precision deployments repeatable, reliable, and low-touch, with an emphasis on robust execution and maintainability
  • Bridging the gap between state-of-the-art model compression research and safety-constrained deployment
  • Making strong technical contributions in cross-functional projects
  • Educating others on best practices
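
The per-op/layer sensitivity analysis described above is commonly approximated by quantizing one layer at a time and measuring output drift against the full-precision baseline; the least sensitive layers are compressed first. A toy scalar sketch (all names hypothetical, real pipelines operate on tensors and task metrics):

```python
def layer_sensitivity(layers, x, quantize):
    # Quantize one layer's output at a time and measure drift from
    # the full-precision result; larger drift = more sensitive layer,
    # warranting a more conservative (or no) compression policy.
    baseline = x
    for f in layers:
        baseline = f(baseline)
    scores = {}
    for i in range(len(layers)):
        y = x
        for j, f in enumerate(layers):
            y = f(y)
            if j == i:
                y = quantize(y)  # simulate low-precision output here
        scores[i] = abs(y - baseline)
    return scores

# Toy pipeline: two scalar "layers" and a coarse rounding quantizer.
layers = [lambda v: 3.1 * v, lambda v: v + 0.7]
scores = layer_sensitivity(layers, 1.0, quantize=round)
```

Ranking layers by these scores is what turns a sensitivity sweep into a per-layer compression policy.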

Benefits

  • Medical
  • Dental
  • Vision
  • Health Savings Account
  • Flexible Spending Accounts
  • Retirement savings plan
  • Sickness and accident benefits
  • Life insurance
  • Paid vacation and holidays
  • Tuition assistance programs
  • Employee assistance program
  • GM vehicle discounts
