About The Position

Amazon Industrial Robotics is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We are building robotic systems that combine innovative AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics through innovative applications of deep learning and large language models, working with world-class teams pushing the boundaries of what's possible in robotic manipulation, locomotion, and human-robot interaction.

We leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of robotics foundation models that:

  • Enable broad generalization across diverse tasks
  • Integrate multi-modal learning capabilities (visual, tactile, linguistic)
  • Accelerate skill acquisition through demonstration learning
  • Enhance robotic perception and environmental understanding
  • Streamline development through reusable capabilities

The ideal candidate will contribute to research that bridges the gap between theoretical advances and practical implementation in robotics. You will be part of a team that is reshaping how robots learn, adapt, and interact with their environment, building the next generation of intelligent robotics systems for automation and human-robot collaboration.

As a Senior Applied Scientist, you will lead the development of machine learning systems that help robots perceive, reason, and act in real-world environments. You will set the technical direction for adapting and advancing state-of-the-art models (open source and internal research) into robust, safe, and high-performing “robot brain” capabilities for our target tasks, environments, and robot embodiments. You will drive rigorous capability profiling and experimentation, lead targeted innovation where gaps exist, and partner across research, controls, hardware, and product teams to ensure outputs can be further customized and deployed on specific robots.

Requirements

  • PhD, or Master's degree and 6+ years of applied research experience
  • Hands-on deep learning expertise, with demonstrated impact in one or more: multimodal modeling, imitation learning/RL for robotics, vision-language/video models, sim2real/real2sim
  • Experience leading technical projects end-to-end (problem framing → execution → measurable outcomes), with strong experimental rigor and reproducible baselines
  • Software engineering skills (Python + C++/Java), ability to build reliable training/evaluation infrastructure and production-quality research code
  • Demonstrated technical contributions (publications/patents/open-source or substantial internal impact) and ability to communicate technical direction clearly

Nice To Haves

  • Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
  • Experience with large-scale distributed systems such as Hadoop, Spark, etc.
  • Experience driving robotics foundation models (VLA/visuomotor/video-action/world-model-based policies) and multi-task robot learning
  • Experience building evaluation/benchmarking systems that meaningfully guide roadmap and model iteration
  • Experience with scaling (large-scale data, distributed training, inference efficiency/latency optimization)
  • Experience with real-robot experimentation and sim↔real pipelines; comfort partnering across controls/hardware teams

Responsibilities

  • Lead technical initiatives for foundation-model capabilities (e.g., visuomotor / VLA / video-action / world-model-based policies), from problem definition through validated model deliverables.
  • Own model readiness for our embodiment class: drive adaptation, fine-tuning, and optimization (latency/throughput/robustness), and define success criteria that downstream teams can build on.
  • Establish and evolve capability evaluation: define benchmark strategy, metrics, and profiling methodology to quantify performance, generalization, and failure modes; ensure evaluations drive clear roadmap decisions.
  • Drive the data + training strategy needed to close key capability gaps, including data requirements, collection/curation standards, dataset quality/provenance, and repeatable training recipes (sim + real).
  • Invent and validate new methods when leveraging SOTA is insufficient—new training schemes, model components, supervision signals, or sim↔real techniques—backed by strong empirical evidence.
  • Influence cross-team technical decisions by collaborating with controls/whole-body control (WBC), hardware, and product teams on interfaces, constraints, and integration plans; communicate results via design docs and technical reviews.
  • Mentor and raise the bar: guide junior scientists/engineers, set best practices for experimentation and code quality, and drive a culture of rigor and reproducibility.

Benefits

  • health insurance (medical, dental, vision, prescription), Basic Life & AD&D insurance with optional supplemental life plans, EAP, mental health support, a medical advice line, Flexible Spending Accounts, and adoption and surrogacy reimbursement coverage
  • 401(k) matching
  • paid time off
  • parental leave

What This Job Offers

Job Type: Full-time
Career Level: Senior Level
Education Level: Ph.D. or professional degree
