Lead ML Engineer (VLM/LLM focused)

May Mobility | Ann Arbor, MI

About The Position

May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology that reimagines the way AVs think. Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We’re building the world’s best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster more green, vibrant and livable spaces. Since our founding in 2017, we’ve given more than 500,000 autonomous rides to real people around the globe. And we’re just getting started. We’re hiring people who share our passion for building the future, today, solving real-world problems and seeing the impact of their work. Join us.

Job Summary

May Mobility is entering an exciting phase of growth as we expand our first-of-its-kind autonomous shuttle and mobility services across the nation. Launched in 2017 by a team of roboticists and software engineers with decades of experience fielding robotic systems in the wild, May Mobility is looking to expand its team of engineers with a background in robotics or autonomous vehicles.

Requirements

  • A minimum of 5 years of industry experience working on real-world robotic systems and maintaining high-quality, industrial-grade code.
  • Master’s degree in Robotics, Computer Science, or Computer Engineering, or a field that requires a strong mathematical and/or engineering foundation.
  • Strong programming skills in C/C++/Python; software development in Linux environments.
  • Strong experience in ML/DL development with PyTorch/TensorFlow.
  • Direct experience developing or fine-tuning Vision-Language Models (e.g., CLIP, BLIP, or custom VLM backbones) for real-world applications.
  • Experience in several of the following real-time areas: computer vision (object detection, classification, segmentation); semantic scene understanding and open-vocabulary detection; multi-target tracking and sensor fusion; localization, prediction, and planning.
  • Extensive experience deploying features/ML/DL models in real-time systems with high accuracy and low latency.
  • Familiarity with the ML development cycle, deployment, and optimization.
  • Deep understanding of data: data pipelines, data balancing, data mining, and data-driven performance improvement.
  • Knowledge of multimodal learning techniques, including contrastive learning and prompt engineering for zero-shot visual recognition (a minimal sketch follows this list).
  • Deep understanding of testing frameworks and workflows.
  • Excellent attention to detail and rigorous testing methodology.
  • Exceptional written and verbal communication skills and team leadership abilities.
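
To illustrate the kind of zero-shot visual recognition referenced above, here is a minimal sketch of prompting a contrastive vision-language model via the Hugging Face transformers CLIP API; the checkpoint, image path, and label prompts are illustrative assumptions only, not a description of May Mobility's stack.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Load a publicly available CLIP checkpoint (illustrative choice)
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Open-vocabulary labels expressed as text prompts (prompt engineering)
    labels = [
        "a photo of a pedestrian",
        "a photo of a cyclist",
        "a photo of a construction cone",
    ]

    image = Image.open("camera_frame.jpg")  # placeholder camera frame

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # Image-text similarity logits, softmaxed into a distribution
    # over the open-vocabulary label set
    probs = outputs.logits_per_image.softmax(dim=-1)
    for label, p in zip(labels, probs[0].tolist()):
        print(f"{label}: {p:.3f}")

Because the label set is plain text, new object categories can be added at inference time without retraining, which is the property that makes this approach relevant to long-tail detection.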

Nice To Haves

  • Experience utilizing VLMs for edge-case (long-tail) detection and explainable AI in autonomous systems.
  • Familiar with synthetic data in ML system development.
  • Familiar with Reinforcement Learning (RL) for ML systems.
  • Familiar with ML/DL optimization on real-time products with limited compute resources (e.g., quantization of large transformer models).
  • A strong track record of delivering high-quality capabilities to robots operating in the field.

Responsibilities

  • Work independently with cross-functional teams to develop software and system requirements.
  • Design, implement, and test state-of-the-art perception features on schedule, in a high-quality, industrial-grade production code stack.
  • Integrate Vision-Language Models (VLMs) and Large Language Models (LLMs) into the perception stack to improve semantic scene understanding and reasoning.
  • Track and trend the technical performance of the perception system in the field.
  • Lead major feature development including feature design, code reviews, issue diagnosis, and resolution.
  • Lead extensive testing to validate features and satisfy release schedules.
  • Lead work on data, development, and ML pipelines, with a specific focus on multimodal data alignment for training foundation models.

Benefits

  • Comprehensive healthcare suite including medical, dental, vision, life, and disability plans. Domestic partners who have been residing together for at least one year are also eligible to participate.
  • Health Savings and Flexible Spending Healthcare and Dependent Care Accounts available.
  • Rich retirement benefits, including an immediately vested employer safe harbor match.
  • Generous paid parental leave as well as a phased return to work.
  • Flexible vacation policy in addition to paid company holidays.
  • Total Wellness Program providing numerous resources for overall wellbeing.