GM · Posted 1 day ago
Intern
Hybrid • Mountain View, CA
5,001-10,000 employees

To help facilitate administration of relocation benefits if you are selected, please apply using the permanent address you would move from.

Work Arrangement: Hybrid. This internship is categorized as hybrid. The selected intern is expected to report to the office up to three times per week, or as determined by the team.

Locations: Mountain View, California; Sunnyvale, California

We are seeking highly motivated interns to research, explore, and evaluate cutting-edge AI-driven approaches for robot localization/map construction, perception, motion planning, scenario simulation, and data engineering. The role involves hands-on experimentation, algorithm development, and integration of multi-modal sensor data to advance autonomous robotic systems.

About the Team: The Robotics Software team is developing the next generation of autonomous robotic systems, focusing on autonomous mobile robots (AMRs) and intelligent robotic platforms. We develop full-stack robotics capabilities, from perception and planning to control and system integration, bringing innovative, real-world autonomous solutions to the future of work.

About the Role: We are looking for a self-motivated intern to prototype an AI-driven sense-plan-act architecture that supports the development, testing, and validation of autonomous robotic systems in manufacturing plants. In this role, you will focus on developing a camera- and LiDAR-based wheel-driven robotic system, designing technical specifications, creating and executing test plans, integrating the software with physical and simulation platforms, and enabling teams to accomplish their technical and business objectives. You will work cross-functionally with experts in autonomy, contributing to system-level validation and the continuous improvement of system robustness and validation workflows.

You will focus on one or more of the following areas:

Localization
  • Evaluate and test LiDAR-based localization repositories.
  • Investigate Gaussian splatting localization pipelines and assess their feasibility for embedded platforms.
  • Explore machine-learning techniques for feature point correspondence between image frames.
  • Implement and benchmark place recognition algorithms using computer vision.
  • Integrate dynamic object handling into localization workflows.
  • Develop multi-agent map-building and construction processes (offboard).
  • Design sensor fusion strategies for heterogeneous modalities (e.g., 3D LiDAR, 2D LiDAR, monocular camera, IMU, wheel odometer).
  • Apply post-processing optimization algorithms (e.g., factor graph and pose graph optimization).

Data Engineering
  • Create, curate, and manage datasets for training AI models.
  • Ensure data quality and diversity for robust algorithm development.

Simulation
  • Upgrade the existing simulation environment to support generation of realistic 3D LiDAR data and photorealistic image rendering for advanced perception testing.
  • Design and implement adversarial scenarios to identify potential safety vulnerabilities and enhance overall system robustness.

Perception
  • Develop perception solutions leveraging a joint representation of Bird’s Eye View (BEV) and DETR-based object detection using multi-modality inputs.
  • Enhance robustness of perception pipelines in dynamic environments.

Motion Planning
  • Research and implement denoising diffusion-based motion planning algorithms.
  • Apply reinforcement learning in a simulation engine to improve path generation policies.
  • Evaluate the performance and scalability of AI-driven planning approaches in real-world scenarios.

  • Design and implement high-precision localization methods using camera, LiDAR, wheel encoder and inertial sensors.
  • Develop a scalable, real-time localization module optimized for autonomous robotic systems.
  • Create engineering specifications and test procedures to ensure system compliance.
  • Evaluate and benchmark the performance of systems.
  • Review the state-of-the-art in camera- and LiDAR-based algorithms.
  • Troubleshoot using strong knowledge of probabilistic estimation, sensor fusion, and real-time system implementation.
  • Adjust and fine-tune system parameters to improve accuracy and robustness.
  • Holds a Master’s degree and is currently enrolled in a PhD program in Robotics, Computer Science, Electrical/Mechanical Engineering, or a related technical field.
  • Proficiency in C++ or Python.
  • Adhere to continuous development and deployment practices in robotic software development
  • Expertise in one or more of the following technical areas: camera- and LiDAR-based localization algorithms, statistical estimation theory, and practices such as pose graph and factor graph optimization and implementation.
  • Understanding of state-of-the-art solutions in place recognition for addressing loop-closure detection.
  • Perception, e.g., feature embedding, object detection, and bird’s eye view (BEV) semantic representation.
  • Motion path planning algorithms, e.g., Nav2
  • Simulation engines, e.g., Isaac Sim and Isaac Lab.
  • Dataset curation and annotation tools
  • Experience optimizing algorithm/software to balance performance within resource constraints.
  • Familiarity with ROS2 or other robotics middleware.
  • Machine learning knowledge and practical experience.
  • Proficiency with deep learning frameworks and toolchains like PyTorch and TensorFlow
  • Familiarity with repositories such as DETR, BEVFormer, BEVFusion, SAMv2, Ceres Solver/GTSAM, ORB-SLAM, and VINS-Mono.
  • Experience working with cloud-based data collection and data pipeline systems.
  • AV/ADAS integration or industrial automation experience is a bonus.
  • Graduating between December 2026 and June 2027.
  • Paid US GM Holidays
  • GM Family First Vehicle Discount Program
  • Results-based potential for growth within GM
  • Intern events to network with company leaders and peers