Simulation & Test Engineer (US Based)

Andromeda Robotics
San Francisco, CA
$150,000 - $205,000

About The Position

At Andromeda Robotics, we’re not just imagining the future of human-robot relationships; we’re building it. Abi is the first emotionally intelligent humanoid companion robot, designed to bring care, conversation, and joy to the people who need it most. Backed by tier-1 investors, and with customers already deploying Abi across aged care and healthcare, we’re scaling fast, and we’re doing it with an engineering-first culture that’s obsessed with pushing the limits of what’s possible. This is a rare moment to join: we’re post-technical-validation, pre-ubiquity, and building out the team that will take Abi from early access to global scale.

We are looking for a creative and driven Simulation and Test Engineer to own our simulation and test infrastructure and to develop the acceptance criteria that underpin both Abi’s autonomous navigation and its conversational AI and embodied behaviours. Deep expertise in at least one of these areas is essential; experience across both is ideal. Your work will lay the foundations for extending these simulation environments to generate synthetic data for our Machine Learning (ML) models.

Requirements

  • Bachelor’s degree in Computer Science, Robotics, Engineering, or a related field (Master’s preferred but not required).
  • 5+ years of professional experience building simulation and/or test infrastructure for complex systems such as robots, autonomous vehicles, drones, conversational AI systems, perception systems or embodied AI.
  • Strong programming proficiency in Python (essential) and C++ (valuable).
  • Hands-on experience with one or more robotics simulation platforms, e.g.:
      • NVIDIA Isaac Lab / Isaac Sim, Gazebo, CARLA, AirSim, or similar
      • Physics engines such as PhysX, MuJoCo, etc.
  • Solid understanding of core robotics principles: kinematics, dynamics, perception and control.
  • Experience testing AI/ML systems, ideally one or more of:
      • LLM-based or voice-based conversational systems
      • Audio/speech pipelines
      • Computer vision or perception models
      • Embodied / interactive AI behaviours, e.g. autonomous systems
  • Experience with testing frameworks and CI/CD tools (e.g. pytest, Jenkins, GitHub Actions, GitLab CI) for automated regression testing.
  • Familiarity with ML evaluation metrics and basic experimental design.
  • Demonstrated strength in requirements gathering, documentation and traceability from requirements → test cases → pass/fail criteria (see the sketch after this list).
  • A proactive, first-principles thinker who is excited by the prospect of owning a critical system at an early-stage startup.
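
The bullets above on automated regression testing and requirements traceability are concrete enough to sketch. The following minimal pytest example is illustrative only: run_sim_scenario is a hypothetical stand-in for a harness that would actually drive a simulator, and the requirement IDs, scenario names and thresholds are invented.

    import pytest

    def run_sim_scenario(scenario: str) -> dict:
        # Hypothetical stub: a real harness would launch the simulator,
        # run the named scenario and return the recorded metrics.
        return {"reached_goal": True, "min_clearance_m": 0.42, "duration_s": 38.0}

    # Each case carries a requirement ID so a failure traces straight
    # back to the requirement it covers (IDs are invented for illustration).
    CASES = [
        ("REQ-NAV-001", "corridor_static_obstacles", 0.30, 60.0),
        ("REQ-NAV-002", "lobby_dynamic_pedestrians", 0.30, 90.0),
    ]

    @pytest.mark.parametrize("req_id,scenario,min_clearance,max_duration", CASES)
    def test_navigation_regression(req_id, scenario, min_clearance, max_duration):
        result = run_sim_scenario(scenario)
        assert result["reached_goal"], f"{req_id}: goal not reached in {scenario}"
        assert result["min_clearance_m"] >= min_clearance, f"{req_id}: clearance below limit"
        assert result["duration_s"] <= max_duration, f"{req_id}: scenario timed out"

Wired into a CI tool such as GitHub Actions or Jenkins, a suite of this shape becomes the automated regression gate the role describes.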

Nice To Haves

  • Experience using simulation for ML applications, such as reinforcement learning, imitation learning, or synthetic data generation.
  • Experience with character animation systems, motion capture pipelines or gesture generation for embodied agents.
  • Strong experience with 3D modelling, game engines and content generation (Unity, Unreal Engine).
  • Knowledge of sensor modelling techniques for cameras, LiDAR and audio (microphone arrays, room acoustics); a minimal example follows this list.
  • Experience building and managing large-scale, cloud-based simulation or test infrastructure.
  • Experience with ROS/ROS2 integration in human-rated environments or regulated domains; exposure to standards such as ISO 13482 for personal care robots.
  • Experience working with robots or autonomous systems in human-centric environments (healthcare, aged care, hospitality, etc.).
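
To give a flavour of the sensor-modelling bullet above, here is a minimal far-field model of a two-microphone array; the spacing, angle and function name are illustrative assumptions, not a description of Abi’s actual audio stack.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

    def expected_tdoa(mic_spacing_m: float, source_angle_rad: float) -> float:
        # Far-field model: a plane wave arriving from source_angle_rad
        # (0 = broadside) hits two mics mic_spacing_m apart with this delay.
        return mic_spacing_m * np.sin(source_angle_rad) / SPEED_OF_SOUND

    # Example: a speaker 30 degrees off broadside of a 10 cm mic pair.
    tdoa = expected_tdoa(0.10, np.deg2rad(30.0))
    print(f"expected TDOA: {tdoa * 1e6:.1f} microseconds")  # ~145.8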

Responsibilities

  • Architect & Build Simulation and Test Platforms
      • Design, develop and maintain a scalable, high-fidelity simulation platform for Abi that supports both navigation and embodied interaction use cases.
      • Own sim-to-real and test-to-deployment: Develop robust CI/CD pipelines for automated testing in simulation and synthetic test environments, enabling rapid iteration and guaranteeing software quality before deployment onto our physical robots.
      • Model with fidelity: Implement accurate models of Abi’s hardware, including sensors (cameras, microphones, LiDAR, etc.), actuators, kinematics and upper-body motion, as needed for both navigation and interaction scenarios.
  • Develop Worlds, Scenarios and Test Suites
      • Develop virtual worlds and test scenarios:
          • Navigation-focused environments (indoor facilities, dynamic human traffic, obstacles, edge cases)
          • Conversational & social interaction scenarios (multi-speaker audio scenes, social group configurations, gesture contexts)
      • Conversational AI & memory testing: Build synthetic test environments for:
          • Voice-to-voice conversational quality and response appropriateness
          • Tool-calling / action selection behaviour
          • Memory systems: context retention, recall accuracy, conversation coherence
      • Perception & audio testing: Create test suites and synthetic scenes for:
          • Social awareness (face detection, gaze tracking, person tracking)
          • Audio modelling (multi-speaker scenes, room acoustics, noise conditions, VAD)
      • Gesture / embodiment testing: Working with Controls/ML, create infrastructure to validate that Abi’s body gestures and animations are appropriate, synchronised and safe in real and simulated interactions.
  • Own Quality, Metrics and Regression
      • Act as custodian of quality metrics: where metrics don’t yet exist, work with stakeholders to elicit use cases, derive requirements, and define measurable quality metrics for navigation, conversational AI, audio, perception and gesture (a minimal metric sketch follows this list).
      • Formalise requirements and traceability: Capture requirements and trace them through to test cases and automated regression suites.
      • Analyse and improve: Build dashboards, tools and analysis pipelines to mine test and simulation data, identify bugs, track performance over time, and feed actionable insights back to engineering teams.
  • Scale to Synthetic Data & ML Training
      • Extend test environments into training data generation pipelines, working closely with character and autonomy teams.
      • Investigate and stand up simulation tools (e.g. Unity, Unreal Engine, Isaac) to generate high-fidelity synthetic interaction data at scale for:
          • Character animation and gesture models
          • Perception models (vision, audio, social awareness)
          • Navigation & planning in human-centred environments
      • Enable ML-generated gesture and navigation behaviours to augment hand-crafted workflows, and help validate them in rich, multi-actor simulated scenarios.
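
As an example of the kind of measurable quality metric the regression bullets above call for, here is a minimal memory-recall score for scripted conversations; TurnResult and the fact encoding are hypothetical, chosen only to show a requirement reduced to a thresholdable number.

    from dataclasses import dataclass

    @dataclass
    class TurnResult:
        # One scripted conversation turn (hypothetical structure).
        expected_facts: set   # facts the agent should recall this turn
        recalled_facts: set   # facts it actually produced

    def recall_accuracy(turns: list) -> float:
        # Fraction of expected facts recalled across the whole conversation.
        expected = sum(len(t.expected_facts) for t in turns)
        hits = sum(len(t.expected_facts & t.recalled_facts) for t in turns)
        return hits / expected if expected else 1.0

    # Example: 2 of 3 expected facts recalled over two turns -> 0.67.
    turns = [
        TurnResult({"name=June", "room=12"}, {"name=June", "room=12"}),
        TurnResult({"medication=9am"}, set()),
    ]
    print(f"recall accuracy: {recall_accuracy(turns):.2f}")

A score like this can be gated in CI and tracked over time on the dashboards described above.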