Andromeda Robotics · Posted 22 days ago
$150,000 - $250,000/Yr
Full-time • Mid Level
San Francisco, CA

Andromeda Robotics is seeking a Simulation and Test Engineer to build Andromeda's testing infrastructure for our conversational AI systems and embodied character behaviours. Your immediate focus will be creating robust test systems for Abi's voice-to-voice chatbot, social awareness perception, and gesture motor control. As this infrastructure matures, you'll extend it into simulation environments for generating synthetic training data for character animation and gesture models. You'll work at the intersection of our character software, robotics, perception, conversational AI, controls, and audio engineering teams. You'll collaborate with product owners and technical specialists to define requirements, integrate systems, and ensure quality across our AI/ML stack.

Phase 1: Build The Test Foundation
  • Define and stand up synthetic test environments for our AI/ML conversational stack
  • Conversational AI testing: voice-to-voice chat quality, response appropriateness, tool-calling accuracy
  • Memory system testing: context retention, recall accuracy, conversation coherence
  • Audio modelling and testing: multi-speaker scenarios, room acoustics, voice activity detection
  • Perception system testing: social awareness (face detection, gaze tracking, person tracking)
  • Gesture appropriateness testing: working with our Controls/ML team, create test infrastructure to validate the appropriateness of Abi's body gestures
  • CI/CD and automated regression testing for all AI/ML subsystems
  • Custodian of quality metrics: if they don't exist, work with stakeholders to elicit use cases, derive requirements, and establish measurable quality metrics
  • Requirements formalisation: you're skilled at gathering, documenting, and tracing requirements back to test cases

Phase 2: Scale To ML Training Infrastructure
Our approach to gesture generation requires high-fidelity synthetic interaction data at scale. You'll investigate and build the infrastructure to generate this data, working closely with our character software team to define requirements and validate approaches.
  • Extend test environments into training data generation pipelines
  • Investigate and stand up simulation tools (e.g. Unity, Unreal Engine, Isaac Sim) to support our machine learning pipeline with synthetic data and validation infrastructure
  • Build infrastructure for fine-tuning character animation models on simulated multi-actor scenarios
  • Enable ML-generated gesture development to augment hand-crafted animation workflows
  • Create virtual environments with diverse social interaction scenarios for training and evaluation

Success In This Role Looks Like
In months 1-3, you'll stabilise our conversational system with automated regression tests and measurable quality benchmarks. By month 6, you'll deliver an integrated simulation environment enabling rapid testing and iteration across our AI/ML stack. You'll design tests that push our systems beyond their limits and find what's brittle. Through trade studies and make-vs-buy decisions, you'll establish the infrastructure, set up automated regression tests, and trace test cases back to high-level requirements. You'll be the final guardian, verifying that our AI and machine learning systems work as intended before integration with Abi's physical platform. Your work will directly impact the speed and quality of our development, ensuring that every software build is robust, reliable, and safe.
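To give a concrete flavour of the Phase 1 work, here is a minimal sketch of a pytest-style regression test for one conversational quality metric (tool-calling accuracy). This is illustrative only, not Andromeda's actual code: the golden set, the `predict_tool_call` stand-in, and the `TOOL_CALL_ACCURACY_FLOOR` threshold are all hypothetical.

```python
"""Sketch: a CI regression gate on tool-calling accuracy.
All names and data here are hypothetical, for illustration only."""

# Hypothetical golden set: user utterance -> expected tool call
GOLDEN_TOOL_CALLS = {
    "set a timer for five minutes": "set_timer",
    "what's the weather in SF": "get_weather",
    "remind me to call mum": "create_reminder",
}

TOOL_CALL_ACCURACY_FLOOR = 0.9  # assumed regression threshold


def predict_tool_call(utterance: str) -> str:
    """Stand-in for the real chatbot's tool-calling head."""
    keywords = {
        "timer": "set_timer",
        "weather": "get_weather",
        "remind": "create_reminder",
    }
    for word, tool in keywords.items():
        if word in utterance:
            return tool
    return "none"


def tool_call_accuracy(golden: dict) -> float:
    """Fraction of golden utterances routed to the expected tool."""
    hits = sum(predict_tool_call(u) == t for u, t in golden.items())
    return hits / len(golden)


def test_tool_call_accuracy_regression():
    """Fail the CI build if accuracy drops below the agreed floor."""
    assert tool_call_accuracy(GOLDEN_TOOL_CALLS) >= TOOL_CALL_ACCURACY_FLOOR
```

The same shape (golden set, scoring function, agreed floor) extends to the other metrics listed above, such as memory recall accuracy or response appropriateness.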

  • Architect and Build: Design, develop, and maintain scalable test infrastructure for conversational AI, perception, and gesture control systems
  • Own Testing Pipeline: Develop a robust CI/CD pipeline for automated regression testing, enabling rapid iteration and guaranteeing quality before deployment
  • Develop Test Scenarios: Create diverse audio environments, multi-actor social scenarios, and edge cases to rigorously test Abi's conversational and social capabilities
  • Model with Fidelity: Implement accurate models of Abi's hardware stack (cameras, microphone array, upper body motion) as needed for test and simulation scenarios
  • Enable Future ML Training: Design test infrastructure with an eye towards evolution into a simulation platform for generating synthetic training data for character animation and gesture models
  • Integrate and Collaborate: Work closely with the robotics, AI, and software teams to seamlessly integrate their stacks into the test infrastructure and define testing requirements
  • Analyse and Improve: Develop metrics, tools, and dashboards to analyse test data, identify bugs, track performance, and provide actionable feedback to the engineering teams
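As a sketch of the "Analyse and Improve" responsibility, the snippet below compares a quality metric across builds and flags regressions that a dashboard could surface. It is a hypothetical illustration, not the team's actual tooling; the metric name, build IDs, and `tolerance` value are all assumptions.

```python
"""Sketch: flagging metric regressions across builds for a dashboard.
All names, values, and thresholds are hypothetical."""

from dataclasses import dataclass


@dataclass
class MetricSample:
    build_id: str
    name: str      # e.g. "memory_recall_accuracy" (hypothetical)
    value: float


def detect_regressions(history, tolerance=0.02):
    """Return build IDs where a metric fell more than `tolerance`
    below the previous build's value for the same metric."""
    regressions = []
    last = {}  # metric name -> most recent value
    for sample in history:
        prev = last.get(sample.name)
        if prev is not None and sample.value < prev - tolerance:
            regressions.append(sample.build_id)
        last[sample.name] = sample.value
    return regressions


history = [
    MetricSample("b101", "memory_recall_accuracy", 0.91),
    MetricSample("b102", "memory_recall_accuracy", 0.92),
    MetricSample("b103", "memory_recall_accuracy", 0.84),  # drop > 0.02
]
print(detect_regressions(history))  # ['b103']
```

In practice the history would come from CI artefacts or a metrics store rather than an in-memory list, but the per-build comparison logic stays the same.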
  • Bachelor's or Master's degree in Computer Science, Robotics, Engineering, or a related field
  • 5+ years of professional experience testing complex AI/ML systems (conversational AI, perception systems, or embodied AI)
  • Strong programming proficiency in Python (essential); C++ experience valuable
  • Hands-on experience with LLM testing, voice AI systems, or chatbot evaluation frameworks
  • Understanding of audio processing, speech recognition, and/or computer vision fundamentals
  • Experience with testing frameworks and CI/CD tools (pytest, Jenkins, GitHub Actions, etc.)
  • Familiarity with ML evaluation metrics and experimental design
  • A proactive, first-principles thinker who is excited by the prospect of owning a critical system at an early-stage startup
  • Experience with simulation platforms (e.g. Unity, Unreal Engine, NVIDIA Isaac Sim, Gazebo) and physics engines
  • Experience with character animation systems, motion capture data, or gesture generation
  • Knowledge of reinforcement learning, imitation learning, or synthetic data generation for training ML models
  • Experience with 3D modelling tools and game engine content creation
  • Understanding of ROS2 for robotics integration
  • Knowledge of sensor modelling techniques for cameras and audio
  • Experience building and managing large-scale, cloud-based simulation infrastructure
  • PhD in a relevant field