Andromeda Robotics is seeking a Simulation and Test Engineer to build Andromeda's testing infrastructure for our conversational AI systems and embodied character behaviours. Your immediate focus will be creating robust test systems for Abi's voice-to-voice chatbot, social awareness perception, and gesture motor control. As this infrastructure matures, you'll extend it into simulation environments for generating synthetic training data for character animation and gesture models.

You'll work at the intersection of our character software, robotics, perception, conversational AI, controls, and audio engineering teams. You'll collaborate with product owners and technical specialists to define requirements, integrate systems, and ensure quality across our AI/ML stack.

Phase 1: Build The Test Foundation

- Define and stand up synthetic test environments for our AI/ML conversational stack
- Conversational AI testing: voice-to-voice chat quality, response appropriateness, tool-calling accuracy
- Memory system testing: context retention, recall accuracy, conversation coherence
- Audio modelling and testing: multi-speaker scenarios, room acoustics, voice activity detection
- Perception system testing: social awareness (face detection, gaze tracking, person tracking)
- Gesture appropriateness testing: working with our Controls/ML team, create test infrastructure to validate that Abi's body gestures are appropriate to the conversational and social context
- CI/CD and automated regression testing for all AI/ML subsystems
- Custodian of quality metrics: if they don't exist, work with stakeholders to elicit use cases, derive requirements, and establish measurable quality metrics
- Requirements formalisation: you're skilled at gathering, documenting, and tracing requirements back to test cases

Phase 2: Scale To ML Training Infrastructure

Our approach to gesture generation requires high-fidelity synthetic interaction data at scale. You'll investigate and build the infrastructure to generate this data, working closely with our character software team to define requirements and validate approaches.

- Extend test environments into training data generation pipelines
- Investigate and stand up simulation tools (e.g. Unity, Unreal Engine, Isaac Sim) to support our machine learning pipeline with synthetic data and validation infrastructure
- Build infrastructure for fine-tuning character animation models on simulated multi-actor scenarios
- Enable ML-generated gesture development to augment hand-crafted animation workflows
- Create virtual environments with diverse social interaction scenarios for training and evaluation

Success In This Role Looks Like

In months 1-3, you'll stabilise our conversational system with automated regression tests and measurable quality benchmarks. By month 6, you'll deliver an integrated simulation environment enabling rapid testing and iteration across our AI/ML stack.

You'll design tests that push our systems beyond their limits and find what's brittle. Through trade studies and make-vs-buy decisions, you'll establish the infrastructure, set up automated regression tests, and trace test cases back to high-level requirements. You'll be the final guardian, verifying that our AI and machine learning systems work as intended before integration with Abi's physical platform. Your work will directly impact the speed and quality of our development, ensuring that every software build is robust, reliable, and safe.