Autonomous Solutions
Full-time • Mid Level
51-100 employees

At ASI, we are revolutionizing industries with state-of-the-art autonomous robotics solutions. Across agriculture, construction, landscaping, and logistics, we deliver technologies that enhance safety, productivity, and efficiency. With our core values of Simplicity, Safety, Transparency, Humility, Attention to Detail, and Growth guiding everything we do, we're shaping the future of automation in dynamic markets.

As a Software Development Engineer in Test (SDET) II, you will play a critical role in ensuring the quality, reliability, and real-world readiness of our software and AI-driven autonomous systems. This role blends traditional SDET responsibilities with advanced autonomy and AI validation, focusing on automated testing, performance verification, and system-level validation across simulation, hardware, software, and field environments. You will design scalable test frameworks, expose edge cases, and support safe, reliable deployments of mission-critical systems.

Responsibilities:
  • Define and own the AI-driven testing strategy for autonomy across simulation, hardware, software, and real-world validation.
  • Develop automated verification pipelines that use AI, data-driven analysis, and intelligent test generation to evaluate system performance at scale (see the sketch after this list).
  • Design tests that expose edge cases, failure modes, rare events, and long-tail conditions critical for safe autonomous operation.
  • Integrate testing workflows with model training pipelines, deployment systems, data infrastructure, and robotics platforms.
  • Build metrics, dashboards, and evaluation frameworks that measure reliability, robustness, safety, and regression impacts across model updates.
  • Collaborate with AI researchers, robotics engineers, software developers, and safety teams to ensure testing requirements align with system capabilities and operational constraints.
  • Use simulation tools, digital twins, and scenario generation to replicate diverse operating conditions and evaluate autonomous behaviors.
  • Validate AI performance in hardware-in-the-loop, software-in-the-loop, and real-world testing environments.
  • Develop tools that automate labeling, anomaly detection, and performance triage to accelerate debugging and model improvement.
  • Identify gaps in test coverage, implement continuous improvements in test methodologies, and maintain high verification standards.
  • Support release processes by providing structured validation results, go/no-go recommendations, and risk assessments.
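
For illustration, here is a minimal sketch of the kind of scenario-based regression gate the verification-pipeline bullet above refers to, written in Python with pytest. The scenario names, the cross-track-error metric, the baseline values, and the run_scenario entry point are all hypothetical placeholders; a real pipeline would call into the team's simulation or hardware-in-the-loop harness and pull baselines from prior release data.

    import pytest

    # Baseline metrics from the last approved release (hypothetical values).
    BASELINE_MAX_CROSS_TRACK_ERROR_M = {
        "straight_row_nominal": 0.05,
        "headland_turn_wet_soil": 0.12,
        "obstacle_stop_low_light": 0.08,
    }

    # Regressions larger than this fraction of the baseline fail the gate.
    ALLOWED_REGRESSION = 0.10


    def run_scenario(name: str) -> dict:
        """Stand-in for a call into a simulation or hardware-in-the-loop harness.

        A real implementation would launch the named scenario, wait for it to
        finish, and return aggregated performance metrics.
        """
        canned_results = {
            "straight_row_nominal": {"max_cross_track_error_m": 0.048},
            "headland_turn_wet_soil": {"max_cross_track_error_m": 0.118},
            "obstacle_stop_low_light": {"max_cross_track_error_m": 0.079},
        }
        return canned_results[name]


    @pytest.mark.parametrize("scenario", sorted(BASELINE_MAX_CROSS_TRACK_ERROR_M))
    def test_no_cross_track_regression(scenario: str) -> None:
        # Fail the gate if the new build regresses the tracking metric
        # by more than the allowed margin in any scenario.
        metrics = run_scenario(scenario)
        baseline = BASELINE_MAX_CROSS_TRACK_ERROR_M[scenario]
        limit = baseline * (1.0 + ALLOWED_REGRESSION)
        assert metrics["max_cross_track_error_m"] <= limit, (
            f"{scenario}: {metrics['max_cross_track_error_m']:.3f} m exceeds "
            f"limit {limit:.3f} m (baseline {baseline:.3f} m)"
        )

Running a gate like this in CI is what turns the go/no-go recommendation into an automated, per-scenario signal rather than a manual judgment call.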

Qualifications:
  • Bachelor's degree in Computer Science, Software Engineering, or a related field.
  • 3-5 years of experience in software testing, validation engineering, machine learning engineering, or autonomous systems development.
  • Strong understanding of AI behavior, model evaluation, data pipelines, and real-time system interactions.
  • Hands-on experience with automated testing frameworks, simulation tools, scenario generation, or hardware-in-the-loop validation.
  • Ability to design testing architectures that scale across cloud, embedded, and robotics environments.
  • Experience analyzing metrics, failure cases, regression patterns, and long-tail performance challenges.
  • Ability to collaborate with research, robotics, infrastructure, and product teams to define and execute complex testing plans.
  • Strong programming skills in languages used for verification and automation, such as Python, C++, or similar.
  • Experience with CI/CD systems, version control, and structured testing workflows.
  • Strong problem-solving and analytical capabilities with a focus on reliability and safety.
  • Ability to communicate testing results, risks, and recommendations clearly to technical and non-technical stakeholders.