About The Position

Shield AI builds autonomy systems for defense applications, including air, maritime, and space platforms operating in complex and contested environments. We are establishing a centralized AI and Data Platform organization responsible for the infrastructure that underpins autonomy development across Hivemind and other programs. This team owns the systems used to train models, run simulation, manage data, and deploy models to operational environments.

We are seeking a Principal Engineer who will scale an initial architecture into a platform that supports multiple autonomy programs. Success in this role requires disciplined execution: delivering fast iteration for engineering teams while maintaining reliability, cost control, and architectural consistency as the system scales. The Principal Engineer is accountable for ensuring engineers can move efficiently from idea to trained model to deployed capability, and that infrastructure decisions reflect the realities of the domain, including simulation-driven development, continuously evolving multi-modal sensor data, and deployment to constrained, reliability-critical systems.

This role spans the full lifecycle of autonomy development: training foundation models, running large-scale and multi-fidelity simulation, managing training data, evaluating models, and deploying optimized models to edge systems. A key part of the role is defining how these capabilities extend beyond internal use, including establishing how Shield AI delivers AI infrastructure in customer environments across on-premise, cloud, hybrid, and sovereign or nationally constrained settings.

Requirements

  • Experience building and operating ML infrastructure at scale (100+ GPU clusters, distributed systems)
  • Experience defining compute strategy, including on-premise vs cloud tradeoffs, capacity planning, and cost management
  • Strong understanding of ML workloads, including foundation models, RL/MARL, simulation-based training, and fine-tuning
  • Experience building data platforms with dataset versioning, lineage, and cataloging
  • Willingness to go hands-on to debug and resolve system issues when needed

Nice To Haves

  • Experience in defense or classified environments (e.g., air-gapped systems, SCIFs)
  • Experience with simulation-heavy ML systems (robotics, autonomy, or similar domains)
  • Experience deploying and optimizing models for edge hardware
  • Familiarity with HPC systems (schedulers, parallel storage, high-speed networking)

Responsibilities

  • Define and operate the core AI and data platform across training, simulation, data management, evaluation, and deployment.
  • Own where and how workloads run across on-premise, cloud, and hybrid environments. Drive capacity planning, utilization, and cost-per-compute decisions, including support for classified and air-gapped systems.
  • Build infrastructure for distributed training (supervised learning, RL/MARL, foundation models) and large-scale, multi-fidelity simulation. Ensure training and simulation systems operate together without bottlenecks.
  • Ingest and manage multi-modal sensor data (EO, IR, radar, EW, IMU). Establish dataset versioning, data lineage, feature storage, data cataloging, and classification-aware storage and access controls.
  • Establish a consistent workflow for experiment tracking, model registry, artifact provenance, and automated validation. Implement evaluation and V&V gates so models meet defined standards before deployment.
  • Own the pipeline from training to deployment, including model optimization (e.g., distillation, quantization, pruning), deployment to edge systems, monitoring, drift detection, and retraining triggers.
  • Define how AI infrastructure is deployed in customer environments across on-premise, cloud, hybrid, and sovereign settings. Establish a consistent approach that avoids one-off solutions while adapting to operational constraints.
  • Define common tools, interfaces, and workflows across teams. Reduce duplication while maintaining flexibility where needed.
  • Work directly with Hivemind and other autonomy teams to ensure the platform supports real workloads and evolves with program needs.

Benefits

  • Bonus
  • Benefits
  • Equity
  • Temporary benefits package (applicable after 60 days of employment)