AI Engineer - AI Platform

Traversal · New York, NY
$150,000 - $300,000

About The Position

Traversal is the AI Site Reliability Engineer (SRE) for the enterprise—already trusted by some of the largest companies in the world to troubleshoot, remediate, and even prevent the most complex production incidents. Our mission is to free engineers from endless firefighting and enable them to focus on creative, high-impact work. Our roots remain deeply embedded in AI research, and we’re channeling that scientific rigor and creativity into building the premier AI agent lab for the enterprise. What we’re proudest of is assembling the most talented—and nicest—group of individuals to take on one of the hardest problems for AI to solve: from researchers out of MIT, Harvard, and Berkeley to world-class engineers from industry, including Citadel Securities, Cockroach Labs, Datadog, DE Shaw, Meta, Hebbia, Perplexity, Glean, Pinecone, and more. Without the entire team, none of this would be possible.

Requirements

  • Strong system design skills for distributed systems.
  • Proven production-scale software engineering experience.
  • Experience with LLM-based applications and/or multi-agent systems.
  • Strong data modeling skills and a track record of writing clean, maintainable code.
  • Collaborative, impact-driven mindset and ability to work across research and engineering teams.

Nice To Haves

  • Knowledge of software incidents and production SRE workflows.
  • Prior experience with AI benchmarking or evaluation systems.
  • Experience creating quantitative scoring systems or benchmarks in new problem domains.
  • Familiarity with observability stacks (logs, metrics, traces) and telemetry systems.
  • Background in agentic architectures, orchestration frameworks, or applied AI research.

Responsibilities

  • Design and build agent frameworks, orchestration layers, and developer tooling for Traversal’s AI agents.
  • Architect scalable distributed systems to support real-time workloads over petabytes of heterogeneous telemetry data.
  • Build live evaluation pipelines, automated scoring systems, and benchmarks to measure and drive AI performance.
  • Integrate evaluation systems into the developer lifecycle to create a fast research-to-production loop.
  • Surface evaluation signals and benchmarks to customers as a core product capability.
  • Partner with research scientists to prototype and productionize agentic architectures.
  • Own observability, latency, and reliability for agents in production.
  • Evolve and scale the agent + evaluation platform as the backbone of Traversal’s AI systems.

Benefits

  • Competitive compensation
  • Startup equity
  • Health insurance
  • Flexible time off
  • In-office snacks