Forward Deployed Engineer, RL Environments

Labelbox
San Francisco, CA (Hybrid)

About The Position

We’re hiring a Forward Deployed Engineer to own the design, development, and operationalization of reinforcement learning environments. You’ll build the sandboxed, reproducible execution environments that AI agents interact with during training and evaluation—things like terminal-based task benchmarks, browser and computer-use environments, and tool-augmented agentic workspaces.

This is a hands-on engineering role. You’ll write production-quality infrastructure code, integrate with open-source RL tooling, and work closely with our data operations team to ensure environments are robust, observable, and ready for human annotators and model agents alike. You won’t be doing ML research, but you’ll need to deeply understand how RL training loops consume environments and where the bottlenecks live.

Requirements

  • 2+ years of professional software engineering experience, with strong fundamentals in Python and at least one systems-level language (Go, Rust, C++)
  • Demonstrated experience with containerization and sandboxing (Docker, Podman, Firecracker, or similar) in production or near-production contexts
  • Familiarity with RL concepts: MDPs, reward shaping, episode structure, observation/action spaces. You don’t need to have trained models, but you need to understand what an environment must provide to an RL training loop
  • Experience building or maintaining developer tooling, CLI tools, or infrastructure automation
  • Comfort working with browser automation frameworks or terminal interaction tooling
  • Strong debugging instincts—you can trace failures across process boundaries, container layers, and network calls
  • Ability to read and implement from academic papers and open-source benchmark repositories without extensive hand-holding
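To make the "what an environment must provide to an RL training loop" requirement concrete, here is a minimal sketch of an episodic environment following the Gymnasium-style `reset`/`step` contract. The toy task (guessing a hidden integer) and the class name are illustrative inventions, not part of this role; the sketch deliberately avoids importing the Gymnasium library so it stands alone:

```python
import random


class GuessNumberEnv:
    """Toy episodic environment following the Gymnasium reset/step contract.

    Observation: a hint after each guess (-1 too low, 0 correct, +1 too high).
    Action: an integer guess in [0, 9].
    Reward: 1.0 on a correct guess, -0.1 per wrong step.
    """

    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self._target = None
        self._steps = 0

    def reset(self, seed=None):
        # Seeding here is what makes rollouts reproducible -- essential
        # for deterministic task rollouts and auditable evaluations.
        self._rng = random.Random(seed)
        self._target = self._rng.randint(0, 9)
        self._steps = 0
        observation, info = 0, {}
        return observation, info

    def step(self, action):
        self._steps += 1
        if action == self._target:
            obs, reward, terminated = 0, 1.0, True
        else:
            obs = -1 if action < self._target else 1
            reward, terminated = -0.1, False
        # Truncation (step budget exhausted) is distinct from termination
        # (task solved or failed) in the Gymnasium convention.
        truncated = self._steps >= self.max_steps
        return obs, reward, terminated, truncated, {}
```

The point of the interface is that a training loop only ever touches `reset` and `step` plus the declared observation/action spaces, so the same contract can wrap a terminal, a browser session, or a container behind the scenes.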

Nice To Haves

  • Direct experience building or contributing to RL environments (Gymnasium/Gym, PettingZoo, or custom environment implementations)
  • Experience with agentic AI evaluation frameworks (SWE-bench, WebArena, OSWorld, TerminalBench, or similar)
  • Familiarity with GCP or AWS infrastructure (Compute Engine, ECS/EKS, Cloud Build)
  • Prior work at an AI data company, ML platform company, or AI research lab
  • Contributions to open-source projects in the RL, agents, or dev-tools space

Responsibilities

  • Design, build, and maintain sandboxed RL environments for agentic AI training—including terminal emulators, browser automation harnesses, computer-use simulators, and tool-augmented workspaces (e.g., environments built on frameworks like TerminalBench, OSWorld, and Tau-bench)
  • Develop reproducible, containerized execution environments (Docker, VMs, lightweight sandboxes) that support deterministic task rollouts and reward signal collection
  • Integrate with and extend open-source agentic tooling and custom CLI/API harnesses to enable multi-step agent interaction
  • Build instrumentation and observability layers—structured logging, trajectory capture, state snapshotting—so training runs and human annotation sessions produce clean, auditable data
  • Collaborate with data operations to design task curricula and evaluation protocols that stress-test model capabilities across environment types
  • Own environment deployment and reliability: CI/CD pipelines, automated testing of environment configurations, and monitoring for drift or breakage across versions
  • Rapidly prototype new environment types as client and internal requirements evolve, moving from spec to working system in days, not weeks
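As one way to picture the trajectory-capture responsibility above, here is a hypothetical sketch of an append-only JSONL recorder. The class and field names are assumptions for illustration, not an existing internal tool; the design choice JSONL represents is that each step is a self-describing line, so logs stay auditable, diffable across environment versions, and easy to replay:

```python
import io
import json
import time


class TrajectoryRecorder:
    """Hypothetical append-only JSONL trajectory log.

    Each recorded step becomes one JSON object per line, written to any
    writable text stream (a file, an io.StringIO buffer, etc.).
    """

    def __init__(self, stream):
        self.stream = stream

    def record(self, episode_id, step, observation, action, reward):
        event = {
            "episode_id": episode_id,
            "step": step,
            "observation": observation,
            "action": action,
            "reward": reward,
            "ts": time.time(),  # wall-clock timestamp for auditing
        }
        # sort_keys keeps lines byte-stable for a given event, which
        # makes diffing two runs of the same task meaningful.
        self.stream.write(json.dumps(event, sort_keys=True) + "\n")


def read_trajectory(stream):
    """Parse a JSONL trajectory back into a list of step dicts."""
    return [json.loads(line) for line in stream if line.strip()]
```

For example, recording into an in-memory `io.StringIO()` buffer and reading it back yields the original step dicts in order, which is the property replay and drift-monitoring tooling would build on.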


What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Education Level: None specified
Number of Employees: 101-250
