About The Position

In 1600, William Gilbert published De Magnete—the first systematic study of magnetism. He didn't just theorize; he built instruments, ran experiments, and shared what he learned so that others could go further.

At Mercury, we're making a deliberate, company-wide bet on AI. Frontier users are already pushing boundaries—building agents, automating workflows, moving fast. But they're doing it in silos. This role exists to change that: to take those scattered experiments and turn them into shared infrastructure, shared context, and shared capability. The goal is a multiplier effect—where the most ambitious AI work inside Mercury lifts the velocity of everyone else.

You'll join a team that has already started building Mercury's internal AI platform and enablement layer. Your work will be to extend, harden, and scale what's in motion, and to help partner teams adopt it.

Requirements

  • You have 5+ years of backend development experience in complex production systems—you've built things that other engineers depended on.
  • You're fluent across programming languages and can navigate platform engineering, infrastructure, and developer tooling without needing a map.
  • You have hands-on experience building LLM-powered systems—RAG pipelines, agents, eval frameworks—and have shipped at least one of these to production.
  • You understand the real tradeoffs in AI deployments: cost modeling, observability, latency, and safety—not just the exciting parts.
  • You're high-agency and self-directed: you can operate effectively without tightly defined scope, find the highest-leverage work, and get it done.
  • You communicate clearly with technical and non-technical audiences—you can explain what you built and why it matters.

Responsibilities

  • Extend the AI platform foundation
      ◦ Build and evolve MCP servers that connect internal systems and data sources into a coherent interface for agents and engineers.
      ◦ Expand and operate our LLM gateway infrastructure: routing, rate limiting, cost attribution, and observability across teams.
      ◦ Turn early patterns into durable defaults: shared prompt libraries, guardrails, and policy-as-code so teams can move fast safely.
  • Strengthen the shared company knowledge layer
      ◦ Shape and maintain structured context artifacts—clean, reliable, agent-consumable—so LLMs working in Mercury's systems can reason accurately about our domain.
      ◦ Improve internal knowledge discoverability and retrieval so both humans and agents can quickly find accurate answers.
      ◦ Partner with domain teams to standardize key sources of truth, and keep them fresh.
  • Enable faster prototyping and iteration across the company
      ◦ Build and refine sandbox environments and tooling that let engineers experiment with AI safely and at speed.
      ◦ Create self-service scaffolding so non-engineers—PMs, ops, finance—can prototype and deploy AI-powered workflows with minimal hand-holding.
      ◦ Build playgrounds and evaluation harnesses so internal AI agents can be tested and iterated in controlled environments before hitting production.

What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed
Number of Employees: 1-10 employees
