Backend LLM / MCP Engineer

Recruiting From Scratch · San Francisco, CA
Posted 1 day ago · $160,000 – $220,000 · Onsite

About The Position

Our client is a fast-growing, venture-backed AI infrastructure startup building one of the most widely adopted open-source LLM gateways in the ecosystem. Their platform standardizes and unifies 100+ LLM APIs behind a single, consistent OpenAI-compatible interface, enabling developers and enterprises to integrate seamlessly across providers. Backed by leading early-stage investors and generating multi-million-dollar ARR with strong growth, the company is profitable and scaling quickly. With a small, high-caliber team based in San Francisco, they are defining the interoperability layer for the modern AI stack. This is a rare opportunity to join an early team shaping core AI infrastructure used by thousands of developers worldwide. As a Backend LLM / MCP Engineer, you will play a foundational role in building and scaling the interoperability layer for large language models.

Requirements

  • 1+ years of backend engineering experience building production systems
  • Strong proficiency in Python and experience with modern backend frameworks (e.g., FastAPI)
  • Experience designing and integrating APIs at scale
  • Exposure to distributed systems, performance optimization, or high-throughput services
  • Comfortable working in small, fast-moving teams with high ownership

Nice To Haves

  • Experience maintaining or contributing to open-source projects is a plus
  • Background in AI/ML infrastructure, developer tooling, or API platforms is highly valued
  • Startup experience (founder or early employee) is a strong plus
  • Experience working with LLM APIs (OpenAI, Anthropic, Bedrock, Azure, Vertex, etc.) is preferred
  • Familiarity with asynchronous networking (e.g., httpx, aiohttp)
  • Experience with Postgres, Redis, cloud storage (S3, GCS), or observability tools
  • Background in API standardization, data pipelines, or developer SDKs
  • Experience scaling systems handling millions of events or logs

Responsibilities

  • Design and implement transformations that map OpenAI-compatible API requests to provider-specific LLM APIs
  • Add and maintain support for new LLM providers and evolving API specifications
  • Handle provider-specific edge cases, performance considerations, and streaming constraints
  • Build scalable logging, cost tracking, and spend aggregation systems across millions of API calls
  • Improve reliability and performance of high-throughput backend services
  • Contribute directly to a widely adopted open-source project
  • Collaborate closely with the founding team on architecture and product direction
  • Engage with users and developers to understand real-world integration needs
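The first responsibility above, mapping OpenAI-compatible requests onto provider-specific APIs, can be sketched in a few lines of Python. The function below is a hypothetical illustration, not the client's actual codebase; the field shapes loosely follow the public OpenAI chat-completions and Anthropic Messages formats, and the default `max_tokens` value is an assumption.

```python
def to_anthropic(openai_request: dict) -> dict:
    """Translate an OpenAI-style chat request into an Anthropic-style
    payload. Anthropic takes system prompts as a top-level `system`
    field rather than as a message, and requires `max_tokens`."""
    system_parts = []
    messages = []
    for msg in openai_request.get("messages", []):
        if msg["role"] == "system":
            # Hoist system messages out of the message list.
            system_parts.append(msg["content"])
        else:
            messages.append({"role": msg["role"], "content": msg["content"]})

    payload = {
        "model": openai_request["model"],
        "messages": messages,
        # Anthropic requires max_tokens; 1024 is an arbitrary fallback here.
        "max_tokens": openai_request.get("max_tokens", 1024),
    }
    if system_parts:
        payload["system"] = "\n".join(system_parts)
    return payload


request = {
    "model": "claude-3-5-sonnet-20240620",
    "messages": [
        {"role": "system", "content": "You are terse."},
        {"role": "user", "content": "Hello"},
    ],
}
print(to_anthropic(request)["system"])  # -> You are terse.
```

Real gateways also have to handle streaming deltas, tool-call formats, and per-provider error codes, which is where most of the edge-case work described above lives.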

Benefits

  • Meaningful early-stage equity (0.5% – 3% range depending on experience)
  • Full-time position
  • Onsite collaboration in San Francisco
  • High ownership and direct exposure to leadership
  • Opportunity to shape foundational AI infrastructure used globally
© 2024 Teal Labs, Inc