Software Engineer, Agents Runtime

Glean
San Francisco, CA
$140,000 - $265,000

About The Position

The Agents Runtime team builds the low‑latency, reliable, and secure foundation that powers Glean’s AI agents and assistant experiences at scale. You’ll design and operate core runtime services for multi‑turn orchestration, tool calling, model routing, memory, streaming, and safety. You’ll work across distributed systems, production observability, and ML infra integrations to deliver an experience that feels instant, accurate, and trustworthy — while optimizing cost and reliability.

Requirements

  • 3+ years of software engineering experience building production distributed systems or cloud‑native applications.
  • BS/BA in Computer Science or related field, or equivalent practical experience.
  • Strong coding skills in at least one of: Python, Go, Java, or C++, with a focus on reliability, performance, and tests.
  • Product‑minded: you prioritize customer impact, clear SLAs/SLOs, and pragmatic iteration.
  • Ownership‑driven with a positive, proactive attitude; comfortable leading projects or learning from battle‑tested engineers.
  • Experience operating services on Kubernetes and at least one major cloud (e.g., GCP, AWS, or Azure).
  • Familiarity with event/streaming systems (e.g., Pub/Sub, Kafka), caching (e.g., Redis), and data stores for low‑latency paths.
  • Practical understanding of LLM/agents building blocks: tool/function calling, structured outputs, streaming, and model selection/routing.
  • Strong observability and debugging skills: tracing (e.g., OpenTelemetry), metrics, dashboards, and production forensics.

Nice To Haves

  • Background in one or more of the following: policy/guardrails, multi‑tenant isolation, rate limiting, concurrency control, or cost optimization.

Responsibilities

  • Own impactful runtime problems end‑to‑end — from architecture and design to production launch and ongoing reliability.
  • Build and evolve core services for session lifecycle, streaming responses (e.g., gRPC/WebSockets), structured tool execution, memory/state, and policy/guardrails.
  • Design for performance, correctness, and cost: reduce p50/p95 latency, improve tail behavior, and optimize token/tool budgets.
  • Integrate with leading LLM providers (e.g., OpenAI, Anthropic, Google Gemini) and internal evaluation frameworks to improve quality and predictability.
  • Harden the platform with fault isolation, retries, timeouts, circuit‑breaking, backpressure, and graceful degradation.
  • Instrument deep observability (tracing, metrics, logs) and create playbooks/SLOs for high availability and on‑call excellence.
  • Collaborate closely with product, quality, and application teams to prioritize the most impactful roadmap investments.

Benefits

  • Comprehensive benefits package including competitive compensation, Medical, Vision, and Dental coverage.
  • Generous time-off policy.
  • Opportunity to contribute to your 401k plan.
  • Home office improvement stipend.
  • Annual education and wellness stipends.
  • Vibrant company culture through regular events.
  • Healthy lunches daily.