About The Position

Langfuse is the open-source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management. We are now part of ClickHouse.

We're building the "Datadog" of this category: model capabilities continue to improve, but building useful applications remains hard, in startups and enterprises alike. Langfuse is the largest open-source solution in this space, trusted by 19 of the Fortune 50, with >2k customers, >26M monthly SDK downloads, and >6M Docker pulls.

We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue. We were previously backed by Y Combinator, Lightspeed, and General Catalyst.

We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We are also hiring for engineering in EU timezones and expect one week per month in our Berlin office.

This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).

  • This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.
  • Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks you'll produce videos to announce what you've shipped to the community. You'll own the full delivery end to end.
  • We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.
  • You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company.
  • We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. If you wonder what to build next, our users are a Slack message or a GitHub Discussions post away.
  • You're on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just in time.

Requirements

  • Senior experience in a customer-facing technical role: TAM, Solutions Engineer, Solutions Architect, Forward Deployed Engineer, Customer Success Engineer, or a similar role where you owned outcomes.
  • Strong technical foundation: you can debug integrations and reason about distributed systems, APIs/SDKs, and cloud infrastructure.
  • Demonstrated work in applied AI / AI engineering: building, operating, or enabling LLM applications (agents, RAG, eval pipelines, prompt tooling, experimentation).
  • Excellent communication: you can lead technical meetings, drive decisions, and write docs engineers actually follow.
  • High ownership: you ship artifacts, close loops, and create repeatable systems rather than bespoke one-offs.

Nice To Haves

  • Experience with devtools / OSS ecosystems and developer-centric GTM.
  • Familiarity with observability concepts (tracing/metrics/logs), data pipelines, and evaluation frameworks.
  • Track record of technical writing or enablement (workshops, reference architectures, public docs).

Responsibilities

  • Own strategic customer relationships (portfolio ownership)
      • Be the primary technical partner for 10–20 strategic accounts (large, highly engaged, or aligned with our roadmap).
      • Run onboarding, success planning, and regular deep dives into the customer's AI architecture and workflows.
      • Drive adoption of key product capabilities across the lifecycle: initial setup, team workflows, scaling, and expansion.
  • Production readiness + architectural guidance
      • Lead customers through production readiness: instrumentation strategy, data modeling choices, evaluation setup, alerting/monitoring expectations, security & privacy considerations, and operational playbooks.
      • Provide pragmatic architecture guidance for real LLM systems (agents, tool use, RAG, evals, prompt iteration, dataset curation, feedback loops).
      • Build small prototypes, reference implementations, and demos when it unblocks a customer, and turn them into reusable templates that can be published.
  • Escalation leadership
      • Own technical leadership during high-severity customer moments: triage, root-cause coordination, and crisp communication.
      • Be the customer's point of contact, partner closely with Engineering, and be proactive in resolving issues.
      • Establish escalation paths, runbooks, and prevention mechanisms for repeat issues.
  • Turn customer signal into product + docs + enablement
      • Aggregate patterns across your portfolio and translate them into actionable product feedback (clear problem statements, impact, and recommended solutions).
      • Create customer-facing assets (docs, guides, best practices, demos) that start as one customer's question and become durable collateral.
      • Enable the broader ClickHouse GTM org: training, playbooks, crisp messaging, and "how to win" narratives for AI engineering teams.