Application Security Engineer

Braintrust
Seattle, WA

About The Position

We're looking for an Application Security Engineer who lives in the code. Braintrust is a real-time, high-availability data platform that runs in both SaaS and self-hosted environments, with open source libraries embedded inside thousands of customer applications and a model proxy in front of OpenAI, Anthropic, Gemini, and other major model providers. This is a hands-on IC role. You'll review code, build threat models, ship paved-road libraries, and lead AI-specific security work: prompt injection, agent sandbox escapes, tool-use abuse, and the new attack surface that comes with LLM-native applications. If you reach for agentic coding tools as your default workflow and can hold your own in a design review with a backend or systems engineer, we'd love to work with you.

Requirements

  • 5+ years in application security, product security, or backend engineering with a security focus — you've shipped real code and reviewed a lot of it
  • Strong code reading and writing skills in at least two of TypeScript/Node.js, Python, Go, or Rust
  • Deep knowledge of common web and API vulnerability classes and the architectural patterns that prevent them — not just OWASP Top 10 trivia
  • Track record of building secure-by-default libraries, frameworks, or services that other engineers actually adopt
  • Hands-on experience with authn/authz design, multi-tenant data isolation, and secrets/key management at scale
  • Comfortable with the realities of a high-availability data platform: real-time pipelines, ingestion at scale, semi-structured data, Postgres, Redis, AWS
  • A clear point of view on AI/LLM security — prompt injection, agent abuse, tool-use sandboxing, model proxy threats — and ideally hands-on experience defending against them
  • Daily user of agentic coding tools and excited to push the frontier of how AppSec gets done with them
  • Clear communicator who documents decisions, writes tickets engineers want to pick up, and lifts the team's security awareness without becoming a bottleneck

Nice To Haves

  • Prior experience with LLM red-teaming, agent sandbox research, or shipping security-focused open source libraries

Responsibilities

  • Drive secure design across the platform: lead threat models for new features, review architecture proposals, and partner with product and backend engineers to ship features that are secure by default
  • Review code across our TypeScript, Python, and Go services, our open source tracing libraries, and our model proxy — and find the bugs others miss
  • Build the paved road: authn/authz primitives, RBAC and tenancy isolation patterns, secret handling, safe data pipelines, and sandboxed code execution for user-supplied JavaScript and Python snippets
  • Own our SAST, DAST, SCA, and secret-scanning tooling end-to-end, keeping the signal-to-noise ratio high enough that engineers actually fix what it finds
  • Run our vulnerability management program and triage external bug bounty reports; close the loop with durable fixes, not point patches
  • Lead AI-specific security work: prompt injection defenses, model proxy abuse detection, agent and tool-use sandboxing, data-exfiltration controls in multimodal pipelines, and security for the eval workflows our customers run
  • Partner with our open source maintainers on the security of libraries that get embedded inside customer applications
  • Use agentic coding workflows to scale yourself: automated code review, exploit prototyping, control validation, and IR triage

Benefits

  • Medical, dental, and vision insurance
  • Daily lunch, snacks, and beverages
  • Flexible time off
  • Competitive salary and equity
  • AI stipend