About The Position

This role provides strategic and tactical technical guidance on security across the organization, with input into leadership decisions. The engineer will research emerging threats, translate findings into actionable guidance, and own escalations that require deep expertise. They will design and evolve the secure software development lifecycle (SDLC), including threat modeling, security design reviews, developer enablement, and the integration of security tooling (SAST, DAST, SCA, secrets detection) into CI/CD pipelines. Building and running security champions programs to foster collaboration with developers is key, as is tracking progress with metrics and communicating risk clearly to diverse audiences.

A significant focus will be AI/LLM security: security reviews and threat modeling for AI-powered features, evaluation of AI tools and APIs, and internal standards for responsible development of AI-integrated applications. The engineer will also use AI-powered security tooling and design innovative solutions to protect systems and data, staying curious about new technologies and their security implications. Collaboration with engineering, GRC, legal, and privacy teams is essential to ensure controls remain effective in regulated environments (HIPAA, FedRAMP).

At the Principal level, the role includes shaping multi-year technical strategy for the AppSec program, influencing the engineering organization, serving as an authority on AI/LLM security for senior leadership, and mentoring junior engineers.

Requirements

  • 7+ years in application security, security-focused software engineering, or a closely related discipline.
  • Real experience with threat modeling (STRIDE, PASTA, or your preferred framework) applied to complex, distributed systems.
  • Strong command of web application and API security vulnerabilities and how to actually fix them.
  • Hands-on experience embedding SAST, DAST, SCA, and secrets scanning into developer workflows.
  • Enough coding ability (Python, Java, Go, TypeScript, etc.) to meaningfully review code for security issues and build lightweight automation.
  • Experience working in or alongside a regulated industry with real compliance requirements.
  • The ability to write a clear, compelling security finding — and explain it to a VP without losing them.
  • Strong collaboration ethos. The security team is an enabler of the business, not a hindrance.

Nice To Haves

  • Practical experience securing AI/ML systems or LLM-integrated applications.
  • Familiarity with agentic AI security risks: tool misuse, prompt injection chains, privilege escalation via agents.
  • Experience building developer security education or security champions programs that actually stick.
  • Cloud security depth (AWS, Azure, or GCP) — IAM, workload security, IaC hardening.
  • Container and Kubernetes security experience.
  • Offensive security background that informs how you think defensively.
  • Relevant certifications: OSCP, CSSLP, GWEB, GPEN, cloud security specialty, or equivalent.
  • Prior experience with legal research or AI-driven workflows.

Responsibilities

  • Provide strategic and tactical technical guidance that shapes how we approach security across the organization.
  • Research emerging threats, new attack techniques, and novel mitigation approaches, then translate that research into actionable guidance.
  • Own escalations that require deep expertise.
  • Design and evolve our secure software development lifecycle — threat modeling, security design reviews, developer enablement, and the toolchain that ties it all together.
  • Integrate modern security tooling (SAST, DAST, SCA, secrets detection) into CI/CD pipelines.
  • Build and run security champions programs.
  • Track what’s working with real metrics and communicate risk clearly to technical and non-technical audiences alike.
  • Lead security reviews and threat modeling for AI-powered features — LLMs, RAG pipelines, vector databases, agentic workflows.
  • Stay hands-on with OWASP and NIST guidance and the latest research on prompt injection, model supply-chain risks, inference-based data leakage, and insecure tool use.
  • Evaluate AI tools and APIs being introduced into the SDLC.
  • Define internal standards for building AI-integrated applications responsibly.
  • Use AI-powered security tooling yourself.
  • Design innovative solutions that protect the confidentiality, integrity, and availability of our systems and data.
  • Stay curious about new technologies: evaluate them, understand the security implications, and give leadership the insight they need to make smart bets.
  • Collaborate across engineering, GRC, legal, and privacy to ensure our controls hold up in a regulated environment (HIPAA, FedRAMP) without slowing everything to a crawl.
  • Shape multi-year technical strategy for the AppSec program and influence engineering organization-wide.
  • Serve as a go-to authority on AI/LLM security for senior engineering and product leadership.
  • Mentor the next generation of security engineers and raise the bar across the team.

Benefits

  • Annual incentive bonus
  • Country-specific benefits
© 2026 Teal Labs, Inc