Staff Engineer, Enterprise AI

Faire
San Francisco, CA
Hybrid

About The Position

The Corporate Security team at Faire owns the tools and policies that keep our people, data, and systems protected. The team's scope includes endpoint detection, data loss prevention, email security, corporate threat detection, compliance training, and AI governance. As our Staff Engineer focusing on Enterprise AI, you will be Security and IT's technical leader for AI systems, partnering with the engineering teams driving AI initiatives to ensure that governance, security, and enablement scale alongside adoption. You'll work across every level of the organization, balancing the drive to "Make it happen fast" with the discipline to move safely.

Requirements

  • 7+ years of experience in security engineering, platform engineering, infrastructure engineering, or IT engineering.
  • Experience building and presenting decision frameworks, risk assessments, or strategy recommendations to senior leadership.
  • Ability to operate independently, setting your own strategy, building stakeholder relationships, and executing with minimal oversight.
  • Strong communication skills across a wide range of audiences, from mentoring junior engineers to partnering with executives in strategy discussions.
  • Proficiency in at least one object-oriented programming language (e.g., Python, Go, Ruby, Java) with the ability to build integrations, tooling, and automations from scratch.
  • Hands-on experience administering or deeply integrating with AI/LLM platforms: not just using them, but managing them at the platform level (e.g., building MCP servers, extending LLM functionality, managing enterprise AI admin consoles).
  • Experience designing scalable internal systems and infrastructure. You think about solving categories of problems at the platform level, not building one-off solutions.
  • Track record of owning company-wide programs that delivered measurable impact to downstream users or the business.
  • Experience with observability or SIEM tooling (e.g., Datadog, Splunk) and building data pipelines for monitoring and compliance.

Nice To Haves

  • Understanding of LLM internals, including transformer architectures, tokenization, context window management, RAG patterns, prompt engineering techniques, and the security risks inherent to each (e.g., prompt injection, data leakage through context, output manipulation).
  • Prior experience in an AI governance, AI enablement, or AI strategy role.
  • Familiarity with security and compliance considerations specific to AI tooling, such as data residency, DLP policies for AI-generated content, agent access controls, and OAuth scoping for third-party integrations.
  • Experience building internal enablement programs, whether training courses, documentation, or developer advocacy, that drove measurable adoption of new tools or practices.

Responsibilities

  • Own and drive Faire's company-wide AI strategy in partnership with engineering, business, and executive stakeholders. This means setting the direction for which tools we adopt, how we govern them, and how we scale adoption.
  • Serve as the technical authority on AI platform administration for platforms from providers like Anthropic and OpenAI. You'll evaluate, enable, and disable native connectors, plugins, and features based on risk, security posture, and business value.
  • Build and maintain decision frameworks (e.g., risk matrices, enablement criteria) that make AI governance repeatable and transparent.
  • Design and engineer secure experimentation infrastructure, including sandboxed environments, isolated MCP connectors for testing new features, and scoped OAuth flows, so teams can safely explore new AI capabilities.
  • Design and build custom MCP integrations when out-of-the-box options don't exist or don't meet Faire's security requirements (e.g., overly broad permissions, insufficient audit logging).
  • Lead AI pilot programs end-to-end, from scoping and stakeholder alignment through rollout, troubleshooting, feedback collection, and iteration.
  • Engineer observability and compliance infrastructure, ensuring compliance logs from AI platforms end up in our SIEM.
  • Own the operational mechanics of AI adoption tracking. This includes automating usage data pipelines into Snowflake and partnering with data analysts and domain teams (engineering, CX, etc.) who own their respective adoption metrics.
  • Partner with Learning & Development and engineering teams to build and deliver AI training, prompt libraries, and enablement resources tailored to different roles and workflows.
  • Embed with teams across the company to understand their workflows and identify where AI can accelerate their work and, just as importantly, where it shouldn't be used.
  • Communicate AI strategy, risk trade-offs, and recommendations clearly to audiences ranging from junior engineers to C-level executives, including respectfully challenging perspectives when the data warrants it.
  • Stay current with the rapidly evolving AI product landscape and proactively assess new capabilities (e.g., new platform features, agent frameworks) for security implications and business value.

Benefits

  • Competitive pay
  • Equity
  • Comprehensive benefits designed to support your life inside and outside of work