Principal AI Architect

Alchemer · Louisville, CO

About The Position

As Principal AI Architect, you will set the technical direction of our Generative and Agentic AI portfolio and build the hardest pieces yourself. This is a hands-on, individual contributor role for an architect who has spent the last several years shipping modern AI systems to production. Alchemer's acquisition of Chatmeter brought a live, proven AI engine with 32+ LLM endpoints, 100M+ vector embeddings, semantic search, real-time feedback intelligence, and full cost observability, all in production. Your mandate is to unify it, extend it across multiple products, and evolve it into an industry-leading agentic platform. The hard problems you will own: a unified AI entry point across five products, an agent runtime that coordinates across product boundaries, a retrieval layer at a scale most architects never encounter, and a Trust Layer that makes it all observable, compliant, and governable. You will partner closely with Product, Engineering, and Data leaders. You will not manage the team, but you will set the technical bar. This role has board-level visibility.

Requirements

  • 10+ years building production software, with recent hands-on experience shipping Generative and Agentic AI systems to real users at meaningful scale.
  • Demonstrable ownership of at least one production multi-agent or tool-using system — planning, tool and function calling, memory, routing, fallback strategies, and cost and latency control.
  • Deep, hands-on experience with retrieval-augmented generation at scale: hybrid retrieval, chunking and embedding strategy, reranking, multi-tenant scoping, freshness, and retrieval evaluation.
  • Strong understanding of NLP fundamentals: tokenization, embeddings, language modeling, and text generation.
  • B2B SaaS experience with multi-tenant data, customer-configurable workflows, and enterprise security expectations (SOC 2, GDPR, HIPAA-adjacent).
  • Strong Python comfort with at least one modern agent framework (LangGraph, LlamaIndex, CrewAI, AutoGen, OpenAI Agents SDK, or equivalent).
  • Production fluency with foundation-model APIs (OpenAI, Anthropic, Bedrock, Vertex), vector databases, and LLM observability and evaluation tooling.
  • Experience with modern AI/ML and Lakehouse platforms (Databricks, SageMaker, Azure ML, or Vertex AI) and data warehouses (Snowflake, Redshift, or BigQuery).
  • Real evaluation discipline: golden sets, LLM-as-judge with calibration, online eval and feedback loops, and regression gates in CI.
  • Working knowledge of SQL and NoSQL databases, vector stores, and streaming and batch processing (Kafka, Spark, or equivalent).
  • Familiarity with DevOps and MLOps: Git, CI/CD, infrastructure as code, and containerization.
  • Excellent written and verbal communication; able to translate complex AI concepts to non-technical audiences.
  • Passion for coaching and mentoring engineers and ML practitioners.
  • Ability to excel in a dynamic, fast-paced environment with evolving priorities.

Nice To Haves

  • Fine-tuning and model adaptation (PEFT/LoRA, distillation, small-model routing) where it produced measurable wins.
  • Background in customer experience, feedback analytics, reputation, or unstructured-text analytics.
  • Open-source contributions, conference talks, or published work on Generative or Agentic AI.
  • Formal degree (BS or MS) in Computer Science, Mathematics, Statistics, or a related technical field.

Responsibilities

  • Shape and execute the vision for embedding Generative and Agentic AI across the platform — from natural language understanding and feedback analytics to multi-agent workflows that drive closed-loop customer action.
  • Define and own the multi-year AI architecture, sequencing the agentic roadmap and aligning it with long-term business goals.
  • Make the build / fine-tune / buy decisions on models, frameworks, and tooling, and own the rationale.
  • Champion responsible and ethical AI: evaluation, bias mitigation, transparency, data governance, privacy, and regulatory compliance.
  • Build, not just diagram. We expect you to be writing production code on the hardest parts — multi-agent orchestration, retrieval, tool and function-calling layers, and evaluation harnesses — every week.
  • Own the agent runtime: planning, tool use, memory, routing, fallbacks, and cost and latency budgets.
  • Own the retrieval layer at scale: hybrid search, chunking and embedding strategy, reranking, freshness, and multi-tenant scoping over large volumes of structured and unstructured data.
  • Architect and build the Integration Gateway — the unified entry point routing all AI traffic across products, with per-tenant cost controls, smart routing, circuit breakers, and the connector layer that lets agents act inside products.
  • Stand up evaluation and observability end to end — offline eval sets, online eval, regression gates in CI, traces, and cost dashboards. No agent ships without an eval.
  • Set the patterns the rest of Engineering uses to build AI features — reference implementations, internal SDKs, prompt and tool-use conventions, and evaluation templates.
  • Partner with Product, Engineering, and Data teams to integrate AI seamlessly into existing platform capabilities.
  • Coach senior engineers and ML practitioners on agentic patterns, retrieval design, evaluation discipline, and production readiness.
  • Present AI architecture and roadmaps to executive leadership and serve as a credible technical voice with customers and partners.

Benefits

  • Health and disability coverage
  • 401(k) option that includes a per-payroll match and immediate vesting
  • Unlimited time off
  • Twelve paid company holidays