Principal Agentic AI Engineer

Zeta Global · New York, NY
$220,000 - $250,000

About The Position

We’re hiring a hands-on Agentic AI Engineer to build bidder-adjacent agents that improve campaign performance using RAG and tool-using workflows in a closed-loop ACE cycle (Observe → Reason → Act → Evaluate). This is not a classic “train a model and deploy it” role: it centers on agentic decisioning, evaluation, and safety in a low-latency AdTech environment.

Requirements

  • 10+ years building production backend and/or applied AI systems (design → ship → operate).
  • Strong engineering in Java or Go (Python OK for evals/tooling).
  • Hands-on experience with LLMs and agentic patterns (tool use, structured outputs, multi-step loops).
  • Hands-on RAG experience (embeddings, hybrid retrieval, reranking, chunking, context assembly).
  • Experience building evals and monitoring for AI systems (offline benchmarks + online experiments).
  • Cloud + distributed systems fundamentals (APIs, microservices, streaming/eventing; AWS preferred).

Nice To Haves

  • Programmatic advertising (DSP/SSP/RTB) or other high-scale real-time decisioning domains.
  • Experience with safety/policy enforcement for agent tools (prompt/tool sanitization, allowlists, schema validation).
  • Experience designing “AI control plane” systems that influence production outcomes while keeping serving paths stable.
  • Familiarity with experimentation platforms, feature stores, and large-scale telemetry pipelines.

Responsibilities

  • Build bidder-adjacent agentic workflows that recommend/execute campaign control actions (targeting constraints, budget & pacing levers, bid modifiers, supply/inventory selection, creative routing).
  • Implement production-grade RAG: retrieval from policies/playbooks, campaign history, aggregates, and near-real-time telemetry; optimize grounding and reduce hallucinations.
  • Create safe tool/action interfaces: idempotent execution, audit logs, dry-run + approval gates, rate limits, rollback/fallback behaviors.
  • Own AgentOps: eval harnesses, regression suites, online experimentation (A/B), metrics tied to outcomes (CPA/ROAS, pacing, quality, margin).
  • Add observability end-to-end (tracing prompts/retrieval/tool calls/latency) and reliability patterns (timeouts, circuit breakers, safe defaults).
  • Partner with Backend/Bidding, Data Platform, DS/Optimization, Product, and SRE to define the boundary between deterministic per-request bidding and agent-driven control-plane decisions.

Benefits

  • Unlimited PTO
  • Excellent medical, dental, and vision coverage
  • Employee Equity
  • Employee Discounts, Virtual Wellness Classes, and Pet Insurance
  • And more!