Senior AI Governance Scientist - Artificial Intelligence

Centene Corporation
$107,700 - $199,300 · Hybrid

About The Position

This role leads advanced technical evaluation and assurance activities within Centene's AI Governance function. It is a hands-on, execution-focused position responsible for designing, conducting, and scaling rigorous evaluations of various AI systems, including traditional machine learning, generative AI, and agentic AI. The primary goal is to assess the safety, reliability, robustness, and alignment of these systems with their intended use. The Senior AI Governance Scientist plays a critical role in operationalizing AI governance through experimentation, red teaming, and evaluation frameworks, working in close partnership with engineering, product, and research teams to integrate evaluation practices throughout the AI development lifecycle.

Requirements

  • Bachelor's Degree in Computer Science, Machine Learning, Statistics, or a related quantitative field, or equivalent applied research experience.
  • 5+ years of AI/ML research or applied AI development experience, including at least 3 years focused on model evaluation, safety, robustness, or validation.
  • Strong technical foundation in machine learning and deep learning, with hands-on experience evaluating or developing modern AI systems.
  • Demonstrated experience designing and executing AI evaluation, testing, or validation methodologies across multiple AI paradigms.
  • Solid understanding of statistical analysis, experimental design, and data analysis techniques relevant to AI evaluation.

Nice To Haves

  • Master's Degree preferred
  • 7+ years of experience preferred
  • Experience designing evaluation methodologies and a publication record preferred
  • Industry-related certifications preferred
  • Familiarity with a Python‑based AI/ML stack (PyTorch, Databricks) and with agentic AI frameworks (LangChain, LlamaIndex, LangGraph, AutoGen, CrewAI) for single‑ and multi‑agent systems. Strong focus on LLM observability, MLOps, and evaluation using tools such as LangSmith, MLflow, Weights & Biases, Datadog, and OpenTelemetry, and testing frameworks such as DeepEval and LangTest.

Responsibilities

  • Executes comprehensive red team and stress-testing exercises to identify vulnerabilities, failure modes, and safety risks across AI systems, including large language models, generative models, and autonomous agents.
  • Designs, implements, and refines evaluation methodologies and protocols to assess AI performance, safety, reliability, and alignment with intended use cases.
  • Evaluates the adequacy and sufficiency of existing AI evaluations, identifies gaps in coverage or rigor, and recommends targeted improvements.
  • Designs and conducts reproducible experiments to measure AI value, impact, and risk, applying statistical methods and causal inference techniques where appropriate.
  • Develops and maintains automated testing frameworks and evaluation pipelines that scale across the organization’s AI portfolio.
  • Researches and applies novel attack vectors and stress-testing approaches for generative AI (e.g., prompt injection, jailbreaking, hallucination risks) and agentic systems (e.g., autonomy boundary violations, goal misalignment).
  • Creates and curates benchmarks, datasets, and metrics aligned to specific AI capabilities, risk profiles, and governance requirements.
  • Documents evaluation methodologies, findings, and recommendations in clear, governance-ready technical reports for review by governance bodies and cross-functional stakeholders.
  • Partners with product, engineering, and research teams to integrate evaluation and assurance practices into AI design, development, and deployment workflows.
  • Performs other duties as assigned.
  • Complies with all policies and standards.

Benefits

  • Competitive pay
  • Health insurance
  • 401(k)
  • Stock purchase plans
  • Tuition reimbursement
  • Paid time off
  • Holidays
  • Flexible approach to work, with remote, hybrid, field, or office schedules


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Number of Employees: 5,001-10,000
