About The Position

Become a part of our caring community and help us put health first.

The Enterprise AI organization at Humana is a pioneering force, driving AI innovation across our Insurance and CenterWell business segments. By collaborating with world-leading experts, we are at the forefront of delivering cutting-edge AI technologies that improve the quality and experience of care for millions of consumers. We are actively seeking top talent to develop robust, reusable AI modules and pipelines, ensuring adherence to best practices in accountable AI for effective risk management and measurement. Join us in shaping the future of healthcare through AI excellence.

We are seeking a Lead Data Scientist to drive the safety, alignment, and ethical development of Agentic AI systems. You will lead initiatives to ensure our intelligent agents behave reliably, safely, and in accordance with human values across dynamic, multi-agent, and high-stakes environments. This is a cross-functional role bridging technical safety research, systems engineering, governance, and product implementation.

Requirements

  • Proficiency in SQL, Python, and data analysis/data mining tools
  • Experience with machine learning frameworks such as PyTorch or JAX, and with agentic frameworks or patterns such as ReAct, LangChain, LangGraph, or AutoGen
  • Experience with high-performance, large-scale ML systems
  • Experience with deploying or auditing LLM-based agents or multi-agent AI systems
  • Experience with large-scale ETL
  • Master's Degree and 4+ years of experience in research/ML engineering or an applied research scientist position preferably with a focus on developing production-ready AI solutions
  • 2+ years of experience leading development of AI/ML systems
  • Deep expertise in AI alignment, multi-agent systems, or reinforcement learning
  • Demonstrated ability to lead research-to-production initiatives or technical governance frameworks
  • Strong publication or contribution record in AI safety, interpretability, or algorithmic ethics

Nice To Haves

  • Ph.D. in Computer Science, Data Science, Machine Learning, or a related field
  • Contributions to open-source AI safety tools or benchmarks
  • Understanding of value-sensitive design, constitutional AI, or multi-agent alignment
  • Experience in regulated domains such as healthcare, finance, or defense

Responsibilities

  • Design and implement safety architectures for Agentic AI systems, including guardrails, reward modeling, and self-monitoring capabilities
  • Lead and collaborate on alignment techniques such as inverse reinforcement learning, preference learning, interpretability tools, and human-in-the-loop evaluation
  • Develop continuous monitoring strategies for agent behavior in both simulated and real-world environments
  • Partner with product, legal, Responsible AI, governance, and deployment teams to ensure responsible scaling and deployment
  • Contribute to and publish novel research on alignment of LLM-based agents, multi-agent cooperation/conflict, or value learning
  • Proactively identify and mitigate failure modes, e.g., goal misgeneralization, deceptive behavior, unintended instrumental actions
  • Set safety milestones for autonomous capabilities as part of deployment readiness reviews

Benefits

  • Work-Life Balance
  • Generous PTO package
  • Health benefits effective day 1
  • Annual Incentive Plan
  • 401(k) with excellent company match
  • Well-being program
  • Paid Volunteer Time Off