About The Position

McKesson is an impact-driven, Fortune 10 company that touches virtually every aspect of healthcare. We are known for delivering insights, products, and services that make quality care more accessible and affordable. Here, we focus on the health, happiness, and well-being of you and those we serve – we care. What you do at McKesson matters. We foster a culture where you can grow, make an impact, and are empowered to bring new ideas. Together, we thrive as we shape the future of health for patients, our communities, and our people. If you want to be part of tomorrow’s health today, we want to hear from you.

Position Summary

The Senior AI/ML Engineer, Responsible AI operates as the enterprise’s hands-on “mechanic” for AI governance: the engineer who gets under the hood of production models and AI solutions to diagnose, remediate, and certify them against responsible AI standards. This role works across business units, embedding into project teams to evaluate models for fairness, explainability, robustness, and compliance, then engineering the fixes needed to bring solutions up to enterprise quality. Think of it as a technical quality inspector with a wrench: you don’t just flag problems, you fix them or build the tooling so teams can fix them at scale.

Requirements

  • Degree or equivalent experience; typically requires 7+ years of relevant experience.
  • 5+ years in ML Engineering, MLOps, or Applied ML with at least 2 years of direct experience in model evaluation, fairness testing, or AI quality assurance.
  • Strong Python proficiency with production experience in scikit-learn, PyTorch or TensorFlow, and at least two responsible AI toolkits (Fairlearn, AIF360, Evidently AI, SHAP, LIME, Guardrails AI).
  • Hands-on experience with MLOps platforms (AzureML, Databricks, SageMaker, or Vertex AI) including pipeline orchestration, model registry, and monitoring.
  • Demonstrated ability to diagnose and remediate model bias, data quality, or robustness issues in production systems, not just detect them.
  • Experience building automated testing and validation frameworks for ML models (CI/CD integration, automated test suites, monitoring dashboards).
  • Working knowledge of responsible AI regulations and frameworks (NIST AI RMF, EU AI Act categories) sufficient to translate policy into code.
  • Bachelor’s degree in Computer Science, AI/ML, Statistics, or related field; Master’s preferred.

Nice To Haves

  • Experience with LLM evaluation and safety testing (prompt injection detection, hallucination measurement, toxicity scoring, RLHF/RLAIF concepts).
  • Familiarity with agentic AI frameworks (LangChain, LangGraph) and the governance challenges of tool-using agents.
  • Healthcare, pharmaceutical, or financial services domain experience where model governance has direct regulatory implications.
  • Experience with data lineage and provenance tooling (Azure Purview, or similar).

Responsibilities

  • Conduct hands-on responsible AI assessments of models and AI solutions across the enterprise, evaluating bias, fairness, explainability, data quality, and robustness against established standards.
  • Engineer remediation solutions when models fail governance checks: rebalancing training data, implementing fairness constraints, adding explainability layers, hardening against adversarial inputs, or restructuring feature pipelines.
  • Build, maintain, and extend the enterprise Responsible AI toolkit: reusable libraries, automated testing harnesses, scanning pipelines, and validation APIs that integrate into the Enterprise MLOps Platform.
  • Partner with Enterprise/BU Data Science and ML Engineering teams as an embedded responsible AI SME during model development, providing real-time guidance and code-level support.
  • Create and maintain model cards, datasheets, and technical documentation for governed models, ensuring traceability from training data through production inference.
  • Investigate production incidents related to model behavior (bias events, unexpected outputs, safety failures) and perform root cause analysis with actionable engineering fixes.
  • Contribute to the enterprise’s red-teaming and adversarial testing program for generative and agentic AI systems.
  • Automate compliance evidence collection for internal audit, external regulators, and customer-facing AI transparency requirements.


What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Number of Employees: 501-1,000 employees
