Director, AI

Edwards Lifesciences, Irvine, CA

About The Position

Innovation starts from the heart. At Edwards Lifesciences, we’re dedicated to developing ground-breaking technologies with a genuine impact on patients’ lives. At the core of this commitment is our investment in cutting-edge information technology, which supports our innovation and collaboration on a global scale, enabling our diverse teams to optimize both efficiency and success. As part of our IT team, your expertise and commitment will help facilitate our patient-focused mission by developing and enhancing technological solutions.

In this role, you will lead the enterprise strategy, delivery, and operations of AI solutions spanning traditional machine learning and generative AI. The role blends software engineering leadership with applied AI/ML and platform operations. You will establish common patterns for the data and model lifecycle, governance, and security, and scalable MLOps across our cloud environment (primarily AWS) and core data platforms (e.g., Snowflake/Databricks, lakehouse/warehouse technologies, application platforms). You will build and lead a high-performing team of 10+ engineers to deliver production-grade AI capabilities, partnering with Enterprise Architecture, Cloud Engineers, InfoSec, and business stakeholders to achieve measurable outcomes.

How you’ll make an impact: the specific areas of responsibility are detailed in the Responsibilities section below.

Requirements

  • Bachelor’s degree or equivalent work experience based on Edwards criteria.
  • Strong software engineering background.
  • Experience delivering production AI/ML solutions.
  • Hands-on experience with coding (e.g., Python), data engineering frameworks (e.g., Spark), and modern CI/CD/MLOps practices (feature stores, model registries, automated testing); expertise with any combination of these is welcome (see the illustrative sketch after this list).
  • Strong communication skills, with the ability to convey technical concepts to senior business leaders.
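For context on how the tooling above typically fits together, here is a minimal, purely illustrative sketch of a Spark feature-engineering step feeding a model that is logged and registered with MLflow (used here as one common model registry). The data path, column names, and registered model name are hypothetical placeholders, not references to Edwards systems.

```python
# Illustrative only: Spark feature engineering + model registration via MLflow.
# All table paths, columns, and names below are hypothetical.
from pyspark.sql import SparkSession, functions as F
from sklearn.linear_model import LogisticRegression
import mlflow
import mlflow.sklearn

spark = SparkSession.builder.appName("feature-pipeline-sketch").getOrCreate()

# Feature engineering in Spark: aggregate raw events into per-device features.
events = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical path
features = (
    events.groupBy("device_id")
    .agg(
        F.count("*").alias("event_count"),
        F.avg("sensor_value").alias("avg_sensor_value"),
        F.max("adverse_event_flag").alias("label"),
    )
)

# Hand off to scikit-learn for a simple training step, then log and register
# the model so a CI/CD pipeline can promote a specific, versioned artifact.
pdf = features.toPandas()
X, y = pdf[["event_count", "avg_sensor_value"]], pdf["label"]

with mlflow.start_run():
    model = LogisticRegression().fit(X, y)
    mlflow.sklearn.log_model(model, "model", registered_model_name="example-risk-model")
```

A real pipeline would add automated tests, data validation, and environment promotion gates around these steps; the sketch only shows the shape of the workflow.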

Nice To Haves

  • Proven experience designing secure data integrations (e.g., Snowflake/Databricks ↔ Foundry using native connectors, PrivateLink/VPC endpoints); see the connectivity sketch after this list.
  • Hands-on experience with Palantir Foundry (covering: Data Ingestion, Transformation, Ontology, AIP, and Workshop/end‑user apps).
  • Extensive experience with AWS or other public cloud services (security, networking, IAM; compute/storage; serverless; observability).
  • Experience in regulated industries (e.g., healthcare/medtech); familiarity with responsible AI and model risk governance.
  • Proven experience leading teams of 10+ engineers; demonstrated ability to scale teams, processes, and platforms.
  • Palantir certifications; expertise with Foundry Code Repositories, Transform pipelines, AIP agents, Ontology operations, Workshop/Slate.
  • AWS networking/security patterns (Transit Gateway, PrivateLink, VPC endpoints, proxy/egress controls); cost governance for compute/storage.
  • Experience with Snowflake performance optimization, virtual tables/ELT, and federated query patterns on Foundry.
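As one concrete illustration of the private-connectivity pattern above, the sketch below opens a Snowflake connection through an AWS PrivateLink endpoint using the Python connector, so traffic stays inside the VPC rather than traversing the public internet. The account locator, warehouse, database, and role names are hypothetical placeholders; actual identifiers depend on the environment.

```python
# Illustrative only: Snowflake over AWS PrivateLink with the Python connector.
# Account locator, warehouse, database, and role below are hypothetical.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    # PrivateLink-enabled accounts use a ".privatelink" account identifier so
    # the hostname resolves to the VPC endpoint instead of a public address.
    account="xy12345.us-west-2.privatelink",
    warehouse="ANALYTICS_WH",
    database="CLINICAL_DB",
    role="AI_PLATFORM_READER",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_REGION(), CURRENT_ROLE()")
    print(cur.fetchone())
finally:
    conn.close()
```

Connector configuration on the Foundry side is platform-specific and not shown here.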

Responsibilities

  • Platform Strategy & Architecture: Partner with IT Leaders and the business to build the AI platform roadmap across the AI/ML and application layers; set standards, guardrails, and reusable templates for rapid, secure delivery both pre- and post-launch.
  • Create reference architectures for data ingestion, transformation, feature engineering, model serving, and user‑facing experiences; standardize pathways from prototype to production.
  • Partner with Enterprise Architecture, InfoSec, and CloudOps to embed governance, reliability, and security into all AI initiatives; align designs with cloud architecture and data strategy.
  • Delivery & MLOps: Lead multiple cross-functional workstreams delivering AI/ML solutions (traditional ML + GenAI) using Foundry; ensure secure-by-design pipelines, reproducibility, CI/CD, monitoring, and SLAs.
  • Own the end-to-end lifecycle (data acquisition → feature engineering → model training/eval → deployment → post-prod monitoring) with automated model/data lineage, drift detection, and rollbacks (see the drift-monitoring sketch after this list).
  • Data & Connectivity: Oversee integrations with Snowflake, Databricks, and other core data platforms via recommended connectors and AWS PrivateLink/VPC patterns; ensure performance, cost efficiency, and a sound security posture.
  • People Leadership: Hire, coach, and develop a team of 10+ AI engineers (Palantir application engineers, data engineers, ML engineers, and DevOps engineers), fostering a culture of speed, high-quality delivery, teamwork, and a patient-focused mentality.
  • Set goals, career paths, and learning plans (Palantir certifications, AWS specialization, security best practices).
  • Stakeholder & Vendor Management: Translate business problems into scalable AI solutions; partner with product owners and domain SMEs for outcomes-focused delivery.
  • Manage relationships with Palantir and other strategic vendors; coordinate architecture reviews, joint workshops, and operational playbooks.
  • Governance & Risk: Enforce responsible AI practices, security controls, privacy-by-design, and compliance workflows across all Palantir/AWS projects.
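To make the monitoring and rollback responsibilities above concrete, here is a minimal, framework-agnostic sketch that computes a population stability index (PSI) between a training-time baseline and live feature data, and signals a rollback when drift exceeds a threshold. The 0.2 threshold and the rollback hook are hypothetical; in practice the decision would be wired into the model registry and deployment tooling.

```python
# Illustrative only: simple PSI-based drift check with a rollback signal.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live feature distribution against its training-time baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def check_drift_and_decide(baseline, live, threshold=0.2):
    """Flag drift and signal a rollback to the previously approved model version."""
    psi = population_stability_index(baseline, live)
    if psi > threshold:
        # A real pipeline would call the registry/deployment API here.
        return {"psi": round(psi, 3), "action": "rollback_to_previous_version"}
    return {"psi": round(psi, 3), "action": "keep_current_version"}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
    live = rng.normal(0.6, 1.0, 10_000)      # shifted production distribution
    print(check_drift_and_decide(baseline, live))
```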

Benefits

  • Aligning our overall business objectives with performance, we offer competitive salaries, performance-based incentives, and a wide variety of benefits programs to address the diverse individual needs of our employees and their families.