Innovation starts from the heart. At Edwards Lifesciences, we're dedicated to developing ground-breaking technologies with a genuine impact on patients' lives. At the core of this commitment is our investment in cutting-edge information technology, which supports our innovation and collaboration on a global scale and enables our diverse teams to optimize both efficiency and success. As part of our IT team, your expertise and commitment will help advance our patient-focused mission by developing and enhancing technological solutions.

In this role, you will lead the enterprise strategy, delivery, and operations of AI solutions spanning traditional machine learning and generative AI. The position blends software engineering leadership with applied AI/ML and platform operations. You will establish common patterns for the data and model lifecycle, governance, security, and scalable MLOps across our cloud environment (primarily AWS) and core data platforms (e.g., Snowflake/Databricks, lakehouse/warehouse technologies, application platforms). You will build and lead a high-performing team (10+ engineers) to deliver production-grade AI capabilities, partnering with Enterprise Architecture, Cloud Engineering, InfoSec, and business stakeholders to achieve measurable outcomes.

How you'll make an impact:

Platform Strategy & Architecture
- Partner with IT leaders and the business to build the AI platform roadmap across the AI/ML and application layers; set standards, guardrails, and reusable templates for rapid, secure delivery both pre- and post-launch.
- Create reference architectures for data ingestion, transformation, feature engineering, model serving, and user-facing experiences; standardize pathways from prototype to production.
- Partner with Enterprise Architecture, InfoSec, and CloudOps to embed governance, reliability, and security into all AI initiatives; align designs with the cloud architecture and data strategy.

Delivery & MLOps
- Lead multiple cross-functional workstreams delivering AI/ML solutions (traditional ML + GenAI) using Foundry; ensure secure-by-design pipelines, reproducibility, CI/CD, monitoring, and SLAs.
- Own the end-to-end lifecycle (data acquisition → feature engineering → model training/evaluation → deployment → post-production monitoring) with automated model/data lineage, drift detection, and rollbacks.

Data & Connectivity
- Oversee integrations with Snowflake, Databricks, and other core data platforms via recommended connectors and AWS PrivateLink/VPC patterns; ensure performance, cost efficiency, and a strong security posture.

People Leadership
- Hire, coach, and develop 10+ AI engineers (Palantir application engineers, data engineers, ML engineers, and DevOps), fostering a culture of speed, high-quality delivery, teamwork, and a patient-focused mentality.
- Set goals, career paths, and learning plans (Palantir certifications, AWS specialization, security best practices).

Stakeholder & Vendor Management
- Translate business problems into scalable AI solutions; partner with product owners and domain SMEs for outcomes-focused delivery.
- Manage relationships with Palantir and other strategic vendors; coordinate architecture reviews, joint workshops, and operational playbooks.

Governance & Risk
- Enforce responsible AI practices, security controls, privacy-by-design, and compliance workflows across all Palantir/AWS projects.
Job Type: Full-time
Career Level: Director
Number of Employees: 5,001-10,000 employees