Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

The Enterprise Information Security (EIS) team is responsible for cybersecurity across our organization. We support our business and members by reducing risk, rapidly responding to threats, focusing on business resiliency and securing new acquisitions.

The Principal AI / Machine Learning Data Engineer role focuses on designing and building scalable data platforms that enable advanced analytics, machine learning, and AI-driven solutions. This role will support the development of intelligent systems that process large-scale event and operational data, enabling faster insights, automation, and decision-making across the organization. This position sits at the intersection of data engineering, machine learning, and AI, with an emphasis on building modern data pipelines and enabling production-grade AI capabilities.
Ideal Candidate Profile:
- Demonstrated experience building and operating production data platforms and pipelines across batch and streaming workloads
- Solid hands-on engineering in Python and SQL; familiarity with JVM languages (Java/Scala) in Spark ecosystems is a plus
- Experience with distributed processing and lakehouse/warehouse patterns (e.g., Spark/PySpark, Databricks, Snowflake)
- Experience building ingestion frameworks for structured and unstructured data, including event/log and semi-structured formats
- Experience enabling Generative AI solutions in production (e.g., RAG-style architectures), including retrieval patterns and evaluation/monitoring practices
- Familiarity with knowledge-centric data approaches (e.g., metadata-driven systems, entity resolution, and/or graph concepts) to improve discoverability and downstream analytics
- Solid data quality, observability, and monitoring mindset (profiling, validation, alerting, and reliability improvements)
- Comfort with orchestration, CI/CD, containerization, and infrastructure-as-code (e.g., Airflow, GitHub Actions, Docker, Terraform, Kubernetes)
- Cloud experience (AWS, Azure, and/or GCP), including secure handling of sensitive data (PII/PHI) and collaboration with compliance partners
- Ability to lead through influence, mentor engineers, and translate ambiguous problems into scalable technical roadmaps

You’ll enjoy the flexibility to work remotely from anywhere within the U.S. as you take on some tough challenges. For all hires in the Minneapolis or Washington, D.C. area, you will be required to work in the office a minimum of four days per week.
Job Type
Full-time
Career Level
Mid Level
Number of Employees
5,001-10,000 employees