Sr Mgr Data Engineering

UnitedHealth Group · Eden Prairie, MN
Remote

About The Position

Optum Tech is a global leader in health care innovation. Our teams develop cutting-edge solutions that help people live healthier lives and help make the health system work better for everyone. From advanced data analytics and AI to cybersecurity, we use innovative approaches to solve some of health care’s most complex challenges. Your contributions here have the potential to change lives. Ready to build the next breakthrough? Join us to start Caring. Connecting. Growing together.

Optum AI is UnitedHealth Group’s enterprise AI team. We are AI/ML scientists and engineers with deep expertise in AI/ML engineering for healthcare. We develop AI/ML solutions for the highest-impact opportunities across UnitedHealth Group businesses, including UnitedHealthcare, Optum Financial, Optum Health, Optum Insight, and Optum Rx. In addition to transforming the healthcare journey through responsible AI/ML innovation, our charter also includes developing and supporting an enterprise AI/ML development platform.

As a Sr Mgr Data Engineering, you will be an important part of teams responsible for building and operating scalable machine learning platforms and production ML systems across the enterprise. You will drive the design and implementation of ML infrastructure, model lifecycle management systems, and MLOps platforms that enable reliable experimentation, deployment, monitoring, and governance of machine learning and generative AI models. This role requires deep experience in MLOps and cloud-based ML platforms, and the ability to collaborate closely with data science, engineering, and platform teams.

You’ll enjoy the flexibility to work remotely from anywhere within the U.S. as you take on some tough challenges. For all hires in the Minneapolis or Washington, D.C. area, you will be required to work in the office a minimum of four days per week.
You’ll be rewarded and recognized for your performance in an environment that challenges you, gives you clear direction on what it takes to succeed in your role, and provides development for other roles you may be interested in.

Requirements

  • 7+ years of experience in data engineering or data platform engineering including large-scale ETL/ELT pipelines and distributed data processing systems
  • 6+ years of experience programming in Python or Scala for data engineering along with advanced SQL for data transformation and analytics
  • 5+ years of experience designing data models and storage architectures including dimensional modeling, data lakes, or data warehouses
  • 4+ years of experience working with modern data engineering technologies such as Apache Spark, Databricks, Snowflake, or equivalent platforms
  • 4+ years of experience building and operating cloud-based data platforms on AWS, Azure, or GCP
  • 3+ years of experience leading data engineering teams or technical initiatives including mentoring engineers and influencing architecture decisions
  • 1+ years of experience with rapid prototyping and production deployment using AI-assisted coding tools such as Claude Code, Cursor, Replit, or GitHub Copilot

Nice To Haves

  • Master’s degree in Computer Science, Engineering, Data Science, or a related technical field
  • Experience implementing data governance frameworks including metadata management, lineage tracking, and cataloging tools
  • Experience implementing DevOps practices for data platforms including CI/CD pipelines and Infrastructure as Code
  • Experience in Healthcare or Life Sciences

Responsibilities

  • Lead and grow data engineering teams by mentoring engineers and fostering a strong engineering culture focused on reliability, innovation, and collaboration
  • Define data platform architecture and strategy supporting enterprise analytics, AI, and machine learning workloads
  • Design and maintain scalable batch and streaming data pipelines for ingestion, transformation, orchestration, and data delivery
  • Develop modern data engineering platforms using technologies such as Apache Spark, Databricks, and Snowflake
  • Enable AI and machine learning workflows by building feature stores, ML data pipelines, and curated data layers
  • Implement data quality, governance, and observability frameworks including lineage, validation, and metadata management
  • Ensure operational excellence including SLAs, SLOs, reliability, scalability, performance optimization, and cost efficiency
  • Collaborate with product, analytics, platform, and security teams to translate business needs into scalable data solutions

Benefits

  • In addition to your salary, we offer a comprehensive benefits package, including incentive and recognition programs, an equity stock purchase plan, and 401(k) contributions (all benefits are subject to eligibility requirements).
  • No matter where or when you begin a career with us, you’ll find a far-reaching choice of benefits and incentives.