About The Position

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

You'll enjoy the flexibility to work remotely from anywhere within the U.S. as you take on some tough challenges. For all hires in the Minneapolis or Washington, D.C. area, you will be required to work in the office a minimum of four days per week.

Requirements

  • Undergraduate degree in Computer Science or related field
  • 5+ years of hands-on experience in Data Engineering / Data Warehousing development and operations
  • 5+ years of experience working in Agile delivery models and collaborating with architects, analysts, and upstream/downstream application teams
  • Hands-on experience designing and building enterprise-grade ETL/ELT pipelines on modern data platforms
  • Hands-on experience with Spark/Databricks development, SQL-based transformations and performance tuning, and streaming concepts and implementation patterns
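The streaming requirement above refers to concepts such as fixed-window aggregation, the same idea Spark Structured Streaming expresses with `window(col("ts"), "10 seconds")`. As a rough illustration only (plain Python standing in for Spark; every name here is ours, not the employer's), a tumbling-window count might look like:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping
    windows and count occurrences per key within each window."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Floor the timestamp to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start][key] += 1
    return {w: dict(k) for w, k in counts.items()}

events = [(0, "claim"), (3, "claim"), (7, "eligibility"), (12, "claim")]
result = tumbling_window_counts(events, 10)
# Window 0-10s sees two claims and one eligibility event;
# window 10-20s sees one claim.
print(result)
```

In a real Databricks pipeline the equivalent logic would also need watermarking to bound late-arriving data; this sketch only illustrates the windowing concept itself.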

Nice To Haves

  • Healthcare domain experience (claims, clinical, eligibility, provider, member, HIPAA-aware handling)
  • Data Streaming (e.g., Spark Structured Streaming and streaming design patterns)
  • Python / PySpark
  • Azure (general platform knowledge)
  • MongoDB

Responsibilities

  • Design and develop Azure cloud-native enterprise data solutions with emphasis on data integration, transformation, and governance
  • Build solution designs and technical designs for data pipelines based on business and technical requirements
  • Develop and standardize reusable plug-and-play components that can be orchestrated into: data ingestion patterns (batch/streaming), data flows across multiple zones (raw/curated/consumption as applicable), data quality frameworks and validation rules
  • Implement and maintain pipelines leveraging Databricks + ADLS, ensuring reliability, scalability, and maintainability
  • Identify opportunities to enhance and streamline the existing codebase for: automation, performance improvements, scalability and reliability, and operational efficiency with reduced run-cost
  • Support backlog execution by helping prioritize pipeline development, estimate effort, and create implementation roadmaps
  • Lead a team of data engineers by providing technical direction, code reviews, and design guidance
  • Provide QA, UAT, and implementation support, including deployment readiness, operational handoffs, and production troubleshooting
  • Ensure adherence to engineering standards (documentation, logging/monitoring expectations, and CI/CD practices where applicable)
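The responsibilities above describe reusable, plug-and-play components: validation rules that gate records as they move from the raw zone to the curated zone. As a stack-agnostic sketch (plain Python standing in for Databricks/PySpark; every function and field name below is hypothetical, not from the posting), the pattern might look like:

```python
def not_null(field):
    """Validation rule: reject records missing a required field."""
    return lambda rec: rec.get(field) is not None

def run_quality_checks(records, rules):
    """Split records into (passed, failed) against all rules --
    a tiny stand-in for a pluggable data quality framework."""
    passed, failed = [], []
    for rec in records:
        (passed if all(rule(rec) for rule in rules) else failed).append(rec)
    return passed, failed

def promote_to_curated(raw_records, rules):
    """Raw -> curated zone: keep only records that clear validation.
    A real pipeline would also quarantine and log the failures."""
    good, _bad = run_quality_checks(raw_records, rules)
    return good

raw = [{"member_id": "M1", "claim_amt": 120.0},
       {"member_id": None, "claim_amt": 55.0}]
curated = promote_to_curated(raw, [not_null("member_id"), not_null("claim_amt")])
print(len(curated))  # only the fully populated record survives
```

The value of keeping rules as small composable functions, as the posting's "plug-and-play components" bullet suggests, is that the same rule set can be orchestrated into both batch and streaming ingestion paths.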

Benefits

  • Comprehensive benefits package
  • Incentive and recognition programs
  • Equity stock purchase
  • 401(k) contribution