Data Engineer - Optum Advisory Board - Remote

UnitedHealth Group
Eden Prairie, MN
Remote

About The Position

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

At Advisory Board, part of Optum Insight, we’ve helped health care industry leaders work smarter and faster for over 40 years by providing provocative insights, data and analytics, and practical tools to support execution. Our core products include research and analytics membership, events, expert support, custom research, and sponsorship. Learn more about Advisory Board.

The Quantitative Insights team powers the industry-leading data and analytics capabilities core to Advisory Board’s value proposition. In this position on Advisory Board's data team, you will design, develop, test, and deploy enterprise-ready data pipelines and architectures to support Advisory Board’s data and analytics research technology projects, ranging from small enhancements to major new initiatives. You will develop scalable and maintainable ETL processes, ensure data quality and security, and collaborate with data scientists to optimize analytics infrastructure. The position will play a critical role in all data technology initiatives as we continue modernizing and innovating our data and technology ecosystem in support of our business transformation. Data solutions you build will power analytics tools used by providers across the country to improve efficiency and deliver better care.

You’ll enjoy the flexibility to work remotely from anywhere within the U.S. as you take on some tough challenges. All hires in the Minneapolis or Washington, D.C., areas will be required to work in the office a minimum of four days per week.

Requirements

  • 8+ years of experience in data engineering using Python or PySpark
  • 5+ years of experience with AWS including Glue, Lambda, IAM, Redshift
  • 3+ years developing and optimizing PySpark jobs with proven big data experience
  • 3+ years of best practices in data analysis and modeling
  • Experience optimizing the architecture for big data, query performance, ease of use, and data governance
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Experience working with Cloud technologies (AWS preferred)
  • Proven expertise in SQL and data modeling with relational databases such as SQL Server and Postgres

Nice To Haves

  • Experience developing datasets and dashboards in Amazon Quick Suite, including support for web app embedding
  • Understanding of health care claims data, including Medicare and commercial datasets
  • Eagerness and willingness to learn new technologies
  • Proven analytical, problem-solving, and decision-making skills
  • Demonstrated depth of health care knowledge and expertise
  • Proven written and oral communication skills

Responsibilities

  • Design, build, and maintain scalable, reliable, and efficient ETL/ELT pipelines using AWS Glue, Python, and SQL.
  • Automate manual processes and optimize PySpark jobs for big data.
  • Architect and manage data lakes (AWS S3), data warehouses (Redshift, S3 Tables) and relational databases (PostgreSQL, SQL Server).
  • Use data modeling best practices to ensure data is accurate, accessible, and organized for efficient reporting and analysis.
  • Utilize AWS services like ECS and Lambda for data ingestion and orchestration tasks involving a variety of external systems (Kafka, Snowflake, Databricks, custom APIs, etc.).
  • Implement monitoring and troubleshooting measures to ensure data integrity and security, including IAM policies and CloudWatch logging.
  • Work closely with data scientists and analysts to understand healthcare data and business requirements, and support data-driven decision-making.
  • Build data sets and dashboards, configure RLS and user permissions, and optimize dashboard performance on Amazon Quick Suite.
  • Configure parameters for interop with web app embedding.
  • Leverage GitHub for version control, code review, and automated deployment of pipelines across environments.

Benefits

  • Comprehensive benefits package
  • Incentive and recognition programs
  • Equity stock purchase
  • 401(k) contribution