About The Position

Omada Health is dedicated to leveraging data to drive strategic decision-making and operational efficiency. Our team is passionate about harnessing the power of data to solve complex problems and deliver impactful insights. We are seeking a highly skilled and motivated Data Engineer to join our team. The ideal candidate will design, build, and maintain robust data architectures, data models, and pipelines, and will play a critical part in ensuring the integrity, scalability, and performance of our data processing systems and products.

Requirements

  • 5+ years of experience building, maintaining, and orchestrating scalable data pipelines.
  • 3+ years of experience as a data engineer developing or maintaining data pipeline integrations with tools such as Airflow, or working in a Python-based data pipeline codebase.
  • Experience applying a variety of integration patterns for different use cases.
  • Experience in backend software development, with the breadth across systems, contexts, and ideas needed to contribute to distributed computing and data technologies.
  • Experience implementing data pipelines and improving the performance of ETL processes and related SQL queries.
  • Experience in data modeling for OLTP and OLAP applications.
  • Experience with Cloud platforms such as Amazon AWS.
  • Familiarity with workflow management tools (Airflow preferred).
  • Familiarity with cloud-based data warehouses (Amazon Redshift preferred).
  • Exceptional problem-solving and analytical skills.
  • Experience working with sensitive data (e.g., PHI/PII) and applying security best practices.
  • Familiarity with data governance practices and principles.
  • Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
  • Proficiency in analytical SQL (e.g., analytic queries, distributed database queries) and experience working with massively parallel processing (MPP) databases (e.g., Redshift, BigQuery, Snowflake).
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Knowledge of data modeling techniques (3NF) and tools (e.g., ER/Studio, ERwin).
  • Software Engineering Mindset: Apply best practices to write elegant, maintainable code and understand automated testing concepts.
  • Familiarity with business intelligence tools and environments.
  • Familiarity with big data technologies (e.g., Lambda, Hadoop, Spark).
  • Excellent communication and collaboration skills, both written and verbal, with the ability to convey complex technical concepts to non-technical stakeholders.
  • Self-Directed: Lead projects and tasks effectively with cross-functional stakeholders and minimal guidance. You care about writing quality software and recognize that there are often many right answers.
  • Bachelor’s degree in Computer Science or a similar discipline preferred.

Nice To Haves

  • Experience with big data technologies such as Hadoop, Spark, or Kafka.
  • Experience building data infrastructure, internal frameworks, development productivity tools, and automation is a big plus.
  • Understanding of automated testing concepts and the ability to apply them consistently.

Responsibilities

  • Design, develop, and implement scalable, secure, and efficient data solutions that meet the needs of the organization.
  • Create and maintain logical and physical data models to support business intelligence, analytics, and reporting requirements.
  • Design, build, and optimize ETL (Extract, Transform, Load) processes and data pipelines to ensure smooth and efficient data flow from various sources.
  • Integrate diverse data sources, including APIs, databases, and third-party data, into a unified data platform.
  • Monitor and optimize the performance of data systems and pipelines to ensure low latency and high throughput.
  • Implement data quality checks, validation processes, and governance frameworks to ensure the accuracy and reliability of data.
  • Partner closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
  • Maintain comprehensive documentation of data architectures, models, and pipelines for ongoing maintenance and knowledge sharing.
  • Train teammates in data engineering best practices and collaborate with them effectively.
  • Recommend policy changes and establish department-wide procedures, using extensive experience and knowledge to resolve complex problems.
  • Monitor and manage the production environment to deliver data within defined SLAs.
  • Evaluate, benchmark, and improve the scalability, robustness, and performance of our data platform and applications.
  • Make significant contributions to the architecture and design of our data processing platform.
  • Implement scalable, fault-tolerant, and accurate ETL frameworks.
  • Gather and process raw data at scale from diverse sources.
  • Collaborate with product management, data scientists, analysts, and other engineers on technical vision, design, and planning.
  • Implement and maintain a high level of data quality monitoring in our analytics & ML ecosystem.
  • Lead and document technical projects, collaborating across teams.

Benefits

  • Competitive salary with generous annual cash bonus
  • Equity grants
  • Remote-first, work-from-home culture
  • Flexible Time Off to help you rest, recharge, and connect with loved ones
  • Generous parental leave
  • Health, dental, and vision insurance (and above market employer contributions)
  • 401k retirement savings plan
  • Lifestyle Spending Account (LSA)
  • Mental Health Support Solutions