Data Engineer - Databricks

McKesson — Atlanta, GA
Hybrid

About The Position

The Data Engineer - Databricks at CoverMyMeds will support and expand the data platforms that power our commercial data products and analytics offerings, with a focus on building and maintaining scalable data pipelines. This role will contribute to the design and delivery of reliable, reusable data assets that enable both internal teams and external partners to derive value from our data. You will work across proprietary and third-party data sources to build well-structured, high-quality datasets and pipelines that support commercialization efforts. This role partners closely with Data Systems Analysts, Product, and Analytics teams to translate evolving business concepts into scalable, production-ready data solutions.

Our preferred candidate would reside in the Columbus, OH area to support a hybrid work schedule, but we may consider a well-qualified fully remote candidate.

This position requires current, unrestricted authorization to work in the United States on a permanent basis. We are unable to support or consider candidates on temporary work authorization, including but not limited to F-1 OPT, STEM OPT, CPT, or any status that requires employer sponsorship now or in the future.

Requirements

  • Bachelor’s degree in Computer Science, Information Systems or related field, or equivalent experience, and typically requires 4+ years of experience in data engineering, analytics engineering, or modern data platform environments
  • Hands-on experience building and maintaining data pipelines in cloud environments (Databricks, Snowflake, or similar)
  • Strong SQL skills and experience transforming data for analytical, reporting, or product-oriented use cases
  • Experience integrating data from multiple internal and third-party systems
  • Experience working with structured and semi-structured data in batch and/or streaming environments
  • Working knowledge of data modeling principles and data quality practices
  • Experience supporting analytics, reporting, or externally facing data use cases

Nice To Haves

  • Experience with Databricks (Delta Lake, Spark, workflows, or pipeline orchestration)
  • Experience with or interest in data commercialization, data products, or externally facing analytics solutions
  • Experience building production-ready data pipelines in iterative environments
  • Comfort working with evolving requirements and ambiguity
  • Ability to translate loosely defined business ideas into structured data outputs
  • Strong collaboration skills across product, analytics, and technical teams
  • Ownership mindset with a bias toward execution

Responsibilities

  • Design and develop data pipelines that integrate proprietary and third-party data sources to support commercial data products and proof-of-concept initiatives
  • Build, optimize, and maintain data transformation pipelines with a focus on scalability, reliability, and performance
  • Work with structured and unstructured data to prepare enhanced datasets for internal stakeholders and external use cases
  • Write SQL and/or use cloud-based tools such as Databricks (preferred) or Snowflake to cleanse, standardize, and transform data aligned to business use cases
  • Collaborate with Product, Analytics, and external-facing teams to translate commercialization objectives into scalable data solutions
  • Contribute to data models and reusable pipeline patterns that support future data product expansion
  • Partner with application and platform teams to understand upstream data flows and implement efficient pipeline solutions
  • Monitor and support data quality, pipeline performance, and reliability for production data assets

Benefits

  • Competitive compensation package
  • Annual bonus
  • Long-term incentive opportunities