Senior Data Engineer

CareScout, Richmond, VA
Remote

About The Position

We are seeking an experienced Senior Data Engineer to join our growing data and machine learning organization and help build the pipelines, models, and infrastructure that power our analytics, machine learning, and operational data needs. In this role, you will work closely with analysts, data scientists, ML/AI engineers, and product teams to design and deliver reliable, scalable data workflows on our Databricks Lakehouse platform.

A successful candidate has strong engineering fundamentals, deep knowledge of modern data architectures, and experience transforming complex datasets into high-quality, well-modeled information that drives business impact. You are comfortable owning pipelines end to end, improving data quality and reliability, and collaborating across teams. You thrive in environments where you can raise the bar on engineering excellence, build repeatable processes, and mentor others.

Requirements

  • 7+ years of experience in data engineering or related roles.
  • Strong expertise with Python, SQL, Spark, and distributed data processing.
  • Hands-on experience with Databricks, Delta Lake, and Lakehouse architectures.
  • Deep understanding of ETL/ELT design, data modeling, and data quality practices.
  • Experience building scalable, production-grade data pipelines.
  • Experience collaborating with analytics, ML, and product teams.
  • Strong communication skills with the ability to clarify data requirements and explain technical decisions.

Responsibilities

  • Design, build, and maintain scalable ETL/ELT pipelines using Spark, Python, SQL, and Databricks.
  • Implement reliable ingestion frameworks for batch and streaming data sources.
  • Ensure pipelines meet SLAs, data quality standards, and production-grade reliability.
  • Develop robust data models across raw, curated, and semantic layers using Delta Lake.
  • Create dimensional models, star schemas, and domain-layer datasets for analytics and ML.
  • Establish and maintain standards for schema design, metadata, and lineage.
  • Implement data validation, anomaly detection, SLAs, and documentation across pipelines.
  • Build automated tests, monitoring, and alerting for freshness, completeness, and accuracy.
  • Partner with platform teams to enhance observability and operational tooling.
  • Work closely with analysts to understand business KPIs and deliver high-quality curated datasets.
  • Partner with ML engineers and data scientists to build reusable feature pipelines.
  • Collaborate with data platform engineers to optimize compute, governance, and orchestration.
  • Optimize Spark jobs, SQL queries, cluster configurations, and storage patterns for performance and cost.
  • Improve reliability, reduce technical debt, and simplify complex pipelines.
  • Apply best practices for RBAC, data privacy, and PII handling using Unity Catalog.
  • Ensure adherence to compliance frameworks and documentation standards.
  • Stay current on modern data engineering patterns, Lakehouse architecture, orchestration, and best practices.
  • Explore new technologies that improve reliability, scalability, and developer productivity.

Benefits

  • Competitive Compensation & Total Rewards Incentives
  • Comprehensive Healthcare Coverage
  • Multiple 401(k) Savings Plan Options
  • Auto Enrollment in Employer-Directed Retirement Account Feature (100% employer-funded!)
  • Generous Paid Time Off – Including 12 Paid Holidays, Volunteer Time Off and Paid Family Leave
  • Disability, Life, and Long-Term Care Insurance
  • Tuition Reimbursement, Student Loan Repayment and Training & Certification Support
  • Wellness support including gym membership reimbursement and Employee Assistance Program resources (work/life support, financial & legal management)
  • Caregiver and Mental Health Support Services