Care Access

We are seeking an experienced and detail-oriented professional to join our team as a Sr. Data Engineer. In this pivotal role, you will design, develop, and maintain robust data pipelines that ensure the reliable ingestion, transformation, and delivery of complex data (demographic, medical, financial, marketing, etc.) across systems. The ideal candidate will bring deep expertise in Databricks, SQL, and modern data engineering practices, along with strong collaboration skills, to help drive excellence across our data infrastructure.

Responsibilities:
  • Design and implement scalable, reliable, and efficient data pipelines to support clinical, operational, and business needs.
  • Develop and maintain architecture standards, reusable frameworks, and best practices across data engineering workflows.
  • Build automated systems for data ingestion, transformation, and orchestration leveraging cloud-native and open-source tools.
  • Optimize data storage and processing in data lakes and cloud data warehouses (Azure, Databricks).
  • Develop and monitor batch and streaming data processes to ensure data accuracy, consistency, and timeliness.
  • Maintain documentation and lineage tracking across datasets and pipelines to support transparency and governance.
  • Work cross-functionally with analysts, data scientists, software engineers, and business stakeholders to understand data requirements and deliver fit-for-purpose data solutions.
  • Review and refine work completed by other team members, ensuring quality and performance standards are met.
  • Provide technical mentorship to junior team members and collaborate with contractors and third-party vendors to extend engineering capacity.
  • Use Databricks, dbt, Azure Data Factory, and SQL to architect and deploy robust data engineering solutions.
  • Integrate APIs, structured/unstructured data sources, and third-party systems into centralized data platforms.
  • Evaluate and implement new technologies to enhance the scalability, observability, and automation of data operations.
  • Proactively suggest improvements to infrastructure, processes, and automation that increase system efficiency, reduce costs, and enhance performance.

Qualifications:
  • Strong expertise in Databricks, SQL, dbt, Python, and cloud data ecosystems such as Azure.
  • Experience working with structured and semi-structured data from diverse domains.
  • Familiarity with CI/CD pipelines, orchestration tools (e.g., Airflow, Azure Data Factory), and modern software engineering practices.
  • Strong analytical and problem-solving skills, with the ability to address complex data challenges and drive toward scalable solutions.
  • Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • 5+ years of experience in data engineering with a proven track record of building cloud-based, production-grade data pipelines.