Data Engineer

MIGx AG
Hybrid

About The Position

We’re looking for a Data Engineer to join our growing Data and AI Engineering team of professionals who thrive at the intersection of data, technology, and healthcare. Whether you're early in your career or already have hands-on experience, we welcome curious minds, team players, and problem-solvers eager to build high-quality data solutions for the life sciences industry. At MIGx, you’ll contribute to modern data mesh and data fabric architectures, develop cloud-native pipelines, and help implement DataOps practices that ensure our systems are robust, observable, and production-ready.

Requirements

  • Hands-on experience delivering production-grade solutions in Databricks, ideally on Azure.
  • Strong practical knowledge of Unity Catalog (governance, permissions, catalog/schema design, lineage).
  • Solid Python + PySpark + SQL skills for transformation, automation, and troubleshooting.
  • Working knowledge of data quality, validation frameworks, and test-driven data development (a brief illustrative sketch follows this list).
  • Experience with Managed Tables and Lakehouse best practices.
  • Experience building Databricks pipelines using Jobs/Workflows/DLT.
  • Proven experience with Databricks Asset Bundles (DABs) for packaging and deployments.
  • Understanding of DataOps concepts, including reproducibility, automation, and collaboration.
  • Team-first mindset and experience in agile environments (Scrum or Kanban).
  • Professional working proficiency in English (our internal and client-facing working language).
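
The items above on Python/PySpark and data quality give a feel for the day-to-day work. Purely for illustration, here is a minimal sketch of that kind of transformation with a simple validation gate; the table names, columns, and check are assumptions made for the example, not a description of our actual platform.

```python
# Illustrative sketch only: a small PySpark transformation with a basic
# data-quality gate. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-quality-check").getOrCreate()

# Read a hypothetical bronze-layer table.
raw = spark.read.table("bronze.raw_trials")

# Standardise key columns and drop obvious duplicates.
cleaned = (
    raw.withColumn("subject_id", F.trim(F.col("subject_id")))
       .withColumn("visit_date", F.to_date("visit_date"))
       .dropDuplicates(["subject_id", "visit_date"])
)

# Simple validation: fail fast if required keys are missing.
null_keys = cleaned.filter(F.col("subject_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows are missing subject_id; aborting load")

# Persist to a hypothetical silver-layer managed table.
cleaned.write.mode("overwrite").saveAsTable("silver.trials_cleaned")
```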

Nice To Haves

  • Infrastructure automation using Terraform, Bash, or PowerShell.
  • Experience in the clinical data domain (clinical trial data, clinical datasets, standards/terminology).
  • Familiarity with data testing tools.
  • Understanding of GxP or other healthcare data regulations.
  • Experience with non-relational data systems (e.g., MongoDB, CosmosDB).
  • Spanish and/or Catalan language skills.

Responsibilities

  • Build and manage ETL/ELT pipelines using tools like Databricks, dbt, PySpark, and SQL.
  • Contribute to scalable data platforms across cloud environments (Azure, AWS, GCP).
  • Implement and maintain CI/CD workflows using tools such as GitHub Actions and Azure DevOps.
  • Apply DataOps principles: pipeline versioning, testing, lineage, deployment automation, and monitoring.
  • Integrate automated data quality checks, profiling, and validation into pipelines.
  • Ensure strong data observability via logging, metrics, and alerting tools.
  • Collaborate on infrastructure as code for data environments using Terraform or similar tools.
  • Connect and orchestrate ingestion from APIs, relational databases, and file systems (a minimal ingestion sketch follows this list).
  • Work in agile teams, contributing to standups, retrospectives, and continuous improvement.
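
To make the ingestion responsibility above more concrete, here is a minimal, purely illustrative sketch of landing data from a relational source into a bronze table; the JDBC URL, credential handling, and table names are placeholders, not our actual configuration.

```python
# Illustrative sketch only: ingest a relational table over JDBC and land it
# as a managed table. URL, credentials, and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-jdbc-ingest").getOrCreate()

# Placeholder connection settings; in practice these would come from a
# secret scope or pipeline configuration, never hard-coded values.
jdbc_url = "jdbc:postgresql://example-host:5432/clinical"
connection_props = {
    "user": "reader",
    "password": "<retrieved-from-secret-scope>",
    "driver": "org.postgresql.Driver",
}

# Read the hypothetical source table.
source_df = spark.read.jdbc(
    url=jdbc_url,
    table="public.lab_results",
    properties=connection_props,
)

# Land the data as a managed bronze-layer table for downstream processing.
source_df.write.mode("append").saveAsTable("bronze.lab_results")
```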

Benefits

  • Hybrid work model and a flexible schedule that suits both night owls and early birds.
  • 25 holiday days per year.
  • Career development opportunities and the chance to help shape the company's future.
  • An employee-centric culture shaped directly by employee feedback: your voice is heard and your perspective is encouraged.
  • A range of training programs to support your personal and professional development.
  • Work in a fast-growing, international company.
  • A friendly atmosphere and a supportive management team.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Entry Level
  • Education Level: None specified
  • Number of Employees: 11-50
