About The Position

The Senior Professional, Data Engineering role designs, builds, and maintains complex data systems that enable data analysis and reporting. Working with minimal supervision, this role ensures that large data sets are processed efficiently and made accessible for decision-making.

Requirements

  • Minimum of 4 years of relevant work experience; candidates typically have 5 or more years of relevant experience.
  • Advanced programming skills in Python, Java, Scala, or similar languages.
  • Expert-level proficiency in SQL for data manipulation and optimization.
  • Demonstrated experience in DevOps practices, including code management, CI/CD, and deployment strategies.
  • Strong background in data governance principles, including data quality, privacy, and security considerations for data product development and consumption.

Nice To Haves

  • Experience developing data systems on major cloud platforms (AWS, GCP, Azure).
  • Hands-on experience building modern data architectures, including data lakes, data lakehouses, and data hubs, along with related capabilities such as ingestion, governance, modeling, and observability.
  • Demonstrated proficiency with data collection and ingestion tools (Kafka, AWS Glue) and storage formats (Iceberg, Parquet).
  • Experience developing data pipelines with streaming architectures and tools (Kafka, Flink).
  • Expertise in data transformation and modeling using SQL-based frameworks and orchestration tools (dbt, AWS Glue, Airflow), with deep experience in modeling concepts such as slowly changing dimensions (SCD) and schema evolution.
  • Strong background using Spark for data transformation, including streaming workloads, performance tuning, and debugging with the Spark UI.

Responsibilities

  • Prepares data infrastructure to support the efficient storage and retrieval of data.
  • Evaluates and determines appropriate data formats to improve data usability and accessibility across the organization.
  • Develops complex data products and solutions using advanced engineering and cloud-based technologies, ensuring they are scalable, sustainable, and robust.
  • Develops and maintains streaming and batch data pipelines that ingest data from various sources, transform it into usable information, and move it to data stores such as data lakes and data warehouses.
  • Reviews existing data systems and architectures to identify areas for improvement and optimization.
  • Collaborates with cross-functional data and advanced analytics teams to gather requirements and ensure that data solutions meet the functional and non-functional needs of various stakeholders.
  • Builds complex prototypes to test new concepts and implements data engineering frameworks and architectures that improve data processing capabilities and support advanced analytics initiatives.
  • Develops automated deployment pipelines that improve the efficiency of code deployments while applying fit-for-purpose governance.
  • Performs complex data modeling in accordance with the target datastore technology to ensure sustainable performance and accessibility.