About The Position

We are building a large-scale data platform that transforms raw system logs into high-quality, structured datasets used for experimentation and analytics. The platform processes terabytes to petabytes of data daily and serves as a foundational asset for multiple teams. This Senior Data Engineer - AI Infrastructure role focuses on designing and implementing data pipelines, ensuring correctness, and building scalable data models. You will work closely with data scientists and platform engineers to ensure that data is accurate, reliable, and usable for downstream decision-making. We are looking for engineers who care deeply about data correctness, understand how systems behave at scale, and can translate complex data into well-structured, reliable datasets.

Requirements

  • One of the following:
    • Master's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 3+ years of experience in business analytics, data science, software development, data modeling, or data engineering; OR
    • Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 4+ years of experience in business analytics, data science, software development, data modeling, or data engineering; OR
    • Equivalent experience.
  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.

Nice To Haves

  • Experience with Azure technologies such as ADLS Gen2 (Blob Storage), Synapse Spark, and Azure Data Explorer (ADX)
  • Experience working with structured and semi-structured data (e.g., JSON logs)
  • Familiarity with experimentation and analytics workflows
  • Experience with orchestration tools (e.g., Airflow)
  • Exposure to privacy, compliance, and secure data handling practices
  • 5+ years of experience in data engineering or software engineering with a strong focus on data systems
  • Strong experience with PySpark or similar distributed data processing frameworks
  • Experience building and operating large-scale data pipelines
  • Strong understanding of data modeling and schema design
  • Experience ensuring data quality and correctness in production systems
  • Proficiency in Python
  • Experience working with cloud-based data platforms (Azure, AWS, or GCP)
  • Ability to reason about data at scale, including performance and failure modes

Responsibilities

  • Design and implement large-scale data pipelines using PySpark and distributed processing frameworks
  • Build and maintain data models that accurately represent underlying system behavior and business logic
  • Ensure high standards of data correctness, completeness, and consistency across datasets
  • Develop validation, monitoring, and alerting mechanisms to detect data quality issues
  • Partner with data scientists to support experimentation and analytics use cases
  • Collaborate with platform engineers to ensure efficient data ingestion, processing, and storage
  • Optimize pipelines for performance, scalability, and cost efficiency
  • Define and enforce best practices for schema design, data transformations, and pipeline reliability
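To give a concrete flavor of the data-quality work described above, here is a minimal, hypothetical sketch of a row-level validation step for semi-structured JSON logs, written in plain Python (the field names are assumed for illustration; a production pipeline would typically express this kind of check in PySpark):

```python
# Hypothetical sketch: row-level validity check for semi-structured JSON logs.
# Field names below are illustrative assumptions, not a real schema.
import json

REQUIRED_FIELDS = {"timestamp", "event_type", "user_id"}  # assumed schema

def validate_record(raw: str) -> tuple[bool, str]:
    """Return (is_valid, reason) for one raw JSON log line."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return False, "malformed JSON"
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    return True, "ok"

logs = [
    '{"timestamp": "2024-01-01T00:00:00Z", "event_type": "click", "user_id": "u1"}',
    '{"timestamp": "2024-01-01T00:00:01Z", "event_type": "view"}',
    'not json at all',
]
results = [validate_record(line) for line in logs]
```

In a distributed pipeline the same logic would run per partition, with the per-reason counts feeding the monitoring and alerting mechanisms mentioned above.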