About The Position

We love technology, and we enjoy what we do. We are always looking for innovation. We have social awareness and strive to improve it daily. We make things happen. You can trust us. Our Enrouters are always up for a challenge. We ask questions, and we love to learn. We pride ourselves on great benefits and compensation, a fantastic work environment, flexible schedules, and policies that support a healthy balance between work and life outside of it.

At Enroute, we are looking for a Senior Data Engineer to join a growing Data team responsible for designing, building, and evolving scalable data platforms and cloud-native pipelines that support business intelligence, analytics, and operational workloads. The ideal candidate is highly hands-on with Python, Spark/PySpark, Snowflake, and cloud-based data architectures, with strong experience building reliable, production-grade ETL/ELT pipelines and modern data warehousing solutions. The role suits someone who enjoys solving complex data challenges, optimizing performance at scale, and collaborating closely with data scientists, analysts, and engineering teams.

Requirements

  • 5+ years of professional experience in Data Engineering or related fields
  • Strong experience designing and maintaining scalable data pipelines
  • Deep understanding of ETL/ELT best practices
  • Strong experience with large-scale data processing architectures
  • Proven experience with batch data processing
  • Strong experience with data warehousing concepts
  • Advanced Python
  • Strong hands-on experience with Apache Spark / PySpark
  • Advanced SQL (complex queries, optimization, transformations)
  • Strong experience processing large structured and unstructured datasets
  • Hands-on experience with AWS or Azure
  • Experience building cloud-native data solutions
  • Experience with Docker
  • Experience with CI/CD pipelines
  • Strong knowledge of Git / version control
  • Strong hands-on experience with Apache Airflow
      • Experience designing workflow orchestration pipelines
      • Scheduling, monitoring, and failure recovery strategies
  • Strong expertise in Snowflake (MUST HAVE; see the sketch after this list)
      • Snowflake data warehouse design
      • Snowflake development
      • Query and warehouse optimization
      • Performance tuning and cost efficiency
      • Cloud data warehouse architecture best practices
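
To give candidates a concrete feel for the Snowflake tuning work listed above, here is a minimal sketch assuming the snowflake-connector-python package; the credentials, the ANALYTICS_WH warehouse, and the orders table are illustrative placeholders, not details from this posting.

```python
# Minimal sketch of Snowflake performance/cost tuning from Python.
# All identifiers below (account, warehouse, table) are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",      # placeholder credentials
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",    # hypothetical warehouse
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Cost efficiency: suspend quickly when idle so credits aren't burned.
    cur.execute("ALTER WAREHOUSE ANALYTICS_WH SET AUTO_SUSPEND = 60")
    # Performance: cluster a large table on its most common filter column
    # so Snowflake can prune micro-partitions at query time.
    cur.execute("ALTER TABLE orders CLUSTER BY (order_date)")
    # Inspect the plan for a typical query to confirm pruning.
    cur.execute(
        "EXPLAIN SELECT SUM(amount) FROM orders WHERE order_date = '2024-01-01'"
    )
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```

Clustering keys and aggressive auto-suspend are two common levers for the performance-and-cost tuning this role calls for.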

Responsibilities

  • Design, build, and maintain scalable, reliable, and high-performance data pipelines
  • Develop end-to-end ETL/ELT workflows
  • Process large-scale datasets using Spark/PySpark (see the PySpark sketch after this list)
  • Build and orchestrate cloud-native pipelines in AWS and/or Azure (see the Airflow sketch after this list)
  • Design and optimize Snowflake data warehouse solutions
  • Ensure performance, scalability, governance, and cost optimization
  • Write and optimize advanced SQL queries
  • Collaborate with Data Scientists, Analysts, and Software Engineers
  • Translate business requirements into production-ready data solutions
  • Ensure data consistency, availability, and quality
  • Implement CI/CD, Git workflows, and Dockerized deployments
  • Improve reliability and observability of data platforms
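
In practice, the Spark work above boils down to batch transformations like the following minimal PySpark sketch; the S3 paths, column names, and app name are illustrative assumptions, not details from this posting.

```python
# Minimal PySpark batch-transform sketch: read a raw partition,
# filter/derive/aggregate, write curated output. Paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_orders_batch").getOrCreate()

# Read one day's partition of raw order events.
orders = spark.read.parquet("s3://example-bucket/raw/orders/ds=2024-01-01/")

# Typical warehouse-style transform: filter, derive, aggregate.
daily_revenue = (
    orders.filter(F.col("status") == "completed")
    .withColumn("net_amount", F.col("amount") - F.col("discount"))
    .groupBy("customer_id")
    .agg(
        F.sum("net_amount").alias("revenue"),
        F.count("*").alias("order_count"),
    )
)

# Write partitioned output for downstream loading into the warehouse.
daily_revenue.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_revenue/ds=2024-01-01/"
)
```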
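
Orchestrating such batch jobs is typically expressed as an Airflow DAG. Below is a minimal sketch (assuming Airflow 2.4+) of the scheduling, retry, and failure-alerting strategies the requirements mention; the DAG id and task bodies are hypothetical.

```python
# Minimal Airflow DAG sketch: daily schedule, retries for failure
# recovery, and an email hook for basic monitoring. Names are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator  # Airflow 2.x import path


def extract_orders(ds, **_):
    # Placeholder task body: pull one day of source data.
    print(f"extracting orders for {ds}")


def load_to_warehouse(ds, **_):
    # Placeholder task body: load the transformed batch downstream.
    print(f"loading batch for {ds}")


with DAG(
    dag_id="nightly_sales_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # daily batch schedule (Airflow 2.4+)
    catchup=False,
    default_args={
        "retries": 3,                          # retry transient failures
        "retry_delay": timedelta(minutes=10),
        "email_on_failure": True,              # simple monitoring hook
    },
):
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # run extract, then load
```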

Benefits

  • Monetary compensation
  • Year-end Bonus
  • IMSS, AFORE, INFONAVIT
  • Major Medical Expenses Insurance
  • Minor Medical Expenses Insurance
  • Life Insurance
  • Funeral Expenses Insurance
  • Preferential rates for car insurance
  • TDU Membership
  • Holidays and Vacations
  • Sick days
  • Bereavement days
  • Civil Marriage days
  • Maternity & Paternity leave
  • English and Spanish classes
  • Performance Management Framework
  • Certifications
  • TALISIS Agreement: Discounts at ADVENIO, Harmon Hall, U-ERRE, UNID
  • Taquitos Rewards
  • Amazon Gift Card on your Birthday
  • Work-from-home Bonus
  • Laptop Policy
  • Equal employment opportunity