About the Opportunity

We are seeking a Data Engineer to help design, build, and scale modern data solutions that power analytics, operational reporting, and business decision-making across the company. You will work closely with our data, analytics, and business teams to develop high-quality data pipelines, models, and integrations that deliver clean, reliable, and actionable data.

You will join a small, cross-geography engineering team operating with a product mindset: delivering iterative value, partnering closely with business and analytics stakeholders, and owning both build and run responsibilities for a mission-critical platform.

What to Expect

- Design, build, and maintain scalable data pipelines and transformations across multiple systems and sources.
- Develop high-quality data models that support analytics, reporting, and operational use cases.
- Partner with analytics, product, and business stakeholders to understand data needs and translate them into technical solutions.
- Implement strong data quality, validation, and monitoring processes to ensure reliability and trust in the data.
- Optimize data storage, processing, and performance within cloud data warehousing environments.
- Contribute to the development and evolution of our modern data platform, including tooling, standards, and best practices.
- Support data platform operations: troubleshooting issues, improving reliability, and ensuring SLAs are met.
- Collaborate with cross-functional partners on governance, documentation, definitions, and data stewardship.
- Implement CI/CD practices and Infrastructure-as-Code (IaC) to automate deployment, testing, and environment management.
- Participate in code reviews, design discussions, and operational on-call rotations as needed.

What Do You Need to Be Successful?

Required

- 3–6+ years of experience in Data Engineering or a similar technical role.
- Strong SQL skills and experience working with cloud data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Experience building and maintaining ETL/ELT pipelines using tools such as dbt, Airflow, or similar frameworks.
- Proficiency in Python or another scripting language.
- Strong understanding of data modeling, data structures, and modern data architecture patterns.
- Experience with CI/CD workflows, version control (Git), and Infrastructure-as-Code tools (Terraform a plus).
- Familiarity with data governance, quality, lineage, and cataloging concepts and tools.
- Demonstrated ability to collaborate with analysts, engineers, and business partners in a cross-functional environment.
- Excellent communication skills and a product-focused mindset.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed
Number of Employees
501-1,000 employees