Sidley Austin LLP · Posted 4 months ago
$148,000 - $164,000/Yr
Senior
Chicago, IL
Professional, Scientific, and Technical Services

The Senior Data Engineer will design, build, and maintain the scalable data pipelines, models, and infrastructure that power analytics, business intelligence, and machine‑learning products across the company. Partnering closely with business, product, and analytics teams, you will translate complex requirements into elegant, reliable data solutions and help drive the delivery of innovative data products. This role reports to the Senior Manager, Data Engineering.

Responsibilities:
  • Design, develop, and maintain robust, scalable data pipelines and ETL processes, ensuring efficient ingestion, transformation, and storage of data.
  • Build and optimize data models and schemas for analytics, reporting, and operational data stores.
  • Implement and maintain data quality frameworks, including data validation, monitoring, and alerting mechanisms.
  • Collaborate closely with data architects, analysts, data scientists, and product teams to align data engineering activities with business goals.
  • Leverage cloud data platforms (AWS, Azure, GCP) to build and optimize data storage solutions, including data warehouses, data lakehouses, and real-time data processing.
  • Develop automation processes and frameworks for CI/CD supported by version control, linting, automated testing, security scanning, and monitoring.
  • Contribute to the maintenance and improvement of data governance practices, helping to ensure data integrity, accessibility, and compliance with regulations such as GDPR.
  • Provide technical mentorship and guidance to junior team members, promoting best practices in software engineering, data engineering, and agile development.
  • Troubleshoot and resolve complex data infrastructure and pipeline issues, ensuring minimal downtime and optimal performance.

Required Qualifications:
  • Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
  • A minimum of 5 years of hands-on experience in data engineering, building scalable data pipelines and ETL/ELT processes.
  • Extensive experience with cloud data platforms such as Azure, AWS, or Google Cloud.
  • Strong proficiency with Python, SQL, and Apache Spark for data processing.
  • Hands-on experience with modern data-platform components (object storage, lakehouse engines, orchestration tools, columnar warehouses, streaming services).
  • Proven experience with data modeling, schema design, and performance tuning of large-scale data systems.
  • Deep understanding of data engineering best practices: code repositories, CI/CD pipelines, test automation, monitoring, and alerting systems.
  • Skilled at crafting compelling data narratives through tables, reports, dashboards, and other visualization tools.
  • Strong problem-solving and analytical skills with excellent attention to detail.
  • Excellent communication skills and experience collaborating with technical and business stakeholders.

Preferred Qualifications:
  • Master's degree in Computer Science or Engineering.
  • Experience building data pipelines in an Azure Databricks environment.
  • Experience migrating to—or building—data platforms from the ground up.
  • Experience with Infrastructure as Code (IaC) and Governance as Code.
  • Familiarity with machine-learning workloads and partnering on feature engineering.
  • Experience working in an Agile delivery model.

Benefits:
  • Bonus eligibility.
  • Comprehensive benefits program.