Senior Data Engineer

Allstate · McCullom Lake, IL

About The Position

As a Senior Data Engineer, you will design, build, and support high-quality data solutions and pipelines using modern engineering practices and tools. In this role, you will partner closely with product managers, software engineers, and data analysts to deliver scalable, reliable data capabilities that power digital products and business insights. You will be accountable for the end-to-end lifecycle of your data products, from design and implementation through production support, data quality, and performance against key metrics.

Requirements

  • Strong proficiency with data pipeline development in Python, Java, or Scala.
  • Experience with modern data frameworks (Spark, Kafka, Flink, dbt, or equivalent).
  • Solid understanding of SQL and NoSQL databases and data modeling principles.
  • Ability to optimize SQL, pipelines, and storage for performance and cost.
  • Experience building batch and/or streaming data solutions.
  • Experience using Microsoft Fabric Notebooks to develop, debug, and operationalize data engineering workflows.
  • Experience with containerization and orchestration tools (Docker, Kubernetes).
  • Experience with observability and monitoring tools (Datadog preferred).
  • Ability to work in collaborative, iterative, product-centric team environments.
  • Strong communication and problem-solving skills.
  • Four-year bachelor's degree.
  • Three or more years of relevant experience.

Nice To Haves

  • Experience with cloud data services (Azure, AWS).
  • Hands-on experience with Microsoft Fabric components, including Lakehouse, Warehouse, Data Pipelines, Dataflows Gen2, and Semantic Models.
  • Familiarity with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, etc.).
  • Performance tuning for data pipelines, databases, and queries.
  • Familiarity with generative and agentic AI tools to improve engineering productivity.

Responsibilities

  • Design, develop, and enhance scalable data pipelines and data processing systems.
  • Build reusable ingestion, transformation, and storage patterns to support product and analytical needs.
  • Integrate data from diverse sources including APIs, databases, event streams, and third-party systems.
  • Implement data models and schemas that create unified, consistent views of business operations.
  • Apply modern engineering practices, including version control, testing, CI/CD, and automated deployments.
  • Ensure data quality, integrity, and reliability through validation, monitoring, and observability tools.
  • Optimize data workflows for performance, cost efficiency, and operational resilience.
  • Document data flows, lineage, and technical components in support of transparency and maintainability.
  • Collaborate closely with product managers, engineers, and analysts to understand data requirements.
  • Participate in iteration planning, ensuring shared understanding of backlog items and technical needs.
  • Engage in daily standups, retrospectives, and product ceremonies as an active member of the team.
  • Contribute to data governance practices such as metadata management and data lineage.
  • Ensure compliance with data privacy, security, and regulatory requirements.
  • Evaluate new technologies, frameworks, and patterns to improve data infrastructure and engineering capabilities.
  • Share knowledge and mentor less experienced engineers, contributing to team growth and best practices.