Sr Staff Data Engineer

The Hartford
Columbus, OH (Hybrid)

About The Position

We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future. A new role can expand your knowledge and your network and help you learn more about our business. If you think this opportunity is a fit for your career, you should apply. If you are not sure, have a conversation with your manager.

The Enterprise Data Services Department’s Personal Insurance team is seeking a hands-on Sr Staff Data Engineer to build and scale its data assets on the Snowflake and AWS platforms. The role focuses on technical leadership: integrating data from new data sources and curating and transforming it into high-quality data products for actionable insights, using a mix of solutions spanning AI and cloud technologies. Ideal candidates bring deep expertise in data engineering frameworks and tools, proficiency in programming languages, and experience with DevOps/DataOps pipelines, cloud platforms, and agile methodologies. Strong problem-solving, communication, and collaboration skills are essential, along with a proactive mindset and the ability to thrive in complex, fast-paced environments.

This role can have a Hybrid or Remote work schedule. Candidates who live near one of our locations will be expected to work in an office 3 days a week (Tuesday through Thursday). Candidates who do not live near an office should maintain their current work arrangement, with the expectation of coming into the office as business needs arise.

Requirements

  • Candidates must be authorized to work in the US without company sponsorship. The company will not support the STEM OPT I-983 Training Plan endorsement for this position.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline.
  • 8+ years of progressive experience in data engineering, with significant hands‑on expertise developing and deploying large-scale data and analytics applications on cloud platforms such as AWS and Snowflake.
  • 5+ years of hands-on experience with Python and PySpark / Spark for data ingestion, transformation, and pipeline development.
  • Deep hands‑on experience with Snowflake, including SQL development, ELT design, performance optimization, and semi‑structured data handling.
  • Solid experience working with disparate data sources: structured and semi-structured data (flat files, XML, JSON, Parquet) as well as unstructured data.
  • Solid experience with version control, CI/CD pipelines, and DevOps tools such as GitHub, Jenkins, Nexus, and uDeploy.
  • A strong background in data profiling, data modeling, and data governance concepts is key to this role.

Nice To Haves

  • Certifications in AWS Data & Analytics Services, AI and Snowflake.
  • Experience using AI‑assisted development tools to improve productivity in SQL development, data pipeline creation, testing, and documentation.
  • Experience in the insurance industry and policy administration data environments.
  • Experience with Informatica Data Management Cloud.

Responsibilities

  • Design, develop, and optimize highly scalable batch and near‑real‑time data pipelines supporting structured and unstructured data sources (XML, JSON, Parquet).
  • Lead the delivery of curated, analytics‑ready data products supporting reporting, advanced analytics, regulatory, and machine learning use cases.
  • Implement robust error handling, reconciliation, restartability, and performance optimization to ensure platform reliability and data integrity.
  • Partner with the Data Governance team on metadata management, data lineage, data quality monitoring, and data privacy controls.
  • Evaluate and apply AI‑assisted engineering tools to improve developer productivity, accelerate delivery, and enhance data solutions.