Sr Staff Data Engineer - Hybrid

The Hartford — Hartford, CT (Hybrid)

About The Position

We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future. The Sr Staff AI Data Engineer is responsible for implementing AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions. This includes pre-processing with extraction, chunking, embedding, and grounding strategies to get the data ready. This role has a hybrid work schedule, with the expectation of working three days a week (Tuesday through Thursday) in one of our office locations: Hartford, CT; Chicago, IL; Columbus, OH; or Charlotte, NC.

Requirements

  • Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
  • 8+ years of strong hands-on data engineering experience, including data solutions, SQL and NoSQL, Snowflake, ETL/ELT tools, CI/CD, big data, cloud technologies (AWS, Google Cloud, or Azure), Python/Spark, and data mesh, data lake, or data fabric architectures.
  • Strong programming skills in Python and familiarity with deep learning frameworks such as PyTorch or TensorFlow.
  • Experience implementing data governance practices — including data quality, lineage, and data catalog capture — holistically, strategically, and at scale on a large data platform.
  • Experience with cloud platforms (AWS, GCP, or Azure) and containerization technologies (Docker, Kubernetes).
  • Strong written and verbal communication skills and ability to explain technical concepts to various stakeholders.

Nice To Haves

  • Experience with multi-cloud hybrid AI solutions.
  • AI certifications.
  • Experience in the Employee Benefits industry.
  • Knowledge of natural language processing (NLP) and computer vision technologies.
  • Contributions to open-source AI projects or research publications in the field of Generative AI.
  • Experience building AI pipelines that bring together structured, semi-structured, and unstructured data. This includes pre-processing with extraction, chunking, embedding, and grounding strategies, semantic modeling, and preparing data for models and agentic solutions.
  • Experience with vector databases, graph databases, NoSQL, and document databases, including design, implementation, and optimization (e.g., Amazon OpenSearch, GCP Vertex AI, Neo4j, Spanner Graph, Amazon Neptune, MongoDB, DynamoDB).
  • 3+ years of AI/ML experience, with 1+ years of data engineering experience focused on supporting Generative AI technologies.
  • Hands-on experience implementing production-ready, enterprise-grade AI data solutions.
  • Experience with prompt engineering techniques for large language models.
  • Experience in implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models.
  • Experience in processing and leveraging unstructured data for AI applications.
  • Proficiency in implementing scalable AI-driven data systems supporting agentic solutions (AWS Lambda, S3, EC2, LangChain, LangGraph).

Responsibilities

  • Serve as the AI data engineering lead responsible for implementing AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions, including pre-processing with extraction, chunking, embedding, and grounding strategies to get the data ready.
  • Develop AI-driven systems to improve data capabilities, ensuring compliance with industry best practices.
  • Implement efficient Retrieval-Augmented Generation (RAG) architectures and integrate with enterprise data infrastructure.
  • Collaborate with cross-functional teams to integrate solutions into operational processes and systems supporting various functions.
  • Stay up to date with industry advancements in AI and apply modern technologies and methodologies to our systems.
  • Design, build, and maintain scalable and robust real-time data streaming pipelines using technologies such as Apache Kafka, AWS Kinesis, Spark Streaming, or similar.
  • Develop data domains and data products for various consumption archetypes, including Reporting, Data Science, AI/ML, and Analytics.
  • Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management.
  • Implement best practices in reliability engineering, including redundancy, fault tolerance, and disaster recovery strategies.
  • Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems.
  • Mentor junior team members and engage in communities of practice to deliver high-quality data and AI solutions while promoting best practices, standards, and adoption of reusable patterns.
  • Develop graph database solutions for complex data relationships supporting AI systems.
  • Apply AI solutions to insurance-specific data use cases and challenges.
  • Partner with architects and stakeholders to influence and implement the vision of the AI and data pipelines while safeguarding the integrity and scalability of the environment.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 5,001-10,000 employees
