Data Engineer

Eli Lilly and Company
Indianapolis, IN
Hybrid

About The Position

As a Data Engineer located in Indianapolis, IN, you will be responsible for designing, developing, and maintaining data solutions that ensure the availability and quality of data for analysis and business transactions. You will design and implement efficient data storage, processing, and retrieval solutions for large datasets; build data pipelines; optimize database designs; and work closely with data scientists, architects, and analysts to ensure data quality and accessibility. This role requires strong skills in data integration, acquisition, cleansing, harmonization, and transformation. You will play a crucial part in turning raw data into analysis-ready datasets that enable the organization to unlock valuable insights for decision making.

Requirements

  • Bachelor’s degree in Computer Science or a similar STEM field
  • At least 2 years of experience in data engineering using SQL, Python, PySpark, and AWS services including Lambda, Glue, S3, Redshift, Athena, and IAM roles/policies.
  • 1+ years of hands-on experience using GitHub and CI/CD pipelines for code deployment.
  • Experience in building data pipelines following Data Lakehouse, Data Warehouse, and Data Mart standards.
  • Experience in data modeling (both OLTP and OLAP), managing large datasets, and implementing secure, compliant data governance practices.
  • Hands-on experience with Databricks, including cluster management, workspace configuration, notebook development, and performance optimization.
  • Qualified applicants must be authorized to work in the United States on a full-time basis.
  • Strong proficiency in SQL and Python.
  • Hands-on experience with cloud platforms (AWS, Azure, or GCP) and tools like Glue, EMR, Redshift, Lambda, or Databricks.
  • Deep understanding of ETL/ELT workflows, data modeling, and data warehousing concepts.
  • Knowledge of data governance, security, and quality practices.
  • Working knowledge of Databricks for building and optimizing scalable data pipelines and analytics workflows.
  • A problem-solving mindset, attention to detail, and a passion for clean, maintainable code.
  • Strong communication and collaboration skills to work with both technical and non-technical stakeholders.

Nice To Haves

  • Experience with Azure or GCP is a plus.
  • Experience with orchestration tools like Airflow for workflow automation.
  • Familiarity with big data and streaming frameworks (e.g., Apache Spark, Kafka, Flink).
  • Experience with CI/CD, version control (Git), and infrastructure-as-code tools is a plus.
  • AWS Certified Data Engineer
  • Databricks Certified Data Engineer (Associate)
  • Familiarity with AI/ML workflows and integrating machine learning models into data pipelines
  • Ability to collaborate with business stakeholders to translate key business requirements into scalable technical solutions.
  • Familiarity with security models and developing solutions on large-scale, distributed data systems.

Responsibilities

  • Design, build, and maintain scalable and reliable data pipelines for batch and real-time processing.
  • Own incident response and resolution, including root cause analysis and post-mortem reporting for data failures and performance issues.
  • Develop and optimize data models, ETL/ELT workflows, and data integration across multiple systems and platforms.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
  • Implement data governance, security, and quality standards across data assets.
  • Lead end-to-end data engineering projects and contribute to architectural decisions.
  • Design and implement cloud-native solutions on AWS (preferred) using tools such as AWS Glue, EMR, and Databricks.
  • Promote best practices in coding, testing, and deployment.
  • Monitor, troubleshoot, and improve performance and reliability of data infrastructure.
  • Automate manual processes and identify opportunities to optimize data workflows and reduce costs.

Benefits

  • Company bonus (depending, in part, on company and individual performance)
  • Company-sponsored 401(k)
  • Pension
  • Vacation benefits
  • Medical, dental, vision, and prescription drug benefits
  • Flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts)
  • Life insurance and death benefits
  • Certain time off and leave of absence benefits
  • Well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities)

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 5,001-10,000 employees
