Data Engineer

Abbott
$61,300 - $122,700
Remote

About The Position

We’re focused on helping people with diabetes manage their health with life-changing products that provide accurate data to drive better-informed decisions. We’re revolutionizing the way people monitor their glucose levels with our new sensing technology.

Working at Abbott

At Abbott, you can do work that matters, grow and learn, care for yourself and your family, be your true self, and live a full life. You’ll also have access to the benefits listed in the Benefits section below.

The Opportunity

This Data Engineer position can be performed remotely within the U.S. As a Data Engineer, you will contribute to the design, development, and maintenance of data pipelines and cloud-based data solutions that support analytics and reporting needs. You’ll work with scalable data platforms to ingest, transform, and model data while collaborating closely with engineering, product, and analytics teams to deliver reliable data solutions. This role focuses on hands-on development and technical execution, with opportunities to learn modern data technologies and grow your impact over time. The technology stack includes Databricks, AWS (Redshift, S3, Lambda, DynamoDB), Spark, and Python.
We’re looking for someone curious, collaborative, and passionate about building quality data solutions that help improve patient outcomes.

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, or another relevant field
  • 1-3 years of recent experience in Software Engineering, Data Engineering, or Big Data
  • Ability to work effectively within a team in a fast-paced, changing environment
  • Knowledge of or direct experience with Databricks and/or Spark.
  • Software development experience, ideally with Python, PySpark, Kafka, or Go, and a willingness to learn new software development languages to meet goals and objectives
  • Knowledge of strategies for processing large amounts of structured and unstructured data, including integrating data from multiple sources
  • Knowledge of data cleaning, wrangling, visualization and reporting
  • Ability to explore new alternatives or options to solve data mining issues, and utilize a combination of industry best practices, data innovations and experience
  • Familiarity with databases, BI applications, data quality, and performance tuning
  • Excellent written, verbal and listening communication skills
  • Comfortable working asynchronously with a distributed team

Nice To Haves

  • Knowledge of or direct experience with AWS services such as S3, RDS, Redshift, DynamoDB, EMR, Glue, and Lambda
  • Experience working in an agile environment
  • Practical knowledge of Linux

Responsibilities

  • Design, build, and maintain data pipelines to support analytics, reporting, and downstream applications
  • Develop and maintain data ingestion solutions on AWS using AWS-native services
  • Build and optimize data models using Databricks and AWS data stores such as Redshift, RDS, and S3
  • Integrate and assemble large datasets to meet business and analytical requirements
  • Extract, transform, and load (ETL/ELT) data into approved tools and frameworks
  • Configure and maintain integration tools, databases, data warehouses, and analytical systems
  • Process structured and unstructured data into formats suitable for analysis, partnering with analysts as needed
  • Collaborate with engineering and technology teams to align data solutions with business needs
  • Monitor and improve data pipeline performance, reliability, and scalability
  • Write clean, well-documented code and follow established engineering best practices
  • Contribute to technical documentation and support existing architecture patterns
  • Participate in peer code reviews and team design discussions
  • Work cross-functionally with Engineering, Product, QA, and Analytics teams
  • Stay current with data engineering tools and industry trends and share learnings with the team
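Much of the work above centers on the extract-transform-load pattern. As a minimal, illustrative sketch of that pattern in plain Python (field names and data are hypothetical; in this role the same steps would typically run on Spark/Databricks against AWS stores such as S3 and Redshift):

```python
import csv
import io

# Hypothetical raw glucose readings; in practice the extract step
# would pull data from a source such as S3, RDS, or Kafka.
RAW = """device_id,timestamp,glucose_mg_dl
dev-1,2024-01-01T00:00:00,95
dev-1,2024-01-01T00:05:00,
dev-2,2024-01-01T00:00:00,310
"""

def extract(text):
    """Extract: parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop incomplete readings and cast types."""
    cleaned = []
    for row in rows:
        if not row["glucose_mg_dl"]:
            continue  # data cleaning: skip rows with a missing reading
        cleaned.append({
            "device_id": row["device_id"],
            "timestamp": row["timestamp"],
            "glucose_mg_dl": int(row["glucose_mg_dl"]),
        })
    return cleaned

def load(rows, sink):
    """Load: append cleaned rows to a sink (stand-in for a warehouse)."""
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
print(loaded)  # prints 2: one of the three raw rows was dropped
```

The same extract/transform/load stages map directly onto a Spark job: reads from S3, DataFrame transformations, and writes to Redshift or Delta tables.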

Benefits

  • Career development with an international company where you can grow the career you dream of.
  • Employees can qualify for free medical coverage in our Health Investment Plan (HIP) PPO medical plan in the next calendar year
  • An excellent retirement savings plan with high employer contribution
  • Tuition reimbursement, the Freedom 2 Save student debt program and FreeU education benefit - an affordable and convenient path to getting a bachelor’s degree.
  • A company recognized as a great place to work in dozens of countries around the world and named one of the most admired companies in the world by Fortune.
  • A company that is recognized as one of the best big companies to work for as well as a best place to work for diversity, working mothers, female executives, and scientists.
© 2024 Teal Labs, Inc