About The Position

We’re building a world of health around every individual — shaping a more connected, convenient and compassionate health experience. At CVS Health®, you’ll be surrounded by passionate colleagues who care deeply, innovate with purpose, hold ourselves accountable and prioritize safety and quality in everything we do. Join us and be part of something bigger – helping to simplify health care one person, one family and one community at a time.

Position Summary

Participates in the design, build, and management of large-scale data structures, pipelines, and efficient Extract/Load/Transform (ETL) workflows, and is responsible for their successful delivery. Acts as the data engineer for large and complex projects involving multiple resources and tasks, providing individual mentoring in support of company objectives. Applies understanding of key business drivers to accomplish own work. Uses expertise, judgment, and precedents to contribute to the resolution of moderately complex problems. Leads portions of initiatives of limited scope, with guidance and direction. Google Cloud technology knowledge is a must.

Requirements

  • 5+ years of progressively complex related experience.
  • Strong problem-solving skills and critical thinking ability.
  • Strong collaboration and communication skills within and across teams.
  • Ability to leverage multiple tools and programming languages to analyze and manipulate data sets from disparate data sources.
  • Ability to understand complex systems and solve challenging analytical problems.
  • Experience with bash shell scripts, UNIX utilities & UNIX Commands.
  • Knowledge of Scala, Java, Python, Hive, MySQL, NoSQL, or similar technologies.
  • Knowledge of Hadoop architecture and HDFS commands, and experience designing and optimizing queries against data in the HDFS environment.
  • Expert-level coding skills in SQL/PL-SQL and UNIX scripting languages required.
  • Experience with source code control systems (Git) and CI/CD processes (Jenkins/TeamCity/Octopus).
  • Significant knowledge across multiple areas and applications, including healthcare business knowledge with impact on numerous applications.

Nice To Haves

  • Healthcare analytics knowledge.
  • Deep knowledge of architecture, frameworks, and methodologies for working with and modelling large data sets, such as HDFS, YARN, Spark, Hive, edge nodes, and NoSQL databases.
  • Strong SQL skills with commensurate experience in a large database platform.
  • Able to perform code reviews to ensure the code meets the acceptance criteria.
  • Familiarity with data modelling/mapping and implementing them in development.
  • AWS knowledge (S3/Redshift/Lambda/Data Pipeline).
  • Working experience in Exasol/Redshift/Snowflake is a plus.
  • Experience building data transformation and processing solutions.
  • Scala is a plus.

Responsibilities

  • Participates in the design, build, and management of large-scale data structures, pipelines, and efficient Extract/Load/Transform (ETL) workflows, and is responsible for their successful delivery.
  • Acts as the data engineer for large and complex projects involving multiple resources and tasks, providing individual mentoring in support of company objectives.
  • Applies understanding of key business drivers to accomplish own work.
  • Uses expertise, judgment, and precedents to contribute to the resolution of moderately complex problems.
  • Leads portions of initiatives of limited scope, with guidance and direction.
  • Collaborates with client team to transform data and integrate algorithms and models into automated processes.
  • Uses programming skills in Scala, Python, Java, or any of the major languages to build robust data pipelines and dynamic systems.
  • Builds data marts and data models to support clients and other internal customers.
  • Integrates data from a variety of sources, ensuring it adheres to data quality and accessibility standards.
  • Follows the complete SDLC process and Agile methodology (Scrum/SAFe).
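For context, the pipeline work described above can be sketched as a minimal Extract/Transform/Load flow in Python (one of the languages named in this posting). This is a hypothetical illustration only; the records and field names are invented and do not reflect CVS Health systems or data:

```python
# Minimal, hypothetical Extract/Transform/Load sketch.
# All data, field names, and rules below are invented for illustration.

def extract(rows):
    """Extract: keep only well-formed records (must have 'id' and 'amount')."""
    return [r for r in rows if "id" in r and "amount" in r]

def transform(rows):
    """Transform: normalize dollar amounts to integer cents."""
    return [
        {"id": r["id"], "amount_cents": int(round(r["amount"] * 100))}
        for r in rows
    ]

def load(rows, store):
    """Load: write records into a destination keyed by id."""
    for r in rows:
        store[r["id"]] = r
    return store

# Example run: one malformed record is dropped during extraction.
source = [{"id": 1, "amount": 12.5}, {"amount": 3.0}, {"id": 2, "amount": 0.99}]
warehouse = load(transform(extract(source)), {})
```

In a production setting the same stages would typically run on distributed frameworks such as Spark, with data quality checks enforced at each step.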

Benefits

  • Affordable medical plan options, a 401(k) plan (including matching company contributions), and an employee stock purchase plan.
  • No-cost programs for all colleagues including wellness screenings, tobacco cessation and weight management programs, confidential counseling and financial coaching.
  • Benefit solutions that address the different needs and preferences of our colleagues including paid time off, flexible work schedules, family leave, dependent care resources, colleague assistance programs, tuition assistance, retiree medical access and many other benefits depending on eligibility.