Data Engineer II

McKinsey, Atlanta, GA

About The Position

As a Data Engineer II, you will be based in Atlanta (US), Gurgaon (IN), or San Jose (CR) as part of our Learning and Development Solutions Team. This team is responsible for the technical management of the learning platforms that empower our colleagues' growth and development.

You will be a leading contributor to the creation and maintenance of data pipelines and database views that represent required information sets. You will build and test sophisticated end-to-end data transformation pipelines, solving for the specific challenges of different data sources and types of data (e.g., master, transaction, reference, and metadata). You will be technically hands-on and comfortable writing code to meet business requirements, designing solutions and using SQL, Python, PySpark, or other programming tools to consume, transform, and write data according to its processing requirements.

You will follow, and help the team enforce, coding and data best practices. You will develop, promote, and use reusable patterns for consuming, transforming, and storing different kinds of data from diverse sources, and you will lead by example with a quality- and security-first mindset. You will stay aware of the newest technologies and trends and conduct meaningful investigations into their potential. You will act as a thought leader within the team, assessing the technical feasibility of building solutions around a conceptual idea, and you will consistently be seen as an enabler of collaboration with distributed development teams.

Requirements

  • Undergraduate degree required; an advanced graduate degree (e.g., MBA or PhD) or equivalent work experience preferred.
  • Years of corporate and/or professional services experience.
  • Excellent organization capabilities, including the ability to initiate tasks independently and see them through to completion.
  • Strong communication skills, both verbal and written, in English and local office language(s).
  • Proficient in rational decision-making based on data, facts, and logical reasoning.
  • Technical skills with hands-on experience in AWS Glue, Snowflake, Python, PySpark, Git/GitHub, Terraform, and designing and implementing ETL pipelines.

Nice To Haves

  • AWS - RDS, DynamoDB, Lambda, API Gateway, EC, SageMaker.
  • Airflow.
  • Databricks.
  • Kafka.
  • Iceberg, Flink.
  • Snowflake Cortex.
  • Warehousing.
  • Data lakes/lakehouses.
  • Data modeling.

Responsibilities

  • Create and maintain data pipelines and database views.
  • Build and test end-to-end data transformation pipelines.
  • Write code to cater to business requirements using SQL, Python, PySpark, or other programming tools.
  • Enforce coding and data best practices within the team.
  • Develop reusable patterns for consuming, transforming, and storing data.
  • Maintain a quality and security-first mindset.
  • Investigate new technologies and trends.
  • Assess technical feasibility of developing solutions.

Benefits

  • Comprehensive benefits package including medical, dental, mental health, and vision coverage for you, your spouse/partner, and children.
  • Continuous learning and apprenticeship culture.
  • Opportunity to make a tangible impact with innovative ideas and practical solutions.
  • Diverse global community with colleagues across countries.

What This Job Offers

  • Career Level: Mid Level
  • Industry: Professional, Scientific, and Technical Services
  • Education Level: Bachelor's degree
