Senior Data Engineer - Contractor

Wavicle Data Solutions | Chicago, IL
Posted 11 days ago | $60 - $70

About The Position

A Bit About Wavicle

Wavicle Data Solutions is a founder-led, high-growth consulting firm helping organizations unlock the full potential of cloud, data, and AI. We're known for delivering real business results through intelligent transformation: modernizing data platforms, enabling AI-driven decision-making, and accelerating time-to-value across industries.

At the heart of our approach is WIT, the Wavicle Intelligence Framework. WIT brings together our proprietary accelerators, delivery models, and partner expertise into one powerful engine for transformation. It's how we help clients move faster, reduce costs, and create lasting impact, and it's where your ideas, skills, and contributions can make a real difference.

Our work is deeply rooted in strong partnerships with AWS, Databricks, Google Cloud, and Azure, enabling us to deliver cutting-edge solutions built on the best technologies the industry has to offer. With over 500 team members across 42 cities in the U.S., Canada, and India, Wavicle offers a flexible, digitally connected work environment built on collaboration and growth.

We invest in our people through:

  • Competitive compensation and bonuses
  • Unlimited paid time off
  • Health, retirement, and life insurance plans
  • Long-term incentive programs
  • Meaningful work that blends innovation and purpose

If you're passionate about solving complex problems, exploring what's next in AI, and being part of a team that values delivery excellence and career development, you'll feel right at home here.

The Opportunity

Wavicle is hiring a Senior Data Engineer with strong hands-on experience building data pipelines using emerging technologies.

Requirements

  • Bachelor's or Master's degree in Computer Science or a related field is required.
  • 8+ years of hands-on professional experience with AWS and Python programming; experience with Python frameworks is required.
  • Hands-on expertise with cloud platforms including AWS and GCP
  • Expert-level knowledge of SQL, including writing complex, highly optimized queries across large volumes of data.
  • Working experience implementing ETL pipelines using AWS services such as Glue, Lambda, EMR, S3, and SNS, along with PySpark and shell scripting, is required (a minimal PySpark sketch follows this list).
  • Strong knowledge of data warehousing solutions, particularly Amazon Redshift
  • Strong problem-solving and troubleshooting skills with the ability to exercise mature judgment.
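
As a rough illustration of the kind of ETL work described above, here is a minimal PySpark sketch that reads raw data from S3, transforms it, and writes curated Parquet back to S3 for downstream Redshift loading. The bucket names, paths, and column names are hypothetical and not taken from this posting.

```python
# Minimal PySpark ETL sketch; paths, schema, and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV files landed in S3 (hypothetical bucket and prefix).
orders = spark.read.option("header", "true").csv("s3://example-bucket/raw/orders/")

# Transform: type casting, deduplication, and a simple daily aggregate.
daily_totals = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

# Load: write partitioned Parquet back to S3 for a downstream Redshift COPY or Spectrum table.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_order_totals/"
)
```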

Nice To Haves

  • Hands-on professional work experience using emerging technologies (Snowflake, Talend, and/or Databricks) is highly desirable.
  • Proficiency in DBT for data transformation and modeling
  • Experience with automation of data workflows and processes (an illustrative sketch follows this list)
  • Experience with data integration on AWS using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda across S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
  • Strong hands-on experience in Python development, especially PySpark in an AWS cloud environment.
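
As a hedged illustration of workflow automation on AWS, the sketch below shows a Lambda handler that starts a Glue job when a new object lands in S3. The Glue job name, argument keys, and event wiring are assumptions for illustration only.

```python
# Illustrative only: Lambda handler that kicks off a Glue job when a file lands in S3.
# The Glue job name and argument keys are hypothetical, not taken from the posting.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put-event records carry the bucket and object key of the newly landed file.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Start the downstream ETL job, passing the new file location as a job argument.
        response = glue.start_job_run(
            JobName="orders_etl_job",  # hypothetical Glue job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print(f"Started Glue run {response['JobRunId']} for s3://{bucket}/{key}")
```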

Responsibilities

  • Build the infrastructure required for optimal extraction, transformation, and loading of data using a wide variety of technologies such as Hadoop, Spark, and AWS Lambda.
  • Design, develop, test, deploy, maintain, and improve data integration pipelines.
  • Develop pipeline objects using Apache Spark / PySpark / Python or Scala.
  • Design and develop data pipeline architectures using Hadoop, Spark, and related AWS services.
  • Load and performance test data pipelines built using the above-mentioned technologies (a minimal load-test sketch follows this list).
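
For the load and performance testing responsibility, one simple approach is to generate synthetic data and time a pipeline stage end to end. The row count, output path, and timing approach below are assumptions, not a prescribed method.

```python
# Minimal load-test sketch: generate synthetic rows, run the transform, and time the write.
# Row count, output path, and schema are illustrative assumptions.
import time
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline_load_test").getOrCreate()

# Generate 50 million synthetic rows spread across 365 daily buckets.
synthetic = (
    spark.range(0, 50_000_000)
    .withColumn("amount", F.rand() * 100)
    .withColumn("day_bucket", (F.col("id") % 365).cast("int"))
)

start = time.perf_counter()
(
    synthetic
    .groupBy("day_bucket")
    .agg(F.sum("amount").alias("total_amount"))
    .write.mode("overwrite")
    .parquet("s3://example-bucket/load-test/daily_totals/")
)
elapsed = time.perf_counter() - start
print(f"Aggregated and wrote 50M rows in {elapsed:.1f}s")
```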

Benefits

  • Competitive compensation and bonuses
  • Unlimited paid time off
  • Health, retirement, and life insurance plans
  • Long-term incentive programs
  • Meaningful work that blends innovation and purpose