Senior Data Engineer

Wavicle Data Solutions
Oak Brook, IL (Hybrid)

About The Position

Wavicle Data Solutions is a founder-led, high-growth consulting firm that helps organizations unlock the full potential of cloud, data, and AI. We're known for delivering real business results through intelligent transformation: modernizing data platforms, enabling AI-driven decision-making, and accelerating time-to-value across industries.

At the heart of our approach is WIT, the Wavicle Intelligence Framework. WIT brings together our proprietary accelerators, delivery models, and partner expertise into one powerful engine for transformation. It is how we help clients move faster, reduce costs, and create lasting impact, and it is where your ideas, skills, and contributions can make a real difference. Our work is deeply rooted in strong partnerships with AWS, Databricks, Google Cloud, and Azure, enabling us to deliver cutting-edge solutions built on the best technologies the industry has to offer.

With over 500 team members across 42 cities in the U.S., Canada, and India, Wavicle offers a flexible, digitally connected work environment built on collaboration and growth. We invest in our people through:

  • Competitive compensation and bonuses
  • Unlimited paid time off
  • Health, retirement, and life insurance plans
  • Long-term incentive programs
  • Meaningful work that blends innovation and purpose

If you're passionate about solving complex problems, exploring what's next in AI, and being part of a team that values delivery excellence and career development, you'll feel right at home here.

Requirements

  • Bachelor's degree in Computer Science, Data Engineering, Electronics Engineering, Data Science, or a related field of study, plus 5 years of experience in related occupations, is required.
  • 5 years of experience in the following: Designing and implementing data pipelines in a cloud environment; Object-oriented programming using Scala, Python, R, or Java.
  • 4 years of experience in the following: Using SQL to write complex, highly optimized queries across large volumes of data.
  • 3 years of experience in the following: ETL pipeline implementation using AWS, Azure, or GCP services; Designing/implementing solutions using one or more of the following databases: Snowflake, AWS Redshift, Synapse, BigQuery, Oracle, SQL Server, Teradata, Netezza, Hadoop, MongoDB, or Cassandra.
  • 2 years of experience in the following: Cloud platforms: AWS, Azure, or GCP.
  • Must also have authorization to work permanently in the U.S.

Responsibilities

  • Create the conceptual, logical and physical data models.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using technologies such as Hadoop, Spark, and AWS Lambda.
  • Lead and/or mentor a small team of data engineers.
  • Design, develop, test, deploy, maintain and improve data integration pipelines.
  • Develop pipeline objects using Apache Spark (PySpark), Python, or Scala.
  • Design and develop data pipeline architectures using Hadoop, Spark and related AWS Services.
  • Load-test and performance-test data pipelines built using the technologies above.
  • Communicate effectively with client leadership and business stakeholders.
  • Participate in proposal and/or SOW development.

Benefits

  • Competitive compensation and bonus program
  • Health Care Plan (Medical, Dental & Vision)
  • Retirement Plan (401k, IRA)
  • Life Insurance (Basic, Voluntary & AD&D)
  • Unlimited Paid Time Off (Vacation, Sick & Public Holidays)
  • Short-Term & Long-Term Disability
  • Employee Assistance Program
  • Long-term incentive programs
  • Training & Development
  • Work From Home
  • Meaningful work that blends innovation and purpose