Intern- Engineering (Data Engineering)

Microchip Technology Inc., Chandler, AZ

About The Position

Are you looking for a unique opportunity to be a part of something great? Want to join a 17,000-member team that works on the technology that powers the world around us? Looking for an atmosphere of trust, empowerment, respect, diversity, and communication? How about an opportunity to own a piece of a multi-billion dollar (with a B!) global organization? We offer all that and more at Microchip Technology Inc.

People come to work at Microchip because we help design the technology that runs the world. They stay because our culture supports their growth and stability. They are challenged and driven by an incredible array of products and solutions with unlimited career potential. Microchip’s nationally recognized Leadership Passage Programs support career growth, and we proudly enroll over a thousand people in them annually. We take pride in our commitment to employee development, values-based decision making, and a strong sense of community, driven by our Vision, Mission, and 11 Guiding Values; we affectionately refer to it as the Aggregate System, and it has won us countless awards for diversity and workplace excellence.

Our company is built by dedicated team players who love to challenge the status quo; we did not achieve record revenue and over 30 years of quarterly profitability without a great team dedicated to empowering innovation. People like you. Visit our careers page to see what exciting opportunities and company perks await!

Job Description

Overview

We are looking for a Data Engineering Intern with a strong foundation in SQL and data modeling, along with exposure to modern data platforms. This role is suited for candidates with hands-on experience (projects, coursework, or internships) who want to work on real production data systems. The primary focus of this role is building reliable, high-quality data models that support analytics and downstream data use cases.

Requirements

  • Currently enrolled in a bachelor's or master's degree program
  • Strong SQL fundamentals (required)
  • Basic understanding of data modeling concepts
  • Exposure to Python and data processing tools (Spark is a plus)
  • Familiarity with dbt, Databricks, or AWS is a plus
  • Ability to learn quickly and work with structured guidance

Responsibilities

dbt & Data Modeling (Primary Focus)

  • Build and maintain dbt models used for analytics and downstream workflows
  • Develop incremental models and snapshots for scalable data processing
  • Apply dbt tests and documentation to ensure data quality and reliability
  • Understand and work with data lineage from raw to curated layers
  • Follow established SQL and dbt standards and participate in code reviews
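The core pattern behind a dbt incremental model is to process only rows that arrived since the last run, typically by filtering on a timestamp watermark against the target table. A minimal sketch of that pattern in plain SQL, run through Python's built-in sqlite3 (table and column names here are invented for illustration, not taken from any real project):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE stg_orders (order_id INTEGER, amount REAL, updated_at TEXT);

    INSERT INTO raw_orders VALUES
        (1, 10.0, '2024-01-01'),
        (2, 20.0, '2024-01-02');
""")

def run_incremental(conn):
    # Mirrors dbt's is_incremental() filter: only pull rows newer than
    # the watermark already present in the target table.
    conn.execute("""
        INSERT INTO stg_orders
        SELECT order_id, amount, updated_at
        FROM raw_orders
        WHERE updated_at > COALESCE((SELECT MAX(updated_at) FROM stg_orders), '')
    """)
    conn.commit()

run_incremental(conn)  # first run loads both existing rows
conn.execute("INSERT INTO raw_orders VALUES (3, 30.0, '2024-01-03')")
run_incremental(conn)  # second run picks up only the new row

rows = conn.execute("SELECT order_id FROM stg_orders ORDER BY order_id").fetchall()
print(rows)  # [(1,), (2,), (3,)]
```

In dbt itself, the `WHERE` clause would be wrapped in an `{% if is_incremental() %}` block so the same model can also do a full rebuild.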

SQL & Data Foundations

  • Write and optimize SQL queries for data transformation
  • Debug data issues using SQL and dbt test results
  • Work with joins, aggregations, and window functions to ensure correctness and performance
  • Understand how data design impacts downstream systems and reporting
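Window functions come up constantly in transformation work, for example keeping only the latest record per key when a source emits multiple versions of the same row. A small illustration with sqlite3 (sample data invented for this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer_updates (customer_id INTEGER, email TEXT, updated_at TEXT);
    INSERT INTO customer_updates VALUES
        (1, 'a@old.com', '2024-01-01'),
        (1, 'a@new.com', '2024-02-01'),
        (2, 'b@x.com',   '2024-01-15');
""")

# ROW_NUMBER() per customer, newest first; rn = 1 is the latest record.
latest = conn.execute("""
    SELECT customer_id, email FROM (
        SELECT customer_id, email,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY updated_at DESC
               ) AS rn
        FROM customer_updates
    )
    WHERE rn = 1
    ORDER BY customer_id
""").fetchall()

print(latest)  # [(1, 'a@new.com'), (2, 'b@x.com')]
```

The same dedup-by-window pattern is a common building block in staging models, where correctness (one row per key) matters as much as performance.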

AWS & Cloud Basics

  • Work with data stored in Amazon S3 within a lakehouse architecture
  • Understand IAM roles, permissions, and secure access patterns
  • Assist in troubleshooting and improving cloud-based data workflows
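Secure access in a lakehouse usually means least-privilege IAM policies: for instance, list and read on a raw-data bucket, nothing more. A hypothetical read-only S3 policy document built in Python (the bucket name is invented; the actions shown are standard S3 permissions):

```python
import json

# Hypothetical least-privilege policy: list + read on one raw-data bucket only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListRawBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-raw-data"],
        },
        {
            "Sid": "ReadRawObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-raw-data/*"],
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Note the split between bucket-level (`s3:ListBucket` on the bucket ARN) and object-level (`s3:GetObject` on `bucket/*`) permissions, a detail that trips up many first-time policy authors.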

Databricks & Lakehouse

  • Use Databricks to execute and monitor data pipelines
  • Work with Delta Lake tables across bronze, silver, and gold layers
  • Support batch and micro-batch processing using Spark
  • Gain exposure to governance concepts such as Unity Catalog
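The bronze/silver/gold (medallion) flow lands raw data as-is, then cleans and conforms it, then aggregates it for consumption. A dependency-free Python sketch of that flow (in Databricks each layer would be a Delta table transformed with Spark; the records here are invented):

```python
# Bronze: raw records exactly as ingested, including bad rows.
bronze = [
    {"order_id": "1", "amount": "10.5", "country": "us"},
    {"order_id": "2", "amount": "7.0",  "country": "US"},
    {"order_id": "x", "amount": "bad",  "country": "US"},  # malformed row
]

# Silver: validated and typed; malformed rows are filtered out.
silver = []
for row in bronze:
    try:
        silver.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "country": row["country"].upper(),
        })
    except ValueError:
        pass  # in practice, route to a quarantine table rather than dropping

# Gold: business-level aggregate, e.g. revenue per country.
gold = {}
for row in silver:
    gold[row["country"]] = gold.get(row["country"], 0.0) + row["amount"]

print(gold)  # {'US': 17.5}
```

The key idea is that each layer is reproducible from the one below it, so a bug fixed in the silver logic can be replayed over bronze without re-ingesting anything.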