Senior Engineer

Microchip
Chandler, AZ
Onsite

About The Position

Are you looking for a unique opportunity to be a part of something great? Want to join a 17,000-member team that works on the technology that powers the world around us? Looking for an atmosphere of trust, empowerment, respect, diversity, and communication? How about an opportunity to own a piece of a multi-billion dollar (with a B!) global organization? We offer all that and more at Microchip Technology Inc. People come to work at Microchip because we help design the technology that runs the world. They stay because our culture supports their growth and stability. They are challenged and driven by an incredible array of products and solutions with unlimited career potential. Microchip's nationally recognized Leadership Passage Programs support career growth, and we proudly enroll over a thousand people in them annually. We take pride in our commitment to employee development, values-based decision making, and a strong sense of community, driven by our Vision, Mission, and 11 Guiding Values; we affectionately refer to it as the Aggregate System, and it has won us countless awards for diversity and workplace excellence. Our company is built by dedicated team players who love to challenge the status quo; we did not achieve record revenue and over 30 years of quarterly profitability without a great team dedicated to empowering innovation. People like you. Visit our careers page to see what exciting opportunities and company perks await!

Requirements

  • Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field (or equivalent practical experience)
  • 5+ years of experience in data engineering, data warehousing, or software engineering
  • Expert-level SQL skills, with experience transforming analytical datasets
  • Hands-on and extensive experience with cloud platforms (AWS, Azure)
  • Hands-on experience with Databricks on AWS (academic, internship, or project-based experience qualifies)
  • Working knowledge of dbt Core, including models, tests, and documentation
  • Familiarity with Python for data processing or ML workflows
  • Experience using Git or another version control system
  • Good communication and interpersonal skills
  • Creativity and problem-solving skills
  • Ability to identify continuous improvement opportunities
  • Willingness to study current practices, think outside the box, and foster creative approaches to complex problems
  • High learning agility and the ability to adapt quickly to changing priorities
  • Self-driven
  • Able to work both independently and in a team to drive innovation

Responsibilities

  • Perform regular data validation and cleansing to ensure the accuracy, integrity, and reliability of datasets
  • Identify and resolve data pipeline failures (debug data anomalies and issues using SQL and dbt test results)
  • Build and maintain ETL/ELT processes to move data from various sources into data warehouses or lakes
  • Write and optimize SQL transformations that support feature engineering and model training
  • Set up the data catalog, and execute and monitor data and ML workloads using Databricks
  • Onboard data product owners to the Data Universe platform
  • Support AWS-based lakehouse architectures, primarily using Amazon S3
  • Set up IAM (Identity and Access Management) roles, permissions, and secure access patterns
  • Troubleshoot and optimize cloud-based AI and data workflows
  • Support batch and micro-batch processing using Spark
  • Manage data governance and security access and discovery using Databricks Unity Catalog
  • Design and maintain high-performance Delta Lake pipelines using the Medallion Architecture (Bronze, Silver, and Gold)
  • Apply dbt tests and documentation to ensure data quality for AI consumption
  • Architect curated datasets that maintain strict alignment with upstream raw sources, ensuring a seamless and transparent flow of information from ingestion to consumption
  • Execute code reviews and follow established dbt and SQL standards
  • Build and maintain training and inference workflows on Databricks
  • Prepare and validate feature datasets used by ML models, ensuring correctness, consistency, and timeliness
  • Support LLM-enabled use cases such as embedding generation, semantic search, and retrieval-augmented generation (RAG)
  • Monitor model inputs and outputs for data quality issues and unexpected behavior
  • Understand how upstream data changes affect model performance, stability, and bias
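To give candidates a feel for the data-validation duties above, here is a minimal pure-Python sketch of the kind of checks dbt's built-in `not_null` and `unique` tests perform on a curated dataset. The function names and sample rows are hypothetical illustrations, not part of Microchip's actual dbt/Databricks stack, where these tests would be declared in dbt YAML and run against warehouse tables.

```python
def check_not_null(rows, column):
    """Return the rows where `column` is missing or None
    (the condition dbt's not_null test flags as a failure)."""
    return [r for r in rows if r.get(column) is None]


def check_unique(rows, column):
    """Return the values of `column` that appear more than once
    (the condition dbt's unique test flags as a failure)."""
    seen, dupes = set(), set()
    for r in rows:
        value = r.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)


if __name__ == "__main__":
    # Hypothetical "silver"-layer records with two deliberate defects.
    silver = [
        {"id": 1, "part": "MCU-8"},
        {"id": 2, "part": None},      # fails the not_null check on "part"
        {"id": 2, "part": "MCU-32"},  # fails the unique check on "id"
    ]
    print(check_not_null(silver, "part"))
    print(check_unique(silver, "id"))
```

In a real pipeline these checks run as part of `dbt test`, and failing rows feed the debugging workflow described in the responsibilities above.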