Apple is where extraordinary people do their best work. If making a real impact excites you, a career here might be your dream; just be prepared to dream big. Apple’s growing supply chain complexity demands innovative approaches beyond traditional data engineering. You’ll join a team designing and building modern, scalable data infrastructure that powers analytics, machine learning, and AI-driven decision-making across Operations. You’re passionate about building reliable data systems, staying ahead of technology trends, and thriving amid ambiguity in a fast-paced environment. If this sounds like you, we’d love to talk.

DESCRIPTION

- Engage with business and analytics teams to deeply understand data needs and translate requirements into robust, scalable engineering solutions that directly impact Operations decisions
- Design and implement end-to-end data pipelines and architectures, from ingestion and transformation to delivery, across batch and real-time streaming workloads
- Build and maintain high-quality data models (dimensional, relational, or knowledge graph-based) using modern transformation frameworks such as dbt, powering analytics and AI/ML use cases at scale
- Architect and operate data workflows using orchestration tools (e.g., Apache Airflow) with built-in monitoring, alerting, and SLA management
- Implement data observability, lineage tracking, and validation frameworks to uphold data integrity and trustworthiness across the platform
- Collaborate with Data Scientists, ML Engineers, Software Engineers, and Analysts to operationalize models and ensure data infrastructure supports production AI/ML workflows
- Partner with infrastructure and platform teams to manage cloud-native data environments (Snowflake, Spark, Delta Lake / Apache Iceberg) with a focus on performance, cost efficiency, and scalability
- Leverage AI-assisted development tools (e.g., GitHub, Claude) and LLM-powered agents to accelerate pipeline authoring, code review, documentation, and transformation logic generation from natural language specifications
- Apply DataOps principles, including CI/CD pipelines, version control, automated testing, and containerization (Docker, Kubernetes), to deliver reliable, production-grade data products
- Champion a data product mindset, enabling self-serve analytics and reducing bottlenecks for downstream consumers
- Tune query performance, partitioning strategies, and storage optimization for data at scale in cloud warehouses and lakehouses
- Develop and maintain clear technical documentation, including data dictionaries, lineage diagrams, and architecture decision records
- Present data infrastructure capabilities, health metrics, and architectural recommendations to senior leadership in clear, non-technical terms
- Research and evaluate emerging data engineering technologies, including streaming architectures, GenAI-powered data tooling, and next-generation warehousing, to expand the team’s capabilities and accelerate innovation
Job Type
Full-time
Career Level
Mid Level
Number of Employees
5,001-10,000 employees