About The Position

Join the AI and Data Platforms team at Apple, where we build and manage cloud-based data platforms that handle petabytes of data at scale. We are looking for a passionate, independent Software Engineer who specializes in reliability engineering for data platforms and has a strong understanding of data and ML systems. If you thrive in a fast-paced environment, love crafting solutions that don't yet exist, and communicate well across diverse teams, we invite you to contribute to Apple's high standards in an exciting and dynamic setting.

As part of our team, you will develop and operate our big data platform, using open source and other solutions, in support of critical applications such as analytics, reporting, and AI/ML apps. This includes optimizing performance and cost, automating operations, and identifying and resolving production errors and issues to ensure the best data platform experience.

Requirements

  • 3+ years of professional software engineering experience with large-scale big data platforms, including strong programming skills in Java, Scala, Python, or Go.
  • Proven expertise in designing, building, and operating large-scale distributed data processing systems with a strong focus on Apache Spark.
  • Hands-on experience with table formats and data lake technologies such as Apache Iceberg, ensuring scalability, reliability, and optimized query performance.
  • Skilled at coding for distributed systems and developing resilient data pipelines.
  • Strong background in incident management, including troubleshooting, root cause analysis, and performance optimization in complex production environments.
  • Proficient with Unix/Linux systems and command-line tools for debugging and operational support.

Nice To Haves

  • Expertise in designing, building, and operating critical, large-scale distributed systems with a focus on low latency, fault-tolerance, and high availability.
  • Contributions to open source projects.
  • Experience with multiple public cloud infrastructures, managing multi-tenant Kubernetes clusters at scale, and debugging Kubernetes/Spark issues.
  • Experience with workflow and data pipeline orchestration tools (e.g., Airflow, DBT).
  • Understanding of data modeling and data warehousing concepts.
  • Familiarity with the AI/ML stack, including GPUs, MLflow, or Large Language Models (LLMs).
  • A learning attitude and a drive to continuously improve yourself, the team, and the organization.
  • Solid understanding of software engineering best practices, including the full development lifecycle, secure coding, and experience building reusable frameworks or libraries.

Responsibilities

  • Develop and operate large-scale big data platforms using open source and other solutions.
  • Support critical applications including analytics, reporting, and AI/ML apps.
  • Optimize platform performance and cost efficiency.
  • Automate operational tasks for big data systems.
  • Identify and resolve production errors and issues to ensure platform reliability and user experience.

Benefits

  • Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and reimbursement for certain educational expenses, including tuition, for formal education related to advancing your career at Apple.
  • Additionally, this role might be eligible for discretionary bonuses or commission payments, as well as relocation.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
