About The Position

The Engineering Operations and Planning team within Software Engineering (SWE) is looking for a data engineer to continue scaling and improving our existing data infrastructure while also innovating on new tooling. This role brings efficiency, process, and standardization to the organization by building scalable data pipelines and repositories that process, clean, and validate data from raw sources and prepare it for analysis and visualization, with a particular focus on historical repositories and change control. The successful candidate will have a proactive approach and the ability to work independently and collaboratively on a wide range of projects. In this role, you will work alongside a small but impactful team, collaborating with data analysts, software developers, project managers, and other teams at Apple to understand requirements and translate them into scalable, reliable, and efficient data pipelines and data processing workflows.

Requirements

  • Experience with Ruby, Rails, PostgreSQL, Snowflake, Kubernetes, Tableau or similar technologies.
  • Experience with enterprise resource planning (ERP) systems, such as human resources information, real estate and development, and/or finance systems.
  • A strong sense of professionalism; able to translate business requirements into forward-looking technical solutions and work autonomously on assignments while handling sensitive and confidential information with integrity and discretion.
  • Proficiency in various data modeling techniques, such as ER, Hierarchical, Relational, or NoSQL modeling.
  • Excellent design and development experience with SQL and NoSQL databases, including OLTP and OLAP databases.
  • Expertise in Java, Scala, Python, or Unix Shell scripting and dependency-driven job schedulers.
  • Experience working in a complex, matrixed organization involving cross-functional, and/or cross-business projects.

Nice To Haves

  • Familiarity with Apple technologies such as Xcode and Swift.
  • Familiarity with other related fields, such as data science, machine learning, and artificial intelligence, to design solutions that can accommodate advanced analytics.

Responsibilities

  • Architect and implement large scale systems and data pipelines with a focus on agility, interoperability, simplicity, and reusability.
  • Translate business acumen, expertise, and strategies into data modeling solutions.
  • Utilize deep knowledge in infrastructure, warehousing, data protection, security, data collection, processing, modeling, and metadata management to build end-to-end solutions that also support metadata logging, anomaly detection, data cleaning, and transformation.
  • Identify process improvement opportunities (tools, work streams, systems) and drive solutions from conception to implementation.
  • Demonstrate and explain complex business processes, systems, and/or tools with a focus on the upstream/downstream impact and relationship between multiple functions and/or decisions.
  • Identify and address issues in data design or integration.
  • Discuss technical tradeoffs across the stack, including system architecture, database design, API design, and infrastructure.