DataOps Engineer, Info Apps

Apple, Cupertino, CA

About The Position

Apple Info Apps' Data Engineering team is seeking a DataOps Engineer to support the reliability and operational excellence of our large-scale data platform. Our team provides data services for 20+ iOS and macOS applications, including News, Stocks, Weather, and Books. You'll work on the data platform that powers products used by millions of Apple customers every day and support data engineers, application engineers, data analysts, and ML engineers across the organization.

Description

In this role, you will support the operation of critical data pipelines and data services, ensuring their reliability, performance, and scalability. You'll develop automation tooling, monitor system health, and learn best practices for operational excellence while working closely with partner teams.

Requirements

  • BS in Computer Science, a related field, or equivalent experience
  • Strong foundation in Python programming for automation and scripting
  • Solid understanding of and hands-on experience with container basics: Docker and Kubernetes fundamentals
  • Solid understanding of AWS infrastructure concepts and internals: compute services (EC2, EKS), storage (S3), databases (RDS), data services (Glue, Athena), networking (VPCs, subnets), and identity management (IAM)
  • Familiarity with monitoring and logging tools (Splunk, Prometheus, Grafana)
  • Familiarity with distributed data platforms and modern data storage formats (Parquet, Iceberg)
  • Web development experience with React and Django frameworks
  • Strong troubleshooting and communication skills

Nice To Haves

  • Hands-on ability to apply AWS knowledge by configuring, managing, and troubleshooting infrastructure with infrastructure-as-code tools (Terraform, CloudFormation) and the AWS CLI for deployment automation and disaster recovery
  • Experience with job orchestration tools (Airflow, Argo Workflows)
  • Hands-on observability stack experience (PagerDuty, Splunk, Prometheus, Grafana, OpenTelemetry)
  • Familiarity with SQL and big data ecosystems (Kafka, data warehouse concepts)
  • Experience with Apache Spark job tuning and Flink stream processing
  • Hands-on experience with Apache Iceberg or similar open-table formats
  • Experience with incident management and postmortem processes