This job is closed

We regret to inform you that the job you were interested in has been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.

Zscaler • posted 26 days ago
$175,000 - $250,000/Yr
Full-time • Senior
San Jose, CA

About the position

Zscaler is looking for an experienced Principal Software Engineer to join our ZFabric Infrastructure team. Reporting to the VP, Product & Engineering, you will be responsible for carrying out DevOps and SRE duties for Big Data platforms such as Hadoop, Spark, Trino, and Snowflake. Your role will involve monitoring platforms and following runbooks/SOPs to manage platform and application problems, familiarizing yourself with cluster maintenance processes, and implementing changes according to documented installation and validation plans. You will apply strong troubleshooting and debugging skills to pinpoint and resolve issues, while also advising on how to prevent similar problems in the future. Additionally, you will deploy new cloud-based environments (FedRAMP) to support new and existing workloads.

Responsibilities

  • Carrying out DevOps and SRE duties for Big Data platforms such as Hadoop, Spark, Trino, and Snowflake
  • Monitoring platforms and adhering to runbooks/SOPs to manage platform and application problems
  • Becoming familiar with cluster maintenance processes and implementing changes according to documented installation and validation plans
  • Demonstrating strong troubleshooting (root cause analysis) and debugging skills
  • Deploying new cloud-based environments (FedRAMP) to support new and existing workloads

Requirements

  • U.S. citizenship is required for this position due to the nature of the customers assigned to this role
  • 10+ years in DevOps, Software Engineering, or related experience
  • Experience with Linux administration and command-line tools, the ability to create and edit scripts, and experience with Chef, Ansible, or Puppet
  • Knowledge of one or more Infrastructure as Code (IaC), containerization, and orchestration tools (Terraform, Docker, Kubernetes), and knowledge of at least one cloud infrastructure provider (AWS, GCP, or Azure)
  • Ability to work on multiple projects, with a general understanding of software environments and network topologies

Nice-to-haves

  • Experience with Big Data related infrastructure administration, including installation, configuration, maintenance, and upgrades
  • Experience architecting and building Big Data infrastructure on-premises and in the cloud (including FedRAMP) using IaC principles
  • Ability to work different schedules as part of an on-call rotation

Benefits

  • Various health plans
  • Time off plans for vacation and sick time
  • Parental leave options
  • Retirement options
  • Education reimbursement
  • In-office perks, and more