Staff Engineer - DevOps - Hadoop Big Data - Federal

ServiceNow | Santa Clara, CA
Posted 108 days ago | $158,500 - $277,500

About The Position

As a Staff DevOps Engineer on our Big Data Federal Team, you will help deliver 24x7 support for our Government Cloud infrastructure. The Federal Big Data Team runs three shifts that provide 24x7 production support for our Big Data Government cloud infrastructure. This is a 2nd-shift position, Sunday to Wednesday, with work hours from 3 pm to 2 am Pacific Time. The Big Data team plays a critical and strategic role in ensuring that ServiceNow can exceed the availability and performance SLAs of customer instances powered by the ServiceNow Platform and deployed across the ServiceNow cloud and Azure cloud. Our mission is to deliver state-of-the-art monitoring, analytics, and actionable business insights by employing new tools, Big Data systems, an Enterprise Data Lake, AI, and machine-learning methodologies that improve efficiency across a variety of functions in the company.

Requirements

  • Experience leveraging AI, or thinking critically about how to integrate it into work processes, decision-making, and problem-solving.
  • Deep understanding of the Hadoop / Big Data ecosystem.
  • 6+ years of experience working with systems such as HDFS, YARN, Hive, HBase, Kafka, RabbitMQ, Impala, Kudu, Redis, MariaDB, and PostgreSQL.
  • Hands-on experience with Kubernetes in a production environment.
  • Deep understanding of Kubernetes architecture, concepts, and operations.
  • Strong knowledge of querying and analyzing large-scale data using VictoriaMetrics, Prometheus, Spark, Flink, and Grafana; a minimal query sketch follows this list.
  • Experience supporting CI/CD pipelines for automated application deployment to Kubernetes.
  • Strong Linux Systems Administration skills.
  • Strong scripting skills in Bash and Python for automation and task management.
  • Proficient with Git and version control systems.
  • Familiarity with Cloudera Data Platform (CDP) and its ecosystem.
  • Ability to learn quickly in a fast-paced, dynamic team environment.
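
A minimal sketch of the instant-query pattern referenced above, using the standard Prometheus HTTP API (which VictoriaMetrics also serves) from Python; the server URL and metric selector here are illustrative assumptions, not details from this posting:

    # Minimal sketch: run an instant PromQL query against a Prometheus-
    # compatible endpoint (Prometheus or VictoriaMetrics). The URL and
    # metric selector are illustrative assumptions.
    import requests

    PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

    def instant_query(expr: str) -> list:
        """Run an instant PromQL query and return the raw result samples."""
        resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr})
        resp.raise_for_status()
        payload = resp.json()
        if payload.get("status") != "success":
            raise RuntimeError(f"Prometheus query failed: {payload}")
        return payload["data"]["result"]

    if __name__ == "__main__":
        # Hypothetical selector; exporter job names vary by deployment.
        for sample in instant_query('up{job="hdfs-namenode"}'):
            print(sample["metric"], sample["value"])

The same /api/v1/query endpoint accepts any PromQL expression, so a helper like this extends directly to latency or capacity queries.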

Responsibilities

  • Deploy, monitor, maintain, and support Big Data infrastructure and applications in ServiceNow Cloud and Azure environments.
  • Deploy, scale, and manage containerized applications using Kubernetes, Docker, and other related tools.
  • Automate Continuous Integration / Continuous Deployment (CI/CD) pipelines for applications, leveraging tools such as Jenkins, Ansible, and Docker.
  • Proactively identify and resolve issues within Kubernetes clusters, containerized applications, and CI/CD pipelines; a minimal health-check sketch follows this list.
  • Provide expert-level support for incidents and perform root cause analysis.
  • Apply networking concepts related to containerized environments.
  • Provide production support to resolve critical Big Data pipeline and application issues, mitigating or minimizing any impact on Big Data applications.
  • Collaborate closely with Site Reliability Engineering (SRE), Customer Support (CS), Development, QA, and Systems Engineering teams to replicate complex issues, leveraging broad experience with UI, SQL, full-stack, and Big Data technologies.
  • Enforce data governance policies in Commercial and Regulated Big Data environments.
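
A minimal sketch of the kind of Kubernetes health check mentioned in the list above, using the official kubernetes Python client; the namespace name is an illustrative assumption, not a detail from this posting:

    # Minimal sketch: flag Deployments whose ready replica count lags the
    # desired count, using the official `kubernetes` Python client.
    # The namespace is an illustrative assumption.
    from kubernetes import client, config

    def find_degraded_deployments(namespace: str = "big-data") -> list:
        """Return (name, ready, desired) for Deployments below desired replicas."""
        config.load_kube_config()  # use config.load_incluster_config() inside a pod
        apps = client.AppsV1Api()
        degraded = []
        for dep in apps.list_namespaced_deployment(namespace).items:
            desired = dep.spec.replicas or 0
            ready = dep.status.ready_replicas or 0  # None until pods report ready
            if ready < desired:
                degraded.append((dep.metadata.name, ready, desired))
        return degraded

    if __name__ == "__main__":
        for name, ready, desired in find_degraded_deployments():
            print(f"{name}: {ready}/{desired} replicas ready")

In an on-call rotation, a check like this would typically run from a monitoring job and page on the result rather than printing it.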

Benefits

  • Health plans, including flexible spending accounts.
  • 401(k) Plan with company match.
  • Employee Stock Purchase Plan (ESPP).
  • Matching donations.
  • Flexible time away plan.
  • Family leave programs.

What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Industry: Professional, Scientific, and Technical Services
