Observability Pipeline Engineer

Charles Schwab Corporation
Omaha, NE (Onsite)

About The Position

At Schwab, you're empowered to make an impact on your career. Here, innovative thought meets creative problem solving, helping us "challenge the status quo" and transform the finance industry together. We believe in the importance of in-office collaboration and fully intend for the selected candidate for this role to work on site in the specified location(s).

As a member of the Enterprise Telemetry team, this role supports and maintains the enterprise monitoring and telemetry platforms: the Confluent Enterprise Platform (Kafka), ITRS Geneos, and the OpenTelemetry (OTEL) telemetry pipeline. Activities include supporting Kafka producers and consumers, administering ITRS agents, managing the OTEL pipeline, troubleshooting and resolving issues, identifying opportunities for improvement, and creating reference and run-book documentation. The role may also participate in developing observability dashboards and configuring monitoring and alerting as needed. You must be able to plan, coordinate, and implement changes and use tooling to troubleshoot incidents; strong verbal and written communication skills are required.

This position helps monitor the health of these environments and addresses issues in a timely manner. Duties also include on-boarding new producer and consumer use cases, performing software upgrades, improving processes, and filling additional platform support roles, as well as contributing to the build and support of the enterprise telemetry pipeline. The ideal candidate is proficient with monitoring tools and Linux administration, and proficient in Kafka administration, including installing software, modifying configuration files, and managing agents, with strong multitasking and organizational skills. Splunk, Grafana, and Datadog experience is a plus.

Requirements

  • Deep understanding of the Confluent Enterprise Platform components: brokers, topics, partitions, producers, consumers, ZooKeeper, and KRaft.
  • Ability to set up and configure on-prem Kafka components, replication factors, and partitioning.
  • Experience engineering logging platforms.
  • Understanding of telemetry monitoring platforms and concepts, such as ITRS Geneos, OpenTelemetry agents (e.g., Grafana Alloy), Grafana Cloud, and Datadog.
  • Deep understanding of security protocols (SSL/TLS, SASL, LDAP, etc.) and role-based authentication.
  • Experience working in telemetry monitoring (alerts, events, logs, metrics, and traces).
  • Experience working in Linux/Unix, Windows, and virtualized environments.
  • Understanding of cloud environments (AWS, Azure, GCP, and PCF).
  • Familiarity with DNS, load balancing, and firewalls.
  • Ability to analyze logs to diagnose issues.
  • Experience using other monitoring or analytics tools such as Splunk or Prometheus.
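The Kafka requirements above hinge on how keyed records are routed to partitions. The following is a minimal sketch of that routing idea (Kafka's default partitioner actually hashes the serialized key with murmur2; CRC32 stands in here purely as a deterministic, stdlib-only illustration):

```python
import zlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Deterministically route a keyed record to one of num_partitions.

    Sketch only: real Kafka producers use murmur2 over the serialized
    key; CRC32 is used here so the example is self-contained.
    """
    return (zlib.crc32(key) & 0x7FFFFFFF) % num_partitions

# Records with the same key always land on the same partition,
# which is what preserves per-key ordering within a topic.
p1 = choose_partition(b"account-42", 6)
p2 = choose_partition(b"account-42", 6)
assert p1 == p2
```

Because the partition count feeds the modulo, increasing partitions on an existing topic changes where keys land, which is why replication factors and partition counts are chosen up front.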

Nice To Haves

  • Scripting experience with Python, Bash, PowerShell, or similar.
  • Knowledge of or experience with high-level languages such as Java or Go.

Responsibilities

  • On-boarding new Kafka producer and consumer use cases.
  • Engineering and supporting the enterprise telemetry pipeline.
  • Testing and deploying software upgrades.
  • Managing and supporting telemetry agents.
  • Supporting OpenTelemetry collectors.
  • Troubleshooting and resolving issues.
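The responsibilities above center on an OpenTelemetry collector feeding telemetry into Kafka. A minimal collector pipeline along those lines might look like the following sketch (broker address and topic name are placeholders, and the `kafka` exporter ships with the collector-contrib distribution):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  kafka:
    brokers: ["broker-1:9092"]   # placeholder broker address
    topic: otlp_spans            # placeholder topic name

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [kafka]
```

On-boarding a new producer use case in this model typically means provisioning a topic and pointing a pipeline's exporter at it.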


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Industry

Securities, Commodity Contracts, and Other Financial Investments and Related Activities

Education Level

No Education Listed

Number of Employees

5,001-10,000 employees
