About The Position

At Bayer we’re visionaries, driven to solve the world’s toughest challenges and striving for a world where ‘Health for all, Hunger for none’ is no longer a dream, but a real possibility. We’re doing it with energy, curiosity and sheer dedication, always learning from the unique perspectives of those around us, expanding our thinking, growing our capabilities and redefining ‘impossible’. There are so many reasons to join us. If you’re hungry to build a varied and meaningful career in a community of brilliant and diverse minds to make a real difference, there’s only one choice.

Senior Cloud Engineer, Observability

As Bayer Crop Science’s digital farming arm, we advance regenerative agriculture and technology breakthroughs using agronomic science, data science, engineering, and real-world farming experience. Our shared AWS platform enables hundreds of engineers to ship secure, reliable software faster. We’re looking for an entrepreneurial builder who treats observability as a product: paved roads for telemetry, opinionated patterns for dashboards and alerts, and a relentless focus on improving signal quality and reducing time-to-detect and time-to-recover. You’ll partner with delivery teams, Security, and Data to standardize how we instrument services, monitor reliability, and learn from production.

Requirements

  • Bachelor’s in computer science/engineering or equivalent experience.
  • 5+ years of hands-on AWS experience operating production workloads.
  • Deep practical experience with observability in production, including:
      • Datadog and/or CloudWatch (dashboards, monitors/alerts, log search, correlation)
      • Designing actionable alerts (noise reduction, ownership, runbook-first alerts)
      • Defining/using SLIs/SLOs and reliability metrics to drive behavior
  • Strong proficiency with Infrastructure as Code (Terraform; CloudFormation a plus).
  • Strong programming skills for automation/tooling (Python, Go, or similar).
  • Solid grasp of cloud architecture, networking, and security fundamentals.

Nice To Haves

  • Experience productizing observability enablement (templates, golden paths, standards, onboarding workflows).
  • CI/CD at scale (GitLab pipelines), including integrating reliability/telemetry guardrails into delivery workflows.
  • Logging/telemetry platforms beyond CloudWatch/Datadog (e.g., ELK/OpenSearch) and experience managing scale concerns (volume, retention, cardinality).
  • Container platforms (ECS/EKS) and common AWS data services (RDS/Aurora, S3/lake patterns, MSK/Kinesis).
  • FinOps experience related to observability (tagging, allocation, optimizing telemetry cost).
  • Relevant AWS certifications and excellent communication skills.

Responsibilities

  • Observability Enablement & Support (Primary Focus)
      • Be the hands-on SME for our observability toolchain (e.g., Datadog, CloudWatch, OpenSearch), including log pipelines, tracing/telemetry standards, and platform templates.
      • Run office hours, produce exemplars, and pair with teams to implement “known-good” instrumentation and alerting.
      • Triage and resolve observability-related platform requests (new service onboarding, log/metric gaps, noisy alerts, dashboard standards) with clear ownership and measurable outcomes.
      • Establish and operationalize SLIs/SLOs for key platform components and enable teams to define service SLOs without reinventing the wheel.
  • Own Observability Paved Roads & Golden Paths
      • Maintain opinionated “golden paths” for:
          • Logging (standard fields/tags, retention, routing, searchability)
          • Metrics (naming conventions, cardinality guardrails, standard RED/USE views)
          • Tracing (service maps, critical spans, propagation standards)
          • Dashboards (starter dashboards by service type plus curated views for platform reliability)
      • Provide reusable templates for alerting patterns (latency, error rate, saturation, dependency failures), tuned for actionable paging vs. noise.
  • Reliability Outcomes (Through Signals, Not Heroics)
      • Reduce MTTR by improving detection, triage paths, runbooks, and “what changed” visibility.
      • Drive reliability reviews focused on observability gaps: missing signals, unclear ownership, bad alerts, and uninstrumented failure modes.
      • Partner with delivery teams to turn recurring incidents into durable fixes (instrumentation + alerting + automation + documentation).
  • Observability + DevSecOps Integration
      • Embed observability checks into CI/CD and platform workflows (e.g., telemetry guardrails, dashboard/monitor templates, logging standards checks).
      • Partner with Security/Compliance to ensure telemetry supports auditability and incident investigation without ad-hoc effort.
  • Measure, Learn, Iterate (Ownership Mindset)
      • Define and report platform observability KPIs: alert noise rate, % actionable alerts, MTTA/MTTR trends, onboarding time to “fully observable,” runbook coverage, incident recurrence.
      • Run lightweight experiments to improve signal quality (threshold tuning, monitor redesign, dashboard UX), and ship improvements like a product owner.
  • Cost Stewardship for Telemetry (FinOps-Aware Observability)
      • Create cost-aware telemetry standards (log volume controls, metric cardinality guidance, sampling strategies, retention tiers).
      • Help teams optimize spend while improving reliability outcomes (“cheaper + better” logging/metrics patterns).
  • Collaboration & Mentorship
      • Serve as a trusted partner to delivery units, Security, and Data, turning pain points into paved-road improvements.
      • Mentor engineers and uplift organizational practices for incident response, reliability signals, and operational excellence.

Benefits

  • Health care
  • Vision
  • Dental
  • Retirement
  • PTO
  • Sick leave