About The Position

Are you ready to be part of something big? We're hiring a Senior Integration Platform Engineer for our Sales Team! In this role, you'll engage with key decision-makers, forge impactful relationships with some of the world's most influential organizations, and directly contribute to the growth of Semrush.

Why Semrush? We are a global leader in online marketing technology, meeting market demand with rapid scaling. Don't miss the chance to join our unstoppable momentum and make history with us! Some highlights of our success include:

  • Semrush named a Leader in The Forrester Wave™: Search Engine Optimization Solutions, Q3 2025
  • $400M+ Annual Recurring Revenue
  • 118,000+ paying customers worldwide
  • 1M+ freemium users
  • Exceptional demand for our new Enterprise platform, with deals secured from global giants like P&G, Tesla, FedEx, Samsung, Amazon, and others

If you're looking for a role where your impact will be visible and meaningful, we'd love to hear from you.

Tasks In The Role

We are looking for a Senior Integration Platform Engineer to own and evolve the core integration and data backbone that underpins GTM analytics, enterprise-grade AI tooling, and business-critical data flows across Semrush. This is a global, high-trust engineering role with real ownership. You will design and operate secure, scalable cloud services and data integrations used across regions and teams, including regulated, SOX-relevant data flows (e.g., global compensation feeds). You will also play a key role in supporting post-acquisition integration work with Adobe. This role partners closely with Analytics, Go-To-Market Engineering, Security, and Finance, without owning GTM systems directly.

Requirements

  • Core engineering experience
  • Senior backend / platform engineer with strong systems thinking
  • Proven experience owning production, business-critical systems end-to-end
  • Strong Python, Go, or JavaScript engineering background
  • Hands-on experience building and operating production data warehouse tables (BigQuery or similar), with strong SQL and a focus on reliability, performance, and AI-ready data design
  • API design experience with attention to contracts, versioning, and backward compatibility
  • Experience with event-driven and asynchronous architectures
  • Data engineering fundamentals (required)
  • Strong grounding in data engineering principles, including schema evolution and data contracts; idempotent ingestion, replayability, and backfills; batching and late-arriving data; and protecting downstream analytics and reporting consumers
  • Experience operating data pipelines that support executive, financial, or compensation reporting
  • Comfort working in environments with auditability, controls, and change discipline (SOX familiarity is a plus)
  • Hands-on experience with GCP (Cloud Run, Cloud Functions, Pub/Sub)
  • Hands-on experience with Google Cloud container tooling (Cloud Run, GKE, Artifact Registry) and Docker
  • Experience with IAM, least-privilege access, and secrets management
  • Infrastructure-as-code (Terraform or equivalent)
  • Observability: logs, metrics, alerts, SLIs/SLOs
  • Native, daily use of AI coding tools such as Claude Code, Codex, Cursor, or equivalent
  • Experience applying AI tools to production code development and refactoring, debugging and incident analysis, and architectural trade-off evaluation
  • Experience assessing LLM cost economics, including model selection trade-offs (latency, quality, cost); batching and token-efficiency strategies; and build-vs-buy decisions for AI-powered workflows
  • Proven judgment shipping LLM-powered functionality in production-safe, cost-aware systems
  • Demonstrated ability to reason about business impact, not just technical correctness
  • Experience working with or adjacent to Sales, RevOps, or Finance teams, where data quality or availability affected revenue, incentives, or compensation
  • Comfort translating business requirements into robust, auditable technical systems
  • Strong judgment balancing speed, cost, risk, and correctness

Nice To Haves

  • Experience with orchestration tools such as Apache Airflow

Responsibilities

  • Design, build, and operate production APIs and cloud services (REST and/or gRPC) on Google Cloud Platform
  • Own cloud-deployed services and scripts (Cloud Run, Docker, etc.) that power integration, provisioning, and internal enablement platforms
  • Build and maintain secure, event-driven integration patterns (Pub/Sub, async workflows)
  • Own business-critical data integrations, including global compensation and finance-adjacent feeds with SOX relevance
  • Design and enforce security and access boundaries (IAM, secrets, service-to-service auth, cloud ↔ on-prem connectivity)
  • Drive cost-efficient cloud execution, including batching, async processing, and pricing-aware architecture decisions
  • Build and operate enterprise-grade AI services, with clear cost, latency, and quality trade-offs
  • Design reliable data ingestion patterns that support analytics, executive reporting, and downstream consumers
  • Act as primary owner for integration infrastructure and enterprise AI tooling for the RevOps team
  • Act as secondary owner for analytics pipelines to ensure coverage and eliminate single points of failure
  • Lead operational ownership: monitoring, alerting, incident response, root-cause analysis, and audit readiness

Benefits

  • Unlimited PTO
  • Health insurance
  • Travel insurance
  • Flexible working hours
  • Employee Assistance Program
  • Employee Resource Groups
  • Paid parental leave
  • Relief Fund
  • Corporate events and team-building activities
  • Snacks and drinks at the office