Software Engineer II - Platform Anti-Abuse

Klaviyo
Boston, MA
Onsite

About The Position

The Core Infrastructure – Identity & Organizations (Core IO) pillar owns the foundational substrate for identity, access, organizations, and platform integrity at Klaviyo. We manage the critical path of the user journey, from login, to enforcing permissions, to operating within the correct organization and regional context, so that the rest of the platform can move fast and stay secure.

Within Core IO, the Platform Anti-Abuse (PAA) team defends Klaviyo's customers, their recipients, and Klaviyo's brand by preventing fraud, abuse, and violations of our Acceptable Use Policy and Terms of Service across all channels. We build the automated detection systems, rules-driven enforcement services, and shared platform tooling that keep Klaviyo's platform safe, and we partner closely with Compliance, Security, and product teams to make "doing the safe thing" the default for every new surface and API.

Why this role is exciting:

  • Fight real adversaries at scale: Abuse patterns evolve constantly; bad actors adapt, and so must your systems. You will build detection and enforcement capabilities that respond to emerging fraud patterns across email, SMS, and other channels, combining rules, signals, and ML to stay one step ahead.
  • Your work is the platform's safety net: The rules engine and enforcement services your team owns are consumed by messaging, campaigns, flows, and nearly every sending channel at Klaviyo. Improvements you make protect millions of email and SMS recipients and directly affect platform trust and deliverability.
  • Work at the intersection of rules, ML, and systems engineering: This role is unusually broad technically. You will build high-throughput Go services and Python rules pipelines, and integrate ML-based classifiers for content and URL abuse. Few platform roles combine adversarial problem-solving with this range of systems work.
  • Automate policy enforcement at scale: Manual compliance review doesn't scale. We are building systems that automatically evaluate accounts against our Acceptable Use Policy and trigger the right enforcement actions, reducing human bottlenecks and making enforcement consistent across account types.

Requirements

  • You are a mid-level software engineer who has shipped and supported production systems, and who is motivated by building defenses against real-world fraud and abuse.
  • Experienced systems builder: You have 2-5+ years of professional software engineering experience, including building and operating backend or full-stack services in production.
  • Strong fundamentals and debugging skills: You are comfortable reasoning about data models, API design, concurrency, and failure modes, and you can dig through logs, metrics, and traces to identify root causes and implement systemic fixes.
  • Security and abuse motivated: You are energized by adversarial problem spaces (fraud detection, policy enforcement, content moderation) and want to build systems that anticipate bad actors rather than just react to them.
  • Platform and signal mindset: You think about detection not just as individual rules but as a system of signals, coverage, and feedback loops. You like building shared enforcement APIs and detection pipelines that product teams plug into, making abuse prevention a reusable capability rather than something each team reimplements.
  • Ownership and collaboration: You take responsibility for outcomes, not just code. You are comfortable driving a project end-to-end, coordinating with Compliance and Security stakeholders, and communicating trade-offs clearly in design docs and pull requests.
  • You've already experimented with AI in work or personal projects, and you're excited to dive in and learn fast. You're hungry to responsibly explore new AI tools and workflows.
  • Minimum qualifications: 2-5+ years of professional software engineering experience.
  • Proficiency in at least one of Python or Go, and comfort working on backend and/or service-oriented systems, including web services or APIs backed by relational databases and/or caches (e.g., MySQL, Postgres, Redis).
  • Comfort reasoning about detection, classification, or enforcement systems (whether rules engines, content evaluation, or risk signals) and the trade-offs between precision, recall, and system performance.
  • Exposure to CI/CD pipelines and modern development workflows (code review, testing, deployments, on-call participation or support).
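To give a concrete flavor of the rules-driven account evaluation described above, here is a minimal, hypothetical sketch in Python. All names (`Account`, `Rule`, the thresholds, and the enforcement actions) are illustrative assumptions, not Klaviyo's actual data model or API:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of a rules-driven enforcement check; every name
# and threshold here is illustrative, not a real Klaviyo system.

@dataclass
class Account:
    bounce_rate: float      # fraction of recent sends that hard-bounced
    spam_complaints: int    # complaints in the last 24 hours
    verified: bool          # sender identity verified?

@dataclass
class Rule:
    name: str
    predicate: Callable[[Account], bool]  # True => rule is triggered
    action: str                           # enforcement action to take

RULES = [
    Rule("high_bounce_rate", lambda a: a.bounce_rate > 0.05, "pause_sending"),
    Rule("spam_complaint_spike", lambda a: a.spam_complaints > 100, "flag_for_review"),
    Rule("unverified_sender", lambda a: not a.verified, "require_verification"),
]

def evaluate(account: Account) -> List[str]:
    """Run every rule against an account and collect the triggered actions."""
    return [rule.action for rule in RULES if rule.predicate(account)]

risky = Account(bounce_rate=0.08, spam_complaints=12, verified=True)
print(evaluate(risky))  # ['pause_sending']
```

A real pipeline would add signal ingestion, rule versioning, and audit logging, but the core shape (declarative rules evaluated against account signals, emitting enforcement actions) is the pattern the role describes.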

Nice To Haves

  • Experience building detection or classification systems (rules engines, content classifiers, anomaly detection, or fraud/risk scoring) and reasoning about signal quality, false positive rates, and coverage.
  • Familiarity with ML model integration in production systems: calling model endpoints, processing outputs in rules pipelines, and monitoring model-driven decisions.
  • Working with cloud-native infrastructure (AWS, Kubernetes, Terraform, or similar) and building services designed to run at scale.
  • Exposure to compliance or policy enforcement domains (AUP/TOS enforcement, content moderation, fraud detection) and how platform decisions in these areas are made and measured.
  • Familiarity with observability stacks (Grafana, Datadog, Sentry, Splunk) and using them to drive reliability and detection-quality improvements.
  • Interest or experience in adjacent Core IO domains like Identity & Access Management or Organizations, especially where they intersect with account security, fraud signals, and org lifecycle.
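Several of the points above turn on detection quality: precision, recall, and false positive rates. As a minimal sketch of how those metrics fall out of labeled detection decisions (the flags and ground-truth labels below are made-up illustrative data):

```python
# Minimal sketch of detection-quality metrics; the predictions and
# ground-truth labels are made-up illustrative data, not real signals.

def precision_recall(predicted: list, actual: list):
    """Compute precision and recall for a batch of detection decisions.

    predicted[i] is True if the detector flagged item i as abusive;
    actual[i] is True if item i really was abusive (ground truth).
    """
    tp = sum(p and a for p, a in zip(predicted, actual))      # correctly flagged
    fp = sum(p and not a for p, a in zip(predicted, actual))  # false positives
    fn = sum(a and not p for p, a in zip(predicted, actual))  # missed abuse
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

flags = [True, True, False, False, False]   # what the detector flagged
truth = [True, False, True, True, False]    # what was actually abusive
print(precision_recall(flags, truth))  # (0.5, 0.3333333333333333)
```

Tightening a rule's threshold typically raises precision (fewer false positives) at the cost of recall (more missed abuse); tuning that trade-off per enforcement action is a core part of the work described here.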

Responsibilities

  • Own features end-to-end across design, implementation, rollout, and observability for abuse detection and enforcement capabilities: rules, classifiers, enforcement pipelines, and the platform services that other teams plug into.
  • Extend abuse enforcement to new product surfaces: Help bring anti-abuse rules and enforcement to product areas where coverage is currently limited, ensuring consistent policy application across channels.
  • Build content and link abuse detection systems: Help design and implement detection capabilities for malicious URLs, abusive image content, and other content-level signals, combining perceptual hashing, ML model integrations, and rule-based approaches.
  • Contribute to AUP automation and enforcement pipelines: Help build and scale the systems that automatically evaluate accounts against our Acceptable Use Policy, reducing reliance on manual compliance review.
  • Improve platform anti-abuse infrastructure: Evolve the team's microservices, rules orchestration layer, and abuse observability pipelines, making them faster, more reliable, and easier for the team and compliance stakeholders to operate.
  • Help define and refine standards for how other teams integrate with PAA's detection and enforcement APIs, so that product teams can add abuse coverage to new areas without reinventing detection logic per service.
  • Collaborate closely with partner teams: Work with Identity & Access Management (IAM) on authentication & authorization signals, with Organizations on account lifecycle and policy enforcement, and with Security and Compliance on AUP/TOS enforcement patterns.
  • Improve reliability and observability of anti-abuse services by instrumenting metrics and alerts, maintaining abuse impact dashboards, and contributing to on-call rotations and incident reviews.
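One of the responsibilities above mentions perceptual hashing for image-content abuse. As a toy sketch of the idea, here is an average-hash over an already-decoded grayscale grid; real pipelines would first decode and resize the image with an imaging library, and the pixel grids below are invented examples:

```python
# Toy sketch of average-hash perceptual hashing on an already-decoded
# grayscale grid; real pipelines first decode and resize the image.

def average_hash(pixels: list) -> int:
    """Build a bit-per-pixel hash: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests a near-duplicate image."""
    return bin(a ^ b).count("1")

banned = [[200, 200], [10, 10]]      # known abusive image (2x2 grayscale grid)
variant = [[190, 205], [12, 8]]      # slightly re-encoded copy of it
unrelated = [[10, 200], [200, 10]]   # different image entirely

print(hamming(average_hash(banned), average_hash(variant)))    # 0
print(hamming(average_hash(banned), average_hash(unrelated)))  # 2
```

Because the hash captures coarse brightness structure rather than exact bytes, re-encoded or lightly edited copies of a banned image land at a small Hamming distance, which is what makes this approach useful against adversaries who mutate content to evade exact-match blocklists.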

Benefits

  • We believe everyone deserves a fair shot at success and appreciate the experiences each person brings beyond the traditional job requirements.