About The Position

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About the Role: You'll work closely with engineering teams to expand test coverage across unit, integration, contract, and end-to-end layers while modernizing our test infrastructure. This role combines deep technical expertise in test automation with strategic thinking about quality at scale. You'll be building the testing foundation that ensures reliability and performance as we deploy AI security controls across browser extensions, API gateways, cloud platforms, and agentic systems. PLEASE NOTE: This role is hybrid, requiring 2-3 days per week on-site at one of the posted locations. Success Means: Contract Testing Excellence: Implement and scale contract testing across all services and API boundaries, reducing integration testing dependencies. Coverage Expansion: Work with service owners and independently to improve test coverage from current baseline - unit, integration, and E2E tests. Team Velocity: Onboard rapidly and contribute meaningfully within the first 60 days to an already fast-paced engineering environment. AI Quality Framework: Establish and/or contribute to efficacy testing of LLM models with measurable accuracy metrics.

Requirements

  • 10+ years combined software development and test automation experience with a proven track record of establishing quality frameworks and delivering reliable, scalable test coverage for enterprise-grade cloud SaaS products at scale.
  • Experience building testing frameworks and tooling for Cloud SaaS products
  • Strong computer science fundamentals (algorithms, data structures, distributed systems)
  • Expertise at programming languages: Python, Go, Javascript
  • Deep understanding of: Cloud architectures and microservices
  • Web Services: REST, gRPC, Protocol Buffers or similar API technologies
  • Data storage systems: PostgreSQL, Redis or similar databases
  • Container technologies: Docker, Kubernetes or similar orchestration platforms
  • Experience with CI/CD pipelines and testing in cloud environments (AWS/Azure/GCP)
  • Strong debugging skills with ability to troubleshoot complex distributed systems
  • Experience with cloud performance testing and monitoring tools (e.g., JMeter, Gatling, New Relic, Datadog, Prometheus/Grafana)
  • Performance and Scale Testing
  • Efficacy Testing
  • Test Data Set generation (synthetic or AI generated)
  • Exposure using LLMs

Nice To Haves

  • Experience testing AI/ML systems, LLM applications, or prompt engineering security
  • Performance testing experience with large-scale SaaS products handling high throughput, concurrent users, and distributed architectures
  • Familiarity with ML development lifecycle, model training, evaluation, and deployment
  • Hands-on experience with AI efficacy testing, adversarial testing, or red teaming
  • Experience with browser extension testing frameworks
  • Background in API gateway testing (Kong, Envoy, LiteLLM, etc.)
  • Experience with Model Context Protocol (MCP) or agentic AI systems
  • Experience scaling test infrastructure to support hundreds of engineers

Responsibilities

  • Design and implement contract testing framework to verify API contracts between various system components.
  • Contribute comprehensive test strategies spanning unit, integration, contract testing layers
  • Establish testing patterns and best practices for AI-powered detection capabilities and models efficacy
  • Develop comprehensive E2E test scenarios covering multi-collector deployments and policy enforcement workflows.
  • Implement quality telemetry through observability with detailed reporting.
  • Create metrics dashboards, and alerting.
  • Work with SRE and developers to create playbooks and support documentation.

Benefits

  • Market leader in compensation and equity awards
  • Comprehensive physical and mental wellness programs
  • Competitive vacation and holidays for recharge
  • Paid parental and adoption leaves
  • Professional development opportunities for all employees regardless of level or role
  • Employee Networks, geographic neighborhood groups, and volunteer opportunities to build connections
  • Vibrant office culture with world class amenities
  • Great Place to Work Certified™ across the globe
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service