Contract Software Engineer (Test Architect)

Stable Kernel · Atlanta, GA (Hybrid)

About The Position

As a Stable Kernel Contract Software Engineer, you play an essential role in setting our portfolio of world-class clients up for success through the development and delivery of their most innovative, transformational initiatives. You will collaborate daily with engineers and product team members, make decisions that influence the path of a product roadmap, and leverage software development best practices to drive quality engineering excellence. Your knowledgeable practice, reliability, and consultative nature make you an engineer whom stakeholders and teammates trust. You will lead our Shift Left philosophy, architecting intelligent and resilient test platforms that scale across cloud-native systems, serve enterprise clients, and power AI-driven digital experiences. You'll also help define, instrument, and improve DORA metrics, enabling the organization to measure and continuously optimize delivery performance, reliability, and recovery across all engineering teams.
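As a rough illustration of the DORA instrumentation this role drives, the four key metrics (deployment frequency, lead time for changes, change failure rate, and time to restore) can be computed from deployment records. This is a minimal sketch over a hypothetical record shape, not a description of any particular client's pipeline:

```python
from datetime import datetime

# Hypothetical deployment records; field names are illustrative only.
deployments = [
    {"committed": datetime(2024, 5, 1, 9),  "deployed": datetime(2024, 5, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 11),
     "failed": True,  "restored": datetime(2024, 5, 3, 13)},
    {"committed": datetime(2024, 5, 6, 8),  "deployed": datetime(2024, 5, 6, 12),
     "failed": False, "restored": None},
]

def dora_metrics(deps, window_days=7):
    """Compute the four DORA metrics over a fixed reporting window."""
    n = len(deps)
    deploy_frequency = n / window_days  # deployments per day
    # Mean lead time for changes, in hours (commit -> production).
    lead_time_hours = sum(
        (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deps
    ) / n
    failures = [d for d in deps if d["failed"]]
    change_failure_rate = len(failures) / n
    # Mean time to restore service after a failed deployment, in hours.
    mttr_hours = (
        sum((d["restored"] - d["deployed"]).total_seconds() / 3600
            for d in failures) / len(failures)
        if failures else 0.0
    )
    return deploy_frequency, lead_time_hours, change_failure_rate, mttr_hours

freq, lead, cfr, mttr = dora_metrics(deployments)
```

In practice these records would be fed from CI/CD and incident tooling rather than hand-built dictionaries; the point is that each metric reduces to simple arithmetic once delivery events are instrumented.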

Requirements

  • 10+ years of experience in software engineering and/or test architecture, with strong full-stack development expertise
  • Proven experience designing test strategies and frameworks for complex, distributed systems
  • Mastery of modern CI/CD tools (GitHub Actions, Jenkins, ArgoCD, etc.) and cloud platforms (AWS, Azure, GCP)
  • Hands-on experience with performance benchmarking and automated testing in cloud environments
  • Practical knowledge of Blue/Green, Canary, Dark Launch, and Feature Flag strategies
  • Proficiency with test frameworks and tools: Cypress, Playwright, XCTest, Espresso, Appium, Selenium, JUnit/TestNG, RESTAssured
  • Strong data strategy mindset using RDBMS, synthetic data generation, service virtualization, and test data management systems
  • Familiarity with AI/ML systems testing, and curiosity for leveraging AI in test automation and software development
  • Excellent communicator and mentor, able to inspire technical excellence and continuous learning
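The synthetic data generation called out above can be as simple as a seeded generator that produces reproducible fixtures. A minimal sketch, with an invented record shape purely for illustration:

```python
import random
import string

def synthetic_users(count, seed=42):
    """Generate deterministic synthetic user records for test fixtures.

    Field names and formats here are illustrative, not a real schema.
    Seeding the RNG makes every test run reproduce identical data.
    """
    rng = random.Random(seed)
    domains = ["example.com", "example.org"]
    users = []
    for i in range(count):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({
            "id": i,
            "username": name,
            "email": f"{name}@{rng.choice(domains)}",
            "age": rng.randint(18, 90),
        })
    return users

fixtures = synthetic_users(5)
```

Determinism is the design point: two runs with the same seed yield identical fixtures, so failures reproduce exactly, while varying the seed gives bulk data on demand.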

Nice To Haves

  • Familiarity with service mesh, observability stacks (OpenTelemetry, Datadog, Grafana), and chaos testing frameworks
  • Cloud certifications (AWS, Azure, GCP) or Kubernetes certification (CKA, CKAD)
  • Experience with data observability, model validation for AI/ML systems, or testing recommendation engines
  • Experience designing and implementing AI-augmented testing pipelines that shift from probabilistic detection to deterministic validation, using agentic workflows to trigger test suites against runtime environments with synthetic data
  • Experience with GILT testing complexities: globalization, internationalization, localization, and translation testing across multiple markets
  • Contributions to open-source testing or DevOps communities

Responsibilities

  • Define and lead end-to-end testing strategy across frontend (mobile & web), integration, and backend platforms
  • Establish reliable multi-tiered quality gates to prevent defect propagation to higher environments
  • Develop blueprints that incorporate test feedback as a function of system architecture and application logic
  • Partner with Engineering and Product leaders to make testing and observability first-class citizens in product development
  • Implement TestOps practices by integrating test execution, monitoring, and analytics into CI/CD workflows
  • Leverage cloud-based device farms (e.g., AWS Device Farm, BrowserStack, Sauce Labs) to run mobile/web tests across real devices as part of release pipelines
  • Drive deployment strategies using Blue/Green, Canary, and Feature Flags to minimize risk and improve delivery velocity
  • Enable Infrastructure as Code (IaC) test validation using containers, orchestration systems, and cloud-native pipelines
  • Instrument KPI data sources: unit coverage, execution time, flaky rate, and mean time to fault location
  • Architect and implement performance testing frameworks that validate load, stress, scalability, and reliability under production-like conditions
  • Define and monitor SLOs, SLIs, and release health KPIs to proactively detect degradation and optimize service availability
  • Incorporate performance, chaos, and resilience testing into pre-release and continuous delivery stages
  • Design atomic, scalable, and self-contained automated tests that are integrated at each layer of the testing pyramid
  • Build test SDKs and APIs that allow product and engineering teams to trigger tests on-demand or integrate them into their services
  • Create and maintain Test Data Management solutions to generate mock, synthetic, and bulk data on-demand at runtime
  • Design test data strategies that support GILT requirements: regional formats, currencies, tax regimes, and locale-specific data across international markets
  • Apply AI and LLMs to testing workflows: test generation, coverage analysis, anomaly detection, root cause acceleration, and spec framework generation
  • Evaluate and integrate AI observability tooling (LangFuse, LangSmith, or equivalent) for monitoring AI-assisted test generation quality, prompt effectiveness, and model behavior in testing contexts
  • Promote the use of AI and LLMs in developer workflows to boost productivity and build smart automation strategies
  • Collaborate across cross-functional teams to evangelize shift left principles
  • Review test design and implementation to reinforce best practices, scalability, and security
  • Mentor engineers in test-driven development, CI/CD automation, data-driven quality metrics, and platform thinking
  • Facilitate knowledge sharing and learning community initiatives to upskill teams in modern testing paradigms
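One concrete example of the KPI instrumentation above: flaky rate can be derived from rerun history, treating a test as flaky when it both passes and fails across retries of the same build. A minimal sketch, with an invented input shape for illustration:

```python
from collections import defaultdict

def flaky_rate(runs):
    """Fraction of tests with mixed outcomes across reruns.

    `runs` is a list of (test_name, passed) tuples aggregated over
    retries of one build -- a simplified model of a TestOps KPI feed.
    """
    outcomes = defaultdict(set)
    for name, passed in runs:
        outcomes[name].add(passed)
    # Flaky = observed both True and False for the same test.
    flaky = [name for name, seen in outcomes.items() if len(seen) == 2]
    return len(flaky) / len(outcomes) if outcomes else 0.0

history = [
    ("test_login", True), ("test_login", False),  # flaky: mixed outcomes
    ("test_cart", True),  ("test_cart", True),    # stable pass
    ("test_pay", False),  ("test_pay", False),    # stable fail (broken, not flaky)
]
rate = flaky_rate(history)  # 1 flaky test out of 3
```

Feeding this into CI dashboards alongside coverage and execution time gives teams the data-driven quality metrics the role is expected to mentor toward.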
© 2024 Teal Labs, Inc