QA / DevOps Engineer

NetSpeek
Remote

About The Position

NetSpeek is the agentic control plane for enterprise physical infrastructure. We govern how AI agents reason about, decide on, and execute actions across enterprise endpoints. Our reasoning and execution layer — Lena — sits in customer production environments, where reliability, observability, and auditability are non-negotiable. The QA / DevOps Engineer owns platform quality, release reliability, and the AI-assisted engineering workflows our team uses to ship safely.

Requirements

  • 3+ years of combined QA and DevOps experience.
  • End-to-end ownership of a CI/CD pipeline in production.
  • Hands-on production experience with AWS (preferred), Azure, or GCP, and their deployment pipelines.
  • Daily use of AI-assisted engineering tools (Cursor, GitHub Copilot, Claude Code, or similar) in QA and DevOps workflows.
  • Familiarity with at least one observability stack (Elastic, Datadog, Splunk, Grafana, or similar).
  • API testing and debugging experience (Postman, REST-assured, or custom tooling).
  • Direct experience in software QA and modern testing workflows, including automation.
  • Strong troubleshooting and analytical skills.

Nice To Haves

  • GitHub Actions, Azure DevOps, or comparable CI/CD tooling experience.
  • Docker and containerized environment familiarity.
  • Experience designing test or evaluation workflows for AI systems (LLM output validation, RAG pipeline testing, prompt-based test orchestration, or comparable).
  • Startup or SaaS environment experience where QA also touched operations.

Responsibilities

  • Owning CI/CD pipelines that gate releases against the AI failure modes that matter (eval regressions, groundedness drift, performance regressions).
  • Building observability around AI behavior in production — beyond standard infra metrics.
  • Designing test strategies for workflows that interact with physical hardware (simulator-first, hardware-on-demand).
  • Driving AI-assisted engineering practice: tooling, prompt review, and evaluation of AI code suggestions.

Benefits

  • Flexible / unlimited time off
  • Health insurance
  • Equity participation, discussed at offer
  • Fully remote
  • AI-assisted tooling licensed by NetSpeek (Cursor, Claude Code, GitHub Copilot, or comparable)