About The Position

We are looking for a Performance QA Engineer to specialize in benchmarking and optimizing our Agentic AI platform. You will be the gatekeeper of the "User Experience of Thought," ensuring that as our AI agents plan, reason, and execute tasks, they stay within acceptable latency and cost budgets. Your mission is to stress-test the entire AI pipeline—from the initial prompt to the final autonomous action—identifying bottlenecks in LLM response times, RAG (Retrieval-Augmented Generation) retrieval speeds, and third-party API orchestration.

Requirements

  • Experience: 8+ years in Performance Engineering, with a specific focus on AI/ML applications or high-concurrency distributed systems.
  • Tooling Proficiency: Expert-level experience with performance testing tools such as Locust, JMeter, or k6, applied to Python-based AI backends.
  • Python Mastery: Strong ability to write custom scripts to simulate complex, multi-step user/agent interactions.
  • AI Infrastructure Knowledge: Understanding of LLM-specific performance factors, such as quantization, KV caching, and the impact of different model architectures on latency.
  • Observability Expertise: Experience with tools like Prometheus, Grafana, LangSmith, or Weights & Biases to monitor system health and AI-specific metrics.
  • Database Performance: Experience testing the query latency of Vector Databases under heavy load.

Responsibilities

  • Latency Benchmarking: Measure and optimize TTFT (Time to First Token) and Total Request Latency for complex agentic workflows that involve multiple reasoning steps.
  • Agentic Loop Stress Testing: Simulate high-concurrency environments to see how the system handles hundreds of autonomous agents running simultaneously, particularly focusing on API rate limits and GPU/compute bottlenecks.
  • RAG Performance Analysis: Test the speed and efficiency of the vector database retrieval process. Identify how increasing the "context window" size impacts overall system performance.
  • Token Throughput Monitoring: Analyze the "tokens per second" (TPS) metrics and identify when model-switching (e.g., from a large model to a smaller one) is necessary to maintain performance.
  • Cost vs. Performance Optimization: Create reports that balance performance gains against token costs, helping the team find the "sweet spot" for production-grade agents.
  • Orchestration Bottleneck Identification: Use profiling tools to find delays in the "hand-off" between different agents or between the agent and external tools (APIs, databases).
  • Automated Performance Regressions: Integrate performance testing into the CI/CD pipeline to ensure that new prompt versions or architectural changes don't degrade the agent's speed.
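To illustrate the kind of measurement the latency and throughput responsibilities above involve, here is a minimal Python sketch that computes TTFT (time to first token) and tokens per second from a streaming response. The `fake_stream` generator stands in for a real LLM streaming endpoint; all names and timing values are illustrative, not part of any specific platform.

```python
import time

def measure_stream(token_stream):
    """Consume a token stream; return (ttft_seconds, tokens_per_second, total_seconds)."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_stream:
        now = time.perf_counter()
        if ttft is None:
            # First token observed: record time to first token.
            ttft = now - start
        count += 1
    total = time.perf_counter() - start
    tps = count / total if total > 0 else 0.0
    return ttft, tps, total

def fake_stream(n_tokens=50, prefill_delay=0.02, per_token_delay=0.002):
    """Simulated LLM stream: an initial 'prefill' delay, then steady decoding."""
    time.sleep(prefill_delay)
    for i in range(n_tokens):
        if i:
            time.sleep(per_token_delay)
        yield "tok"

ttft, tps, total = measure_stream(fake_stream())
```

In practice the same harness logic would live inside a Locust or k6 user script so TTFT and TPS can be collected per virtual user under concurrency.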

Benefits

  • Medical, vision, and dental benefits
  • 401(k) retirement plan
  • Variable pay/incentives
  • Paid time off
  • Paid holidays for full-time employees


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
