About The Position

We are looking for a hands-on DevOps / Software Automation Engineer to design, build, and operate an end-to-end automated CPU performance benchmarking platform. This role works closely with CPU performance engineers to automate manual benchmarking workflows, enable repeatable and scalable performance runs, and deliver fast, reliable performance insights across multiple benchmark suites. You will be a critical force multiplier for performance engineers, owning automation, CI/CD, infrastructure, execution workflows, monitoring, and troubleshooting, so performance teams can focus on analysis rather than operational overhead.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
  • Strong Python and Linux shell scripting skills.
  • Hands-on experience with Jenkins, CI/CD pipelines, and GitHub.
  • Solid understanding of Linux systems, OS tuning, and server environments.
  • Experience automating infrastructure using Ansible or similar tools.
  • Ability to debug complex system, automation, or execution issues independently.
  • Strong communication skills; able to work closely with non-software performance engineers.

Nice To Haves

  • Experience with CPU or system performance benchmarking (SPEC, internal benchmarks, stress tools, etc.).
  • Familiarity with Spark, Kafka, Databricks, or large-scale log processing.
  • Experience with Docker and Kubernetes.
  • Knowledge of monitoring and observability tools (Prometheus, Grafana, Zabbix, New Relic).
  • Exposure to data visualization and reporting tools (Power BI).

Responsibilities

Performance Benchmarking Automation

  • Design and implement fully automated workflows for CPU performance benchmarks (setup, execution, data collection, validation, and reporting).
  • Translate manual performance engineering processes into scalable automation pipelines.
  • Enable one-click or CI-triggered benchmark execution with standardized, repeatable results.
  • Automate log parsing, metrics extraction, and data structuring for downstream analysis.

CI/CD & Execution Orchestration

  • Build and maintain CI/CD pipelines (Jenkins/GitHub) for benchmark execution and infrastructure workflows.
  • Integrate automation with versioned benchmark configurations, scripts, and artifacts.
  • Ensure reproducibility, traceability, and auditability of performance runs.

Infrastructure & Platform Engineering

  • Automate bare-metal and virtual server provisioning, OS deployment, and system configuration at scale.
  • Manage Linux-based environments optimized for CPU performance testing.
  • Containerize services (Docker) and orchestrate where applicable (Kubernetes).

Reliability, Monitoring & Support

  • Monitor platform health, benchmark execution, and infrastructure using observability tools.
  • Actively unblock performance engineers during automated runs by debugging failures, identifying root causes, and applying quick fixes or workarounds.
  • Perform capacity planning and scale systems to support increasing benchmark demand.

Data & Insights Enablement

  • Process and structure benchmark data using Python, Spark, or Databricks.
  • Support dashboards and reporting (e.g., Power BI) that provide quick performance insights to engineering stakeholders.

Collaboration & Documentation

  • Work day-to-day with CPU performance engineers to understand workflows and continuously improve automation.
  • Document architectures, workflows, execution guides, and troubleshooting procedures.
  • Partner with internal IT teams as needed for networking, hardware, and security alignment.