AI Platform Systems Software Engineer

eBay · Austin, TX · Hybrid

About The Position

At eBay, we’re more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We’re in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

About The Team & Role

At eBay, we are building the next-generation AI platform to power experiences for millions of users worldwide. Our AI Platform (AIP) provides the scalable, secure, and efficient foundation for deploying and optimizing advanced machine learning and large language model (LLM) workloads at production scale. We enable teams across eBay to move from experimentation to global deployment with speed, reliability, and efficiency.

We are seeking an experienced AI Platform Systems Software Engineer (Infrastructure) to join our AI Platform team. In this role, you will design, implement, and optimize the core infrastructure that powers AI/ML workloads across eBay. You will work on highly distributed systems, cloud-native services, and performance-critical components that make large-scale inference and training possible. You will be part of the team responsible for both the control plane (cluster management, scheduling, user access) and the data plane (execution, resource allocation, accelerator integration). Your work will directly impact the scalability, performance, and reliability of AI applications that serve eBay’s global marketplace.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent experience).
  • 8-10 years of experience building and maintaining infrastructure for highly available, scalable, and performant distributed systems.
  • Proven expertise with cloud-native technologies (AWS, GCP, Azure) and Kubernetes-based deployments.
  • Hands-on experience running ML training and inference with Ray (ray.io), e.g., Ray Train/Tune for distributed training and Ray Serve for production inference, covering autoscaling, fault tolerance, observability, and multi-tenant operations (a minimal Ray Serve sketch follows this list).
  • Deep understanding of networking, security, authentication, and identity management in distributed/cloud environments.
  • Hands-on experience with observability stacks (Prometheus, Grafana, OpenTelemetry, etc.).
  • Strong coding skills in Go and/or Python; familiarity with other systems-level languages is a plus.
  • Knowledge of Linux internals, containers, and storage systems.
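
For context on the Ray requirement above, here is a minimal, illustrative Ray Serve sketch. It is not eBay platform code: the deployment class, payload shape, autoscaling bounds, and resource numbers are placeholder assumptions.

```python
import ray
from ray import serve


# Placeholder deployment; autoscaling bounds and CPU request are illustrative.
@serve.deployment(
    autoscaling_config={"min_replicas": 1, "max_replicas": 4},
    ray_actor_options={"num_cpus": 1},
)
class EchoModel:
    def __call__(self, payload: dict) -> dict:
        # A real deployment would run model inference here.
        return {"echo": payload}


if __name__ == "__main__":
    ray.init()
    # serve.run deploys the app on the local Ray cluster and returns a
    # handle for direct (non-HTTP) calls (Ray 2.7+ handle API).
    handle = serve.run(EchoModel.bind())
    print(handle.remote({"ping": "pong"}).result())
```

In production the same deployment would typically sit behind Serve’s HTTP ingress with autoscaling driven by request load, which is the operational surface (scaling, fault tolerance, observability) this bullet refers to.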

Nice To Haves

  • Experience with GPU/accelerator integration and optimization (NVIDIA, AMD, TPU, etc.) is highly desirable.

Responsibilities

  • Design and scale services to orchestrate AI/ML clusters across cloud and on-prem environments, supporting VM- and Kubernetes-based deployments, including Ray (ray.io) clusters for distributed training and online inference.
  • Develop and optimize intelligent scheduling and resource management systems for heterogeneous compute clusters (CPU, GPU, accelerators); a minimal scheduling sketch follows this list.
  • Integrate Ray Train/Tune for large-scale distributed training workflows and Ray Serve for low-latency, autoscaled inference; build platform hooks for observability, canary/A-B rollouts, and fault tolerance.
  • Build features to improve reliability, performance, observability, and cost-efficiency of AI workloads at scale.
  • Enhance the control plane to support secure multi-tenancy and enterprise-grade governance.
  • Implement systems for container management, dependency resolution, and large-scale model distribution.
  • Collaborate with ML researchers, applied scientists, and distributed systems engineers to drive platform innovation.
  • Provide production support and work closely with field teams to resolve infrastructure issues.
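
As a rough illustration of the resource-aware scheduling responsibility above, the sketch below reserves CPU/GPU bundles with a Ray placement group and pins tasks to them. It is an assumption-laden example (bundle sizes, task body, and shard count are placeholders) and needs a cluster with at least two free GPUs to run.

```python
import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy


# Illustrative task: the resource request tells Ray's scheduler to place it
# only on a node with one free GPU and two free CPUs.
@ray.remote(num_cpus=2, num_gpus=1)
def train_shard(shard_id: int) -> str:
    return f"trained shard {shard_id}"


if __name__ == "__main__":
    ray.init()

    # Reserve two CPU+GPU bundles atomically (gang scheduling); PACK tries to
    # co-locate the bundles on as few nodes as possible.
    pg = placement_group([{"CPU": 2, "GPU": 1}] * 2, strategy="PACK")
    ray.get(pg.ready())

    # Pin the tasks to the reserved bundles instead of the general pool.
    strategy = PlacementGroupSchedulingStrategy(placement_group=pg)
    refs = [
        train_shard.options(scheduling_strategy=strategy).remote(i)
        for i in range(2)
    ]
    print(ray.get(refs))
```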

Benefits

  • The total compensation package for this position may also include a target bonus and restricted stock units (as applicable), in addition to a full range of medical, financial, and/or other benefits, including 401(k) eligibility and various paid time off benefits such as PTO and parental leave.
  • Details of participation in these benefit plans will be provided if an employee receives an offer of employment.