About The Position

We are looking for a Senior Engineer to play a key technical leadership role in designing and advancing Wind River’s next‑generation intelligent systems platform. This position is ideal for an engineer who thrives at the intersection of cloud‑native infrastructure, edge computing, and AI/ML systems—and who wants to shape the architecture and implementation of distributed AI platforms used in mission‑critical environments.

Requirements

  • 5+ years of experience in software engineering, distributed systems, cloud platforms, or embedded systems.
  • Strong hands‑on experience with cloud‑native technologies (Kubernetes, containers, microservices).
  • Solid understanding of AI/ML infrastructure, model deployment, and edge inference optimization.
  • Proficiency in C/C++, Python, and modern DevOps practices (CI/CD, GitOps).
  • Experience with Linux‑based systems, real‑time environments, or embedded platforms.
  • Familiarity with cloud platforms (AWS, Azure, GCP) and hybrid cloud architectures.

Nice To Haves

  • Experience in mission‑critical or safety‑critical domains (automotive, aerospace, industrial, medical).
  • Knowledge of virtualization technologies (KVM, ACRN, hypervisors) and secure partitioning.
  • Background in distributed AI, federated learning, or edge‑to‑cloud orchestration frameworks.
  • Contributions to open‑source cloud, AI, or embedded systems projects.

Responsibilities

  • Design and implement core components of Wind River’s cloud‑to‑edge AI platform, including orchestration layers, data pipelines, and model lifecycle management.
  • Develop scalable, modular, and secure software architectures for distributed AI workloads across heterogeneous edge environments.
  • Build cloud‑native services and APIs that integrate seamlessly with Wind River Studio and edge operating systems (VxWorks, Linux).
  • Contribute to architectural decisions involving microservices, containerization, service mesh, and hybrid cloud deployments.
  • Implement systems for model deployment, versioning, CI/CD for AI, and real‑time inference pipelines.
  • Integrate AI frameworks (TensorFlow, PyTorch, ONNX Runtime, TensorRT) into cloud-edge workflows.
  • Optimize inference performance across diverse hardware accelerators (GPU, NPU, FPGA, VPU).
  • Ensure platform components meet stringent requirements for determinism, safety, and reliability in mission‑critical industries.
  • Implement security best practices for distributed AI, including model protection, secure communication, and data integrity.
  • Profile, tune, and optimize system performance across cloud and edge environments.
  • Work closely with architects, senior engineers, and product managers to translate platform vision into robust implementations.
  • Mentor junior engineers and contribute to engineering best practices, design reviews, and technical roadmaps.
  • Engage with customers and partners to understand requirements and support advanced solution development.
  • Contribute to open‑source initiatives and represent Wind River in technical communities when appropriate.

Benefits

  • Competitive Salary: Attractive base salary with uncapped, performance‑based incentives.
  • Benefits: Comprehensive benefits package including health, dental, vision, and retirement plans.
  • Growth Opportunities: Opportunities for professional development and career advancement within a leading technology company.
  • Innovative Environment: Work with cutting-edge technology and a team of passionate professionals dedicated to driving innovation.