About The Position

The Red Hat Ecosystems Engineering group is seeking a Senior Integration Engineer to join our growing team. In this role, you will work with a diverse team of highly motivated engineers to design, implement, and productize new AI products, focused on deep integration of the AI stack, hardware accelerators, and leading OEMs and CCSPs. You will also work closely with product management, other engineering groups within Nvidia, and Red Hat partners and lighthouse customers. What you will do in this role is detailed in the Responsibilities section below.

Requirements

  • 5+ years of relevant technical experience
  • Bachelor’s degree in Computer Science or equivalent professional experience.
  • Strong experience with RHEL or other Linux distributions
  • Advanced level of experience with Kubernetes
  • Experience with AI/ML technologies and frameworks (e.g., classifiers, PyTorch, TensorFlow)
  • Demonstrated technical leadership in a global team environment
  • Excellent written and verbal communication skills; fluent English language skills

Nice To Haves

  • Recent hands-on experience with distributed computation, either at the end-user or infrastructure-provider level
  • Experience with hardware accelerators (e.g., GPUs, FPGAs) for AI workloads is a plus
  • Background in DevOps or site reliability engineering (SRE)
  • Experience with performance analysis tools
  • Experience with Linux kernel development

Responsibilities

  • Play an active role in applying RHEL, Kubernetes, and Red Hat OpenShift to customer use cases, mostly focused on AI and edge
  • Work closely with partners and key customers to integrate their workloads on Red Hat’s platforms
  • Integrate software that leverages hardware accelerators (e.g., DPUs, GPUs, AIUs)
  • Collaborate with hardware and software engineers to ensure optimal integration between the Red Hat portfolio and accelerators
  • Contribute to the design and implementation of new features
  • Analyze and optimize the performance of AI workloads on accelerators
  • Stay up to date on the latest advancements in AI frameworks, hardware accelerators, and related technologies
  • Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling.

Benefits

  • Comprehensive medical, dental, and vision coverage
  • Flexible Spending Account - healthcare and dependent care
  • Health Savings Account - high deductible medical plan
  • Retirement 401(k) with employer match
  • Paid time off and holidays
  • Paid parental leave plans for all new parents
  • Leave benefits including disability, paid family medical leave, and paid military leave
  • Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 501-1,000
