About The Position

As a Software Engineer on the team, your work would include, but not be limited to, the responsibilities listed below.

Requirements

  • Bachelor's Degree in Computer Science or a related technical field AND 2+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, or Python; OR equivalent experience.
  • Preferred qualifications: Master's Degree in Computer Science or a related technical field AND 3+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, or Python; OR Bachelor's Degree in Computer Science or a related technical field AND 5+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR equivalent experience.
  • Technical background and a solid foundation in software engineering principles, computer architecture, GPU architecture, and hardware neural-net acceleration
  • Experience in end-to-end performance analysis and optimization of state-of-the-art LLMs, including proficiency with GPU profiling tools
  • Experience in DNN/LLM inference; experience with one or more DL frameworks such as PyTorch, TensorFlow, or ONNX Runtime; and familiarity with CUDA, ROCm, or Triton
  • Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers

Responsibilities

  • Identify and drive improvements to end-to-end inference performance of OpenAI models and other state-of-the-art LLMs
  • Optimize and monitor performance of LLMs, and build SW tooling that surfaces performance opportunities from the model level down to the systems and silicon level, to improve customer experience and reduce the footprint of the computing fleet
  • Enable fast time-to-market for LLMs and their deployments at scale by building SW tools that enable rapid porting of models to new Nvidia and AMD GPUs
  • Design, implement, and test functions or components for our AI/DNN/LLM frameworks and tools
  • Speed up and reduce the complexity of key components/pipelines to improve the performance and/or efficiency of our systems
  • Communicate and collaborate with our partners, both internal and external