Senior Staff Machine Learning Engineer - Frameworks

d-Matrix | Santa Clara, CA
Hybrid

About The Position

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration. We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

d-Matrix is a pioneering company specializing in data center AI inferencing solutions. Utilizing innovative in-memory computing techniques, d-Matrix develops cutting-edge hardware and software platforms designed to enhance the efficiency and scalability of generative AI applications.

The Model Factory team at d-Matrix is at the heart of cutting-edge AI and ML model development and deployment. We focus on building, optimizing, and deploying large-scale machine learning models with a deep emphasis on efficiency, automation, and scalability for d-Matrix hardware. If you're excited about working on state-of-the-art AI architectures, model deployment, and optimization, this is the perfect opportunity for you!

Requirements

  • BS in Computer Science and 7+ years of experience, with strong programming skills in Python and experience with ML frameworks such as PyTorch, TensorFlow, or JAX
  • Hands-on experience with model optimization, quantization, and inference acceleration
  • Deep understanding of transformer architectures, attention mechanisms, and distributed inference (tensor parallel, pipeline parallel, sequence parallel)
  • Knowledge of quantization and reduced-precision formats (INT8, BF16, FP16) and memory-efficient inference techniques
  • Solid grasp of software engineering best practices, including CI/CD, containerization (Docker, Kubernetes), and MLOps
  • Strong problem-solving skills and ability to work in a fast-paced, iterative development environment

Nice To Haves

  • Experience working with cloud-based ML pipelines (AWS, GCP, or Azure)
  • Experience with LLM fine-tuning, LoRA, PEFT, and KV cache optimizations
  • Contributions to open-source ML projects or research publications
  • Experience with low-level optimizations using CUDA, Triton, or XLA

Responsibilities

  • Design, build, and optimize machine learning deployment pipelines for large-scale models
  • Implement and enhance model inference frameworks
  • Develop automated workflows for model development, experimentation, and deployment
  • Collaborate with research, architecture, and engineering teams to improve model performance and efficiency
  • Work with distributed computing frameworks (e.g., PyTorch/XLA, JAX, TensorFlow, Ray) to optimize model parallelism and deployment
  • Implement scalable KV caching and memory-efficient inference techniques for transformer-based models
  • Monitor and optimize infrastructure performance across the custom hardware hierarchy (cards, servers, and racks) powered by d-Matrix's custom AI chips
  • Ensure best practices in ML model versioning, evaluation, and monitoring

Benefits

  • Competitive compensation, benefits, and opportunities for career growth