Sr. Staff Edge AI Applied Machine Learning Engineer

Ambiq Micro, Inc. | Austin, TX
Onsite

About The Position

Ambiq's mission is to enable intelligence everywhere by delivering the lowest-power semiconductor solutions. Ambiq is a pioneer and a leading provider of ultra-low-power semiconductor solutions based on our proprietary and patented sub- and near-threshold technologies. With the increasing power requirements of artificial intelligence (AI) computing, our customers rely on our solutions to deliver AI to edge environments. Our hardware and software innovations deliver a multi-fold improvement in power consumption over traditional semiconductor designs without expensive process geometry scaling. We began in 2010 by addressing the power consumption challenges of battery-powered devices at the edge, where they were most pronounced. As of the beginning of 2025, we have shipped more than 280 million units worldwide.

Our innovative and fast-moving teams in design, research, development, production, marketing, sales, and operations are spread across several continents, including the US (Austin), Taiwan (Hsinchu), China (Shanghai and Shenzhen), and Singapore. We value relentless technology innovation, a deep commitment to customer success, collaborative problem-solving, and an enthusiastic pursuit of energy efficiency, and we welcome candidates who share these values. The successful candidate must be self-motivated, creative, and comfortable learning and driving exciting new technologies. We foster an environment of growth and opportunities to work on complex, meaningful, and challenging projects that create a lasting impact and shape the future of technology. Join us on our quest to enable billions of intelligent devices. The intelligence everywhere revolution starts here.

This role will be on-site five days a week in NW Austin. Ambiq is seeking an experienced Edge AI Applied ML Engineer with deep experience in audio and computer vision. In this role, you will design, train, optimize, and deploy highly efficient on-device AI models, from ultra-small (tens of KB) to larger (hundreds of MB) footprints, targeting resource-constrained, real-time, battery-powered devices.

While the cloud has been the default home for AI, the next frontier is distributing intelligence everywhere, directly onto real-world devices. Edge AI enables real-time responsiveness, stronger privacy, lower bandwidth cost, and reliable operation even without connectivity. This role will help accelerate the shift to on-device intelligence across a rapidly growing ecosystem of health and fitness wearables, smart glasses, industrial IoT, and always-on sensors.

You'll also help evolve our award-winning open-source AI Development Kits (ADKs): modular tooling that lets developers mix and match datasets, model architectures, tasks, training recipes, and deployment targets. You will bridge cutting-edge research and practical productization by building production-grade demos, reference applications, and customer-facing tooling that accelerates real-world adoption.

Requirements

  • BS in Computer Science or related field + 5+ years of relevant experience (or equivalent practical experience). MS or PhD in related disciplines (ML, EE, signal processing, computer vision, robotics) is highly desirable.
  • Strong proficiency in Python; working proficiency in C/C++ and/or Rust for performance and runtime integration.
  • Domain expertise in audio (KWS, speech enhancement, SLM, TTS) and/or vision (classify/detect/segment/pose/OBB/track), with DSP fundamentals (e.g., FFT).
  • Comfortable in Linux development with Docker/dev containers (able to work across Mac/Windows as needed).
  • Experience with one or more training frameworks: PyTorch, TensorFlow, JAX, Keras.
  • Strong ML engineering fundamentals: data pipelines, augmentation, metrics, experiment reproducibility, and failure analysis.
  • Familiarity with edge deployment stacks such as ONNX, LiteRT, ExecuTorch.
  • Hands-on with edge optimization: quantization (PTQ/QAT), compression, and (structured) sparsification, plus profiling for latency/memory/energy tradeoffs (a minimal quantization sketch follows this list).
  • Efficient use of AI-assisted development tools while maintaining rigor (testing, review, reproducibility).
  • Must be currently authorized to work in the United States for any employer. We do not sponsor or take over sponsorship of employment visas (now or in the future) for this role.
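
To make the edge-optimization expectations concrete, here is a minimal post-training quantization sketch in the spirit of the LiteRT flow named above. The tiny keyword-spotting model, input shapes, and random calibration data are illustrative placeholders only, not an Ambiq-specific pipeline.

    import numpy as np
    import tensorflow as tf

    # Hypothetical tiny keyword-spotting model; stands in for any small audio classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(49, 10, 1)),         # e.g. 49 MFCC frames x 10 coefficients
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4, activation="softmax"),   # 4 keyword classes (illustrative)
    ])

    def representative_dataset():
        # In practice this would yield real calibration features, not random noise.
        for _ in range(100):
            yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

    # Full-integer post-training quantization so the model can run on int8-only MCU kernels.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    tflite_model = converter.convert()

    with open("kws_int8.tflite", "wb") as f:
        f.write(tflite_model)

Quantization-aware training, compression, and profiling on target hardware follow the same general pattern: trading a small accuracy loss for large latency, memory, and energy savings.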

Responsibilities

  • Develop and optimize on-device ML models for constrained, real-time, battery-powered products, balancing accuracy with latency, memory, and energy.
  • Build and maintain Ambiq’s open-source ADKs for modular datasets, models, tasks, and training recipes.
  • Translate cutting-edge research into production-grade demos and reference implementations.
  • Apply model efficiency techniques: quantization, compression, pruning, and structured sparsification (see the pruning sketch after this list).
  • Serve as a domain expert in audio and vision (data strategy, evaluation, and failure analysis).
  • Port and optimize customer models to Ambiq edge runtimes, ensuring correctness, performance, and usability.
  • Deliver and promote customer-ready assets: docs, tutorials, examples, benchmarks, plus white papers and conference representation.
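
As an illustration of the pruning and structured sparsification work listed above, here is a minimal structured-pruning sketch using PyTorch's built-in pruning utilities. The layer dimensions and 50% sparsity target are hypothetical and stand in for a real customer model.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Hypothetical small vision-backbone layer; any Conv2d would do.
    conv = nn.Conv2d(16, 32, kernel_size=3)

    # Structured L2 pruning: zero out 50% of output channels (dim=0) by weight norm.
    prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)

    # Fold the mask into the weights so the layer can be exported as-is.
    prune.remove(conv, "weight")

    # Whole output channels are now zero; a runtime or compiler can skip them entirely.
    zero_channels = (conv.weight.abs().sum(dim=(1, 2, 3)) == 0).sum().item()
    print(f"{zero_channels}/{conv.out_channels} output channels pruned")

In practice, a pass like this would be followed by fine-tuning and by profiling latency, memory, and energy on the target device to confirm the sparsity pays off.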