Staff Embedded ML Engineer, Edge AI

SimpliSafe · Boston, MA
Hybrid

About The Position

We are seeking a highly motivated and experienced Embedded Machine Learning Engineer to join our growing Edge AI team. As a key contributor, you will lead the on-device inference and performance optimization of ML models powering outdoor monitoring in the home security space. This role is less about inventing new CV architectures and more about making models fast, power-efficient, stable, and shippable on real embedded hardware (outdoor cameras and doorbells). You will operate across the stack (from model runtime integration down to kernel/operator optimization, memory movement, scheduling, and accelerator utilization) to deliver reliable real-time behavior under tight compute, memory, bandwidth, and thermal constraints across device tiers.

Requirements

  • 8+ years of experience in embedded systems and/or performance engineering, with experience shipping production software on constrained devices.
  • Strong C/C++ expertise with deep knowledge of low-level performance topics: CPU architecture, memory hierarchy, concurrency, and real-time considerations.
  • Demonstrated experience optimizing ML inference on embedded targets, including operator/kernel tuning and end-to-end pipeline optimization.
  • Familiarity with modern vision model families (transformer-based detectors such as the DEIM/D-FINE/RT-DETR series and CNN-based detectors such as the YOLO family or similar) sufficient to optimize their execution characteristics (tensor shapes, attention/conv patterns, post-processing).
  • Experience with on-device inference runtimes and deployment workflows (e.g., TFLite, ONNX Runtime, TensorRT or vendor runtimes), including operator support constraints and graph-level transformations.
  • Strong debugging and profiling skills (perf, flame graphs, hardware counters, tracing) and ability to drive performance investigations to closure.
  • Ability to lead cross-functionally across ML, firmware, and hardware teams; comfortable defining benchmarks/KPIs and making tradeoffs.

Nice To Haves

  • Experience with embedded accelerators and vendor toolchains (DSP/NPU compilers, delegates, GPU compute, custom runtimes).
  • SIMD expertise (ARM NEON/SVE), hand-tuned kernels, or experience with libraries like XNNPACK/QNNPACK/oneDNN/CMSIS-NN (or equivalents).
  • Experience with quantized inference (INT8) at scale: calibration strategies, numerical debugging, overflow/underflow handling, and accuracy-performance tradeoffs.
  • Experience with camera/doorbell pipelines: ISP/video decode/encode, DMA/zero-copy buffers, multi-threaded real-time streaming.
  • Exposure to OS/firmware constraints (embedded Linux, RTOS), power management, thermal throttling behavior, and performance under sustained load.
  • Security/privacy experience for edge devices (secure boot/TEE boundaries, model protection, safe telemetry).
  • Experience building performance regression systems and device-lab automation for continuous benchmarking.

Responsibilities

  • Own the embedded deployment and performance of on-device ML inference for outdoor monitoring workloads (real-time video/event pipelines).
  • Optimize end-to-end inference performance across CPU/DSP/NPU/GPU (as applicable): latency, throughput (FPS), memory footprint, power, thermals, startup time, and stability.
  • Perform kernel/operator-level optimization: vectorization (e.g., SIMD/NEON), tiling, and cache-friendly memory layouts that reduce bandwidth and memory copies; fusing and optimizing post-processing ops; minimizing synchronization overhead; and tuning thread scheduling.
  • Integrate and maintain ML models within embedded pipelines: model import/export validation, operator compatibility, and graph transforms; runtime integration in C/C++ (including pre/post-processing); and robust error handling, watchdogs, and safe fallback behavior.
  • Drive quantization and deployment readiness from an embedded perspective: validate INT8/FP16 paths, calibration flows, and numerical accuracy; debug quantization edge cases and operator mismatches on target runtimes.
  • Build tooling for profiling, benchmarking, and regression tracking on devices: per-layer timing, memory tracking, and thermal/performance tests, with CI-gated automated performance regression checks across device tiers and firmware versions.
  • Partner closely with ML engineers to translate model changes into deployment impact; provide constraints and design guidance that improve deployability and performance.
  • Provide Staff-level leadership: set performance standards, lead technical reviews, mentor engineers, and influence platform roadmap for on-device ML.

Benefits

  • A mission- and values-driven culture and a safe, inclusive environment where you can build, grow, and thrive.
  • A comprehensive total rewards package that supports your wellness and provides security for SimpliSafers and their families.
  • Free SimpliSafe system and professional monitoring for your home.
  • Employee Resource Groups (ERGs) that bring people together, give opportunities to network, mentor and develop, and advocate for change.
  • The target annual base pay range for this role is $183,300 to $268,800. This target annual base pay range represents our good-faith estimate of what we expect to pay for this role. We use a market-based compensation approach to set our target annual base pay ranges and make adjustments annually. We carefully tailor individual compensation packages, including base pay, taking into consideration employees’ job-related skills, experience, qualifications, work location, and other relevant business factors. Beyond base pay, we offer a Total Rewards package that may include participation in our annual bonus program, equity, and other forms of compensation, in addition to a full range of medical, retirement, and lifestyle benefits.
  • We’re committed to fair and equitable pay practices, as well as pay transparency.
  • We regularly review our programs to ensure they remain competitive and aligned with our values.
  • We wholeheartedly embrace and actively seek applications from all individuals, no matter how they identify. We are committed to cultivating a diverse and inclusive workplace, and we believe our work is enriched when we incorporate a multitude of perspectives, backgrounds, and experiences. We want everyone who works here to thrive and contribute to not only our mission of keeping every home secure, but also to making our workplace safe and supportive for others.
  • If a reasonable accommodation may be needed to fully participate in the job application or interview process, to perform the essential functions of a position, or to receive other benefits and privileges of employment, please contact [email protected].