About The Position

We are deploying machine learning directly onto custom hardware – and we want you to help shape it from the ground up. This is an initiative where you'll have the rare opportunity to architect solutions from scratch, influence technical direction, and see your work drive real impact in one of the most demanding computing environments in the world. We already have one of the most competitive low-latency setups in trading. Now, we're expanding our use cases to bring the power of neural networks and ML algorithms onto our custom hardware infrastructure. If you've ever wanted to push the boundaries of what's computationally possible, this role is for you. Your core responsibilities are listed under Responsibilities below.

Requirements

  • Extensive experience with FPGA or ASIC technologies, including proficiency in VHDL, Verilog, or SystemVerilog
  • Solid understanding of digital design principles, including pipelining, flow control, and clock domain crossing
  • Experience with FPGA development tools and toolchains (Vivado, Quartus, Synplify, etc.)
  • Understanding of machine learning fundamentals – neural network architectures, inference optimization, quantization techniques
  • Experience optimizing inference for temporal or sequential ML models (RNNs, Transformers, state-space models) on resource-constrained or latency-sensitive platforms
  • Proficiency in Python, C++, or similar languages for tooling, testing, and simulation
  • Strong communication skills and ability to work collaboratively across disciplines with both technical and non-technical teams

Nice To Haves

  • Experience with High-Level Synthesis (HLS) or other hardware design approaches beyond traditional RTL
  • Familiarity with ML-to-hardware frameworks such as hls4ml, FINN, or Vitis AI
  • Experience with ML-relevant compiler intermediate representations and optimization passes such as MLIR, LLVM, polyhedral analysis, or production ML compilers (TVM, XLA, IREE)
  • Background in ultra-low-latency systems – whether in high-frequency trading, particle accelerator data acquisition, real-time signal processing, or similar applications
  • Experience with functional verification methodologies (SystemVerilog/UVM, Cocotb)
  • Advanced degree (MS or PhD) in Electrical Engineering, Computer Science, Physics, or related field

Responsibilities

  • Design, implement, and deploy machine learning engines on custom hardware, achieving latency that software alone cannot match
  • Collaborate closely with traders, quantitative researchers, and software engineers on HW/SW co-design, translating ML models into efficient implementations that combine the strengths of software with the strengths of hardware
  • Shape a greenfield initiative from the ground up, with the freedom to explore novel approaches and set technical direction
  • Research and build cutting-edge techniques for neural network quantization, compression, and tools that bridge high-level ML frameworks to RTL
  • See the results of your work deployed in production within days, not months – our collaborative culture and unified global codebase enable rapid iteration

Benefits

  • Base salary is only one component of total compensation; all full-time, permanent positions are eligible for a discretionary bonus and benefits, including paid leave and insurance.