We are looking for a Principal Engineer to architect, build, and own the end-to-end data pipeline that drives our high-throughput diagnostic instrument platform — from real-time image acquisition on the instrument, through GPU-accelerated signal processing, to offloading for secondary and tertiary analysis on local HPC clusters and cloud infrastructure. This is a technical leadership role for an engineer who can design and deliver industrial-grade data processing infrastructure that operates reliably at sustained high throughput.

You will be responsible for the full data path: acquiring raw image data from sensors, processing it through GPU pipelines, orchestrating job distribution across local HPC and cloud compute, and ensuring the entire system handles errors, backpressure, and recovery gracefully. The scope spans instrument-embedded software, on-premises Linux HPC infrastructure, and cloud-based compute and storage.

The central challenge of this role is not raw compute optimization — GPU and CPU resources will have adequate headroom. The challenge is building a pipeline architecture that is robust, scalable, and evolvable as instrument throughput increases with each generation, the number of instruments grows, and data volumes scale accordingly. You will design systems that keep a complex multi-stage pipeline running continuously and reliably in a production lab environment, and that can be evolved without wholesale re-architecture as requirements intensify.
Job Type: Full-time
Career Level: Mid Level