Data Engineer - Training Pipelines & Inference

Howard Hughes Medical Institute, Ashburn, VA
$98,039 - $159,314

About The Position

AI@HHMI: HHMI is investing $500 million over the next 10 years to support AI-driven projects and to embed AI systems throughout every stage of the scientific process in labs across HHMI. The Foundational Microscopy Image Analysis (MIA) project sits at the heart of AI@HHMI. Our ambition is big: to create one of the world’s most comprehensive, multimodal 3D/4D microscopy datasets and use it to power a vision foundation model capable of accelerating discovery across the life sciences.

We're seeking a skilled Data Engineer to drive scientific innovation through robust data infrastructure, model training, and inference systems. You'll design, develop, and optimize scalable data pipelines and build multi-node GPU training and inference pipelines for foundational models. You'll also develop tools for ingesting, transforming, and integrating large, heterogeneous microscopy image datasets, including writing production-quality Python code to parse, validate, and transform microscopy data from published research papers, public databases, and internal repositories. This role requires technical excellence in data engineering and the ability to understand biological research contexts to ensure data integrity and scientific validity.

Your work will directly support computational research initiatives, including machine learning and AI applications. You'll collaborate closely with multidisciplinary teams of computational and experimental scientists to define and implement best practices in data engineering, ensuring data quality, accessibility, and reproducibility. You'll maintain detailed documentation, potentially mentor junior engineers, and automate workflows to streamline the path from raw data to scientific insight.
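
For illustration, a minimal sketch of the ingest-and-validate step described above might look like the following Python, written against the Zarr library. The file path, dimensionality check, and corner-sampling strategy are assumptions for the example, not project specifics.

    import numpy as np
    import zarr

    def validate_volume(path: str, min_dims: int = 3):
        """Open a microscopy volume lazily and run basic integrity checks.

        Assumes the store holds a single Zarr array (not a group).
        """
        vol = zarr.open(path, mode="r")  # lazy open; no voxel data read yet
        if vol.ndim < min_dims:
            raise ValueError(f"expected >= {min_dims}D data, got {vol.ndim}D")
        if not np.issubdtype(vol.dtype, np.number):
            raise TypeError(f"non-numeric dtype: {vol.dtype}")
        # Spot-check a small corner block instead of loading the full volume.
        sample = vol[tuple(slice(0, min(8, s)) for s in vol.shape)]
        if np.issubdtype(vol.dtype, np.floating) and np.isnan(sample).any():
            raise ValueError("NaNs found in sampled region")
        return vol

    # Hypothetical usage; the path is a placeholder.
    vol = validate_volume("example_volume.zarr")
    print(vol.shape, vol.chunks, vol.dtype)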

Requirements

  • Bachelor’s degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field, with 3+ years of experience applying and customizing data mining, model training, and inference methods and techniques. An equivalent combination of education and relevant experience will be considered.
  • Experience with data formats such as Zarr, Parquet, and HDF5, and with efficient IO tooling (e.g., webdataset); a minimal streaming-IO sketch follows this list.
  • Experience with volumetric 3D/4D microscopy data analysis tools.
  • Experience with high-performance computing environments (cloud-based and Slurm/LSF clusters) and model deployment platforms (e.g., Kubernetes, AWS SageMaker, Google Vertex AI, HF Inference).
  • Experience with distributed data processing, multi-node GPU processing, and ML development frameworks such as PyTorch and/or JAX.
  • Excellent technical documentation and communication skills.
  • Experience in building scalable data solutions, working with big data technologies, and ensuring data quality and accessibility.
  • Expertise with data visualization libraries and tools (e.g., Matplotlib, R, Jupyter notebooks).
  • Detail-oriented, creative, and organized team player with strong communication skills and a collaborative mindset.
  • Able to effectively manage time, prioritize tasks, and clearly convey complex data concepts to technical and non-technical audiences.
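
As one concrete illustration of the IO experience called for above, the sketch below streams sharded samples into PyTorch with the webdataset package. The shard pattern, per-sample key names, and batch size are hypothetical.

    import torch
    import webdataset as wds

    # Hypothetical shard pattern; each .tar holds samples keyed by extension.
    shards = "train-{000000..000099}.tar"

    dataset = (
        wds.WebDataset(shards)
        .decode()                      # decode entries by extension (.npy -> ndarray)
        .to_tuple("volume.npy")        # keep only the image payload of each sample
        .map_tuple(torch.from_numpy)   # ndarray -> torch tensor
        .batched(4)                    # batch inside the pipeline
    )

    # batch_size=None because batching already happened in the pipeline.
    loader = wds.WebLoader(dataset, batch_size=None, num_workers=8)
    for (volumes,) in loader:
        print(volumes.shape)           # e.g., (4, D, H, W)
        break

Batching inside the pipeline rather than in the loader keeps reads sequential within each worker's shards, which is what makes this pattern efficient for large tar archives.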

Responsibilities

  • Design and implement scalable, robust data, model-training, and inference pipelines for foundational microscopy datasets and vision foundation models.
  • Deploy these pipelines in multi-node GPU environments and make data and trained models publicly available; a minimal multi-node launch sketch follows this list.
  • Stay up to date with the scientific literature to understand data context and processing requirements.
  • Document data provenance and transformation steps comprehensively.
  • Apply statistical tools and programming languages (e.g., Python, R) to analyze large datasets, develop custom functions, and extract actionable insights through effective visualization.
  • Establish and maintain data standards, formats, workflows, and documentation to ensure data quality, accessibility, and reproducibility across projects.
  • Collaborate with interdisciplinary teams, potentially mentor junior engineers, and direct or assist in directing the work of others to meet project goals while advising stakeholders on data strategies and best practices.
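
By way of illustration only, a minimal multi-node training skeleton in PyTorch DistributedDataParallel might look like the sketch below. The model, synthetic data, and torchrun launch values are placeholders, not the project's actual pipeline.

    # Launched once per GPU by torchrun, e.g.:
    #   torchrun --nnodes=4 --nproc-per-node=8 \
    #       --rdzv-backend=c10d --rdzv-endpoint=$HEAD_NODE:29500 train.py
    import os

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")  # torchrun supplies rank/world size
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Conv3d(1, 16, kernel_size=3).cuda()  # placeholder 3D model
        model = DDP(model, device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(10):  # stand-in for a real sharded data loader
            x = torch.randn(2, 1, 32, 64, 64, device="cuda")
            loss = model(x).pow(2).mean()
            opt.zero_grad()
            loss.backward()  # DDP all-reduces gradients across nodes here
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()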

Benefits

  • A competitive compensation package, with comprehensive health and welfare benefits.
  • A supportive team environment that promotes collaboration and knowledge sharing.
  • The opportunity to engage with world-class researchers, software engineers and AI/ML experts, contribute to impactful science, and be part of a dynamic community committed to advancing humanity’s understanding of fundamental scientific questions.
  • Amenities that enhance work-life balance such as on-site childcare, free gyms, available on-campus housing, social and dining spaces, and convenient shuttle bus service to Janelia from the Washington D.C. metro area.
  • Opportunity to partner with frontier AI labs on scientific applications of AI (see https://www.anthropic.com/news/anthropic-partners-with-allen-institute-and-howard-hughes-medical-institute).