Scientific Data Infrastructure Engineer

IDEXX · Westbrook, ME
$110,000 · Onsite

About The Position

We're proud to be a global leader in pet healthcare innovation. Our diagnostic instruments, software, tests, and services help veterinarians around the world advance medical care, improve staff efficiency, and build more economically successful practices. At IDEXX, you'll be part of a team that's passionate about making a difference in the lives of pets, people, and our planet.

We are seeking a Scientific Data Infrastructure Engineer to join our R&D Discovery and Technology Futures team. In this role, you'll be the technical architect enabling rapid development and deployment of the data pipelines and scientific computing infrastructure that support our biomarker discovery and diagnostic development programs. You'll work embedded within our LCMS research team, bridging cloud infrastructure, database architecture, and scientific computing to help transform raw analytical data into production-ready diagnostic solutions.

This role is onsite in Westbrook, Maine.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience)
  • 7-10+ years of experience in DevOps, database architecture, or related fields
  • Proven track record of leading complex infrastructure projects, preferably in research or data-intensive environments
  • Strong experience with CI/CD tools (GitHub Actions, GitLab CI/CD, AWS CodePipeline, Google Cloud Build, Jenkins, ArgoCD)
  • Proficiency in infrastructure-as-code (Terraform, CloudFormation)
  • Advanced Python programming, plus scripting capabilities in Bash and PowerShell
  • Experience with containers and container orchestration (Docker, Kubernetes)
  • Cloud platform expertise (AWS, Google Cloud) with focus on serverless computing and batch processing systems
  • Strong database administration and architecture skills, including:
      • Snowflake data warehouse design, optimization, and administration
      • SQL databases (PostgreSQL, MySQL, SQL Server)
      • NoSQL databases (MongoDB, DynamoDB, Cassandra)
      • Database performance tuning and ETL/ELT pipeline development (a minimal sketch of this kind of loading step follows this list)
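
To illustrate the Snowflake and ETL/ELT items above, here is a minimal, hypothetical sketch of an instrument-data loading step in Python using the snowflake-connector-python package. The account, warehouse, table, and file names are illustrative placeholders, not IDEXX systems.

```python
# Minimal sketch: load one instrument export into Snowflake and record
# basic provenance metadata. All names (account, tables, file path) are
# hypothetical placeholders, not IDEXX systems.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_service",
    password="***",          # in practice, pull from a secrets manager
    warehouse="RESEARCH_WH",
    database="OMICS",
    schema="RAW",
)
cur = conn.cursor()

# Stage the raw file on the table's internal stage, then COPY it in.
cur.execute("PUT file:///tmp/lcms_run_0421.csv @%INSTRUMENT_RAW")
cur.execute(
    "COPY INTO INSTRUMENT_RAW FROM @%INSTRUMENT_RAW "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)

# Provenance: record which file was loaded, when, and by which pipeline.
cur.execute(
    "INSERT INTO LOAD_AUDIT (source_file, loaded_at, pipeline) "
    "SELECT 'lcms_run_0421.csv', CURRENT_TIMESTAMP(), 'lcms_ingest_v1'"
)
conn.close()
```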

Nice To Haves

  • Experience in life sciences, biotechnology, diagnostics, or other research-intensive industries
  • Familiarity with scientific data workflows, laboratory informatics, or instrument data pipelines
  • Knowledge of LCMS, mass spectrometry, or other analytical chemistry data formats and processing
  • Understanding of bioinformatics file formats and scientific data standards
  • Understanding of regulatory requirements for diagnostic software (ISO 13485, FDA 21 CFR Part 11)
  • Experience with Atlassian suite administration (Jira, Confluence, Bitbucket)
  • Familiarity with Active Directory and identity management systems
  • Snowflake SnowPro certification

Responsibilities

  • Design and implement CI/CD pipelines using GitHub Actions, GitLab CI/CD, AWS CodePipeline, and Google Cloud Build to streamline deployment of mass spectrometry-based data processing systems and proteomic computing workloads
  • Develop and maintain infrastructure-as-code solutions using Terraform for AWS and Google Cloud environments
  • Build automated deployment systems for serverless functions using AWS Lambda and Google Cloud Run
  • Orchestrate large-scale batch processing jobs using AWS Batch and Google Cloud Batch (see the first sketch after this list)
  • Design and implement scalable database solutions for proteomic, metabolomic, and genomic data storage and retrieval
  • Architect and optimize Snowflake data warehouses for large-scale multi-omic datasets
  • Build ETL/ELT workflows for instrument data ingestion, including metadata capture and provenance tracking
  • Manage both SQL and NoSQL database systems supporting research applications
  • Implement data governance, backup, disaster recovery, and audit trail strategies
  • Create and manage computing infrastructure for mass spectrometry-based data processing
  • Implement scalable solutions for high-throughput multi-omic data pipelines from analytical instruments
  • Deploy and maintain data annotation platforms and curation systems
  • Build monitoring and alerting systems that track pipeline health, processing backlogs, and system performance (see the second sketch after this list)
  • Partner with research scientists, bioinformaticians, and software engineers to understand computational requirements and translate scientific needs into technical solutions
  • Provide technical leadership to implement modern DevOps practices across research workflows
  • Develop documentation, playbooks, and training materials to enable self-service capabilities for research teams
  • Mentor team members and drive adoption of DevOps best practices
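
For a concrete feel for the batch-orchestration responsibility above, the following is a minimal sketch using boto3 against AWS Batch. The job queue, job definition, and S3 path are hypothetical placeholders rather than IDEXX infrastructure.

```python
# Minimal sketch: submit and poll a batch job for a mass-spec processing
# step via AWS Batch. Queue, job definition, and S3 paths are hypothetical.
import time
import boto3

batch = boto3.client("batch", region_name="us-east-1")

resp = batch.submit_job(
    jobName="lcms-peak-picking-run-0421",
    jobQueue="research-batch-queue",
    jobDefinition="lcms-pipeline:3",
    containerOverrides={
        "command": ["process", "s3://example-bucket/raw/run_0421.mzML"],
    },
)
job_id = resp["jobId"]

# Poll until the job finishes; a production pipeline would react to
# EventBridge state-change events instead of busy-waiting.
while True:
    job = batch.describe_jobs(jobs=[job_id])["jobs"][0]
    status = job["status"]
    if status in ("SUCCEEDED", "FAILED"):
        print(f"Job {job_id} finished with status {status}")
        break
    time.sleep(30)
```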
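
Similarly, the monitoring and alerting responsibility might look like the sketch below, using Amazon CloudWatch via boto3. The namespace, metric name, and threshold are invented for illustration.

```python
# Minimal sketch: publish a pipeline-backlog metric and alarm on it with
# Amazon CloudWatch. Metric names and thresholds are hypothetical.
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

# Emit the current ingestion backlog (e.g., unprocessed instrument files).
cw.put_metric_data(
    Namespace="Research/Pipelines",
    MetricData=[{
        "MetricName": "IngestBacklog",
        "Value": 12,               # would be computed from the queue/stage
        "Unit": "Count",
    }],
)

# Alarm when the backlog stays high for three consecutive 5-minute periods.
cw.put_metric_alarm(
    AlarmName="lcms-ingest-backlog-high",
    Namespace="Research/Pipelines",
    MetricName="IngestBacklog",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
)
```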

Benefits

  • Salary starting at $110,000, based on experience
  • Opportunity for annual cash bonus
  • Health / Dental / Vision benefits starting day one
  • 401(k) with 5% company match
  • Additional benefits including, but not limited to, financial support, pet insurance, mental health resources, paid volunteer days off, an employee stock program, foundation donation matching, and much more