Principal Engineer, Platform Engineering

Johnson & Johnson · Spring House, PA

About The Position

At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured; where treatments are smarter and less invasive; and where solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow and profoundly impact health for humanity. Learn more at https://www.jnj.com.

About Innovative Medicine

Our expertise in Innovative Medicine is informed and inspired by patients, whose insights fuel our science-based advancements. Visionaries like you work on teams that save lives by developing the medicines of tomorrow. Join us in developing treatments, finding cures, and pioneering the path from lab to life while championing patients every step of the way. Learn more at https://www.jnj.com/innovative-medicine

We are searching for the best talent for High-Dimensional Data Flow Orchestration Engineer, R&D Therapeutics Discovery in Spring House, PA or Beerse, Belgium. The High-Dimensional Biology Data Flow Orchestration Engineer acts as the keystone of data flow automation projects for high-dimensional biology teams, including, among others, Transcriptomics, Proteomics, and High-Content Imaging. The candidate will ensure that scientific data is automatically transferred, processed, and analyzed with minimal user intervention. This role requires expert-level capability in data flow automation and solution implementation to deliver efficient, scalable, and compliant data flow solutions across modality-agnostic Therapeutics Discovery.
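For illustration only, here is a minimal sketch of the kind of hands-off flow the role describes: new instrument files landing in cloud storage are detected and dispatched to a processing step without manual hand-offs. The bucket, prefix, and dispatch target are hypothetical assumptions, and a production setup would be event-driven (e.g., S3 notifications into a queue) rather than polling.

```python
"""Illustrative sketch: detect new instrument files in S3 and dispatch them.
Bucket, prefix, and the dispatch target are hypothetical, not J&J's setup."""
import time
import boto3

s3 = boto3.client("s3")
SEEN: set[str] = set()  # in production, durable state (e.g., a database), not memory

def dispatch(key: str) -> None:
    # Placeholder for the real step: e.g., triggering a Nextflow run or an
    # Airflow DAG keyed to this object.
    print(f"dispatching pipeline for s3://raw-instrument-data/{key}")

def poll_once(bucket: str = "raw-instrument-data", prefix: str = "transcriptomics/") -> None:
    # list_objects_v2 returns up to 1000 keys per call; a real watcher paginates
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in resp.get("Contents", []):
        if obj["Key"] not in SEEN:
            SEEN.add(obj["Key"])
            dispatch(obj["Key"])

if __name__ == "__main__":
    while True:          # a production flow would react to S3 events instead
        poll_once()
        time.sleep(60)   # minimal user intervention: no manual hand-offs
```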

Requirements

  • Degree in Computer Science, Data Engineering, or related field; advanced degree (MS/PhD) preferred.
  • 5+ years building production-grade data platforms and automated workflows.
  • Hands-on experience with workflow languages and orchestration tools (Nextflow, Argo, Airflow) and scalable compute environments (e.g., Kubernetes); a minimal orchestration sketch follows this list.
  • Strong scripting and programming skills (Python, Bash, SQL), plus proficiency in cloud ecosystems (e.g., AWS) and distributed frameworks (Spark, Dask).
  • Proven leadership in complex, multi-stakeholder projects.
  • Strong analytical mindset, problem-solving agility, and a collaborative leadership style.
  • Architecture & Design: Design scalable, resilient data flow architectures; create reusable pipeline templates and IaC modules (Terraform).
  • Systems & Integration: Skilled in container orchestration (Kubernetes, Docker), CI/CD (GitHub Actions, GitLab CI), and workflow engines (Nextflow, Argo). Experienced with cloud storage (S3), data warehouses (Snowflake), and integration tools (Kafka, APIs, metadata catalogs).
  • Leadership & Delivery: Lead cross-functional matrix teams, set roadmaps, manage vendors, and drive complex projects from concept to production.
  • Communication: Translate technical trade-offs into business impact; produce clear documentation and onboarding guides.
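As a purely illustrative example of the orchestration experience asked for above, a minimal Airflow DAG wiring transfer, processing, and analysis steps. The DAG id, schedule, and task bodies are assumptions for the sketch, not a prescribed design.

```python
"""Minimal Airflow DAG sketch: transfer -> process -> analyze.
DAG id, schedule, and task bodies are illustrative assumptions."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def transfer():
    print("pull raw files from the instrument share")   # placeholder step

def process():
    print("normalize and QC the raw data")              # placeholder step

def analyze():
    print("run downstream analysis and publish results")  # placeholder step

with DAG(
    dag_id="hd_biology_flow",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",              # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t = PythonOperator(task_id="transfer", python_callable=transfer)
    p = PythonOperator(task_id="process", python_callable=process)
    a = PythonOperator(task_id="analyze", python_callable=analyze)
    t >> p >> a   # linear dependency chain: transfer, then process, then analyze
```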

Nice To Haves

  • Experience with domain-specific pipelines (bioinformatics, genomics, imaging) or production ML platforms (model registries, feature stores).
  • Industry exposure: Experience in pharmaceutical research, biotech, or medical device environments.
  • Data governance: Strong understanding of data integrity, lineage, security frameworks, and scalable data architectures.
  • Data Reliability & Observability: Implement automated data quality checks, lineage tracking, and schema evolution; strong background in monitoring, metrics, and capacity planning (a quality-gate sketch follows this list).
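To make the "automated data quality checks" item concrete, a small sketch of a quality gate that validates schema and basic integrity before a dataset enters a pipeline. Column names, dtypes, and checks are illustrative assumptions for a transcriptomics-style count table.

```python
"""Illustrative data quality gate: schema and integrity checks before ingestion.
Expected columns and dtypes are assumptions, not a real J&J schema."""
import pandas as pd

EXPECTED = {"sample_id": "object", "gene": "object", "count": "int64"}

def quality_gate(df: pd.DataFrame) -> list[str]:
    """Return a list of issues; an empty list means the dataset may proceed."""
    issues: list[str] = []
    missing = set(EXPECTED) - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    for col, dtype in EXPECTED.items():                      # dtype conformance
        if col in df.columns and str(df[col].dtype) != dtype:
            issues.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    if "count" in df.columns and (df["count"] < 0).any():    # domain sanity check
        issues.append("negative counts found")
    keys = [c for c in ("sample_id", "gene") if c in df.columns]
    if keys and df.duplicated(subset=keys).any():            # uniqueness check
        issues.append("duplicate sample/gene rows")
    return issues
```

In an orchestrated flow, a non-empty result would fail the upstream task and block downstream analysis rather than silently propagating bad data.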

Responsibilities

  • Architect and deploy automated data pipelines for scientific workflows.
  • Lead end-to-end projects from concept to production in a global, matrixed environment.
  • Integrate advanced tools and platforms across cloud, on-prem, and container ecosystems (a container-job sketch follows this list).
  • Collaborate with cross-functional teams and mentor computational biologists in using the infrastructure.
  • Explore emerging technologies in AI/ML and automation to accelerate innovation.
  • Ensure data quality, lineage, security, and compliance at scale.
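For the container-ecosystem integration above, a sketch of dispatching one containerized pipeline step as a Kubernetes Job via the official kubernetes Python client. The image, namespace, and command are illustrative assumptions.

```python
"""Illustrative: submit a containerized pipeline step as a Kubernetes Job.
Image, namespace, and command are hypothetical placeholders."""
from kubernetes import client, config

def submit_step(name: str, image: str, command: list[str]) -> None:
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(
            backoff_limit=2,   # retry a failed step twice before giving up
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(name=name, image=image, command=command)],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="discovery-pipelines", body=job)

# Example use (all names hypothetical):
# submit_step("qc-run-42", "ghcr.io/example/qc:latest", ["python", "run_qc.py"])
```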


What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Number of Employees: 5,001-10,000
