Senior Data Product Engineer, Commercial

Genmab
Princeton, NJ
$131,600 - $197,400
Hybrid

About The Position

At Genmab, we are dedicated to building extra[not]ordinary® futures, together, by developing antibody products and groundbreaking, knock-your-socks-off (KYSO) antibody medicines® that change lives and the future of cancer treatment and serious diseases. We strive to create, champion, and maintain a global workplace where individuals’ unique contributions are valued and drive innovative solutions to meet the needs of our patients, care partners, families, and employees. Our people are compassionate, candid, and purposeful, and our business is innovative and rooted in science. We believe that being proudly authentic and determined to be our best is essential to fulfilling our purpose. Yes, our work is incredibly serious and impactful, but we have big ambitions, bring a ton of care to pursuing them, and have a lot of fun while doing so. Does this inspire you and feel like a fit? Then we would love to have you join us!

Role Overview

We are looking for a Senior Data Product Engineer with deep expertise in dbt, Databricks, Snowflake, Power BI/Tableau, and AWS who can architect and implement scalable, reliable, high-performance data products. The role requires strong technical skills across data modeling, distributed data processing, ELT pipeline development, and dashboarding, with an emphasis on automation, observability, and engineering best practices in support of our EU and UK commercial data warehouse. You will design and deliver production-grade data pipelines and models, optimize query and compute performance, and expose data to end users through well-structured semantic layers and dashboards.

The ideal candidate is conceptually strong in modern data architectures and principles, capable of adapting across tools rather than being limited by them, and able to operate as a hands-on technical lead: defining frameworks, making architectural decisions, and guiding implementation patterns across dbt, Snowflake, Databricks, and BI platforms. The ideal candidate should also have familiarity with commercial datasets such as IQVIA OneKey, Veeva OpenData, Veeva CRM, Integrichain, Symphony Non-Retail, claims data, digital, omnichannel, and precision lab data.

Work arrangement: This role offers flexibility to work away from the office for 20%–40% of a typical schedule. Employees may use this work schedule in increments of single days or multiple consecutive days, provided it does not exceed 40% within a 60-day period and is approved by the hiring manager.

Requirements

  • 5+ years in data engineering, analytics engineering, or data platform development.
  • 10+ years of experience in IT, with some in the Biotech/Pharma industry.
  • Expert-level proficiency in:
      • dbt: advanced macros, Jinja, testing, exposures, dbt Cloud deployment.
      • Databricks: Spark (PySpark, SQL), Delta Lake, Unity Catalog.
      • Snowflake.
      • AWS: S3, Glue, Lambda, Step Functions, DataSync, EMR, Redshift, IAM, and networking/security fundamentals.
      • Data visualization: Power BI or Tableau.
  • Strong programming background in Python and SQL (including query optimization).
  • Proven experience with distributed systems and large-scale datasets (TB scale).
  • Experience implementing CI/CD pipelines, data testing, and infrastructure as code.
  • Solid understanding of data governance, security, and compliance in enterprise environments.

Nice To Haves

  • Familiarity with commercial datasets such as IQVIA OneKey, Veeva OpenData, Veeva CRM, Integrichain, Symphony Non-Retail, claims data, digital, omnichannel, and precision lab data.

Responsibilities

  • Architect and implement end-to-end ELT workflows with dbt (Core and Cloud), ensuring modular, testable, and reusable transformations.
  • Build high-performance data pipelines in Snowflake and Databricks (PySpark, Delta Lake, Unity Catalog) for batch and streaming workloads (see the PySpark sketch after this list).
  • Orchestrate pipelines with Apache Airflow and AWS-native services such as Step Functions (see the Airflow sketch after this list).
  • Engineer scalable data ingestion pipelines into AWS using dbt, SQL, and Python (S3, Kinesis, Glue, Lambda, Step Functions), with strong monitoring and fault tolerance.
  • Ensure observability, cost efficiency, and scalability in all pipeline and compute designs.
  • Design normalized and star-schema models for analytical workloads, following dbt’s best practices and software engineering principles.
  • Implement data quality testing frameworks (dbt tests, Great Expectations, or custom validations) with automated CI/CD integration.
  • Manage data versioning, lineage, and governance through tools such as Unity Catalog and AWS Lake Formation.
  • Ensure data lineage, cataloging, and metadata management are maintained.
  • Apply HIPAA privacy rules and pharma-specific compliance standards.
  • Support audits, documentation, and validation activities.
  • Develop semantic data layers that support self-service analytics across BI tools (Tableau, Power BI etc.).
  • Partner with analysts and data scientists to optimize queries and deliver production-ready datasets.
  • Manage ingestion and harmonization of EU commercial datasets such as:
      • HCP/HCO master data (IQVIA and Veeva)
      • Sales and distribution data (Integrichain, Symphony Non-Retail, and country-specific market data)
      • Omnichannel engagement (email, rep-triggered, web, events)
      • CRM activity data (Veeva, LifeScience Cloud)
      • Market research and syndicated data (IQVIA, etc.)
  • Support data pipelines used for incentive compensation, field force effectiveness, and brand performance analytics.
  • Automate deployment pipelines with CI/CD (GitHub Actions, GitLab CI, or AWS CodePipeline) for dbt and Databricks.
  • Implement infrastructure as code (IaC) for reproducibility (Terraform, CloudFormation).
  • Ensure system reliability through observability and monitoring (Datadog, CloudWatch, Prometheus, or similar).
  • Benchmark and optimize SQL, Spark, and BI query performance at scale.
  • Partner with Data Scientists, Commercial Analysts, and Business Partners to translate business needs into technical solutions.
  • Work closely with IT, Compliance, and Data Privacy teams to ensure GDPR‑aligned data handling.
  • Provide technical guidance on data availability, feasibility, and best practices.
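
As a minimal, illustrative PySpark sketch of the pipeline and data-quality work described in the bullets above: a batch job that reads raw data from S3, applies a simple custom validation, and appends to a Unity Catalog Delta table. The bucket path, table name, job name, and column names are hypothetical assumptions, not an actual Genmab schema.

    # Minimal sketch of a Databricks batch pipeline; all names are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("hcp_sales_batch")  # hypothetical job name
        .getOrCreate()
    )

    # Read raw sales data landed in S3 (path is an assumption).
    raw = spark.read.parquet("s3://example-bucket/raw/sales/")

    # Basic transformation: normalize a column name and derive a load date.
    cleaned = (
        raw.withColumnRenamed("HCP_ID", "hcp_id")
           .withColumn("load_date", F.current_date())
    )

    # Simple custom validation, in the spirit of the data-quality testing
    # bullet: fail fast if the business key is ever null.
    null_keys = cleaned.filter(F.col("hcp_id").isNull()).count()
    if null_keys > 0:
        raise ValueError(f"{null_keys} rows have a null hcp_id")

    # Append to a Delta table registered in Unity Catalog (name is illustrative).
    (
        cleaned.write.format("delta")
               .mode("append")
               .saveAsTable("commercial.silver.sales")
    )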
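
And as a minimal sketch of the orchestration bullet: a daily Airflow DAG that runs an ingestion step before a dbt build. The DAG id, schedule, and shell commands are assumptions standing in for the team's real jobs, and the schedule argument assumes Airflow 2.4+.

    # Minimal orchestration sketch; DAG id and commands are illustrative.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="commercial_elt",        # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest_raw_files",
            bash_command="python ingest.py",  # placeholder ingestion script
        )

        dbt_build = BashOperator(
            task_id="dbt_build",
            bash_command="dbt build --target prod",  # assumes a configured dbt project
        )

        # Run transformations only after ingestion succeeds.
        ingest >> dbt_build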

Benefits

  • 401(k) Plan: 100% match on the first 6% of contributions
  • Health Benefits: Two medical plan options (including HDHP with HSA), dental, and vision insurance
  • Voluntary Plans: Critical illness, accident, and hospital indemnity insurance
  • Time Off: Paid vacation, sick leave, holidays, and 12 weeks of discretionary paid parental leave
  • Support Resources: Access to child and adult backup care, family support programs, financial wellness tools, and emotional well-being support
  • Additional Perks: Commuter benefits, tuition reimbursement, and a Lifestyle Spending Account for wellness and personal expenses

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: None listed
  • Number of Employees: 501-1,000
