Senior Staff Data Engineer

Bayer
Tulsa, OK
Remote

About The Position

The Senior Staff Data Engineer is responsible for designing and implementing numerous complex data flows that integrate operational systems and external and internal satellite systems, delivering data for analytics, AI, business intelligence (BI) systems, and applications. Individuals in this role will design, build, and maintain scalable data infrastructure and implement integration pipelines for multiple upstream and downstream systems; identify integration patterns and design architecture based on business requirements and platform capabilities, which may include defining patterns for data streaming; lead projects for data assets while mentoring and coaching engineers and collaborating with other teams and the business as a technical consultant; champion data engineering across platforms; and establish standards for the platform(s) in use, sharing them across teams and engineers. This role will be Residence-Based in the US.

Requirements

  • Minimum of a Bachelor’s degree in Computer Science, Software Engineering, or a related field, plus significant professional software engineering experience; additional professional experience may be substituted in lieu of a Bachelor’s degree;
  • Professional data engineering experience to include: Experience working with Relational Database technologies, such as BigQuery, Oracle, Teradata, Postgres and/or Redshift;
  • Demonstrated expertise engineering data-intensive solutions using streaming and resource-based design principles;
  • Hands-on experience with programming languages such as Python, Java and/or Golang;
  • Relevant certifications as applicable;
  • Demonstrated experience with data architecture and modeling, including designing both logical and physical models;
  • Strong organizational, interpersonal, written and oral communication skills and desire to work in a highly collaborative environment;
  • Strong problem solving, analytical and proven project delivery skills;
  • Ability to work collaboratively in a team environment;
  • Focused on data accuracy, timely delivery and attention to detail;
  • Ability to quickly learn emerging technologies and familiarity with relevant industry trends;
  • Deep knowledge and demonstrated experience with Google BigQuery and Google Cloud Platform;
  • Experience with version control and related tools (e.g., GitHub, GitLab) and with infrastructure as code (IaC) tools such as Terraform.

Nice To Haves

  • One of the following: Bachelor’s degree in Computer Science, Software Engineering, or related field, with 5+ years of related professional software engineering experience, OR Master's degree with 3 years of relevant experience, OR PhD with 1 year of relevant experience;
  • 9+ years of relevant experience is an acceptable substitute for the degree requirement;
  • Professional data engineering experience to include: 5+ years of experience working with Relational Database technologies, such as BigQuery, Oracle, Teradata, Postgres and/or Redshift;
  • 3+ years of experience engineering data-intensive solutions using streaming and resource-based design principles;
  • 5+ years of experience using programming languages such as Python, Java and/or Golang.

Responsibilities

  • Design and implement Market360 Product Supply and Commercial data models using various GCP technologies;
  • Develop solutions/ETL that ingest data from multiple sources and deliver solutions for BI Reporting and advanced analytical capabilities;
  • Establish standards, keep them up to date and ensure adherence to them;
  • Keep abreast of best practice in industry and across platforms;
  • Design data models that follow data warehousing industry standards and maintain required documentation or reverse engineer existing models as needed;
  • Work across platforms, product teams, and customer teams, recognizing opportunities for the reuse and alignment of data models in different organizations;
  • Perform data discoveries to understand data formats, source systems, etc. and engage with business partners in this discovery process;
  • Assemble and evaluate data such that new insights, solutions, and visualizations can be derived;
  • Bring multiple data sources together in a conformed model for analysis;
  • Ensure data solutions are scalable, repeatable, optimized and follow governance and engineering guidelines;
  • Assess technical requirements to deliver streaming/real time or batch solutions as needed;
  • Assess data delivery/access patterns to deliver data as API, Kafka or data marts;
  • Establish enterprise-scale data integration procedures across the data development life cycle and ensure that teams adhere to them;
  • Challenge and manage team to improve processes and methodologies to deliver cost optimized solutions in a timely manner;
  • Use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations;
  • Collaborate with others to review specifications where appropriate;
  • Develop and implement solutions, metadata and documentation that support AI capabilities;
  • Design an appropriate metadata repository and present changes to existing metadata repositories;
  • Understand a range of tools for storing and working with metadata;
  • Provide oversight and advice to less experienced members of the team;
  • Lead and participate in design sessions with Data Stewards, Engineering leads, Data Scientists, Product Managers, business and IT stakeholders, that result in design documentation for data processing, storage and delivery solutions;
  • Manage proactive and reactive communication;
  • Support or host difficult discussions within the team or with diverse senior stakeholders;
  • Provide reliable estimates for short term projects and assist in large scale project estimation;
  • Show an awareness of opportunities for innovation with new tools and uses of data and recognize appropriate timing for adoption;
  • Ensure that the most appropriate actions are taken to resolve problems as they occur;
  • Coordinate teams to resolve problems and to implement solutions and preventative measures;
  • Show a thorough understanding of the technical concepts required for the role and explain how these fit into the wider technical landscape;
  • Review requirements and specifications, and define test conditions;
  • Identify issues and risks associated with work;
  • Analyze and report test activities and results;
  • Integrate, transform, validate, and reconcile large amounts of data;
  • Execute and automate testing as needed;
  • Mentor junior and aspiring Data Engineers on the team and across the data community.

Benefits

  • Health care
  • Vision
  • Dental
  • Retirement
  • PTO
  • Sick leave