About The Position

At Boeing Digital Services, you’ll be part of a team that creates innovative digital solutions and analytics that drive the future and evolution of Digital Services and enable our customers to transform the way they do business. Using agile digital technology, our solutions optimize all aspects of the aircraft operations and maintenance ecosystem, including safety, environmental sustainability, and efficiency.

Boeing Vancouver is seeking a Data Engineer, reporting to the Data & AI Platform Manager and working out of the Seattle, WA office. The Data & AI Platform Data Engineer (Level 4) will help Boeing transform our industry through the application and continuous improvement of advanced analytics and machine learning in the aviation domain. The position will be embedded in a multi-disciplinary Data & AI Platform team producing industry-leading insights, and will use data management, software development, and infrastructure skills to help build bigger, faster, and better cloud-based tools and pipelines. The Data Engineer will be broadly responsible for the design, implementation, and support of data pipelines, including data models, data contracts, and model features.

This is a challenging role, requiring a versatile problem-solver with a keen conceptual mind, ontological thinking, and an understanding of data science and valuable data features, as well as of computational load and performance. The Data Engineer will work closely with aviation engineers and data scientists in a problem-solving role, helping bridge the gap from raw data to working data science models. Although primarily responsible for data management, the Data Engineer must be a versatile team player and may be called upon to assist in back-end development, cloud deployment, and even data science from time to time. They must be able to adapt, find the knowledge they need, learn, and make decisions as needs arise.

Requirements

  • Minimum 3 years’ cloud deployment experience (Azure preferred)
  • Minimum 3 years’ experience in relational and non-relational database technologies
  • Minimum 3 years’ experience supporting data science and analytics projects and/or infrastructure
  • Minimum 3 years’ experience building production-grade ETL pipelines, including data mesh and/or Databricks
  • Must be proficient in Python, SQL, and PySpark
  • Experience working with Large Language Models (LLMs) and Natural Language Processing (NLP) technologies

Nice To Haves

  • A technical degree/diploma in a related field of study
  • Experience working with graph databases, knowledge graphs, and their languages (e.g. GraphQL, Cypher)
  • Experience designing and implementing data quality monitoring solutions
  • Expertise in data modeling principles/methods
  • Experience with development, deployment and version control tools
  • Experience with production-level Software Development
  • Experience in DevOps technologies (e.g. CI/CD, Docker) and practices
  • Experience with cloud-deployed APIs and micro-services is an asset
  • Experience in pipeline software is an asset

Responsibilities

  • Support the data ecosystem, including data ingestion and modelling, operational monitoring, onboarding, data health, and governance
  • Propose data engineering solutions to support different modelling strategies
  • Design, build and support healthy, automated, and repeatable data ingestion and processing pipelines
  • Ingest and cleanse raw data and enforce data contracts
  • Design data models and data contracts
  • Monitor and maintain data quality, integrity, consistency
  • Help design and build scalable, reliable, and high-performance systems and environments
  • Effectively contribute to building the overall knowledge and expertise of the technical team
  • Participate in work and code reviews with the team
  • Take part in implementation and support of continuous integration and continuous delivery (CI/CD)
  • Work on systems to monitor system health, data quality and scientific performance
  • Implement data access-control for compliance with data governance policies
  • Contribute to technical documentation
  • Collaborate with developers, data analysts, data scientists and organizational leaders to identify opportunities for process improvements
  • Exhibit sound judgment, a keen eye for detail, and tenacity for solving difficult problems
  • Apply experience working with and supporting a data mesh on Databricks