Data Engineer

Blueprint Technologies, Redmond, WA
Onsite

About The Position

In this role, you will join a data engineering team responsible for building, improving, and maintaining internal data pipelines that support high‑priority product initiatives and upcoming launch deadlines. You will work on enhancing pipeline performance and reliability while integrating new capabilities that enable large‑scale model usage for generating training labels used in ranking and search systems. This position partners closely with engineers, data scientists, and stakeholders to ensure high‑quality, accessible data for data‑driven decision‑making.

Requirements

  • Bachelor’s degree in Computer Science, Computer Engineering, or a related technical field.
  • 2–4 years of hands‑on experience in data engineering or a related role.
  • At least 2 years of experience with Python.
  • At least 2 years of experience working with cloud platforms, particularly Azure.
  • At least 2 years of experience with data engineering fundamentals, including SQL, ETL processes, and data pipeline design and maintenance.
  • Experience working with database technologies and data storage systems.
  • Knowledge of at least one scripting or programming language (Python preferred; C# a plus).
  • Strong troubleshooting, analytical, and problem‑solving skills.
  • Strong written and verbal communication skills.
  • Demonstrated ability to deliver reliable, production‑grade data solutions.

Nice To Haves

  • Industry experience as a Data Engineer with a strong computer science foundation.
  • Experience supporting search, ranking, or recommendation systems.
  • Familiarity with large‑scale data systems used to support machine learning or AI workflows.
  • Experience optimizing data pipelines for performance and scalability.
  • Exposure to data labeling, feature engineering, or data preparation for model training.
  • Prior experience working on projects with tight timelines or launch‑driven priorities.
  • Strong collaboration skills and comfort working in cross‑functional, fast‑moving environments.

Responsibilities

  • Collaborate with senior leadership, engineering teams, and business stakeholders to gather data requirements and define effective data implementation strategies.
  • Design, build, automate, and maintain large‑scale ETL pipelines and data processing workflows.
  • Improve the performance, scalability, and responsiveness of existing data pipelines to support time‑sensitive product needs.
  • Implement new data features and integrations to support advanced model usage for training data generation.
  • Develop and maintain logical and physical database designs, including schema definitions and data identifiers.
  • Modify and optimize existing databases and data management systems as requirements evolve.
  • Ensure data quality, reliability, and accessibility across data platforms.
  • Test data pipelines, scripts, and database systems; troubleshoot issues and implement fixes as needed.
  • Partner closely with data scientists and analysts to support analytics, modeling, and experimentation needs.
  • Follow best practices for data manipulation, storage, security, and performance optimization.

Benefits

  • Medical, dental, and vision coverage
  • Flexible Spending Account
  • 401k program
  • Competitive PTO offerings
  • Parental Leave
  • Opportunities for professional growth and development