Forward Deploy Data Engineer (Associate)

BlackRock · Atlanta, GA
Hybrid

About The Position

About the Role

BlackRock's Enterprise Data Platform (EDP) is the firm's strategic foundation for how data products are built, governed, and consumed at scale, powering investment decisions, risk analytics, and operational workflows across the firm and its global client base. Data Platform as a Service (DPaaS) is a core capability within EDP, purpose-built to make data product creation fast, reliable, and repeatable. Whether a team is onboarding a new market data feed, publishing a risk dataset, or operationalizing a model output, DPaaS provides the infrastructure, tooling, and guided experience to turn a raw data source into a trusted, production-grade data product. Teams get acquisition, ingestion, transformation, quality validation, and governance without having to build any of it themselves.

Why This Role Is Exciting

Most engineers either build platforms or use them. As a Forward Deploy Engineer on the DPaaS team, you do both. You will deploy by embedding directly with teams across the firm, bringing their data products to life and solving real problems that only surface when a platform meets production data. You will build by developing solutions that fill gaps and make it easier for teams to create and publish data products on EDP. You will be at the frontier of how BlackRock thinks about data products, working with real users, influencing what gets built next, and seeing your work in production quickly across a wide range of data domains.

Requirements

  • 3+ years of data engineering or software engineering experience with a track record of shipping production-grade solutions
  • Understanding of data product concepts including schema design, data ownership, SLAs, quality frameworks, and governance
  • Experience working with both structured and unstructured data
  • Strong proficiency in Python; working knowledge of Java or Go is a plus
  • Experience with orchestration and pipeline tooling for structured data (e.g., Apache Airflow) and unstructured data processing frameworks
  • Familiarity with the Azure ecosystem including Azure Data Lake Storage, Azure Blob Storage, Azure Data Factory, and Azure-native data services
  • Working knowledge of Snowflake including ingestion patterns, database setup, roles, and basic query optimization
  • Familiarity with Kubernetes, Helm, and cloud-native infrastructure on Azure
  • Some experience working directly with client or partner engineering teams in a collaborative or client-facing capacity
  • Active user of AI-assisted development tools (GitHub Copilot, Cursor, Windsurf, or equivalent)
  • Bachelor's or Master's degree in Computer Science, Engineering, or equivalent practical experience
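
The quality frameworks mentioned above can start as simple rule-based record checks. A minimal, hypothetical sketch in Python (the rule names, record fields, and `validate` helper are invented for illustration and are not part of DPaaS or any specific framework):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityRule:
    """A named predicate that a single record must satisfy."""
    name: str
    check: Callable[[dict], bool]  # returns True if the record passes

def validate(records: list[dict], rules: list[QualityRule]) -> dict[str, int]:
    """Count failures per rule across a batch of records."""
    failures = {rule.name: 0 for rule in rules}
    for record in records:
        for rule in rules:
            if not rule.check(record):
                failures[rule.name] += 1
    return failures

# Illustrative rules for a hypothetical market-data product
rules = [
    QualityRule("price_non_negative", lambda r: r.get("price", 0) >= 0),
    QualityRule("ticker_present", lambda r: bool(r.get("ticker"))),
]

batch = [
    {"ticker": "AAPL", "price": 189.5},   # passes both rules
    {"ticker": "", "price": -1.0},        # fails both rules
]
print(validate(batch, rules))  # {'price_non_negative': 1, 'ticker_present': 1}
```

Real frameworks (dbt tests, Great Expectations, and similar) layer scheduling, reporting, and severity handling on top of this core idea of named, per-record assertions.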

Nice To Haves

  • Prior exposure to a forward deploy, solutions engineering, or client-embedded engineering role
  • Familiarity with financial data platforms or enterprise data ecosystems
  • Experience with data governance, data cataloging, or metadata management platforms
  • Exposure to dbt or data quality validation frameworks
  • Hands-on experience with unstructured data processing including document parsing, embeddings, vector stores, or blob-based data pipelines
  • Familiarity with LLM-based tooling or AI agent frameworks (e.g., LangChain, MCP)
  • Working knowledge of other cloud platforms (AWS, GCP) and their equivalent data services such as S3, Redshift, BigQuery, and Dataflow
  • Experience with CI/CD pipelines and DevSecOps practices (Azure DevOps, ArgoCD)

Responsibilities

Forward Deployment & Data Product Onboarding

  • Embed directly with partner engineering and data teams to drive end-to-end data product onboarding onto DPaaS, from source configuration through to production
  • Work hands-on with teams to define data product structure including schema, ownership, SLAs, quality expectations, and governance attributes
  • Support onboarding of both structured and unstructured data products, adapting approaches to fit the nature of the data
  • Troubleshoot onboarding failures across infrastructure, pipeline, and data layers in real time
  • Run technical onboarding sessions and workshops tailored to each team's data product needs
  • Enable partner teams to self-serve on data product creation over time, reducing dependency on FDE support

Solution Development & Platform Contribution

  • Develop reusable data product accelerators including pipeline templates, configuration generators, and schema mapping utilities
  • Build custom acquisition connectors, ingestion templates, and transformation scaffolding for both structured and unstructured data
  • Contribute to core DPaaS platform engineering efforts including new feature development and framework improvements
  • Build and maintain data product accelerators and onboarding utilities that become reusable assets across the platform

Client Enablement

  • Act as a trusted technical advisor on data product design and onboarding best practices for partner engineering and data teams
  • Run office hours, enablement sessions, and targeted training to help teams build platform confidence independently
  • Translate partner-specific data requirements into platform-compatible data product configurations
  • Document onboarding patterns, common failure modes, and solutions into reusable playbooks
  • Capture and channel structured feedback from onboarding engagements into the DPaaS product and engineering roadmap

AI-Assisted Development & Intelligent Data Product Onboarding

  • Use AI-assisted coding tools as a core part of the daily workflow, accelerating configuration authoring, pipeline generation, and onboarding automation
  • Build and contribute to AI-assisted onboarding workflows leveraging schema inference, automated attribute mapping, and AI-driven data profiling to reduce manual effort
  • Implement emerging AI tooling including Model Context Protocol (MCP), AI agents, and Copilot extensions to automate repetitive onboarding tasks
  • Define what AI-native data product creation looks like on EDP, contributing patterns that shape the platform roadmap

Feedback Loop & Platform Evolution

  • Translate real onboarding experiences into structured product feedback that drives platform improvements
  • Work closely with DPaaS product, engineering, and infrastructure teams to close the loop between partner needs and platform capabilities
  • Navigate and operate across the full DPaaS technology stack including structured and unstructured data pipelines, Azure Data Lake Storage, Snowflake, Kubernetes, and Vault
  • Validate data product correctness and pipeline integrity across raw, staging, and curated data layers
  • Support testing and validation of new platform capabilities before broader rollout
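
As a hypothetical illustration of the schema-inference step described above, a first automated pass might derive column types from sample records before any attribute mapping or AI-driven refinement. The function and field names below are invented for this sketch and are not DPaaS APIs:

```python
def infer_schema(samples: list[dict]) -> dict[str, str]:
    """Infer a column -> Python type-name mapping from sample records.
    Columns with mixed types (or only nulls) fall back to 'str'."""
    observed: dict[str, set] = {}
    for row in samples:
        for col, val in row.items():
            # Record every type name seen for this column
            observed.setdefault(col, set()).add(type(val).__name__)
    schema = {}
    for col, types in observed.items():
        types.discard("NoneType")  # nulls don't decide the type
        schema[col] = types.pop() if len(types) == 1 else "str"
    return schema

# Illustrative samples for a hypothetical market-data feed
samples = [
    {"ticker": "MSFT", "price": 415.2, "volume": 1200},
    {"ticker": "IBM", "price": None, "volume": 900},
]
print(infer_schema(samples))  # {'ticker': 'str', 'price': 'float', 'volume': 'int'}
```

In practice an onboarding workflow would extend a pass like this with nullability, constraints, and semantic attribute mapping, but the sample-driven inference loop is the common starting point.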

Benefits

  • Eligibility for an annual discretionary bonus
  • Comprehensive healthcare and leave benefits
  • A strong retirement plan
  • Flexible Time Off (FTO)
  • Tuition reimbursement
  • Support for working parents