Senior Data Architect/Data Engineer, Aladdin Engineering - Vice President

BlackRock, New York, NY
Posted 20h ago | $162,000 - $215,000 | Hybrid

About The Position

About this Role:

At BlackRock, technology is the foundation of our business. As a Data Engineer, you’ll build resilient systems that power our global post-trade operations. You’ll design and deliver enterprise-scale software with a focus on reliability, performance, and clean engineering practices. This role is ideal for engineers who like to innovate and solve complex challenges while fostering a culture of excellence and continuous improvement.

About Post Trade Accounting (PTA):

  • A major strategic area within Aladdin and one of BlackRock’s largest engineering investments.
  • Responsible for the systems that ensure accurate, scalable, and efficient accounting across global operations.
  • Expanding into data analytics and pipeline initiatives using Snowflake, Redis, and Kafka to manage high-volume, real-time data.
  • Collaborates closely with Product, Operations, and other Engineering teams to deliver business-critical capabilities.
  • Agile and collaborative environment that values technical depth, quality, and innovation.

Requirements

  • B.S./M.S. in Computer Science, Engineering, or related discipline (or equivalent practical experience).
  • 8+ years of experience building production data systems, with demonstrated ownership of data modeling and data pipeline engineering.
  • Strong SQL skills (advanced querying, query plans, performance tuning) with hands-on experience in Snowflake and/or Microsoft SQL Server.
  • Proven experience with data modeling for analytics (dimensional modeling / star schemas, conformed dimensions, slowly changing dimensions) and translating business concepts into robust schemas.
  • Hands-on experience designing and implementing ELT/ETL pipelines, including batch and near-real-time patterns.
  • Proficiency in at least one general-purpose language used for data engineering (e.g., Python, Java, or Scala) for automation, orchestration, and integrations.
  • Working knowledge of modern data engineering practices: testing for transformations, CI/CD, environment promotion, and operational monitoring.
  • Strong communication skills and comfort collaborating with domain experts to turn ambiguity into clear, implementable data products.

Nice To Haves

  • Experience with transformation and modeling frameworks (e.g., dbt) and/or a semantic/metrics layer approach.
  • Exposure to orchestration tools (e.g., Airflow, Dagster, Prefect) and patterns for dependency management and backfills.
  • Streaming and event-driven data experience (e.g., Kafka, CDC patterns) and understanding of late-arriving data, watermarking, and replay.
  • Experience integrating downstream serving/search systems (e.g., Elasticsearch) and operational datastores (e.g., Cosmos DB).
  • Familiarity with data governance and observability tooling (catalog/lineage, OpenLineage-style concepts, data quality frameworks).
  • Cloud-native exposure (Docker/Kubernetes, AWS/Azure/GCP) and infrastructure-as-code (Terraform).
  • Interest in financial systems, accounting, or investment technology.

Responsibilities

  • Partner with domain experts, product, and engineering teams to design canonical data models (conceptual → logical → physical) that power trusted reporting, analytics, and downstream integrations.
  • Build and evolve analytics-ready datasets in Snowflake (curated layers / data marts), including clear metric definitions (grain, dimensions, measures) that enable consistent enterprise reporting.
  • Design and develop reliable ELT/ETL pipelines across Snowflake and SQL Server to support both scheduled batch loads and low-latency ingestion where needed.
  • Implement robust pipeline patterns such as incremental processing, idempotency (replay-safe loads), deduplication, and backfill/reprocessing strategies.
  • Establish and enforce data quality and observability practices (freshness, completeness, accuracy checks; alerting; runbooks; SLAs) to keep data products production-grade.
  • Optimize analytical performance and cost by applying Snowflake best practices (clustering/partition strategies, materializations, query optimization) and SQL Server performance tuning where appropriate.
  • Publish curated data to downstream systems and serving layers when needed (e.g., search indices like Elasticsearch and operational stores like Cosmos DB) with clear contracts and monitoring.
  • Drive best practices for documentation, lineage, schema evolution, and secure handling of sensitive data (PII) in collaboration with platform and governance partners.

Benefits

  • Employees are eligible for an annual discretionary bonus and a benefits package that includes healthcare, leave benefits, and retirement benefits
  • Strong retirement plan
  • Tuition reimbursement
  • Comprehensive healthcare
  • Support for working parents
  • Flexible Time Off (FTO)