Improve BlackRock’s ability to enhance our retail sales distribution capabilities and services suite by creating, expanding, and optimizing our data and data pipeline architecture. You will create and operationalize data pipelines that enable squads to deliver high-quality, data-driven products, and you will be accountable for managing high-quality datasets exposed for internal and external consumption by downstream users and applications.

Top technical/programming skills: Python, Java, and Scala, with the ability to work across big data frameworks such as Spark, PySpark, Hive, and the Hadoop suite, plus SQL and cloud data platforms (preferably Snowflake). Experience ingesting and transforming data from flat files (e.g., CSV, TSV, Excel), databases, and API sources is a must.

Given the highly execution-focused nature of the work, the ideal candidate will roll up their sleeves to ensure that their projects meet deadlines and will always look for ways to optimize processes in future cycles. The successful candidate will be highly motivated to create, optimize, or redesign data pipelines to support our next generation of products and data initiatives. You will be a builder and an owner of your work product.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed
Number of Employees
5,001-10,000 employees