Who You’ll Work With:
You will design, build, and maintain scalable, cloud-ready data pipelines and infrastructure enabling investment business teams—research analysts, quantitative modelers, traders, risk managers, and portfolio managers—to access accurate, timely, and high-quality data. The role blends engineering rigor with fluency in investment data, helping power analytics, models, and mission-critical investment applications.

What You’ll Do:

Data Pipeline & Engineering Development
· Build and optimize end-to-end data pipelines to support ingestion, scrubbing, transformation, and distribution of investment data.
· Implement robust ETL/ELT processes for structured and unstructured datasets across fixed income, equities, and multi-asset domains.
· Automate workflows and enhance data reliability for downstream risk, attribution, analytics, and reporting platforms.

Data Modeling, Architecture & Quality Control
· Develop, maintain, and document data models, schemas, and database structures supporting investment analytics and operational workflows.
· Implement data validation and quality frameworks aligned with investment teams’ data integrity standards.

Investment Data Domain Expertise
· Build fluency across investment data domains and design data access patterns that deliver clean, consistent, and efficient access to consumers.
· Understand how key model inputs (terms and conditions, pricing, index attributes, reference data) influence analytical outputs and decision-making.

Systems Integration & API Development
· Integrate data with internal platforms and external vendors; streamline and standardize access through APIs, warehouses, or curated data layers.
· Support enhancements to data streams and contribute to cloud-native data engineering initiatives aligned with machine learning and analytics engineering trends.
· Enable seamless data access through analytics and visualization tools (e.g., Power BI, Excel, and BQL), ensuring that curated datasets, semantic models, and governed views are optimized for consumption by investment teams and downstream reporting workflows.

Collaboration & Cross-Functional Partnership
· Partner with investment teams to understand requirements and deliver high-quality data that supports research, risk, and portfolio-management workflows.
· Contribute to documentation, data governance, and best practices across the technology and investment organization.

Make Data AI-Ready
· Design and structure datasets and documents so AI assistants (e.g., Copilot, ChatGPT) can seamlessly interpret and generate on-the-fly access requests for citizen developers.
· Standardize metadata, tagging, and ontologies to reduce ambiguity in natural-language queries and improve AI-driven retrieval accuracy.
· Implement data normalization, consistent schemas, and well-defined relationships to support LLM-driven reasoning across investment and operational domains.
· Build and maintain semantic layers or knowledge catalogs that allow AI models to map user intent to underlying data assets.
· Partner with architecture, security, and business teams to ensure AI-facing datasets meet governance, privacy, and compliance requirements.
· Develop automated pipelines that prepare and refresh AI-ready data, including embeddings, vector indexes, and domain-specific feature stores.
· Evaluate and optimize data structures for compatibility with enterprise GenAI tools (e.g., Microsoft Copilot, RAG systems, internal LLMs).
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed