The Vanguard Group · Posted 1 day ago
Full-time • Mid Level
Hybrid • Charlotte, NC
5,001-10,000 employees

At Vanguard, we don't just have a mission—we're on a mission. To work for the long-term financial wellbeing of our clients. To lead through products and services that transform our clients' lives. To learn and develop our skills as individuals and as a team. From Malvern to Melbourne, our mission drives us forward and inspires us to be our best.

How We Work

Vanguard has implemented a hybrid working model for the majority of our crew members, designed to capture the benefits of enhanced flexibility while enabling in-person learning, collaboration, and connection. We believe our mission-driven and highly collaborative culture is a critical enabler to support long-term client outcomes and enrich the employee experience.

Responsibilities:
  • Architect, build, and deploy RAG pipelines, including chunking, embeddings, vector stores, retrieval, ranking, grounding, and evaluation (see the retrieval sketch after this list).
  • Design and implement Graph RAG solutions leveraging knowledge graphs for multi‑hop reasoning and structured retrieval.
  • Build robust, scalable ML/LLM services using Python (and Java where applicable) with well‑designed APIs and microservices.
  • Develop data processing pipelines for ingestion, transformation, metadata extraction, and indexing.
  • Implement observability, monitoring, evaluation harnesses, automated testing, and CI/CD for GenAI services.
  • Optimize retrieval quality, response accuracy, latency, and cost across model + retrieval layers.
  • Apply responsible AI, security, and governance practices for LLM systems (e.g., content filtering, guardrails, model monitoring).
  • Collaborate with product, data engineering, and cloud platform teams to translate business problems into robust AI solutions.
  • Produce clear documentation, design specs, and operational runbooks for all delivered components.
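
For illustration only, a minimal sketch of the kind of retrieval flow described in the responsibilities above, not Vanguard's actual stack: it assumes sentence-transformers for the embedding model and FAISS for the vector index (both appear in the qualifications below), and the model name, chunk sizes, and function names are placeholder choices.

    import numpy as np
    import faiss
    from sentence_transformers import SentenceTransformer

    def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
        # Naive fixed-size chunking with overlap; real pipelines usually split on
        # document structure (headings, sentences) and attach metadata per chunk.
        return [text[i:i + size] for i in range(0, len(text), size - overlap)]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

    def build_index(chunks: list[str]) -> faiss.IndexFlatIP:
        # Normalized embeddings make inner product equivalent to cosine similarity.
        vectors = model.encode(chunks, normalize_embeddings=True)
        index = faiss.IndexFlatIP(vectors.shape[1])
        index.add(np.asarray(vectors, dtype="float32"))
        return index

    def retrieve(query: str, index: faiss.IndexFlatIP, chunks: list[str], k: int = 4) -> list[str]:
        # Top-k nearest chunks become the grounding context passed to the LLM prompt.
        q = model.encode([query], normalize_embeddings=True)
        _, ids = index.search(np.asarray(q, dtype="float32"), k)
        return [chunks[i] for i in ids[0]]

Ranking, grounding, and evaluation layers would sit on top of this retrieval step; a reranker or a Graph RAG traversal could replace the flat index without changing the surrounding interface.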

Qualifications:
  • 3+ years of experience as an ML Engineer, AI Engineer, or similar role.
  • Hands‑on experience building GenAI applications and RAG systems end‑to-end.
  • Strong proficiency in Python for ML/LLM development.
  • Experience with vector databases (e.g., pgvector, Pinecone, Weaviate, FAISS) and embedding models.
  • Knowledge of LLM frameworks (LangChain, LlamaIndex, Transformers, etc.).
  • Strong understanding of cloud environments (AWS/Azure) and containerized deployments.
  • Solid software engineering foundations — APIs, microservices, version control, testing, CI/CD.
  • Experience with data pipelines, ETL/ELT, and processing unstructured data.
  • Ability to evaluate retrieval quality, implement ranking strategies, and build evaluation datasets (a small evaluation sketch follows these lists).
  • Excellent communication skills and ability to work in cross‑functional teams.

Preferred qualifications:
  • Graph RAG experience using knowledge graphs, graph databases, or graph‑based retrieval.
  • Experience with Java for backend services, data processing, or connector development.
  • Familiarity with MLOps/LLMOps tooling and practices.
  • Experience integrating AI outputs into metadata/catalog systems or workflows.
  • Experience with prompt engineering, guardrailing, and LLM safety controls.
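
A minimal sketch of the kind of retrieval evaluation referenced above (illustrative only; the metric choices, helper functions, and data shapes are assumptions, not part of the posting):

    def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
        # Fraction of known-relevant documents that appear in the top-k results.
        hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
        return hits / max(len(relevant), 1)

    def reciprocal_rank(retrieved: list[str], relevant: set[str]) -> float:
        # 1/rank of the first relevant hit, 0 if nothing relevant is retrieved.
        for rank, doc_id in enumerate(retrieved, start=1):
            if doc_id in relevant:
                return 1.0 / rank
        return 0.0

    def evaluate(run: dict[str, list[str]], qrels: dict[str, set[str]], k: int = 5) -> dict[str, float]:
        # run maps query -> ranked doc ids; qrels maps query -> hand-labeled relevant doc ids.
        queries = list(qrels)
        return {
            f"recall@{k}": sum(recall_at_k(run[q], qrels[q], k) for q in queries) / len(queries),
            "mrr": sum(reciprocal_rank(run[q], qrels[q]) for q in queries) / len(queries),
        }

Harnesses like this are typically run in CI against a hand-built evaluation set so that changes to chunking, embeddings, or ranking can be compared on fixed metrics rather than by spot-checking.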