Anheuser-Busch InBev · Posted 4 months ago
Full-time • Entry Level
Service, MS
5,001-10,000 employees
Merchant Wholesalers, Nondurable Goods

The Global GenAI Team at Anheuser-Busch InBev (AB InBev) builds competitive solutions using GenAI techniques to extract contextual insights and meaningful information from our enterprise data assets. These data-driven insights empower our business users to make well-informed decisions about their products. As a Machine Learning Engineer (MLE), you will work at the intersection of LLM-based frameworks, tools, and technologies; cloud-native technologies and solutions; and microservices-based software architecture and design patterns. You will also take part in the complete development cycle for new product features, including developing and deploying new models into production systems. In addition, you will have the opportunity to critically assess and influence product engineering, design, architecture, and the technology stack across multiple products, beyond your immediate focus.

  • Experience with LangChain and LangGraph for Large Language Model (LLM) applications
  • Proficiency in building agentic patterns such as ReAct, ReWOO, and LLMCompiler
  • Expertise in multi-modal AI systems (text, images, audio, video), including multi-modal Retrieval-Augmented Generation (RAG)
  • Designing and optimizing chunking strategies and clustering for large data processing
  • Experience with audio/video streaming and real-time data pipelines for streaming and real-time processing
  • Low-latency inference and deployment architectures
  • Natural language-driven SQL generation for databases (NL2SQL)
  • Experience with natural language interfaces to databases and query optimization
  • Building scalable APIs with FastAPI for AI model serving
  • Proficient with Docker for containerized AI services
  • Experience with orchestration tools for deploying and managing services
  • Experience with chunking strategies for efficient document processing
  • Building data pipelines to handle large-scale data for AI model training and inference
  • Experience with AI/ML frameworks like TensorFlow, PyTorch
  • Proficiency in LangChain, LangGraph, and other LLM-related technologies
  • Expertise in advanced prompting techniques such as Chain-of-Thought (CoT) prompting, LLM-as-a-Judge, and self-reflection prompting
  • Experience with prompt compression and optimization using tools such as LLMLingua, AdalFlow, TextGrad, and DSPy
  • Strong understanding of context window management and optimizing prompts for performance and efficiency
  • Bachelor's or master's degree in Computer Science, Engineering, or a related field
  • 3+ years of proven experience developing and deploying applications using Azure OpenAI and Redis as a vector database
  • Solid understanding of language model technologies, including LangChain, the OpenAI Python SDK, LlamaIndex, Ollama, etc.
  • Proficiency in implementing and optimizing machine learning models for natural language processing
  • Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
  • Strong programming skills in languages such as Python and proficiency in relevant frameworks
  • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes)
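As one concrete illustration of the chunking-strategy bullets above, here is a minimal sketch of a fixed-size sliding-window chunker with overlap, a common baseline for RAG document processing. All names here (`chunk_text`, `chunk_size`, `overlap`) are illustrative, not from the posting; production systems often use token- or structure-aware splitting instead.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows.

    Each chunk is at most `chunk_size` characters, and consecutive
    chunks share `overlap` characters so that sentences cut at a
    boundary still appear intact in at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    chunks: list[str] = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the last window already covers the end of the text
    return chunks


# Example: 500 characters, 200-char windows, 50-char overlap -> 3 chunks
document = "".join(str(i % 10) for i in range(500))
pieces = chunk_text(document, chunk_size=200, overlap=50)
```

The overlap size trades retrieval recall against index size: larger overlaps reduce the chance of splitting a relevant passage, at the cost of storing and embedding more duplicated text.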