AI Engineer II

Repligen Corporation
Waltham, MA

About The Position

Our mission is to inspire advances in bioprocessing as a trusted partner in the production of biologic drugs that improve human health worldwide. Focused on cost and process efficiencies, we deliver innovative technologies and solutions that help set new standards in bioprocessing.

The estimated salary range for this role, based in the United States of America, is $105,000-$160,000. Compensation decisions depend on several factors including, but not limited to, an individual's qualifications, location, internal equity, and alignment with market data. Additionally, employees are eligible to participate in one of our variable cash programs (bonus or commission), and eligible roles may receive equity as part of the compensation package. We offer a wide range of benefits such as paid time off, health/dental/vision, retirement benefits, and flexible spending accounts. All compensation and benefits information will be confirmed in writing at the time of offer.

Thank you for your interest in Repligen! Working at Repligen means being part of a team that is excited to find new and creative ways to overcome challenges in bioprocessing, and to help our customers make a difference in the world.

Requirements

  • Bachelor’s degree in Computer Science, Statistics, Data Science, Information Systems, or a related field
  • 5+ years of experience with AI technologies, particularly GenAI, LLMs, Agentic AI, or MLOps, and with delivering enterprise-scale, production-grade solutions
  • Hands-on experience building agentic AI systems, including multi-agent workflows, orchestration, tool integration, and failure handling, using frameworks such as AutoGen or CrewAI
  • Experience deploying and scaling AI solutions on platforms including Azure AI Foundry, Copilot Studio, or Google Vertex AI
  • Hands-on experience with machine learning and deep learning frameworks such as TensorFlow, PyTorch, and scikit-learn
  • Expertise in Generative AI and LLM application development using OpenAI and Anthropic APIs, including:
      ◦ Advanced prompt engineering and optimization techniques
      ◦ Design and implementation of Retrieval-Augmented Generation (RAG) architectures
      ◦ Embeddings, semantic search, and contextual retrieval strategies
  • Experience with vector databases such as Pinecone, FAISS, or Weaviate, or an AI data cloud platform such as Databricks or Snowflake
  • Strong background in API development, microservices architecture, and distributed systems, enabling scalable AI solution deployment
  • Experience with knowledge graphs, graph databases, or hybrid retrieval systems to enhance contextual reasoning and data relationships
  • Proficiency in Python programming
  • Experience deploying AI/ML solutions using CI/CD and infrastructure as code (Terraform), and monitoring, debugging, and managing them in cloud environments (AWS, Azure, GCP)
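To illustrate the embeddings and semantic-search concepts listed above, here is a minimal sketch using toy hand-written vectors and cosine similarity. This is not how a production system works — real deployments use learned embeddings from a model and a vector database such as Pinecone or FAISS — but the ranking logic is the same idea:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, index, top_k=1):
    # index: list of (doc_id, embedding) pairs; return the top_k
    # documents ranked by cosine similarity to the query embedding.
    scored = [(doc_id, cosine_similarity(query_vec, emb))
              for doc_id, emb in index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional "embeddings" standing in for real model outputs.
index = [
    ("chromatography-faq", [0.9, 0.1, 0.0]),
    ("filtration-guide",   [0.1, 0.8, 0.2]),
]
print(semantic_search([0.85, 0.2, 0.0], index))
```

A query vector close to a document's embedding scores near 1.0, which is why contextual retrieval works: semantically similar texts land near each other in embedding space.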

Responsibilities

  • Build and orchestrate multi-agent AI systems using state-of-the-art platforms, enabling agents to collaborate on task planning, data retrieval, and execution across complex business workflows.
  • Implement and continuously refine prompt engineering strategies, including prompt chaining, evaluation, and guardrails to improve accuracy and reliability.
  • Build and maintain RAG (Retrieval-Augmented Generation) pipelines, including data ingestion and indexing, embedding generation, retrieval optimization, and response grounding and validation.
  • Develop scalable backend services using Python frameworks for data processing.
  • Design and manage embedding pipelines and vector search infrastructure using tools such as Pinecone, FAISS, Weaviate, Databricks or Snowflake.
  • Integrate LLM capabilities from platforms like OpenAI, Anthropic, and Microsoft into enterprise applications.
  • Build and maintain scalable APIs and microservices to expose AI capabilities across systems and applications.
  • Collaborate with data engineering and platform teams to ensure reliable data pipelines and access to high-quality data sources.
  • Deploy, monitor, and optimize AI solutions in cloud environments (Azure, AWS, GCP), ensuring performance, scalability, and cost efficiency.
  • Implement evaluation frameworks and monitoring for LLM outputs, including quality metrics, drift detection, and feedback loops.
  • Own the administration, provisioning, and ongoing optimization of the enterprise AI tool ecosystem, ensuring secure access controls and optimal functionality for development teams.
  • Troubleshoot and resolve issues in distributed AI systems, ensuring high availability and reliability.
  • Partner with business stakeholders to translate requirements into AI-driven solutions and automation opportunities.
  • Document architecture, design decisions, and best practices to support scalability and team adoption.
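The RAG responsibilities above follow a common four-step shape: ingest and index, embed, retrieve, then ground the response in retrieved context. A hedged sketch of that shape — with naive keyword overlap standing in for vector search, and a hypothetical `call_llm` parameter standing in for a real OpenAI or Anthropic API call — might look like:

```python
def ingest(documents):
    # Step 1: ingestion and indexing -- tokenize each document once.
    return {doc_id: set(text.lower().split())
            for doc_id, text in documents.items()}

def retrieve(index, question, top_k=1):
    # Steps 2-3: embedding + retrieval, approximated here by term
    # overlap; real pipelines rank by vector similarity instead.
    terms = set(question.lower().split())
    ranked = sorted(index, key=lambda d: len(index[d] & terms), reverse=True)
    return ranked[:top_k]

def answer(documents, index, question, call_llm):
    # Step 4: grounding -- the prompt carries the retrieved context so
    # the model's response can be validated against it.
    context = "\n".join(documents[d] for d in retrieve(index, question))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return call_llm(prompt)

docs = {
    "tff": "tangential flow filtration concentrates biologic drug product",
    "hplc": "chromatography columns separate proteins by affinity",
}
index = ingest(docs)
print(retrieve(index, "how does filtration work"))
```

The design point is the separation of stages: each one (indexing, retrieval, grounding) can be optimized and monitored independently, which is what the evaluation and drift-detection responsibilities above attach to.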

Benefits

  • paid time off
  • health/dental/vision
  • retirement benefits
  • flexible spending accounts
  • variable cash programs (bonus or commission)
  • equity