AI Engineer

Redica Systems, Pleasanton, CA

About The Position

We’re looking for an AI Engineer to join our team as we continue to develop the first-of-its-kind Quality and Regulatory Intelligence (QRI) platform for the life sciences industry. In this role, you will help build and deploy AI-powered capabilities that extract insights from complex regulatory datasets, inspection reports, and government data sources. You will work closely with product managers, data engineers, and software engineers to integrate LLM-powered systems into the Redica platform. The ideal candidate maintains a high bar for engineering quality while remaining hands-on in the code, building scalable AI services and applications that operate reliably in production environments.

Requirements

  • 3+ years of experience as an ML Engineer developing and productionizing traditional ML models and/or Generative AI applications
  • Hands-on experience in Python
  • Strong experience in building and deploying LLM and Generative AI applications at scale
  • Extensive hands-on experience with third-party LLM provider APIs (OpenAI, Google, Anthropic, Amazon Bedrock) and open-source LLMs (Llama, Mistral)
  • Experience in building conversational systems using LLMs and agentic frameworks (LangChain, LlamaIndex, LangGraph, CrewAI)
  • Hands-on experience with microservices architecture and orchestration, including building backend APIs using FastAPI
  • Experience with vector databases (e.g., Pinecone), graph databases (e.g., Neo4j), and hybrid search
  • Hands-on experience working with SQL (e.g., Postgres, Snowflake) and NoSQL (e.g., DynamoDB) databases/warehouses
  • Bachelor's degree in Computer Science, Computer Engineering, or a related technical field

Nice To Haves

  • Familiarity with lightweight UI design using Python/JavaScript frameworks (Streamlit, ReactJS) and integration with ML model backends
  • Hands-on experience with container orchestration services on AWS (e.g., ECS and EKS) and ML deployment on AWS (Amazon SageMaker)
  • Experience with both batch and event-driven application architectures and ML inference methods

Responsibilities

  • Build and deploy AI-powered applications using large language models and generative AI frameworks.
  • Develop conversational systems and intelligent workflows using LLMs and agentic frameworks.
  • Integrate AI capabilities into existing platform services and APIs.
  • Design and implement backend APIs and services supporting AI functionality using Python and FastAPI.
  • Develop microservices that enable scalable AI inference and data processing.
  • Integrate AI services with other platform components to deliver end-to-end product capabilities.
  • Work with structured and unstructured regulatory datasets to power AI-driven insights.
  • Implement hybrid search and retrieval workflows using vector databases and graph databases.
  • Integrate AI models with data pipelines and data stores to support scalable inference.
  • Deploy and maintain AI systems in production environments.
  • Contribute to testing, monitoring, and performance optimization of AI services.
  • Assist in troubleshooting production issues related to AI systems and model inference.
  • Work closely with product managers and engineering teams to translate product requirements into AI-powered solutions.
  • Participate in engineering discussions, code reviews, and sprint planning.
  • Contribute to continuous improvement of AI development practices and system performance.