Forward Deployed AI Engineer

Voloridge, LLC
Onsite

About The Position

As a Forward Deployed AI Engineer, you’ll work at the intersection of engineering and real-world business context as a builder, problem-solver, and strategic partner. You’ll embed with users across Voloridge Investment Management and Voloridge Health to understand their challenges, then design and deliver LLM-powered solutions. This role demands technical judgment and genuine empathy: listening, learning how people work, and translating insights into product direction. You’ll own features end-to-end and shape what gets built.

Successful candidates will embrace the cultural principles at Voloridge:

  • Drive creativity and find pleasure in your work: You love coding and technology; it is as much a hobby as it is work. Whether it’s following blogs and podcasts or downloading and trying out new projects from GitHub and elsewhere, you have ideas you want to explore. Learning new things and bringing new ideas and solutions to the challenges we face is a key driver of innovation and excitement.
  • Seek continual improvement and roll up your sleeves: Be willing to learn whatever technologies, tools, or patterns are necessary to solve a problem. These are critical systems, and you cannot avoid a problem because "someone else 'owns' the code": learn the code, learn the domain, solve the problem.
  • Embrace truth and openness, and practice humility and honor: We are a collection of top performers with strong opinions, but respect for the ideas of others is a must for finding the right solution. Everyone makes mistakes at times, so we don’t judge others. What matters is uncovering errors quickly, getting fixes in place, and understanding what can be improved next time.

Requirements

  • Strong experience with SQL databases (PostgreSQL, MySQL) and familiarity with NoSQL (MongoDB, DynamoDB, Redis)
  • Hands-on LLM integration experience: prompt engineering and context management
  • Proficiency in AWS and infrastructure-as-code (Terraform, CloudFormation)
  • Solid understanding of microservices and containers (Docker, Kubernetes)
  • Bachelor’s degree (or higher) in Computer Science or a related field
  • 3–5 years of software engineering experience, including production LLM work
  • Ability to work onsite in our Jupiter, FL office
  • Experience working with users to understand needs and iterate on solutions
  • Strong problem-solving orientation: outcomes and empathy over process
  • Demonstrated ability to detect and mitigate LLM hallucinations
  • Experience with RAG pipelines: embeddings, chunking, vector databases
  • Awareness of LLM security risks and mitigation strategies

Nice To Haves

  • Experience building agentic AI systems with tool use, multi-step reasoning, and safety constraints
  • 5+ years Python development experience
  • CI/CD and DevOps experience
  • Familiarity with financial data and HIPAA
  • Deeply curious and high-agency: you drive solutions without waiting for direction
  • Strong communication: you build trust and translate between technical and business audiences
  • Understanding of LLM evaluation: metrics, benchmarks, non-deterministic testing

Responsibilities

  • Embed with internal users to understand workflows, identify pain points, and deliver AI-powered solutions that solve real problems
  • Engineer prompts, context, and system architecture for LLMs, balancing technical rigor with practical user needs
  • Build validation pipelines and evaluation benchmarks for LLM outputs, ensuring correctness, safety, and alignment with user expectations
  • Build and maintain RAG pipelines and LLM-powered data workflows
  • Develop cloud-native solutions on AWS (Lambda, S3, ECS, SageMaker, Step Functions), architecting for LLM-specific constraints
  • Translate user feedback and field insights into product direction, influencing what gets built based on real-world needs
  • Build trust across engineering, product, and business teams; mentor others on AI technologies and best practices
  • Identify edge cases, diagnose failure modes, and shape how AI systems learn and improve over time
  • Design reliable LLM integrations: error handling, retries, circuit breakers, and monitoring for hallucinations and quality drift
  • Optimize LLM API costs through caching, batching, and model selection
  • Implement security controls against prompt injection and LLM attack vectors

Benefits

  • Highly competitive base salary
  • Profit sharing bonus
  • Health, dental, vision, life, and disability insurance
  • 401(k)
© 2024 Teal Labs, Inc