About The Position

As a Context Engineer at CapIntel, you'll sit at the intersection of AI infrastructure and engineering. You will be responsible for how large language models are integrated into our core platform and how our engineering team adopts agentic workflows. This is a hands-on, production-focused role, not a research one. You'll build the systems that make our AI features reliable, accurate, and scalable for the wealth management enterprises that depend on us.

You'll be embedded in development teams, working closely with engineers, product managers, and domain experts across the organization to design and deliver LLM-powered capabilities that directly enhance the advisor and client experience. As one of the first practitioners in this discipline at CapIntel, you'll also help define what context engineering looks like here, setting patterns and practices the broader team can build on.

This role is ideal for someone who thinks in systems, cares about production reliability over demo-day performance, and is energized by working in a discipline that is evolving quickly.

Requirements

  • 5+ years of professional software engineering experience, with at least 1–2 years working with LLMs in a production context
  • Strong experience with Python or Node.js and building API-integrated backend services
  • Hands-on experience with an LLM orchestration or agent execution framework
  • Working knowledge of RAG architecture, vector databases (e.g. Pinecone, pgvector, Amazon OpenSearch), and semantic search
  • Familiarity with context management techniques: summarisation, chunking, session splitting, and memory strategies
  • Experience building or consuming REST APIs and integrating with third-party services
  • Comfortable collaborating with cross-functional teams in a fast-paced, high-growth environment
  • Strong problem-solving instincts and a willingness to learn and adapt as the field evolves

Nice To Haves

  • Experience with the Model Context Protocol (MCP) or similar tool-integration standards
  • Familiarity with LLMOps practices: tracing, observability (e.g. LangSmith, Datadog), and model versioning
  • Exposure to multi-agent architectures and orchestration patterns
  • Knowledge of AI output validation, context safety, and governance considerations, particularly relevant in regulated industries like financial services
  • Familiarity with AWS or cloud-based infrastructure and containerised deployments (Docker, Kubernetes)
  • Ability to communicate technical concepts clearly to both technical and non-technical partners

Responsibilities

  • Design and build LLM-powered features in our core application using model APIs (e.g. Anthropic, OpenAI, Cohere), with a focus on reliability and production-readiness
  • Architect and maintain retrieval-augmented generation (RAG) pipelines, connecting language models to internal knowledge bases, databases, and live data sources
  • Manage context window strategy, determining what information enters the model, when, in what format, and at what level of compression to optimise for accuracy, cost, and latency
  • Design and implement agentic workflows enabling the platform to handle multi-step, autonomous tasks
  • Build guardrail and output validation layers that constrain model behaviour and ensure AI features act within well-defined, compliant boundaries
  • Develop reusable agent primitives, prompt templates, and workflow components that other engineers can build on independently
  • Build evaluation frameworks to measure context effectiveness, output quality, and agent reliability in production
  • Monitor deployed AI systems for failure patterns and implement mitigation strategies, feeding learnings back into continuous improvement cycles
  • Collaborate with Product, Product Engineering, Implementation, and Data teams to translate business requirements and proofs of concept into production AI system specifications
  • Act as an internal practitioner and resource, helping upskill the broader engineering team on context engineering principles and agentic best practices

Benefits

  • Compensation at CapIntel goes beyond base pay. Depending on the role, total rewards may include variable pay, equity, comprehensive benefits, flexible time off, and dedicated opportunities for growth and development.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 11-50 employees
