Generative AI Applications Engineer (Agents & RAG)

Accenture Federal Services | Fairfield, CA

About The Position

At Accenture Federal Services, nothing matters more than helping the US federal government make the nation stronger and safer and life better for people. Our 13,000+ people are united in a shared purpose to pursue the limitless potential of technology and ingenuity for clients across defense, national security, public safety, civilian, and military health organizations. AFS is a technology company within global Accenture and a Glassdoor Top 100 Best Place to Work, offering a collaborative, inclusive community where you feel like you belong and are empowered to grow, learn, and thrive through hands-on experience, certifications, and industry training.

Build AI that matters. We ship production GenAI apps for confidential federal programs where reliability, privacy, and safety aren't optional. We ship in weeks, not quarters, and measure success by latency, reliability, safety, and cost. Confidentiality matters: we don't disclose program details publicly; if you advance, we'll share specifics during the process.

Role Overview

You'll turn mission needs into secure, reliable, and scalable GenAI applications, with no model training required. This is a hands-on role spanning agentic workflows, RAG, prompt/policy design, LLM evaluation, and platform integration. You'll own the end-to-end path from use-case evaluation through production deployment to operational excellence, partnering with product, security, data, and SRE teams to ship features safely and at scale.

Requirements

  • End-to-end ownership of production systems: integration → deployment → observability → incident response.
  • Hands-on experience with LLMs, transformer-based apps, and RAG in production.
  • Strong Python skills.
  • Experience with vector search and retrieval (Pinecone, Weaviate, OpenSearch, pgvector, FAISS/Chroma) and grounding AI in enterprise/mission data.
  • U.S. Citizenship

Nice To Haves

  • Integration with leading cloud AI services or on-prem inference stacks.
  • Background in LLM evaluation, prompt authoring/testing, A/B experimentation, and LLM Ops.
  • Responsible AI expertise (privacy, security, bias, transparency, human-in-the-loop) and data governance.
  • Experience implementing tool-using agents for API integration and external data access.
  • Containerization & orchestration (Docker, Kubernetes, VMware) and scripting/automation (Linux Bash, PowerShell).
  • Prior work in regulated/secure environments (e.g., ATO, STIGs, Zero Trust) while still shipping fast.
  • Familiarity with NVIDIA AI Foundations, OpenAI ChatGPT, and AI-assisted dev tools (Cursor, Windsurf, Claude).
  • Contributions to internal frameworks or open source; mentorship of engineers.
  • Clear communication with engineers, PMs, and security/compliance stakeholders.

Responsibilities

  • Design & ship mission-grade GenAI: Build agentic workflows and RAG systems tailored to mission data and environments; target low hallucination, tight p95 latency, and predictable cost.
  • Agent frameworks & orchestration: Apply patterns from LangChain/LlamaIndex/Semantic Kernel; design task decomposition, tool use, guardrails, and recovery/fallback strategies.
  • Platform integration (no model training): Implement with AWS Bedrock, Azure OpenAI, Google Vertex AI, Amazon Kendra, and managed services (e.g., Document AI, Gemini, Gemma).
  • LLM selection & evaluation: Compare models for quality, safety, latency, cost; author/test prompts & policies; deploy with observability and safe rollback/fallback.
  • RAG done right: Build retrieval pipelines & vector search (Pinecone, Weaviate, OpenSearch, pgvector, FAISS/Chroma); handle data prep, chunking, metadata, and IR-style evals (e.g., NDCG) to maximize signal-to-noise.
  • Production rigor: Instrument metrics/logs/traces; run A/B experiments; maintain incident playbooks; and implement safety & compliance guardrails.
  • SRE & FinOps for AI: Define SLIs/SLOs (quality/latency/safety/cost), run on call and postmortems, reduce MTTR; meter usage and optimize token/spend.
  • Reusable platform components: Ship SDKs, CI/CD templates, Terraform/IaC modules, and evaluation harnesses that accelerate multiple mission teams, not one-off projects.
  • Operate in real-world constraints: Deliver into hybrid, restricted, or air-gapped environments with Zero Trust principles and audit-ready controls.

What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

No Education Listed

Number of Employees

1,001-5,000 employees
