Stanford University · Posted 3 months ago
$169,728 - $190,000/Yr
Full-time • Senior
Redwood City, CA
Educational Services

Are you an experienced AI/GenAI engineer who loves shipping real systems? Join Stanford's Enterprise Technology team to design, implement, and support AI solutions across university use cases. In this role, you will influence strategic direction, requirements, and architecture for AI-driven information systems, incorporating new capabilities (LLMs, RAG, agentic frameworks, MLOps) to improve workflow, efficiency, and decision-making. You may serve as the technical lead for specific AI tracks and interrelated applications. This role blends hands-on engineering with mentorship and thought leadership. You will prototype and productionize: presenting proofs of concept, demoing solutions to stakeholders, and partnering with project managers, technical managers, architects, security, infrastructure, and application teams (ServiceNow, Salesforce, Oracle Financials, etc.).

  • Translate requirements into well-engineered components (pipelines, vector stores, prompt/agent logic, evaluation hooks) and implement them in partnership with the platform/architecture team.
  • Build and maintain LLM-based agents/services that securely call enterprise tools (ServiceNow, Salesforce, Oracle, etc.) using approved APIs and tool-calling frameworks.
  • Configure and optimize RAG workflows (chunking, embeddings, metadata filters) and integrate with existing search/vector infrastructure, escalating architecture changes to designated architects (a minimal retrieval sketch follows this list).
  • Follow and improve team standards for CI/CD, testing, prompt/model versioning, and observability. Own feature delivery through dev/test/prod, coordinating with release managers.
  • Apply established guardrails (PII redaction, policy checks, access controls). Partner with InfoSec and architects to close gaps; document decisions and risks.
  • Instrument services with KPIs (latency, cost, accuracy/quality) and build lightweight dashboards.
  • Write clear technical docs (APIs, workflows, runbooks), user stories, and acceptance criteria. Support and sometimes lead UAT/test activities.
  • Facilitate working sessions with stakeholders; mentor junior engineers through code reviews and pair programming; provide concise updates and risk flags.
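
The RAG bullet above points to a sketch. As a purely illustrative example (not the team's actual stack; the names Chunk, chunk_text, and retrieve are invented for the illustration), a metadata-filtered retrieval step of that kind could look roughly like this in Python:

    # Hypothetical sketch only: fixed-size chunking plus metadata-filtered cosine retrieval.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Chunk:
        text: str
        embedding: np.ndarray
        metadata: dict = field(default_factory=dict)  # e.g. {"source": "servicenow", "year": 2024}

    def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
        """Simplest possible splitter: fixed-size character windows with overlap."""
        step = size - overlap
        return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

    def retrieve(query_vec: np.ndarray, index: list[Chunk],
                 metadata_filter: dict, k: int = 5) -> list[Chunk]:
        """Return the top-k chunks by cosine similarity among those matching the filter."""
        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        candidates = [c for c in index
                      if all(c.metadata.get(key) == val for key, val in metadata_filter.items())]
        return sorted(candidates, key=lambda c: cosine(query_vec, c.embedding), reverse=True)[:k]

In practice the embedding call, vector store, and filter syntax would come from whichever approved platform is in use (e.g. Vertex AI with Pinecone or OpenSearch), so the sketch fixes only the shape of the component, not the tooling.
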
  • Bachelor's degree and eight years of relevant experience or a combination of education and relevant experience.
  • Built and shipped at least one production LLM agent or agentic workflow using frameworks such as LangGraph, LangChain, CrewAI/AutoGen, Google Agent Builder/Vertex AI Agents (or equivalent).
  • Implemented 3+ AI/ML projects and 2+ GenAI/LLM projects in production, with operational support (monitoring, tuning, incident response).
  • Strong understanding of AI/ML concepts (LLMs/transformers and classical ML) and experience designing, developing, testing, and deploying AI-driven applications.
  • Programming Expertise: Python (primary) plus experience with Node.js/Next.js/React/TypeScript and Java.
  • Experience with cloud AI stacks (e.g., Google Vertex AI, AWS Bedrock, Azure OpenAI) and vector/search technologies (Pinecone, Elastic/OpenSearch, FAISS, Milvus, etc.).
  • Knowledge of data design/architecture, relational and NoSQL databases, and data modeling.
  • Thorough understanding of SDLC, MLOps, and quality control practices.
  • Ability to define/solve logical problems for highly technical applications; strong problem-solving and systematic troubleshooting skills.
  • Excellent communication, listening, negotiation, and conflict resolution skills.
  • MLOps Tooling: MLflow, Kubeflow, Vertex AI Pipelines, or SageMaker Pipelines, plus LLM observability and experiment tracking such as LangSmith, PromptLayer, or Weights & Biases.
  • Experience working with, customizing, and improving open-source solutions.
  • Demonstrated ability to pick up a new technology/framework quickly and deliver production value with it.
  • Experience with GenAI Frameworks: LangChain, LlamaIndex, DSPy, Haystack, LangGraph, Agent Engine, Google ADK, AWS AgentCore, CrewAI/AutoGen.
  • Experience with UI Development: React/Next.js/Tailwind for internal tools.
  • Experience with prompt engineering at scale: structured prompts (JSON/function-calling), templates, and version control (see the sketch after this list).
  • Experience with parameter-efficient fine-tuning (LoRA/QLoRA/adapters), supervised instruction tuning.
  • Experience with safety/guardrails frameworks (Guardrails.ai, NeMo Guardrails, Azure/AWS safety filters).
  • Experience with hybrid search & reranking (BM25+dense, Cohere/Voyage/Jina rerankers).
  • Experience with telemetry & governance: prompt/model drift monitoring, policy-as-code, audit logging.
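
For the structured-prompt item referenced above, the following hedged Python sketch shows one way a versioned, JSON-structured prompt template and a reply validator might be organized; the version string, schema, and field names are invented for the example and are not taken from the posting.

    # Hypothetical sketch only: a versioned prompt template that requests and validates JSON output.
    import json

    PROMPT_VERSION = "ticket-summary/v3"  # invented identifier; in practice tracked in version control

    SCHEMA = {"summary": "string", "priority": "low|medium|high", "next_action": "string"}

    def build_messages(ticket_text: str) -> list[dict]:
        """Build chat messages that ask the model for a reply matching SCHEMA exactly."""
        system = ("You are a help-desk assistant. "
                  f"Respond only with JSON matching this schema: {json.dumps(SCHEMA)}")
        return [{"role": "system", "content": system},
                {"role": "user", "content": ticket_text}]

    def parse_reply(raw: str) -> dict:
        """Parse the model's reply and fail loudly if required fields are missing."""
        data = json.loads(raw)
        missing = set(SCHEMA) - set(data)
        if missing:
            raise ValueError(f"Model reply missing fields: {sorted(missing)}")
        return data

Keeping the template, schema, and PROMPT_VERSION together in source control is what makes prompt changes reviewable and auditable, which is the point of the "templates, version control" phrasing in the bullet.
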
  • Career development programs, tuition reimbursement, and the option to audit a course.
  • Superb retirement plans, generous time off, and family care resources.
  • Health care benefits and fitness classes at world-class exercise facilities.
  • Free commuter programs, ridesharing incentives, discounts, and more.