About The Position

We are seeking a Prompt Engineer to own the end-to-end technical migration workflow for transitioning templates to LLM autoraters. The role requires using the client’s internal tools and applying prompt engineering techniques to maximize model performance.

Requirements

  • Language Skills: Native fluency in English.
  • Location: Must be based in the United States.
  • Education: Bachelor’s, Master’s, or Doctorate degree in Computer Science, Data Science, Computational Linguistics, Human-Computer Interaction (HCI), Cognitive Science, or a related analytical field.
  • Prompt Engineering & AI Expertise: At least 2 years of experience as a Prompt Engineer, with proven experience tuning Large Language Models (LLMs) for strict, structured outputs and complex classification tasks, and familiarity with chain-of-thought and few-shot learning.
  • Data Analysis: Strong proficiency in identifying error patterns, analyzing model performance, and using SQL or other data analytics tools.
  • Technical Agility: Ability to quickly learn and master proprietary tools with minimal supervision.
  • Communication: Excellent verbal and written communication skills.

Nice To Haves

  • Familiarity with enterprise-grade LLM interfaces like the Goose API.
  • Experience in AI model evaluation, data science, computational linguistics, or software engineering.
  • Hands-on experience with Automated Prompt Optimization (APO) systems or tuning workflows.
  • Linguistic expertise, including an understanding of semantics and logic.

Responsibilities

  • Utilize Automatic Prompt Generation (APG) tools to create baseline prompts for complex parent-child template clusters.
  • Run and supervise the Automated Prompt Optimization (APO) tool, review its outputs, and flag when the APO process reaches deadlocks or plateaus.
  • Manually draft, test, and refine prompts to navigate complex template architectures, overcome anti-patterns, and handle edge cases where tooling is lacking or broken.
  • Monitor shadowbot runs to ensure sufficient disagreements (between human and LLM ratings) are registered, generated, and tracked.
  • Run prompt versions against established gold data to continuously measure autorater quality against the human crowd baseline, calculating accuracy metrics such as F1 scores, precision, and recall.
  • Draft technical launch readiness justifications (Launch Certification Documentation) for final approval.
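The quality-measurement step above can be illustrated with a minimal sketch: comparing autorater labels against a human gold baseline and computing precision, recall, and F1. The label names and sample data below are hypothetical, not from this posting.

```python
def precision_recall_f1(gold, predicted, positive="violation"):
    """Compute precision, recall, and F1 for one positive class."""
    # True positives: gold and prediction both mark the positive class.
    tp = sum(1 for g, p in zip(gold, predicted) if g == positive and p == positive)
    # False positives: prediction says positive, gold disagrees.
    fp = sum(1 for g, p in zip(gold, predicted) if g != positive and p == positive)
    # False negatives: gold says positive, prediction misses it.
    fn = sum(1 for g, p in zip(gold, predicted) if g == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Illustrative gold (human crowd) labels vs. autorater predictions.
gold      = ["violation", "ok", "violation", "ok", "violation"]
predicted = ["violation", "ok", "ok",        "ok", "violation"]
p, r, f1 = precision_recall_f1(gold, predicted)
print(round(p, 3), round(r, 3), round(f1, 3))
```

In practice these scores would be tracked per prompt version so regressions against the human baseline surface before launch.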

What This Job Offers

Job Type

Part-time

Career Level

Mid Level

Number of Employees

1,001-5,000 employees
