About The Position

Innodata is expanding its GenAI research capability to advance state-of-the-art evaluation and post-training methods for LLM and multimodal systems. As an Applied Research Scientist, LLM Evaluation & Post-Training, you will lead research and experimentation on how evaluation design, measurement strategies, and feedback signals influence model improvement. This role is ideal for a technically rigorous researcher who is deeply fluent in modern LLM evaluation and post-training and who can turn research insight into practical methods for customer solutions and internal platform innovation. You will work across human-in-the-loop and AI-augmented workflows, partnering with Language Data Scientists and AI/ML Research Engineers to design and validate evaluation frameworks that drive measurable model gains.

The ideal candidate combines strong experimental and statistical judgment with hands-on technical ability and can engage as a peer with research and engineering stakeholders at leading AI companies. You have 5+ years of relevant experience (including graduate research) in applied ML research, research science, or advanced ML experimentation, with significant experience in LLM evaluation, benchmarking, alignment, or post-training. You have a track record of designing high-quality experiments, interpreting results rigorously, and translating findings into practical improvements, and you are comfortable working across research and product/customer contexts. You can identify important methodological questions, build a research agenda, and collaborate with engineers and data experts to execute. You understand that evaluation is not only about metrics but also about measurement validity, robustness, stress testing, and alignment to real-world usage. You are excited by frontier challenges, including long-context, cross-modal, and dynamic multi-turn evaluations, and by the opportunity to build new benchmark datasets and evaluation frameworks that become strategic assets for Innodata and its customers. You bring an implementation-minded approach to experimentation and are comfortable collaborating closely with engineers to productionize methods and research outputs when appropriate.

In this role, you will help define the next generation of evaluation-driven model improvement workflows. You will study how different evaluation approaches (human, automated, hybrid) shape model selection and post-training outcomes, and you will design experiments that produce credible, actionable conclusions. Your work may include designing benchmark datasets, developing evaluation taxonomies and protocols, defining metrics and scoring methodologies, analyzing failure modes, and testing how changes in evaluation setup affect downstream fine-tuning results. You will also support customer engagements by bringing scientific rigor to evaluation strategy, methodology review, and technical recommendations. This is a highly collaborative role that sits at the intersection of research, engineering, and language/data operations.

Requirements

  • MS/PhD in Computer Science, Machine Learning, Statistics, Applied Mathematics, AI, or a related quantitative scientific field (PhD strongly preferred)
  • 5+ years of relevant experience in applied research / research science in ML/AI, with substantial work in LLMs or foundation models
  • Demonstrated experience with LLM evaluation, benchmarking, alignment, post-training, or model quality research
  • Strong foundation in experimental design, statistical analysis, and scientific reasoning for ML systems
  • Strong coding skills in Python for research experimentation and analysis (e.g., data processing, evaluation pipelines, statistical analysis, visualization)
  • Experience working with modern ML tooling/frameworks (e.g., PyTorch, Hugging Face, JAX/TensorFlow as applicable) sufficient to design and execute model/evaluation experiments
  • Ability to evaluate and compare human and automated evaluation methods, including tradeoffs in cost, reliability, validity, and scalability
  • Experience designing evaluation studies and protocols that are reproducible across datasets, model versions, and evaluation runs
  • Ability to collaborate directly with technical stakeholders including research scientists, ML engineers, data scientists, and customer technical counterparts
  • Strong communication skills and ability to present nuanced technical conclusions, assumptions, and limitations clearly

Evaluation Science & Benchmarking

  • Experience designing benchmark datasets, test suites, or evaluation frameworks for language or multimodal models
  • Deep understanding of metric design, scoring reliability, and measurement validity
  • Experience with human evaluation methods and quality assurance considerations (e.g., rubric design, inter-rater reliability, adjudication frameworks)

LLM / Post-Training

  • Understanding of post-training methods and how training objectives interact with evaluation outcomes
  • Ability to reason about model behavior, failure modes, and tradeoffs across tasks/domains
  • Familiarity with alignment and robustness considerations in model evaluation

Quantitative Analysis

  • Strong statistical analysis skills (sampling, uncertainty, significance testing where appropriate, error analysis, metric interpretation)
  • Ability to synthesize complex experimental findings into actionable recommendations

Nice To Haves

  • Hands-on experience running or supporting fine-tuning/post-training experiments (SFT, preference optimization, RLHF/RLAIF-style workflows)
  • Experience with multimodal evaluation (e.g., text-image, audio, video)
  • Experience with long-context benchmarking/evaluation and real-world context management challenges
  • Experience designing multi-turn, interactive, or agentic evaluation protocols
  • Published research and/or open-source benchmark contributions in LLM evaluation, post-training, alignment, or related areas
  • Experience in customer-facing applied research, technical consulting, or cross-functional product/research collaborations
  • Familiarity with safety, trustworthiness, and governance considerations in GenAI evaluation

Responsibilities

  • Define and execute a research agenda focused on LLM evaluation and post-training, especially evaluation-driven model improvement
  • Design rigorous experiments to study how evaluation methodologies impact fine-tuning and post-training outcomes
  • Develop and validate evaluation frameworks for LLM and multimodal systems, including benchmark/task design, scoring methods, judge/model-assisted evaluation, human evaluation protocols, and robustness/stress testing
  • Lead research on advanced evaluation domains, including long-context, cross-modal, and dynamic multi-turn evaluations
  • Study the effectiveness and limitations of existing evaluation techniques, and propose improved methodologies with clear validity and scalability tradeoffs
  • Analyze model behavior and failure patterns; generate actionable recommendations for model improvement and evaluation redesign
  • Collaborate with AI/ML Research Engineers to translate research methods into scalable evaluation and post-training pipelines
  • Collaborate with Language Data Scientists to integrate human-in-the-loop and synthetic data/evaluation strategies into research programs
  • Engage with customer technical stakeholders to understand evaluation goals, review methodologies, and provide expert recommendations
  • Contribute to internal benchmark datasets, evaluation frameworks, and reusable research assets
  • Produce high-quality technical documentation, internal research reports, and client-facing materials explaining methods, results, assumptions, and limitations
  • Contribute to thought leadership and best practices in LLM evaluation, post-training, and GenAI quality measurement

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree