Abridge was founded in 2018 with the mission of powering deeper understanding in healthcare. Our AI-powered platform was purpose-built for medical conversations, improving clinical documentation efficiency while enabling clinicians to focus on what matters most—their patients. Our enterprise-grade technology transforms patient-clinician conversations into structured clinical notes in real time, with deep EMR integrations. Powered by Linked Evidence and our purpose-built, auditable AI, we are the only company that maps AI-generated summaries to ground truth, helping providers quickly trust and verify the output. As pioneers in generative AI for healthcare, we are setting the industry standards for the responsible deployment of AI across health systems.

We are a growing team of practicing MDs, AI scientists, PhDs, creatives, technologists, and engineers working together to empower people and make care make more sense. We have offices located in the Mission District in San Francisco, the SoHo neighborhood of New York, and East Liberty in Pittsburgh.

We are looking for GenAI Engineers of all levels who are passionate about making a positive impact. You’ll collaborate closely with a cross-functional team of researchers, clinicians, and engineers to translate cutting-edge language model capabilities into dependable, real-world clinical systems. Your focus will be on designing advanced LLM-driven workflows that can reason through complex clinical contexts, leverage agentic capabilities and structured tool use, navigate branching chains of LLM calls, integrate seamlessly with retrieval systems, and consistently generate outputs that meet the highest standards of clinical reliability and trust. A major part of this role will involve developing and applying rigorous evaluation frameworks (both automated and human-in-the-loop) to continuously assess accuracy, robustness, multilingual capabilities, and more.
This is an opportunity to design experiments to probe failure modes, simulate edge cases, and stress-test LLM workflows under realistic load and challenging real-world conditions. You’ll apply a disciplined, data-driven approach to understanding model behavior—developing tools to measure system performance, conducting A/B tests against established baselines, and generating clear, actionable insights that inform deployment decisions. This high-impact role will own the end-to-end productionization of LLM workflows: deploying models into low-latency, high-uptime environments, building monitoring and observability systems, implementing post-processing guardrails, and managing workflow versioning.
Job Type
Full-time
Career Level
Entry Level
Education Level
No Education Listed