Pathway is building the first post-transformer frontier model, solving AI's fundamental memory problem. While transformers wake up in the same state every time, like Groundhog Day, our architecture enables true continuous learning, infinite-context reasoning, and real-time adaptation. We're not optimizing yesterday's technology; we're building what comes after transformers. Our breakthrough architecture outperforms transformers and gives enterprises full visibility into how the model works. By combining the foundation model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization toward truly contextualized, experience-driven intelligence. We are trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.

Pathway is led by co-founder and CEO Zuzanna Stamirowska, a complexity scientist who assembled a team of AI pioneers, including CTO Jan Chorowski, who was the first to apply attention to speech and worked with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a leading computer scientist and quantum physicist who obtained his PhD at the age of 20. The company is backed by leading investors and advisors, including Lukasz Kaiser, co-author of the Transformer ("the T" in ChatGPT) and a key researcher behind OpenAI's reasoning models. Pathway is headquartered in Palo Alto, California.

The Opportunity

You will design and execute rigorous benchmarks and define dataset standards. Collaborating closely with our R&D team, you will build the evaluation infrastructure that guides the evolution of Pathway's post-transformer models.

You Will

- Proactively identify, prioritize, and curate relevant public and client-driven benchmarks across our target use cases and markets.
- Evaluate candidate benchmarks for clarity, data quality, evaluation methodology, and fit with our model roadmap.
- Run benchmarks with baseline models to validate setup, uncover edge cases, and de-risk R&D runs.
- Hand off "benchmark-ready" packages to R&D (specs, data, evaluation scripts, expected metrics, constraints).
- Maintain a shared vocabulary and documentation around benchmarks, datasets, and evaluation formats that both GTM and R&D can use.
- Track and organize benchmark results, model leaderboards, and "what good looks like" for different customers and scenarios.
- Contribute to demos and public-facing proof points based on benchmark outcomes.

You will play a key role in defining and driving the benchmarking process for AI model evaluation. Your work will directly influence what we build, how we talk about it, and how customers and the market experience BDH.
Job Type: Full-time
Career Level: Mid Level