Thomson Reuters is seeking a Senior Inference Engineer, AI. This position is open due to an existing vacancy and supports our evolving business needs. The role involves collaborating with platform teams to improve capacity forecasting for AI workloads, and working with Product, Data Science, Architecture, and Enterprise AI teams to onboard new research models into production.

Within Platform Engineering and Enterprise AI Services, an AI Inference Engineer is responsible for productionizing, optimizing, and scaling the AI and LLM workloads that power TR's AI-driven products. This role ensures that our trained models, from classical ML to generative AI, run efficiently across TR's multi-cloud footprint (AWS, Azure, GCP, OCI), meet strict enterprise reliability requirements, and integrate seamlessly with our data backbone (Snowflake, OpenSearch vector search, API-managed model routing). The successful candidate will help build the next generation of TR's AI infrastructure, working alongside cloud engineering, data engineering, product teams, and AI Services.
Job Type
Full-time
Career Level
Senior
Education Level
No Education Listed
Number of Employees
5,001-10,000 employees