Orbital is a physics-grounded AI copilot that operates complex industrial systems such as refineries, upstream assets, and energy-intensive plants. It combines real-time time-series forecasting, physics-based models, and domain-trained language models to deliver interpretable insights, anomaly detection, and optimisation pathways directly to operations teams.

As a Forward Deployed ML Engineer, your job is to make Orbital’s AI systems work in customer reality. You will deploy, configure, tune, and operationalise our deep learning models inside live industrial environments, spanning cloud, on-premise, hybrid, and air-gapped infrastructure.

This is not a pure research role. You are not training experimental models in isolation. You are adapting production AI systems to customer data, configuring agents and RAG pipelines, tuning anomaly detection, and ensuring models deliver value in production workflows. If Research builds the models, you make them work on-site.

Operating Context
Forward Deployed ML Engineers operate in pods of three alongside:
• Full Stack Engineers
• Data Engineers
Each pod delivers 2–3 customer deployments per quarter, owning AI configuration, model tuning, agent orchestration, and inference reliability in production.
Job Type
Full-time
Career Level
Mid Level