Tesla - Palo Alto, CA
As a Software Engineer on our Autonomy teams, you will contribute to one of the most advanced and widely deployed AI platforms in the world, powering Autopilot and our humanoid robot, Optimus. In this role, you will be responsible for the internals of the AI inference stack that runs neural networks in Optimus and in millions of Tesla vehicles. You will collaborate closely with Optimus AI Engineers and AI Hardware Engineers to understand the full inference stack, co-design models to fit the target hardware, and optimize the compiler to extract maximum performance from the AI hardware.

The inference stack development is purpose-driven: deployment and analysis of production models inform the team's direction, and the team's work immediately improves performance and enables the deployment of increasingly complex models. Because the MLIR compiler and runtime architecture are co-designed with full control of the hardware, the compiler has access to features that are traditionally unavailable and can be leveraged through novel compilation approaches to generate higher-performance models.
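To give a flavor of the kind of compilation work described above, here is a minimal, hypothetical sketch of one such optimization: fusing adjacent operators in a toy graph IR so the hardware can execute them in a single pass. The IR, node names, and fusion rule are illustrative assumptions only; they do not represent Tesla's inference stack or MLIR's actual APIs.

```python
# Hypothetical illustration: a toy graph IR and a single fusion pass.
# The IR structure, op names, and fusion rule are assumptions for
# exposition, not Tesla's inference stack or MLIR APIs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    op: str            # e.g. "conv2d", "relu", or the fused "conv2d_relu"
    inputs: List[str]  # names of producer nodes or graph inputs
    name: str          # this node's name, referenced by its consumers

@dataclass
class Graph:
    nodes: List[Node] = field(default_factory=list)

def fuse_conv_relu(graph: Graph) -> Graph:
    """Fuse a conv2d followed by a relu into one conv2d_relu node.

    On accelerators that expose a fused activation path, this removes a
    round trip through memory between the two ops.
    """
    fused: List[Node] = []
    by_name = {n.name: n for n in graph.nodes}
    consumed = set()
    for node in graph.nodes:
        if node.name in consumed:
            continue
        if node.op == "relu":
            producer = by_name.get(node.inputs[0])
            if producer and producer.op == "conv2d" and producer.name not in consumed:
                # Replace the conv2d + relu pair with one fused node that keeps
                # the relu's name, so downstream consumers are unchanged.
                fused = [n for n in fused if n.name != producer.name]
                fused.append(Node("conv2d_relu", producer.inputs, node.name))
                consumed.update({producer.name, node.name})
                continue
        fused.append(node)
    return Graph(fused)

if __name__ == "__main__":
    g = Graph([
        Node("conv2d", ["input"], "conv0"),
        Node("relu", ["conv0"], "act0"),
        Node("conv2d", ["act0"], "conv1"),
    ])
    print([n.op for n in fuse_conv_relu(g).nodes])  # ['conv2d_relu', 'conv2d']
```

In a production compiler, passes like this would be driven by knowledge of which fused kernels the target accelerator actually supports, which is where the hardware/compiler co-design mentioned above comes in.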