The Rendering team builds and maintains the core sensor simulation system that produces physically accurate synthetic sensor data for autonomous vehicle development. We own the full rendering pipeline, from scene ingestion and acceleration structure construction through GPU ray tracing and sensor-specific post-processing. Our stack includes C++, CUDA, NVIDIA OptiX, USD (Universal Scene Description), MDL materials, and ROS. We care deeply about performance, correctness, and clean architecture.

The Role

As a Staff Software Engineer on the Rendering team within Simulation, you will architect, optimize, and extend a GPU-accelerated, physics-based sensor simulation system used to generate synthetic sensor data, including cameras, LiDAR, radar, and depth sensors, for autonomous vehicle development and validation.

You will work at the intersection of real-time rendering, GPU computing, and large-scale scene management, tackling challenges in memory management, geometry streaming, material systems, and ray tracing performance. Your work will directly impact the fidelity, scalability, and speed of the simulation platform that AV teams depend on for training, testing, and validating perception and planning systems.
Job Type
Full-time
Career Level
Mid Level