The Visual eXperience team is looking for a passionate researcher/engineer to help shape the next generation of imaging, rendering, compression, and display solutions for products across the Apple ecosystem! The team features a highly collaborative and hands-on environment that fosters scientific and engineering excellence, creativity, and innovation in the interdisciplinary areas of vision science, information theory, compression, machine learning, image enhancement and processing, neuroscience, color science, and optics.

In this role, you will explore the foundations of perception-aligned loss functions, neural compression systems, and image realism modeling that enable breakthrough performance in our camera, AR/VR, display, and video processing pipelines. You will join a team of scientists and engineers who care deeply about elegant theory, robust implementation, and real-world impact that makes a tangible difference to our users' experience. If you are excited by the intersection of information theory, perception, machine learning, and large-scale imaging systems, and want your work to ship in products used by millions, this role is for you.

In this highly visible role, you will invent the next generation of perceptual loss functions used across Apple's imaging ecosystem. Your work will span algorithm development, theoretical analysis, and deployment at scale.