About The Position

Sony Corporation of America, located in New York, NY, is the U.S. headquarters of Sony Group Corporation, based in Tokyo, Japan. Sony's principal U.S. businesses include Sony Electronics Inc., Sony Interactive Entertainment LLC, Sony Music Entertainment, Sony Music Publishing and Sony Pictures Entertainment Inc. With some 900 million Sony devices in hands and homes worldwide today, a vast array of Sony movies, television shows and music, and the PlayStation Network, Sony creates and delivers more entertainment experiences to more people than anyone else on earth. To learn more: www.sony.com/en.

Research Intern – Multimodal Foundation Model for Vision

Sony AI is seeking research interns to join us. Our team conducts fundamental and applied research, with a focus on building next-generation foundation models for vision in a responsible manner. The role of a research intern is to develop efficient and effective methodologies and prototype solutions. You will work with a productive team of world-class scientists and engineers to tackle the most challenging problems in foundation models and generative AI, including low-cost yet powerful vision foundation models (VFMs), vision-language models (VLMs), unified models, and automatic model compression, optimization and deployment on cloud and edge. You will see your ideas not only published in papers but also improving the experience of billions of customers.

Requirements

  • Currently holds, or is in the process of obtaining, a master's or PhD degree in computer science or a related field.
  • Highly self-motivated and capable of proposing and implementing innovative ideas.
  • Solid presentation and communication skills for internal and external audiences.
  • Publications or expertise in compact foundation model development and deployment.
  • Influential open-source projects or paper publication at top conferences, e.g., CVPR, ICCV, ECCV, NeurIPS, ICML, ACL, etc.
  • Solid coding skills in Python, PyTorch, etc.

Nice To Haves

  • Front-end development experience.

Responsibilities

  • Conduct fundamental and innovative development of low-cost yet powerful vision-language models (VLMs), unified models, and automatic model compression, optimization and deployment on cloud and edge.
  • Design and implement state-of-the-art techniques for model compression, inference speedup, deployment on hardware, and tool automation.
  • Build proofs of concept for various vision-and-text generation tasks (VQA, captioning, understanding, etc.) across hardware targets.
  • Contribute to library and tool development to support business needs, or publish influential research in top-tier conferences and journals.