About The Position

Founded in 2023, the ByteDance Doubao (Seed) Team is dedicated to pioneering advanced AI foundation models. Our goal is to lead in cutting-edge research and drive technological and societal advancement. With a strong commitment to AI, our research areas span deep learning, reinforcement learning, Language, Vision, Audio, AI Infra, and AI Safety, and our team has labs and research positions across China, Singapore, and the US.

Leveraging substantial data and computing resources, and through continued investment in these domains, we have developed a proprietary general-purpose model with multimodal capabilities. In the Chinese market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and are available to external enterprise clients via Volcano Engine. Today, the Doubao app stands as the most widely used AIGC application in China.

This position is responsible for researching and building the company's LLMs. The role involves exploring new applications and solutions for related technologies in areas such as search, recommendation, advertising, content creation, and customer service, with the goal of meeting the growing demand for intelligent interactions and significantly enhancing how users live and communicate in the future.

We are looking for talented individuals to join us for a Student Researcher opportunity in 2025. Student Researcher opportunities at ByteDance aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities at ByteDance. The Student Researcher position provides unique opportunities that go beyond the constraints of our standard internship program, allowing for flexibility in duration, time commitment, and location of work.

Responsibilities

  • Reasoning and planning for foundation models. Enhance reasoning and planning throughout the entire development process, encompassing data acquisition, model evaluation, pretraining, SFT, reward modeling, and reinforcement learning, to bolster overall performance.
  • Synthesize large-scale, high-quality (multi-modal) data through methods such as rewriting, augmentation, and generation to improve the abilities of foundation models in various stages (pretraining, SFT, RLHF).
  • Solve complex tasks via system-2 thinking, leveraging advanced decoding strategies such as MCTS and A*.
  • Investigate and implement robust evaluation methodologies to assess model performance at various stages, unravel the underlying mechanisms and sources of their abilities, and utilize this understanding to drive model improvements.
  • Teach foundation models to use tools and interact with APIs and code interpreters; build agents and multi-agent systems to solve complex tasks.


What This Job Offers

  • Career Level: Intern
  • Industry: Publishing Industries
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
