Research Scientist Project Intern (Multimodal Interaction & World Model) - 2026 Start (PhD)

Location:

Singapore

Team:

Algorithm

Employment Type:

Intern

Job Code:

A103720


Responsibilities

About the team

Welcome to the Multimodal Interaction & World Model team. Our mission is to solve the challenge of multimodal intelligence and virtual-world interaction in AI. We conduct cutting-edge research in areas such as the foundations and applications of multimodal understanding models, multimodal agents and inference, unified models for generation and understanding, and world models. Our team comprises experienced research scientists and engineers dedicated to developing models with human-level multimodal understanding and interaction capabilities. The team also aspires to advance the exploration and development of multimodal assistant products. We foster a feedback-driven environment to continuously enhance our foundation technologies. Come join us in shaping the future of AI and transforming the product experience for users worldwide.

As a project intern, you will have the opportunity to engage in impactful short-term projects that give you a glimpse of professional real-world experience. You will gain practical skills through on-the-job learning in a fast-paced work environment and develop a deeper understanding of your career interests.

Applications will be reviewed on a rolling basis - we encourage you to apply early. Successful candidates must be able to commit to an internship of at least 3 months.

Responsibilities:
- Explore and research cutting-edge technologies, including multimodal understanding, generative models, machine learning, reinforcement learning, AIGC, computer vision, and artificial intelligence.
- Explore foundation models that interleave large-scale and ultra-large-scale multimodal understanding and generation, and carry out deep system optimization; work on data construction, instruction fine-tuning, preference alignment, and model optimization; improve data synthesis, scalable oversight, and model reasoning and planning; build a comprehensive, objective, and accurate evaluation system; and explore ways to advance the capabilities of large models.
- Explore and push the frontier of advanced capabilities in multimodal models and world models, including but not limited to multimodal RAG, visual CoT, and agents, and build a universal multimodal agent for GUIs, games, and other virtual worlds.
- Use pre-training, simulation, and other techniques to model diverse environments in the virtual and real worlds, provide foundational capabilities for multimodal interactive exploration, drive applications into production, and develop new technologies and products with artificial intelligence at their core.

Qualifications

Minimum Qualifications:
- PhD degree in computer science, electronics, mathematics, or a related field.
- In-depth research experience in one or more fields such as computer vision, multimodal learning, AIGC, machine learning, or rendering and generation.
- Excellent analytical and problem-solving skills; ability to solve large-model training and application problems; ability to explore solutions independently.
- Good communication and collaboration skills; proactive; able to work harmoniously with the team to explore new technologies and drive technological progress.

Preferred Qualifications:
- Strong algorithmic fundamentals and a solid foundation in machine learning; familiarity with technologies across CV, AIGC, NLP, RL, ML, and related fields. Publications at top conferences/journals such as CVPR, ECCV, ICCV, NeurIPS, ICLR, SIGGRAPH, or SIGGRAPH Asia are preferred.
- Excellent coding ability and proficiency in C/C++ or Python. Awards in competitions such as ACM/ICPC, NOI/IOI, TopCoder, or Kaggle are preferred.
- Experience leading highly influential projects in multimodal learning, large models, foundation models, world models, RL, or rendering and generation is preferred.

By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://jobs.bytedance.com/en/legal/privacy. If you have any questions, please reach out to us at apac-earlycareers@bytedance.com.

Job Information

About Doubao (Seed)

Founded in 2023, the ByteDance Doubao (Seed) Team, is dedicated to pioneering advanced AI foundation models. Our goal is to lead in cutting-edge research and drive technological and societal advancements.

With a strong commitment to AI, our research areas span deep learning, reinforcement learning, Language, Vision, Audio, AI Infra and AI Safety. Our team has labs and research positions across China, Singapore, and the US.

Why Join ByteDance

Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect – and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.

As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.

Diversity & Inclusion

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.


Privacy Policy © 2012-2025 ByteDance