Abstract
While next-token prediction is considered a promising path towards artificial general intelligence, it has struggled to excel in multimodal tasks, which are still dominated by diffusion models (e.g., Stable Diffusion) and compositional approaches (e.g., CLIP combined with LLMs). In this paper, we introduce Emu3, a new suite of state-of-the-art multimodal models trained solely with next-token prediction. By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences. Emu3 outperforms several well-established task-specific models in both generation and perception tasks, surpassing flagship models such as SDXL and LLaVA-1.6, while eliminating the need for diffusion or compositional architectures. Emu3 is also capable of generating high-fidelity video via predicting the next token in a video sequence. We simplify complex multimodal model designs by converging on a singular focus: tokens, unlocking great potential for scaling both during training and inference. Our results demonstrate that next-token prediction is a promising path towards building general multimodal intelligence beyond language. We open-source key techniques and models to support further research in this direction.
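The core recipe described above is to flatten text and discrete visual tokens into one sequence and train a decoder-only transformer with a standard next-token objective. The sketch below illustrates that training step in miniature; the vocabulary size, model dimensions, and the tiny transformer are illustrative assumptions for exposition, not Emu3's actual tokenizer or architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch: next-token prediction over a unified multimodal token
# stream. All sizes and names here are assumptions, not Emu3's configuration.
VOCAB_SIZE = 1024   # shared vocabulary covering text tokens and discrete visual tokens
D_MODEL = 128
SEQ_LEN = 32

class TinyDecoder(nn.Module):
    """A small decoder-only transformer standing in for the single multimodal model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.pos = nn.Embedding(SEQ_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens):
        T = tokens.shape[1]
        x = self.embed(tokens) + self.pos(torch.arange(T, device=tokens.device))
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        x = self.blocks(x, mask=mask)
        return self.head(x)  # next-token logits at every position

model = TinyDecoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# A "multimodal" sequence here is just a flat stream of discrete ids, e.g.
# [text ids ..., <begin-image>, image ids ..., <end-image>], all from one vocabulary.
batch = torch.randint(0, VOCAB_SIZE, (2, SEQ_LEN))

logits = model(batch[:, :-1])  # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), batch[:, 1:].reshape(-1)
)
loss.backward()
optimizer.step()
print(f"next-token loss: {loss.item():.3f}")
```

Because generation and perception share this single objective, the same loop covers image generation (predicting visual tokens given text tokens) and vision-language understanding (predicting text tokens given visual tokens), differing only in how the sequence is composed.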
Benchmarks
| Benchmark | Model | Metric |
|---|---|---|
| Visual Question Answering on MM-Vet | Emu3 | GPT-4 score: 37.2 |