Energy-Based Transformers are Scalable Learners and Thinkers

Inference-time computation techniques, analogous to human System 2 Thinking, have recently become popular for improving model performance. However, most existing approaches suffer from several limitations: they are modality-specific (e.g., working only in text), problem-specific (e.g., verifiable domains like math and coding), or require additional supervision/training on top of unsupervised pretraining (e.g., verifiers or verifiable rewards). In this paper, we ask the question "Is it possible to generalize these System 2 Thinking approaches, and develop models that learn to think solely from unsupervised learning?" Interestingly, we find the answer is yes, by learning to explicitly verify the compatibility between inputs and candidate predictions, and then re-framing prediction problems as optimization with respect to this verifier. Specifically, we train Energy-Based Transformers (EBTs) -- a new class of Energy-Based Models (EBMs) -- to assign an energy value to every input and candidate-prediction pair, enabling predictions through gradient descent-based energy minimization until convergence. Across both discrete (text) and continuous (visual) modalities, we find EBTs scale faster than the dominant Transformer++ approach during training, achieving an up to 35% higher scaling rate with respect to data, batch size, parameters, FLOPs, and depth. During inference, EBTs improve performance with System 2 Thinking by 29% more than the Transformer++ on language tasks, and EBTs outperform Diffusion Transformers on image denoising while using fewer forward passes. Further, we find that EBTs achieve better results than existing models on most downstream tasks given the same or worse pretraining performance, suggesting that EBTs generalize better than existing approaches. Consequently, EBTs are a promising new paradigm for scaling both the learning and thinking capabilities of models.
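To make the prediction-as-optimization idea concrete, the following minimal PyTorch sketch replaces the Transformer with a toy MLP energy function that scores (input, candidate prediction) pairs and refines a candidate by gradient descent on that energy. The class and function names, hyperparameters (`n_steps`, `step_size`), and random initialization are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class ToyEnergyModel(nn.Module):
    """Assigns a scalar energy to each (input, candidate prediction) pair.
    A toy stand-in for an Energy-Based Transformer; illustrative only."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 256), nn.GELU(), nn.Linear(256, 1)
        )

    def forward(self, context: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        # Lower energy = input and candidate prediction are more compatible.
        return self.net(torch.cat([context, candidate], dim=-1)).squeeze(-1)


def predict_by_energy_minimization(model, context, n_steps=10, step_size=0.1):
    """"Thinking" at inference time: start from a random candidate and
    descend the energy landscape with respect to the candidate."""
    candidate = torch.randn_like(context, requires_grad=True)
    for _ in range(n_steps):
        energy = model(context, candidate).sum()
        (grad,) = torch.autograd.grad(energy, candidate)
        candidate = (candidate - step_size * grad).detach().requires_grad_(True)
    return candidate.detach()


if __name__ == "__main__":
    dim = 32
    model = ToyEnergyModel(dim)
    context = torch.randn(4, dim)                      # batch of 4 "inputs"
    prediction = predict_by_energy_minimization(model, context)
    print(prediction.shape)                            # torch.Size([4, 32])
```

Under this framing, spending more inference compute simply means running more energy-minimization steps on the candidate, which is how System 2 Thinking arises without any task-specific verifier or extra supervision.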