
Reproducible scaling laws for contrastive language-image learning

Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, Jenia Jitsev
Abstract

Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments become increasingly expensive. However, previous work on scaling laws has primarily used private data and models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws, as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip
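To make the power-law relationship concrete, the following is a minimal sketch of fitting E(C) = a * C^(-b) to a handful of (compute, downstream error) measurements by a straight-line fit in log-log space. The data points, variable names, and use of NumPy's polyfit are illustrative assumptions for this sketch, not the paper's actual measurements or fitting procedure.

# Minimal sketch: fit a power law E(C) = a * C**(-b) to hypothetical
# (training compute, zero-shot error) pairs in log-log space.
import numpy as np

# Hypothetical measurements; real values would come from the evaluation runs.
compute = np.array([1e9, 1e10, 1e11, 1e12])   # total training compute (arbitrary units)
error = np.array([0.55, 0.42, 0.33, 0.26])    # downstream zero-shot error

# A power law is linear in log-log space: log E = log a - b * log C,
# so an ordinary least-squares line fit recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(compute), np.log(error), deg=1)
a, b = np.exp(intercept), -slope

print(f"Fitted power law: E(C) ~ {a:.3g} * C^(-{b:.3g})")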
