HyperAI


Superposition Yields Robust Neural Scaling

Yizhou Liu, Ziming Liu, Jeff Gore

Abstract

We resolve a 30-year-old open problem concerning the power of unlabeled data in online learning by tightly quantifying the gap between transductive and standard online learning. We prove that for every concept class with Littlestone dimension d, the transductive mistake bound is at least Ω(√d). This is an exponential improvement over the previous lower bounds due to Ben-David, Kushilevitz, and Mansour (1995, 1997) and to Hanneke, Moran, and Shafer (2023). We also show that our bound is tight: for every d, there exists a class of Littlestone dimension d with transductive mistake bound O(√d). Our upper bound also improves on the previous best known upper bound from Ben-David et al. (1997). Since the optimal mistake bound in standard online learning equals the Littlestone dimension d, these results demonstrate a quadratic gap between transductive and standard online learning, highlighting the benefit of advance access to the unlabeled instance sequence. This stands in stark contrast to the PAC setting, where transductive and standard learning exhibit similar sample complexities.
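The "quadratic gap" claimed in the abstract can be made concrete with a short worked comparison. This is an illustrative sketch, not taken from the paper: it assumes (as the abstract's tightness claim implies, given that the optimal standard online mistake bound equals the Littlestone dimension) a transductive mistake bound on the order of √d, and the symbols M_std and M_trans are hypothetical notation introduced here.

```latex
% Illustrative only: d denotes the Littlestone dimension of a class H.
% M_std is the optimal standard online mistake bound (known to equal d);
% M_trans is the transductive mistake bound discussed in the abstract.
\[
  \underbrace{M_{\mathrm{std}}(\mathcal{H}) \;=\; d}_{\text{standard online}}
  \qquad \text{vs.} \qquad
  \underbrace{M_{\mathrm{trans}}(\mathcal{H}) \;=\; \Theta\!\bigl(\sqrt{d}\bigr)}_{\text{transductive online}}
\]
% Example: for a class with d = 100, the transductive learner makes on the
% order of 10 mistakes, versus up to 100 in the standard online setting.
```

The example shows why advance access to the unlabeled instance sequence helps: the attainable mistake count drops from linear to square-root in d.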
