
POET: A New Training Paradigm for LLMs Based on First Principles


Reparameterized Training via Orthogonal Equivalence Transformation (POET) is a novel reparameterized training algorithm proposed on June 9, 2025 by the Max Planck Institute in Germany and the Chinese University of Hong Kong. It optimizes neurons through orthogonal equivalence transformations. The results are presented in the paper "Reparameterized LLM Training via Orthogonal Equivalence Transformation".

POET works by reparameterizing each neuron using two learnable orthogonal matrices and a fixed random weight matrix. Because POET provably preserves the spectral properties of the weight matrix, it can stably optimize the objective function and improve generalization. The research team developed efficient approximation methods that make POET flexible and scalable for training large-scale neural networks.
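Below is a minimal, illustrative sketch of this idea in PyTorch, assuming the effective weight takes the form W = R · W0 · P, where W0 is a fixed random matrix and R, P are learnable matrices constrained to stay orthogonal. The class name and layer shapes are hypothetical and not taken from the paper's code; this is not the authors' implementation, only a toy instance of the reparameterization.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal


class POETLinear(nn.Module):
    """Toy POET-style linear layer: W = R @ W0 @ P (sketch, not the official code)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed, non-trainable randomly initialized weight matrix W0.
        self.register_buffer(
            "W0", torch.randn(out_features, in_features) / in_features**0.5
        )
        # Learnable square matrices kept orthogonal by PyTorch's built-in
        # orthogonal parametrization; only R and P receive gradients.
        self.R = orthogonal(nn.Linear(out_features, out_features, bias=False))
        self.P = orthogonal(nn.Linear(in_features, in_features, bias=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Because R and P are orthogonal, W = R @ W0 @ P has the same singular
        # values as W0, so the spectrum of the weights is preserved during training.
        W = self.R.weight @ self.W0 @ self.P.weight
        return x @ W.t()


if __name__ == "__main__":
    layer = POETLinear(16, 32)
    y = layer(torch.randn(4, 16))
    print(y.shape)  # torch.Size([4, 32])
```

In this sketch the spectral-preservation property follows directly from the orthogonality of R and P; the efficient approximation methods the paper describes for scaling this to large models are not shown here.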
