Frequency Principle
The Frequency Principle, or F-Principle for short, is a concept in the field of deep learning. It describes the tendency of deep neural networks (DNNs) to fit a target function from low frequencies to high frequencies during training. The principle was proposed by Zhi-Qin John Xu and his collaborators at Shanghai Jiao Tong University in 2018 and is clearly stated in the paper "Training behavior of deep neural network in frequency domain".
The Frequency Principle provides a new perspective for understanding the training behavior and generalization ability of deep neural networks. According to this principle, a DNN first captures the low-frequency components of the target function during learning and then gradually learns the high-frequency components. This low-to-high fitting order is opposite to the behavior of many traditional numerical methods (such as the Jacobi iterative method), which typically converge faster on high-frequency components.
The research team verified the Frequency Principle through experiments on one-dimensional synthetic data and further confirmed it on high-dimensional real datasets such as MNIST and CIFAR-10. They also proposed a Linear Frequency Principle model that accurately predicts the learning results of wide two-layer ReLU neural networks and provides a theoretical account of the generalization ability of DNNs.