Multilayer Perceptron
The Multilayer Perceptron (MLP) is a feedforward artificial neural network that maps a set of input vectors to a set of output vectors. It can be viewed as a directed graph consisting of multiple node layers, each of which is fully connected to the next layer. Except for the input nodes, each node is a neuron (or processing unit) with a nonlinear activation function. The MLP is a generalization of the perceptron, overcoming the perceptron's inability to classify data that is not linearly separable.
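As a concrete illustration, the following is a minimal sketch in Python with NumPy of such a fully connected, layered forward pass. The function names (`mlp_forward`, `relu`), the choice of ReLU activation, and the layer sizes are illustrative assumptions, not part of the original definition:

```python
import numpy as np

def relu(x):
    # Nonlinear activation applied at each hidden node
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass through a fully connected MLP.

    weights[i] has shape (n_in, n_out) for layer i; each layer's
    output feeds the next, matching the directed-graph view above.
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)  # hidden layer: affine map + nonlinearity
    return a @ weights[-1] + biases[-1]  # output layer, left linear here

# Example: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
biases = [np.zeros(4), np.zeros(2)]
print(mlp_forward(rng.standard_normal(3), weights, biases))
```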
The multilayer perceptron gained prominence after the introduction of the backpropagation algorithm, which made it possible to train multilayer networks. The backpropagation algorithm is described in detail in the 1986 paper "Learning representations by back-propagating errors" by David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, which shows how to use it to train multilayer perceptrons.
Although the concept and early prototypes of the multilayer perceptron predate it, this paper was the influential document that clearly linked the backpropagation algorithm with the multilayer network structure, and it was widely recognized in neural network research. Before this, multilayer networks had seen little use because no effective training method was available.
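To make this link concrete, here is a minimal sketch of training an MLP with backpropagation on the XOR problem, the standard example of linearly inseparable data that a single perceptron cannot learn. It assumes NumPy, one hidden layer with sigmoid activations, and a squared-error loss; the layer width, learning rate, and iteration count are illustrative choices, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: not linearly separable, so a single-layer perceptron cannot learn it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
lr = 1.0

for _ in range(5000):
    # Forward pass through both layers
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: propagate the output error toward the input layer
    d_out = (out - y) * out * (1 - out)  # squared-error gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)   # error pushed back through W2 to the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # typically approaches [0, 1, 1, 0]
```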