Hopfield Network
The Hopfield network was proposed in 1982 by John Hopfield, a physicist at the California Institute of Technology. He published a paper that had a great influence on the research of artificial neural networks, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", which introduced the basic theory of Hopfield networks. In 2024, John Hopfield won the Nobel Prize in Physics for his work on neural networks.
The Hopfield network is a recurrent neural network. It combines a storage system with binary-valued neurons, and is mainly used for problems such as associative memory and pattern recognition. Its core feature is that it converges to one or more stable states, also called attractors. Each attractor can be used to store a pattern: when the input is incomplete or corrupted, the network can recall the complete pattern through association.
Structurally, the Hopfield network is a single-layer, fully connected neural network: every pair of neurons in the layer is connected, and the connection weights are symmetric. The output of each neuron takes only two states (such as -1 and 1, or 0 and 1), and a neuron's state depends on the states of the other neurons and the connection weights between them.
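The structure and recall behavior described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a full implementation: it assumes ±1 binary neurons, stores patterns with the classic Hebbian outer-product rule (which yields the symmetric weights mentioned above, with self-connections set to zero), and updates neurons one at a time by taking the sign of their weighted input until the state settles into an attractor. The function names (`train`, `recall`) are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian learning: sum of outer products of the stored +/-1 patterns,
    scaled by the number of neurons. Weights come out symmetric; the
    diagonal (self-connections) is zeroed."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, sweeps=20):
    """Asynchronous update: each neuron takes the sign of the weighted
    sum of the other neurons' states, repeated for a few full sweeps."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):   # random update order
            h = W[i] @ s                    # weighted input to neuron i
            s[i] = 1 if h >= 0 else -1
    return s

# Store one 8-neuron pattern, then recall it from a corrupted copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1                 # flip two bits to simulate corruption
restored = recall(W, noisy)
print(restored)                 # converges back to the stored pattern
```

Flipping two of the eight bits still leaves the corrupted state inside the stored pattern's basin of attraction, so the update dynamics pull it back to the original: this is the content-addressable recall the text describes.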