Perceptron

The perceptron is a binary linear classification model, which can be regarded as a simple form of feedforward neural network. It was proposed by Frank Rosenblatt in 1957. It takes the feature vector of an instance as input and outputs the category of the instance.

As a linear classifier, it can be regarded as the simplest feedforward artificial neural network. Despite its simple structure, it can still learn and handle useful classification problems. Its main drawback is that it cannot handle problems that are not linearly separable.

Perceptron Definition

The perceptron is a binary classifier that maps an input feature vector x to an output value f(x), defined as f(x) = +1 if w · x + b > 0, and -1 otherwise.

Here, w is a real-valued weight vector, w · x is the dot product of the weights and the input, and b is a real-valued bias constant.

f(x) classifies x as positive or negative, making this a binary classification problem. If b is negative, then the weighted input w · x must exceed -b for the classifier output to cross the threshold of 0. Geometrically, the bias shifts the position of the decision boundary away from the origin.
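The decision rule above can be sketched in a few lines of Python. The function name and the AND-gate weights in the usage example are illustrative choices, not part of the original text:

```python
import numpy as np

def perceptron_predict(x, w, b):
    """Classify x as +1 or -1 based on the sign of w . x + b."""
    return 1 if np.dot(w, x) + b > 0 else -1

# Illustrative weights realizing a logical AND of two binary inputs:
# w = (1, 1), b = -1.5, so w . x + b > 0 only when both inputs are 1.
w, b = np.array([1.0, 1.0]), -1.5
print(perceptron_predict(np.array([1, 1]), w, b))  # +1
print(perceptron_predict(np.array([0, 1]), w, b))  # -1
```

Note how the negative bias b = -1.5 plays exactly the role described above: the weighted input must exceed 1.5 before the output crosses the threshold of 0.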

Perceptron Learning Strategy

Core: Minimizing the loss function

If the training set is linearly separable, the learning goal of the perceptron is to find a separating hyperplane that completely separates the positive and negative instance points in the training set. To determine the model parameters w and b, a loss function is defined, typically based on the misclassified points, and this loss function is minimized.
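A minimal sketch of this learning procedure, using the classical perceptron update rule (for each misclassified point, move w and b toward that point's label); the function name, learning rate, and epoch cap are illustrative assumptions:

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Perceptron learning rule: a point (x_i, y_i) with y_i in {+1, -1}
    is misclassified when y_i * (w . x_i + b) <= 0; for each such point,
    update w += lr * y_i * x_i and b += lr * y_i. On linearly separable
    data this converges to a separating hyperplane in finitely many updates."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on boundary)
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:  # no misclassifications left: hyperplane found
            break
    return w, b

# Illustrative linearly separable data: the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
```

Because the AND data are linearly separable, the loop terminates with every training point strictly on the correct side of the learned hyperplane, which is exactly the stopping condition described above.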

Related terms: multilayer perceptron, neural network