Inductive Bias
Inductive bias can be regarded as the set of assumptions a machine learning algorithm relies on; it supplies the assumptions about the target function that are necessary for generalizing beyond the training data. The most typical example is Occam's razor, which prefers the simplest hypothesis consistent with the data.
Inductive bias can be grounded in mathematical logic, but in practice the inductive bias of a learner is often only a rough description, or something even simpler; by comparison, the fully rigorous theoretical formulation is too strict to use in practical applications.
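As a concrete illustration of Occam's razor as an inductive bias, the minimal sketch below (using NumPy; the underlying function, noise level, polynomial degrees, and seed are all assumptions chosen for illustration) fits a simple and a highly flexible polynomial to the same noisy samples, then compares their error on held-out points. The preference for the simpler hypothesis is what lets the low-degree model generalize better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying linear function: y = 2x + 1 + noise.
x_train = np.linspace(0.0, 1.0, 12)
y_train = 2 * x_train + 1 + rng.normal(0.0, 0.2, size=x_train.shape)

# Held-out points from the same underlying function, without noise.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test + 1

for degree in (1, 9):
    # Fit a polynomial hypothesis of the given degree to the samples.
    hypothesis = np.polynomial.Polynomial.fit(x_train, y_train, deg=degree)
    mse = np.mean((hypothesis(x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out MSE = {mse:.4f}")

# The degree-1 hypothesis typically achieves the lower held-out error:
# biasing the learner toward simpler hypotheses (Occam's razor) is what
# makes generalization from these few noisy points work.
```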
Types of Inductive Bias
Common inductive biases include the following:
- Maximum conditional independence: the bias of the naive Bayes classifier, which assumes that the features are conditionally independent given the class;
- Minimum cross-validation error: when choosing among hypotheses, prefer the one with the lowest cross-validation error (see the sketch after this list);
- Maximum margin: the bias of support vector machines, which assume that good class boundaries leave as wide a margin as possible;
- Minimum description length: when forming a hypothesis, prefer the one with the shortest description;
- Minimum number of features: the assumption underlying feature selection algorithms, that features should be dropped unless they provide useful evidence;
- Nearest neighbor: the bias of the nearest neighbor method, which assumes that samples close to each other in feature space tend to belong to the same class.
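Several of these biases can be made concrete in a few lines. The sketch below (a minimal example using scikit-learn; the synthetic dataset, candidate models, and fold count are assumptions chosen for illustration) applies the minimum-cross-validation-error bias to choose among three learners whose own biases also appear in the list above: a naive Bayes classifier (maximum conditional independence), a support vector machine (maximum margin), and a k-nearest-neighbor classifier (nearest neighbor).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# A synthetic binary classification problem (purely illustrative).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Each candidate hypothesis class carries its own inductive bias.
candidates = {
    "naive Bayes (max conditional independence)": GaussianNB(),
    "SVM (maximum margin)": SVC(kernel="linear"),
    "k-NN (nearest neighbor)": KNeighborsClassifier(n_neighbors=5),
}

# Minimum cross-validation error as a bias for choosing among hypotheses:
# prefer the one with the lowest estimated generalization error.
errors = {
    name: 1.0 - cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
for name, error in errors.items():
    print(f"{name}: CV error = {error:.3f}")

best = min(errors, key=errors.get)
print(f"selected: {best}")
```

Which model wins depends entirely on the data; the point is only that "choose the hypothesis with the lowest cross-validation error" is itself an inductive bias layered on top of the biases built into each candidate.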