Bias-Variance Decomposition
Bias-variance decomposition is a tool for explaining the generalization performance of a learning algorithm in terms of its bias and its variance. The setup is as follows:
Assume there are K data sets, each drawn independently from a distribution p(t, x), where t is the target variable to be predicted and x is the feature variable.
Training on different data sets yields different models; let f_k(x) denote the model trained on the k-th data set. The performance of the learning algorithm is measured by the average performance of the K models trained on these K data sets, that is:

$$E = \frac{1}{K}\sum_{k=1}^{K} \mathbb{E}_{x}\!\left[\bigl(f_k(x) - h(x)\bigr)^2\right]$$
Here h(x) represents the true function that generates the data, that is, t=h(x).
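As an illustrative sketch (not part of the original text), the setup above can be simulated numerically: draw K data sets from an assumed true function h(x) plus noise, fit a model f_k to each, and compare the average squared error of the K models with its decomposition into (bias)² plus variance. The choice of h as a sine function, the polynomial model, and the noise level are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # assumed true function generating the data: t = h(x) + noise
    return np.sin(2 * np.pi * x)

K = 100          # number of independent data sets
n = 25           # points per data set
degree = 3       # capacity of the fitted polynomial model (assumed)
x_grid = np.linspace(0, 1, 200)

preds = np.empty((K, x_grid.size))
for k in range(K):
    x = rng.uniform(0, 1, n)
    t = h(x) + rng.normal(0, 0.3, n)       # data set k drawn from p(t, x)
    coeffs = np.polyfit(x, t, degree)      # train model f_k on data set k
    preds[k] = np.polyval(coeffs, x_grid)  # f_k evaluated on a common grid

# average performance of the K models against the true function h(x)
avg_error = np.mean((preds - h(x_grid)) ** 2)

# decomposition of that average error into (bias)^2 + variance
f_bar = preds.mean(axis=0)                 # average of the K models
bias2 = np.mean((f_bar - h(x_grid)) ** 2)  # squared bias of the average model
variance = np.mean(preds.var(axis=0))      # spread of the K models around f_bar

print(f"avg error        = {avg_error:.4f}")
print(f"bias^2 + variance = {bias2 + variance:.4f}")
```

The two printed quantities agree because, for squared error measured against the noise-free h(x), the average error over the K models decomposes exactly into the squared bias of the average model plus the variance of the models around that average.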