Approximate Inference
Approximate inference refers to methods for estimating quantities in a probabilistic model that are intractable to compute exactly. Its core tasks are computing the expectation of a function under some distribution, or computing marginal and conditional probability distributions.
Because these computations usually involve integration or summation over many variables, and either no closed form exists or the exact calculation is too expensive, approximate inference methods reduce the cost and difficulty of obtaining results.
Approximate inference methods
Stochastic methods: Gibbs sampling estimates the true posterior by drawing a large number of samples and approximating the target distribution from them.
Advantages: accurate in the limit, relatively simple sampling procedure, easy to implement, good theoretical convergence guarantees
Disadvantages: slow convergence, and it is difficult to judge whether the chain has converged
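As a minimal sketch of the Gibbs sampling idea, the example below assumes the target is a bivariate standard normal with correlation rho (chosen here purely for illustration, since its full conditionals are known in closed form: x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y | x). The sampler alternates draws from these conditionals and discards an initial burn-in period:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=1000, seed=0):
    """Gibbs sampler for a bivariate standard normal with correlation rho."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    x, y = 0.0, 0.0                  # arbitrary starting point
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)   # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)   # draw y from p(y | x)
        if i >= burn_in:             # keep samples only after burn-in
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
# Monte Carlo estimates from the samples approximate the true moments
mean_x = sum(x for x, _ in samples) / len(samples)   # should be near 0
corr = sum(x * y for x, y in samples) / len(samples) # should be near rho
```

This illustrates both points in the list above: the sampling steps themselves are simple, but deciding how long the burn-in must be (i.e., whether the chain has converged) is left to judgment.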
Variational methods: approximate the posterior distribution with a known, simple family of distributions.
Advantages: analytical solutions, low computational overhead, fast, and easy to apply to large-scale problems
Disadvantages: the derivation is relatively involved and demands considerable skill from the practitioner
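As a minimal sketch of the variational idea, the example below assumes the same illustrative target, a bivariate standard normal with correlation rho, and approximates it with a factorized (mean-field) distribution q(x, y) = q(x) q(y). Coordinate ascent on the variational objective then reduces to simple closed-form updates (each factor mean is set to rho times the other factor's current mean, with fixed factor variance 1 - rho^2):

```python
def mean_field_bivariate_normal(rho, n_iters=100):
    """Coordinate-ascent mean-field approximation to a bivariate
    standard normal with correlation rho."""
    m_x, m_y = 1.0, -1.0       # arbitrary initial factor means
    var = 1.0 - rho * rho      # optimal factor variance (closed form)
    for _ in range(n_iters):
        m_x = rho * m_y        # update q(x) = N(m_x, var)
        m_y = rho * m_x        # update q(y) = N(m_y, var)
    return (m_x, var), (m_y, var)

(m_x, v_x), (m_y, v_y) = mean_field_bivariate_normal(rho=0.8)
# The factor means converge to the true marginal mean of 0, but each
# factor variance, 1 - rho^2 = 0.36, underestimates the true marginal
# variance of 1 -- a well-known limitation of mean-field approximations.
```

Note how this matches the trade-offs listed above: the updates are analytic and cheap, but deriving them required working out the conditionals by hand, and the factorized approximation discards the correlation between x and y.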