kevin
@k3vin-wang
The overall error of multiple models equals the average error of a single model minus the diversity of the models' predictions. In other words, averaging many different predictions can perform better than relying on any single prediction.

The theorem can be written as:

(Average Prediction − True Value)² = Average Single-Model Error − Prediction Diversity

The left-hand side is the squared error of the collective prediction. The first term on the right-hand side is the average squared error of the individual models, and the second term is the diversity (variance) of the models' predictions around their average.

The theorem does not imply that any collection of diverse models will be accurate: if all the models share a common bias, their average will contain that bias too. What it does imply is that a collection of diverse models (or people) is always at least as accurate as its average member, and strictly more accurate whenever the predictions disagree at all — a phenomenon referred to as the wisdom of crowds.
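A quick numerical sketch of the identity, in Python with NumPy. The true value and the five predictions are made-up numbers chosen purely for illustration; any values would satisfy the equation.

import numpy as np

true_value = 50.0
predictions = np.array([42.0, 55.0, 61.0, 47.0, 53.0])

crowd = predictions.mean()  # the collective (average) prediction

# Squared error of the crowd's prediction
crowd_error = (crowd - true_value) ** 2

# Average squared error of the individual predictions
avg_individual_error = ((predictions - true_value) ** 2).mean()

# Diversity: variance of the predictions around their mean
diversity = ((predictions - crowd) ** 2).mean()

print(crowd_error)                       # 2.56
print(avg_individual_error - diversity)  # 2.56 — identical, as the theorem guarantees

Note that the crowd's error (2.56) is far below the average individual error (45.6), even though no single prediction here is especially good — diversity alone accounts for the gap.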