In science the role of error covariance is well understood. In business and politics there are sometimes limited attempts to incorporate different points of view, but companies and governments all make the same fundamental mistake: they ignore the tendency of individuals to make similar mistakes.
As part of social network optimization, a careful choice of individuals who tend to make very different mistakes can lead to much better overall performance. For an example, see the section on legal tribunals on my justice system website.
Let’s look at error covariance in general.
In the image below, the solid blue dots and the hollow dots represent two data sets. If that is too abstract, consider the dots to represent the opinions of two different people. The red line is an adequate least-squares fit to the data represented by the solid blue dots. It is also an adequate least-squares fit to the data represented by the hollow dots. If we combine these two data sets to give a larger sample size, it seems that the red line is a very good guess at the truth.
But in this (artificial) example, we know that the truth is in fact what’s shown as a green line.
The data shown by the solid blue dots contains errors, and so the least-squares fit to them is not a very good estimate. The data shown by the hollow blue dots also contains errors. They are not the same errors, but the least-squares fit to them is almost exactly the same. Therefore, when the two sets of estimates are combined, the result is not any better.
The problem is that the two datasets have a large error covariance. Whatever process created the first dataset had the same tendency to make errors as the one that created the second.
We might imagine a different dataset whose least-squares fit was a vertical line (a blue line, perhaps). If the first dataset, to which the red line was fitted, were combined with one yielding a vertical blue line, the combined dataset would match the green line shown in the diagram. Looking at the top-left quadrant, we can think of the lines as the hands of a clock: the red line, pointing at roughly 10 o'clock, could be combined with a blue one pointing at roughly 12 o'clock to give one very much like the green line at 11 o'clock.

The light purple points represent a third dataset, and the light blue vertical line is a least-squares fit to it. Again, this dataset contains errors, and the hypothesis represented by the blue line is wrong, assuming that the green line represents the actual facts. But it is wrong in the other way: the first two datasets produce a line with too little slope, while the new one leads to a vertical line. The truth has the slope of the green line, roughly halfway between the two.
If you combine all three data sets, represented by solid blue, hollow blue, and purple dots, and try to fit a line to the combined dataset, you get something like the green line, which we have said is approximately correct.
Combining two or more datasets that are full of errors can produce a much better one if their error covariance is low.
It is essential to recognize the difference between a change of perspective and an actual mistake. That sounds obvious, but it is often misunderstood. Please see the page that explains this with illustrations of both.
Please refer to the Decision Making and Estimation site for more information.