A major element of the new ISU judging proposals is the random selection of seven judges' assessments from a group of 14. The purpose is to discourage deal making and bias in the judging of competitions. The idea is superficially appealing on first hearing, but it does not hold water when put to the test.
Random selection of judges offers no deterrence to misconduct. Half the time each judge's assessment will count and the other half it will be ignored, with no other consequence. There is no downside to attempting to cheat, and without a downside there is no reason not to try, regardless of the odds. In another article we have also shown that randomly selecting the judges' assessments has a number of mathematical weaknesses and quirks that lead to results frequently being determined by the roll of the dice, and to judging blocks actually being in a better position to manipulate results than they are currently.
At the most fundamental level, however, random selection of half the marks is a futile effort because the net result is no better than keeping all the marks in the first place. On the average, the frequency with which bias will be present in the marks is not reduced by random selection.
Consider the extreme cases. If there are no biased marks among the 14 judges, there will be none among the seven selected. At the other extreme, if all 14 judges give biased marks, then all seven selected marks will also be biased. In the extreme cases, random selection has no effect at all.
Suppose now that only half the judges give biased marks. When seven of the 14 are selected, three or four of the selected marks will show bias on the average. In some cases, luck will leave fewer biased judges among the seven; but for every skater who lucks out, some poor soul will end up with more than four biased marks among their seven. Who gets punished by bias and who does not will be a matter of chance, but averaged over an entire event, bias will be present in half of the seven chosen marks if half the original 14 judges give biased marks. There is no improvement, averaged over the entire event. This holds true regardless of the percentage of biased marks in the full panel of 14. So why bother?
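This arithmetic can be checked with a short simulation. The sketch below assumes a hypothetical panel in which exactly seven of the 14 judges give biased marks (the panel sizes match the proposal; the judge labels are illustrative):

```python
import random

TOTAL, PICKED, BIASED = 14, 7, 7  # 14 judges, 7 selected, 7 biased (hypothetical)

# Exact expectation: each biased judge survives the draw with probability
# 7/14, so the expected number of biased marks among the seven selected
# is 7 * (7/14) = 3.5 -- half the selected marks, on the average.
expected = PICKED * BIASED / TOTAL

# Monte Carlo check: simulate many random selections and count biased marks.
random.seed(1)
trials = 100_000
judges = [True] * BIASED + [False] * (TOTAL - BIASED)  # True = biased judge
total_biased = sum(sum(random.sample(judges, PICKED)) for _ in range(trials))
average = total_biased / trials

print(f"exact expectation: {expected}")     # 3.5
print(f"simulated average: {average:.2f}")  # close to 3.5
```

Individual skaters get lucky or unlucky draws, but the long-run average is exactly half the marks biased, matching the full panel.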
As a final example, suppose there were just one biased judge among the 14. For half the skaters in an event that judge will affect the placements; for the other half the judge will not. That may seem like at least a small improvement, but it is not, because the judge's impact on the placements is doubled on a panel of seven compared with a panel of 14. On the full panel of 14 the biased judge makes up 7% of each placement. On the smaller panel the biased judge makes up 14% of the placements, but only half the time, which on the average is again 7%. The impact of a single biased judge is always 7% on the full panel, and averages 7% after random selection, when the entire event is taken as a whole.
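The single-judge arithmetic can be made exact with rational numbers rather than the rounded 7% figure. A minimal sketch, under the same assumption of one biased judge on a panel of 14:

```python
from fractions import Fraction

# One biased judge on the full panel of 14 contributes 1/14 of each placement.
full_panel_weight = Fraction(1, 14)

# Under random selection the judge is kept with probability 7/14 = 1/2,
# and when kept carries weight 1/7 on the panel of seven.
selection_prob = Fraction(7, 14)
selected_weight = Fraction(1, 7)
average_weight = selection_prob * selected_weight

print(full_panel_weight, average_weight)  # 1/14 1/14 -- about 7% either way
```

Both schemes give the biased judge an average weight of exactly 1/14; random selection merely redistributes that weight unevenly among the skaters.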
The bottom line is that for all occurrences of bias on a panel, the mathematical weight (impact) of bias on the marks in an entire event is, on the average, the same whether the full panel of 14 is used all the time or seven judges are randomly selected for each skater.
So why bother?