But two aspects of this system for measuring surface temperatures have long worried a growing array of statisticians, meteorologists and expert science bloggers. One is that the supposedly worldwide network of stations from which GHCN draws its data is flawed: up to 80 per cent of the Earth’s surface is not reliably covered at all. Furthermore, around 1990 the number of stations more than halved, from 12,000 to fewer than 6,000 – and most of those remaining are concentrated in urban areas, where studies have shown that, thanks to the “urban heat island effect”, readings can be up to 2 degrees higher than in the rural areas where thousands of stations were lost.

To fill in the huge gaps, those compiling the records have resorted to computerised “infilling”, whereby the higher temperatures recorded by the remaining stations are projected out to vast surrounding areas (Giss allows single stations to give a reading covering 1.6 million square miles). This alone contributed to the sharp temperature rise shown in the years after 1990.
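The kind of extrapolation described can be sketched in a few lines. This is a purely illustrative reconstruction, not the actual GISS code: the function names and the 1,200 km radius (a circle of that radius covers roughly the 1.6 million square miles mentioned above) are assumptions for the sake of the example.

```python
# Hypothetical sketch of "infilling": a sparse set of station readings
# is projected out over a grid of surrounding cells. The names and the
# 1,200 km radius are illustrative assumptions, not GISS's actual code.
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(h))

def infill(stations, grid_cells, radius_km=1200.0):
    """For each grid cell, copy the reading of the nearest station
    within radius_km; cells with no station in range stay None."""
    filled = {}
    for cell in grid_cells:
        best, best_dist = None, radius_km
        for coords, temp in stations.items():
            d = haversine_km(cell, coords)
            if d <= best_dist:
                best, best_dist = temp, d
        filled[cell] = best
    return filled
```

With a single station at the equator reading 15 °C, every grid cell within 1,200 km inherits that one reading, while cells outside the radius are left empty – which is how one thermometer can come to stand for a vast surrounding area.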
But still more worrying has been the evidence that even these data have then been subjected to continual “adjustments”, invariably in one direction: earlier temperatures are adjusted downwards and more recent temperatures upwards, giving the impression that they have risen much more sharply than the original data showed.
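The arithmetic of such a one-directional adjustment is easy to demonstrate. The following is a schematic toy example, not any agency’s actual homogenisation code; the function name and the tilt parameter are invented for illustration. It shows how cooling the early years and warming the later ones turns a perfectly flat series into an apparent warming trend.

```python
# Schematic toy example (not any agency's actual code): a linear
# "adjustment" centred on the series midpoint lowers earlier values
# and raises later ones, steepening the apparent trend.
def adjust(series, tilt_per_decade=0.1):
    """Tilt a {year: temperature} series about its midpoint:
    years before the midpoint go down, years after it go up."""
    years = sorted(series)
    mid = (years[0] + years[-1]) / 2.0
    return {y: t + (y - mid) / 10.0 * tilt_per_decade
            for y, t in series.items()}
```

Applied to a flat record – say 14.0 °C in 1900, 1950 and 2000 – a tilt of 0.1 degrees per decade yields 13.5, 14.0 and 14.5: a full degree of “warming” per century where the raw numbers showed none.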