Nearly 25 years on, we are now able to see how the forecasters did. The good news is that their prediction was right – the number of victims was indeed between 100 and 500,000. But this is hardly surprising, given how far apart the goalposts were.
The actual number believed to have died from vCJD is about 250, towards the very bottom end of the forecasts, and about 2,000 times smaller than the upper bound of the prediction.
But why was the predicted range so massive? The reason is that, when the disease was first identified, scientists could make a reasonable guess as to how many people might have eaten contaminated burgers, but they had no idea what proportion of the public was vulnerable to the damaged proteins (known as prions). Nor did they know how long the incubation period was. The worst-case scenario was that the disease would ultimately affect everyone exposed to it – and that we hadn’t seen the full effect because it might be 10 years before the first symptoms appeared. The reality turned out to be that most people were resistant, even if they were carrying the damaged prion.
It’s an interesting case study in how statistical forecasts are only as good as their weakest input. You might know certain details precisely (such as the number of cows diagnosed with BSE), but if the rate of infection could be anywhere between 0.01% and 100%, your forecast will be uncertain by that same factor of 10,000.
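To make the point concrete, here is a minimal sketch of that arithmetic. The exposure figure of five million is purely illustrative, not a number taken from the original forecasts; the point is only that one vague input swamps every precise one:

```python
# A precise exposure estimate multiplied by a hugely uncertain infection rate
# still produces a hugely uncertain forecast.
exposed = 5_000_000                  # hypothetical number of people exposed (illustrative only)

low_rate, high_rate = 0.0001, 1.0    # infection rate anywhere from 0.01% to 100%

low_forecast = exposed * low_rate    # 500 victims
high_forecast = exposed * high_rate  # 5,000,000 victims

print(f"Forecast range: {low_forecast:,.0f} to {high_forecast:,.0f} victims")
print(f"Spread: a factor of {high_forecast / low_forecast:,.0f}")  # 10,000
```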
At least nobody (that I’m aware of) attempted to predict a number of victims to more than one significant figure. Even a prediction of ‘370,000’ would have implied a degree of accuracy that was wholly unjustified by the data.
DOES THIS NUMBER MAKE SENSE?
One of the most important skills that back-of-envelope maths can give you is the ability to answer the question: ‘Does this number make sense?’ In this case, the back of the envelope and the calculator can operate in harmony: the calculator does the donkey work in producing a numerical answer, and the back of the envelope is used to check that the number makes logical sense, and wasn’t the result of, say, a slipped finger pressing the wrong button.
We are inundated with numbers all the time; in particular, financial calculations, offers, and statistics that are being used to influence our opinions or decisions. The assumption is that we will take these figures at face value, and to a large extent we have to. A politician arguing the case for closing a hospital isn’t going to pause while a journalist works through the numbers, though I would be pleased if more journalists were prepared to do this.
Often it is only after the event that the spurious nature of a statistic emerges.
In 2010, the Conservative Party were in opposition, and wanted to highlight social inequalities that had been created by the policies of the Labour government then in power. In a report called ‘Labour’s Two Nations’, they claimed that in Britain’s most deprived areas ‘54% of girls are likely to fall pregnant before the age of 18’. Perhaps this figure was allowed to slip through because the Conservative policy makers wanted it to be true: if half of the girls on these housing estates really were getting pregnant before leaving school, it painted what they felt was a shocking picture of social breakdown in inner-city Britain.
The truth turned out to be far less dramatic. Somebody had stuck the decimal point in the wrong place. Elsewhere in the report, the correct statistic was quoted, that 54.32 out of every 1,000 women aged 15 to 17 in the 10 most deprived areas had fallen pregnant. Fifty-four out of 1,000 is 5.4%, not 54%. Perhaps it was the spurious precision of the ‘54.32’ figure that had confused the report writers.
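A one-line sanity check would have caught the slip. The rate below is the one quoted in the report; the code itself is just an illustration of the conversion:

```python
# A rate quoted 'per 1,000' becomes a percentage by dividing by 10.
rate_per_1000 = 54.32
percentage = rate_per_1000 / 1000 * 100
print(f"{rate_per_1000} per 1,000 is {percentage:.1f}%")  # 5.4%, not 54%
```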
Other questionable numbers require a little more thought. The National Survey of Sexual Attitudes has been published every 10 years since 1990. It gives an overview of sexual behaviour across Britain.
One statistic that often draws attention when the report is published is the number of sexual partners that the average man and woman has had in their lifetime.
The figures in the first three reports were as follows:
Average (mean) number of opposite-sex partners in lifetime (ages 16–44)

              Men     Women
1990–91       8.6     3.7
1999–2001     12.6    6.5
2010–2012     11.7    7.7
The figures appear quite revealing, with a surge in the number of partners during the 1990s, while the early 2000s saw a slight decline for men and an increase for women.
But there is something odd about these numbers. When sexual activity happens between two opposite-sex people, the overall ‘tally’ for all men and women increases by one. Some people will be far more promiscuous than others, but across the whole population, it is an incontrovertible fact of life that the total number of male partners for women must equal the total number of female partners for men. In other words, the two averages ought to be the same.
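If that claim seems too neat, a toy simulation makes it obvious. The population sizes and the number of partnerships below are invented purely for illustration; whatever values you choose, every partnership adds exactly one to each side’s tally, so the totals (and, for equal-sized groups, the averages) are forced to match:

```python
import random

random.seed(1)
n_men, n_women = 1_000, 1_000        # equal-sized, made-up populations
men = [0] * n_men                    # each man's partner tally
women = [0] * n_women                # each woman's partner tally

for _ in range(5_000):               # 5,000 random opposite-sex partnerships
    men[random.randrange(n_men)] += 1
    women[random.randrange(n_women)] += 1

print(sum(men) == sum(women))                   # True, by construction
print(sum(men) / n_men, sum(women) / n_women)   # identical averages: 5.0 and 5.0
```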
There are ways you can attempt to explain the difference. For example, perhaps the survey is not truly representative – maybe there is a large group of men who have sex with a small group of women who are not covered by the survey.
However, there is a more likely explanation, which is that somebody is lying. The researchers are relying on individuals’ honesty – and memory – to get these statistics, with no way of checking if the numbers are right.
What appears to be happening is that either men are exaggerating, or women are understating, their experience. Possibly both. Or it might just be that the experience was more memorable for the men than for the women. But whatever the explanation, we have some authentic-looking numbers here that under scrutiny don’t add up.
THE CASE FOR BACK-OF-ENVELOPE THINKING
I hope this opening section has demonstrated why, in many situations, quoting a number to more than one or two significant figures is misleading, and can even lull us into a false sense of certainty. Why? Because a number quoted to that precision implies that it is accurate; in other words, that the ‘true’ answer will be very close to that. Calculators and spreadsheets have taken much of the pain out of calculation, but they have also created the illusion that any numerical problem has an answer that can be quoted to several decimal places.
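Trimming a number to a sensible number of significant figures is itself a tiny calculation. The helper below is just an illustration (the values fed to it are arbitrary), but it shows how little of a headline figure usually needs to survive:

```python
from math import floor, log10

def round_sig(x, sig=2):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0
    return round(x, sig - 1 - floor(log10(abs(x))))

print(round_sig(8.6392))      # 8.6
print(round_sig(0.05432))     # 0.054
print(round_sig(367_894))     # 370000
```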
There are, of course, situations where it is important to know a number to more than three significant figures. Here are a few of them:
In financial accounts and reports. If a company has made a profit of £2,407,884, there will be some people for whom that £884 at the end is important.
When trying to detect small changes. Astronomers looking to see if a remote object in the sky has shifted in orbit might find useful information in the tenth significant figure, or even more.
Similarly, at the high end of physics, there are quantities linked to the atom that are known to at least 10 significant figures.
For precision measurements such as those involved in GPS, which pinpoints the location of your car or your destination, where the fifth significant figure might mean the difference between pulling up outside your friend’s house and driving into a pond.
But take a look at the numbers quoted in the news – they might be in a government announcement, a sports report or a business forecast – and you’ll find remarkably few numbers where there is any value in knowing them to four or more significant figures.
And if we’re mainly dealing with numbers with so few significant figures, the calculations we need to make to find those numbers are going to be simpler. So simple, indeed, that we ought to be able to do most of them on the back of an envelope or even, with practice, in our heads.