Two items about statistics and organisations struck me recently:
- The Law Society periodically asks legal firms to complete a questionnaire on diversity: a number of people were suggesting online that this was not relevant to them because they were in small firms whose submissions would be of limited statistical relevance.
- The Times revelations about IAAF doping tests: The Times has revealed data obtained via a whistleblower that out of 5,000 athletes tested for doping, more than 800 had produced results “highly suggestive of doping”. The IAAF has argued, inter alia, that abnormalities in these tests are not of themselves proof of doping.
In both cases it seems that many people are in danger of missing the real point. Make the same mistakes as an organisation and you could be missing an opportunity or storing up a problem too.
Take the first example, and let’s simplify it for ease of discussion. Yes, if 1 in 4 of the population is non-white and you have a firm of four people with no non-white staff, that does not de facto mean you are racist. Even if the population as a whole and the applicant pool are evenly distributed in terms of qualification and white / non-white status, and your selection is perfectly non-discriminatory, there is a 32% chance that you end up with a firm of four white staff. Knowing there is a firm of four white staff alone tells me little about racial bias. But when I get ten submissions back from the only ten firms in the same town, all with four white staff, it begins to look rather more unlikely – in fact I am down to about a 1 in 100,000 chance that this would happen without bias – so it is beginning to look like there is a systematic problem (although I still cannot tell from the data which firm it lies with). So collecting multiple sets of data – each of which separately may not be statistically significant – may well enhance our knowledge of our situation.
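The arithmetic behind those two figures is worth making explicit. A minimal sketch, assuming the illustrative numbers above (a 1-in-4 non-white population share, four staff per firm, ten firms):

```python
# Probability that one firm of four, hiring without bias from a population
# that is 25% non-white, ends up entirely white:
p_one_firm = 0.75 ** 4

# Probability that all ten independent firms in the town do so:
p_ten_firms = p_one_firm ** 10

print(f"One firm all white:  {p_one_firm:.1%}")    # about 32%
print(f"All ten firms white: {p_ten_firms:.1e}")   # roughly 1 in 100,000
```

The single-firm figure is unremarkable; it is only when the ten results are combined that the odds collapse to around one in a hundred thousand, which is the point the article is making about aggregating individually weak data.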
By all means don’t draw the wrong conclusions from data – and recognise its limitations – but hats off to the Law Society, and do carry on collecting it; just make sure it is used in the right way.
Now back to the IAAF. I don’t know what the chances are of a blood sample appearing abnormal for a perfectly legitimate reason, such as illness; I am not an expert on blood testing. Given that there can be doubt, the IAAF is almost certainly right to protect the identities of individual athletes. However, in being so secretive about the results on an aggregated basis it appears to have done athletics a huge disservice.
Suppose there is a 1 in 50 chance that illness causes an abnormal result – that does not explain some 800 out of 5,000 athletes producing abnormal results, nor does it explain the huge variations among athletes from different countries.
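The gap between those two rates can be put in rough numbers. A minimal sketch, assuming the hypothetical 1-in-50 legitimate-abnormality rate above and treating the tests as independent:

```python
import math

n_tests = 5000          # athletes tested (from the Times data)
p_illness = 1 / 50      # hypothetical rate of legitimate abnormal results
observed = 800          # abnormal results actually reported

# If illness alone were at work, a binomial model gives:
expected = n_tests * p_illness                          # 100
std_dev = math.sqrt(n_tests * p_illness * (1 - p_illness))  # about 10

z_score = (observed - expected) / std_dev
print(f"Expected {expected:.0f}, observed {observed}, "
      f"z ≈ {z_score:.0f} standard deviations")
```

On these assumed numbers the observed count sits some seventy standard deviations above what illness alone would produce – far beyond anything chance variation could account for, which is why the aggregate figures demand a better explanation than the IAAF has offered.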
The IAAF has come across as defensive, slow to react and at times pretty incompetent. Yes, it is right to protect the identities of individual athletes if results can genuinely be anomalous, but unless they can be anomalous at a rate approaching 20% it owes it to athletics and spectators alike to be candid about the extent of the problem – only then will athletics be able to tackle its problems and clean up its reputation. If there is one thing we should have learnt from the past ten years in banking, sport, politics and care, it is that few problems get better by being buried internally.
And what can we take from this back to our organisations? Well, just because a set of numbers cannot tell me everything does not necessarily mean it tells me nothing. Just be careful what you collect data for and how you present it. And the second lesson: burying bad news deep is rarely the solution. Doing so prevents you from listening, learning, evolving and developing – and, eventually, it will out and come tumbling down around you.