I admit that today’s post has only a tenuous connection to the usual Transitions fare, although I hope it has something to say about topics we cover frequently like corruption and freedom of the press. The news peg is the release of this year’s Corruption Perceptions Index compiled by the corruption watchdog Transparency International.
Item: “Georgia Slightly Improves Standing in TI Corruption Index.” This article on a Georgian news site declared, “Georgia’s score in Transparency International’s annual Corruption Perception [sic] Index this year was slightly better with the country taking 64th place out of 182 countries surveyed. The index, released on December 1, gave Georgia score of 4.1, an improvement from last year’s 3.8; Georgia scored the same point of 4.1 in the similar index in 2009.”
Georgia came in “ahead of” most other former Soviet states on the index as well as several EU countries, the article says.
The language used – “Georgia’s score … was slightly better”; “an improvement from last year’s” – strongly suggests that the TI rankings measure corruption in some statistically valid manner. Is this solid reasoning?
Let’s look at another story based on the TI index, from a normally well-edited site about the EU. This one mines the TI index and comes to a headlineable conclusion: “Crisis-hit EU countries becoming more corrupt.” Oh, really?
The Civil.ge story made some effort to put the TI rankings into the context of political and social life in Georgia, talking about progress against corruption in certain areas and problems that still need to be addressed. And it dutifully noted that the TI index, as the name says, ranks countries according to perceptions of how corrupt their public sectors are.
However, it committed the same fallacies that so often come with these stories: blithely assuming that Georgia’s rise to 64th rank can be correlated with real life, and that small rises and falls in Georgia’s scores mean something. It also conflated the terms score and rank. Does the writer have any notion of how these scores are obtained, or whether there is any statistical, or other, significance in these small yearly changes?
The second article is worse; it contains no useful information, no checkable facts, only factoids. It merely boils down the index to focus on the EU, as though it reflected verifiable data about the real world. The only piece of context, the only statement in the entire piece that can be related to the real world, is taken from a TI press release:
“Eurozone countries suffering debt crises, partly because of public authorities’ failure to tackle the bribery and tax evasion that are key drivers of debt crisis, are among the lowest-scoring EU countries.”
I have no bone to pick with TI’s reasoning in the first part of the sentence – perceptions of corruption have certainly risen on the back of reports of the staggering level of tax evasion in Greece, for example, as TI pointed out in an e-mail when I asked for a clarification. TI is careful not to make one-to-one correspondences between its index figures and verifiable data, such as the level of debt. But the article misrepresents the index by implying that index rankings represent something measurable in the real world and that small changes year on year on the index have some sort of significance, statistical or otherwise.
This article tells us precisely as much about corruption or the economic crisis as about Brad Pitt’s nose. It is nothing more than copy-paste journalism, which can be harmless if the topic is stupid, like Top 10 Ugliest Dogs. If the topic is important, it’s the media that looks stupid.
Well, what’s wrong with regurgitating the TI results? Don’t they generally jibe with people’s gut feelings about corruption? Yes, but that’s the problem here. As TI stresses, its index attempts to systematize perceptions of corruption, but because perceptions are difficult to get a statistical handle on, it’s dangerous to assume that they can be compared over time.
In fact, in the fine print to its annual reports TI warns that its data should not be used to compare a country’s scores over time. This is because a country’s ranking can change even if the (perceived) level of corruption there is steady, as other countries’ scores rise or fall around it. Yet comparing countries’ scores over time is virtually the only thing these two articles and many others do.
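The mechanics are easy to see with a toy example. The snippet below uses entirely made-up scores for hypothetical countries A, B, and C – not real CPI data – to show how a country’s rank can drop even though its own score has not moved at all:

```python
# Hypothetical index scores for three countries over two years.
# Country B's score is identical in both years; only C's changes.
scores_2010 = {"A": 6.0, "B": 4.1, "C": 3.9}
scores_2011 = {"A": 6.0, "B": 4.1, "C": 4.3}

def rank(scores, country):
    # Rank 1 = highest score (least perceived corruption).
    # Ties are broken alphabetically just to keep the ordering deterministic.
    ordered = sorted(scores, key=lambda c: (-scores[c], c))
    return ordered.index(country) + 1

print(rank(scores_2010, "B"))  # 2 -- B ranks second in 2010
print(rank(scores_2011, "B"))  # 3 -- B falls to third, though its score never changed
```

A headline writer looking only at the rankings would report that B “got worse,” when nothing about B (or perceptions of B) changed at all – its neighbors simply moved around it.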
I don’t say the TI index is not useful, and I’m picking on it only because it happened to be released this week. I think TI and other think tanks like Freedom House and Reporters Without Borders provide a useful service with their country rankings. But that’s all they are, helpful guides rather than scientific surveys, as the think tanks themselves admit, although usually in the more obscure recesses of their sites. Freedom House says its ratings for democracy, civil society etc. in post-communist countries “should not be taken as absolute indicators of the situation in a given country, [they] are valuable for making general assessments of how democratic or authoritarian a country is.”
TI says its index “is not intended to measure a country’s progress over time. It is a snapshot of perceptions of corruption.”
All I’m saying to my fellow journalists is, go ahead and write fluff, but stick to fluffy subjects. If the subject is important, for the sake of your self-respect, put some thought into your work. That’s our job, isn’t it?