Blog by Nate Archives: The Corrupting Influence of Corruption Research (June 10, 2012)

[My blog is back.  I’m posting pretty much all of my old content.  But I swear this post is actually interesting.  Relative to my other content.]

The Corrupting Influence of Corruption Research

Measuring Business Corruption

A new Transparency International (TI) corruption report has me thinking about the difficulties of measuring corruption.  This post will provide links to a bunch of my published and unpublished papers.  Since my mom is probably the only reader of this blog, I don’t feel too bad about this shameless self-promotion.  Hi mom.

For years TI has provided one of the standard measures of corruption, based on expert assessments of country-level corruption.  This shouldn’t be news to any researcher.

The latest TI report examines a number of institutions in 25 European countries.  Most of these findings map onto my intuitions about the level of corruption.  Scandinavian countries are the least corrupt, and Southern European countries are some of the most corrupt.  The most disturbing part of the report is the backsliding in anti-corruption efforts in Eastern and Central Europe.

Measuring and analyzing corruption is a lot more complicated than you might think.  The two most common approaches are to ask experts to assess the level of corruption in a country or to ask individuals about their personal experiences with corruption.

Dan Treisman has an excellent Annual Review of Political Science piece reviewing the corruption literature and comparing the findings from studies using expert assessments versus individual experiences.  Treisman finds a pretty low correlation between the two, and only the perception-based measures map onto our theories of the causes of corruption.  For example, expert assessments of corruption find democracies to be less corrupt, but reports of individual experiences with corruption don’t systematically vary with democracy.  (Ben Olken has some great work in this area as well.  See here.)

The provocative conjecture from Treisman is that perception-based measures may suffer from serious bias.  Experts believe that countries with democratic institutions, for example, are less corrupt.  Thus, when asked how corrupt country X is, they use the level of democracy to evaluate the level of corruption.  The point is that democracy may not cause lower levels of corruption; democracy causes people to think a country is less corrupt.  Then we run regressions using this data and find that democracies are less corrupt.

I’ve often thought about this problem in my own research.  When asked about the level of corruption in government procurement, what information are experts drawing on to make these assessments?  Did they actually see the corrupt transactions?  Are they basing it on outcomes?  On media stories about corruption cases?  On research reports?

Due to these problems, many corruption researchers have begun to focus on asking individuals about their personal experiences with corruption.  This is especially prevalent in firm-level surveys of the business environment across countries.  Without going into too many details, there are a few problems with this approach.

In a paper with Quan Li and Aminur Rahman, we observe some weird patterns in cross-national business corruption data.  Why do China and Kenya show up as having low levels of business corruption?  Why is Germany so high?  Our main finding is that there is systematic “non-response” bias.  Lots of firms don’t answer these questions, and this number increases in countries with more authoritarian institutions and less press freedom.  This non-response bias helps explain these weird patterns.

This probably isn’t a surprise to many readers.  But the key point is that self-reporting of corruption can also be problematic: respondents can fail to answer questions, or they can give false responses.

What to do?

One response is to try to shield respondents from any potential repercussions of answering honestly.  One practical way to achieve this is to give each respondent a coin and have them flip it without showing the enumerator.  Then the enumerator asks a series of questions, often around seven or so.

First question: have you ever paid a bribe to a police officer?  Flip the coin.  If it lands heads, say yes no matter what.  If it lands tails, answer honestly.  Second question: have you ever watched Dancing with the Stars?  Flip and answer.  Third question: have you ever cheated on your taxes?  Flip.  Have you ever cheered for the zombies to win in The Walking Dead?  Flip.

The logic behind this is: a) respondents have plausible deniability, since they can always claim they answered yes because they flipped heads; b) the laws of probability tell us that, on average, each question should have at least 50% yes answers; and c) it is very unlikely that a respondent would answer “no” seven times in a row (that requires flipping seven tails in a row).  Individual respondents are shielded from being held personally accountable for their answers, but researchers can use this data to examine systematic patterns of corruption and look at the distribution of answers to see if there are problematic responses.
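To make the estimator concrete, here’s a minimal Python simulation of the coin flip technique.  The sample size and true bribery rate are invented for illustration; this is a sketch of the logic, not code from any actual study.

```python
import random

random.seed(42)

N = 1000          # hypothetical number of respondents
TRUE_RATE = 0.30  # assumed true share who actually paid a bribe

def coin_flip_answer(truly_guilty: bool) -> bool:
    """Heads forces a 'yes'; tails means answer honestly."""
    heads = random.random() < 0.5
    return True if heads else truly_guilty

responses = [coin_flip_answer(random.random() < TRUE_RATE) for _ in range(N)]
p_yes = sum(responses) / N

# Under the protocol, P(yes) = 0.5 + 0.5 * pi, where pi is the true rate.
# Inverting gives the estimator pi_hat = 2 * P(yes) - 1.
pi_hat = 2 * p_yes - 1
print(f"observed yes rate:   {p_yes:.3f}")   # should land near 0.65
print(f"estimated true rate: {pi_hat:.3f}")  # should land near 0.30
```

Note that an observed yes rate below 0.5 produces a negative estimate, which is impossible if respondents actually follow the protocol.  That is exactly the red flag exploited below.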

Previous research has used this technique to identify “reticent” respondents, or firms that don’t seem to be answering honestly.  In a survey of firms in Bangladesh, Aminur Rahman and I fielded corruption and tax questions to firms.  The coin flip technique dictates that at least 50% of responses should be “yes”.  For a bunch of the questions, such as cheating on taxes, we had responses well below 50%.  This means that, even with the deniability assured by the coin flip technique, the pattern of responses indicates systematic underreporting of tax evasion.  See here for more details.
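How far below 50% does a yes rate need to fall before we can call it underreporting rather than bad luck?  A one-sided binomial test against the 50% floor gives a conservative answer, since 50% is the yes rate we’d see if the true rate were exactly zero.  The counts below are hypothetical, not our Bangladesh numbers.

```python
from math import comb

def prob_at_most_k_yes(k: int, n: int) -> float:
    """P(X <= k) for X ~ Binomial(n, 0.5): the chance of seeing k or fewer
    'yes' answers if everyone followed the coin flip protocol, even assuming
    nobody is actually 'guilty' (the most favorable case for few yeses)."""
    return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

# Hypothetical: 400 firms get the tax question, only 150 answer "yes".
n, yes = 400, 150
print(f"yes share: {yes / n:.2f}")
print(f"P(so few yeses | honest protocol): {prob_at_most_k_yes(yes, n):.1e}")
```

With numbers like these, the probability of the pattern arising from honest coin flips is vanishingly small, so the shortfall has to come from respondents answering “no” after flipping heads.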

We also find that when enumerators were asked to evaluate the “truthfulness” of the respondents, they did a terrible job.  In the end, we find consistently problematic answers both to direct questions on corruption and to the coin flip questions.  The conjecture is that firms lie about, or skip, direct questions on corruption, and answer “no” to the coin flip questions (even if they flip heads).  The coin flip technique helped us identify that there was systematic misreporting, but it didn’t directly allow us to measure corruption.

In another paper with Dimitar Gueorguiev and Edmund Malesky, we use a LIST experiment to evaluate firm-level corruption in Vietnam.  The logic of the list technique is that half of the sample (firms, in our case) gets a list of 4 questions and the other half gets 3 questions.  Our question was:

LIST QUESTION:  Please take a look at the following list of common activities that firms engage in to expedite the steps needed to receive their investment license/registration certificate. How many of the activities did you engage in when fulfilling any of the business registration activities listed previously?

1. Followed procedures for business license on website.

2. Hired a local consulting/law firm to obtain the license for you.

3. Paid an informal charge to expedite procedures.

4. Looked for a domestic partner who was already registered.

Thus half the sample gets all 4 questions, and the other half gets only questions 1, 2, and 4 (the non-corruption questions).  Respondents only give the number of activities they’ve engaged in, not the specific items.  But, by including the corruption question for the experimental group, we can compare the mean of the experimental group to that of the control group.  If the mean responses of the two groups are the same, the technique hasn’t detected any corruption.  If the means are different, the difference gives us the percentage of firms reporting that they engaged in corruption.
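Here’s a minimal sketch of that difference-in-means calculation.  The item counts are invented for illustration; the actual paper works with the real survey responses and proper inference.

```python
import statistics

# Treatment group saw all 4 items; control group saw the 3 innocuous items.
treatment_counts = [2, 3, 1, 2, 4, 2, 3, 1, 2, 3]
control_counts   = [2, 3, 1, 2, 3, 2, 2, 1, 2, 3]

mean_t = statistics.mean(treatment_counts)
mean_c = statistics.mean(control_counts)

# The sensitive item is the only difference between the lists, so the gap in
# mean counts estimates the share of firms reporting the corrupt activity.
bribe_rate = mean_t - mean_c

# Standard error for a difference in means from independent samples.
se = (statistics.variance(treatment_counts) / len(treatment_counts)
      + statistics.variance(control_counts) / len(control_counts)) ** 0.5

print(f"estimated share paying informal charges: {bribe_rate:.2f} (SE {se:.2f})")
```

No individual firm ever admits to item 3 directly, but the aggregate comparison still identifies the rate.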

By comparing the two groups, we find that almost 23% of firms paid bribes, and this rate varies across sectors, over time, and between foreign and domestic ownership.  Our key findings relate to the relationship between foreign investment and corruption and Vietnam’s entry into the World Trade Organization.

Unfortunately, this technique only works if you can directly survey the individuals or firms engaging in corruption.  If you believe there is corruption within political parties, it is difficult to design a study to directly measure this corruption.  Then we are back in the world of having to lean on the evaluations of experts.

I started this blog post by calling it shameless self-promotion.  But if you actually made it this far, you can see that there are more open questions than answers.  I guess this is both the benefit and the frustration of being really invested in a research question.  There is very little low-hanging fruit in this area, but there are lots and lots of interesting questions to be answered.

If you have a paper on this topic, shoot me off an email.  Bye mom.