Third Step: Manage the Business for Major ROI—Part V—Freedom from Hunger Studies

Continuing the quest for evidence that income-generating activities (IGAs) yield enough return on investment to increase household income.

Drawing entirely this time from the paper written a decade ago by my Freedom from Hunger colleagues Barbara MkNelly and Mona McCord, this post introduces four studies conducted by Freedom from Hunger in the 1990s: in coastal Ghana, the altiplano of Bolivia, and the Sahelian zone of Burkina Faso and Mali. I have to devote this post to setting the scene before presenting results relevant to the Third Step of the classic microfinance theory of change.

The Ghana and Bolivia studies used the same longitudinal, experimental research design: random assignment of communities to receive or not receive Freedom from Hunger’s Credit with Education program (village banking with education in health/nutrition and basic business management). I have referred elsewhere to these studies as randomized controlled trials (RCTs), and indeed they randomized program assignment after a baseline survey (1993 in Ghana, 1994-95 in Bolivia), followed three years later by a follow-up survey (1996 in Ghana, 1997 in Bolivia).

However, the analysis of results did not follow the now-standard “intent-to-treat” approach; there were too few communities in the treatment (11 in Ghana, 16 in Bolivia) and control (6 in Ghana, 12 in Bolivia) groups to provide enough statistical power to detect statistically significant differences between treatment and control. Instead, the most useful analysis was a “difference of differences” comparison between households in the program communities that ultimately joined the program and those that never joined. That is, we measured the change in survey results from baseline to follow-up separately for participants and for non-participants in program communities and then compared those two changes. We also compared these program-community changes to the baseline-to-follow-up change in the control communities.
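To make that comparison concrete, here is a minimal sketch in Python of the arithmetic, assuming a hypothetical household survey table; the group labels, column names and income figures are purely illustrative and are not drawn from the actual Ghana or Bolivia data.

```python
import pandas as pd

# Hypothetical household survey data: one row per household per survey round.
# All values are illustrative. group = "participant", "nonparticipant" (both in
# program communities) or "control"; round = "baseline" or "followup".
df = pd.DataFrame({
    "group":  ["participant"]*4 + ["nonparticipant"]*4 + ["control"]*4,
    "round":  ["baseline", "baseline", "followup", "followup"]*3,
    "income": [100, 110, 150, 160,   # participants
               100, 105, 115, 120,   # non-participants in program communities
               100, 108, 112, 118],  # control communities
})

# Mean outcome by group and survey round.
means = df.groupby(["group", "round"])["income"].mean().unstack("round")

# Change from baseline to follow-up within each group.
change = means["followup"] - means["baseline"]

# "Difference of differences": participants' change minus non-participants' change,
# with the control-community change shown alongside as a benchmark.
dod = change["participant"] - change["nonparticipant"]
print(change)
print("Participant vs. non-participant difference of differences:", dod)
print("Control-community change:", change["control"])
```

In a regression framework, the same estimate would come from the coefficient on a group-by-round interaction term; the point of the comparison is to net out changes that affected everyone in a community between the two survey rounds.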

The studies in Sahelian Burkina Faso and Mali (both in the mid-1990s) also examined apparent impacts of Freedom from Hunger’s Credit with Education program design, but with very different research designs, never intended to demonstrate causation conclusively. The Burkina Faso study was qualitative, involving in-depth interviews of participants and non-participants by Santa Clara University economist Michael Kevane in three distinctly different rural communities relatively close to Ouagadougou. The Mali study was one of the early field tests of the USAID AIMS “practitioner-led impact assessment” tools:

    • An impact survey to collect information to test AIMS project hypotheses.
    • A client exit survey to determine reasons clients left the program and whether motivating factors were related to the program.
    • In-depth individual interviews about loan use over time.
    • In-depth individual interviews about empowerment.
    • Client-satisfaction group discussions about the program and clients’ suggestions for improvement.

These five research tools were applied to clients at different stages of program participation: incoming, one year and two years.

Clearly, this was a major commitment of resources to evaluate apparent impacts of Freedom from Hunger’s signature program design. I recently wrote a six-part series of posts for the Microfinance Focus – Latest News Blog (June-July 2012) to explain in depth why we made this commitment and what issues arose in doing so. It is an interesting look at the challenges of being “evidence-based.” Here are the titles/topics of the posts with hyperlinks:

Post 1—The burden of innovation: proof

Post 2—The challenge of generating valid evidence

Post 3—The need for external validity

Post 4—The relationship between RCTs, theory and quality control

Post 5—Getting to the point: a conversation with David Roodman

Post 6—Good evidence, confidence and the ‘probabilistic mind’

The awkward “back story” not covered in these six posts is that, many years after our formal reports on the Ghana and Bolivia RCTs had attributed some exciting baseline-to-follow-up differences to program participation, a re-analysis of the data by Jonathan Morduch and Dean Karlan showed us the problem of insufficient statistical power in our relatively small samples of communities. Despite the noble effort, the analysis could document in detail only the differences between the baseline and follow-up surveys; it could not attribute those differences to the program intervention with any acceptable level of statistical confidence. Such are the perils of trying to do pioneering impact research on an NGO budget.
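To see why the small number of communities was the binding constraint, here is a minimal sketch of a cluster-level power calculation using a normal approximation; the households-per-community figure and the intra-cluster correlation (ICC) are assumptions chosen for illustration, not values from the actual studies.

```python
from scipy.stats import norm

def min_detectable_effect(n_treat_clusters, n_control_clusters, households_per_cluster,
                          icc, alpha=0.05, power=0.80):
    """Minimum detectable effect (in standard-deviation units) for a
    cluster-randomized comparison of means, via a normal approximation."""
    # Design effect: how much clustering inflates the variance of group means.
    deff = 1 + (households_per_cluster - 1) * icc
    # Effective sample sizes after accounting for clustering.
    n_t = n_treat_clusters * households_per_cluster / deff
    n_c = n_control_clusters * households_per_cluster / deff
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return (z_alpha + z_power) * (1 / n_t + 1 / n_c) ** 0.5

# Illustrative numbers only: 11 treatment vs. 6 control communities (as in Ghana),
# with an assumed 20 surveyed households per community and an assumed ICC of 0.15.
print(min_detectable_effect(11, 6, 20, icc=0.15))   # roughly 0.6 SD
print(min_detectable_effect(60, 60, 20, icc=0.15))  # many more clusters: about 0.2 SD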

I will summarize results of these studies and how we can interpret them in the next posts.