[From Stanford Social Innovation Review blog, 24 February 2012]
Timothy Ogden’s recent blog post, “Freeing the Knowledge,” persuasively argues that social sector organizations may be shortchanged (my term) when they engage academic researchers under academic rules. The prevailing model is to fund academics to build proprietary datasets to answer questions that interest them based on where the research can get published, with little concern for how accessible it is to the broader public. This often results in missed opportunities in terms of the timeliness, breadth, quantity, actionability, and publicity of insights generated.
I like his proposals, but an interesting question Timothy’s post raised in my mind is: when does it make sense for NGOs to outsource their research to academics? To my mind, this hinges on what the research is for.
A clear area where academic-grade research is warranted is impact assessment. Because these studies are ultimately commissioned to drive advocacy, inform policy discussions, and influence funding decisions beyond the specific project being analyzed, there is little point to an impact study that doesn’t stand up to scrutiny. Let’s face it: academic independence brings credibility, not only rigor. In today’s development scene, having data, especially if it is certified by well-known academics, is a matter of branding as much as anything else. As long as that remains the case, the social sector will continue to struggle to fix the incentive problems that Timothy outlines; much of the problem is self-inflicted.
Also, I could easily be convinced that knowing all the intricacies of how to avoid sample pollution need not be a core skill that every NGO must have. Given all this, the outsourced research model seems appropriate, unless you are very large and can afford to have a dedicated team doing just that.
But when it comes to questions of marketing or service design, where what you are measuring is not impact but uptake, it is less clear to me whether the research needs to meet the same high academic standards. (This is not to say that impact becomes irrelevant, but presumably once you have established a credible linkage between usage and impact, then you don’t need to bring the analysis back to impact every time you want to improve your intervention.)
Say that you want to study whether the packaging of a product makes clients consume more of your service. I’d be inclined to apply the 80/20 rule there; it’s a matter of value for money. Instead of doing one expensive study with the requisite representative samples, I might see merit in doing five cheaper, quick-and-dirty studies for the same price, testing many more variations around the concept. There are many concrete marketing and design features that you could vary when you are designing a service, and you are not going to run a full-price experiment on each.
On this sort of work, academics may have the wrong incentives guiding topics (new, new, and new), and the problem is too multi-dimensional for academic methods. They may not have the appropriate context knowledge to do the work sufficiently nimbly to be useful to the organization. A full cost-benefit evaluation would require them to have intimate knowledge of the delivery channels and cost structure of the provider, which is seldom taken into account. There is little practical value in conceiving new services if they cannot be delivered at scale on commercially viable terms. The supply side needs to be brought back into the picture in product evaluations.
Also, in the context of marketing and product development, being able to understand customers’ motivations and reactions to variations in the offering should be part of the NGO’s core skill set. It has always been part of the marketer’s role. That expertise ought to be brought in-house.

Have funders, and their NGO implementation vehicles, become overly reliant on academic-grade research to justify their decisions? Of course, it’s hard to have two different grades of research (impact versus uptake/usage analyses) without expecting some turf battles. It goes back to the respect point that a commenter on Timothy’s blog emphasized: we need to appreciate that we all have different purposes, and one-size-fits-all criteria of research quality are not helpful.