Impact Performance - moving from a fixed to growth mindset to create impact

Stanford Social Innovation Review (SSIR) explores the need for impact investing to move from a fixed to a growth mindset: from treating impact as something fixed (proven once) to a continuous "impact performance" mindset that treats data as dynamic and acknowledges that each intervention has a different impact in each place and each moment in time.


Below are highlights sourced from the article.

  • In a world of increasing transparency, we expect that what’s on the label will reflect what’s inside the package. The same holds for impact investments: their credibility rests on whether what’s on the label is consistent with the product itself.

  • Accompanying the growth of impact investing, there’s been a marked increase in activity around how to measure and manage impact. Done well, this will enable investors to direct capital to where it makes the most change, and it will empower investors and companies to manage impact performance with the same rigour as financial and operational performance.

  • However, Jed Emerson famously quipped: “We fund endless studies to guide us toward a vague truth. Still the answer remains: We simply do not know. Instead, we have created what is all too often a collective dance of deceit whereby funders are told what they like to hear, and grantees are freed of true accountability for their efforts.”

  • In recent years the impact investing sector has made progress developing better, more accessible, and more transparent ways of talking about impact, from the IMP’s five dimensions of impact to the IRIS+ indicators and the IFC Operating Principles.

  • Yet despite this progress, we have yet to set a clear minimum expectation around what constitutes “good enough” impact data to judge performance. Is it an articulation of intent? Assurance that certain practices are being followed? Or do we need hard data about material impacts for people and planet? Lacking this shared minimum expectation, current impact performance reports typically rely on basic operational data that are represented as impact created. As a result, most impact “reporting” comes up short: It serves as a self-affirming indicator of good efforts rather than an objective view of performance.

  • As impact reporting has become more mainstream, there is increased recognition within impact investing of the need to strive for top-notch “impact performance.” Impact investing has the potential to leap forward if we recognise that we need more than “evidence” of impact - something fixed, proven once, that either exists or doesn’t. We also need continuous “impact performance” data - data that is dynamic, fluid, and iterated upon, so that we can grow our impact. We tend to slip up when we confuse the two.

  • We see the limitations of the “fixed” impact mindset every time we think we’ve discovered the “truth” about the impact of any given intervention (a microfinance loan, an improved school curriculum). It is, of course, helpful to uncover studies that show that an intervention works. But relying on studies alone and applying their findings to (somewhat) similar interventions ignores something essential: differences in impact performance are the result of the specific actions of a company, the specific characteristics of a product or service, and the specific ways the company interacts with customers in a local context. These differences, by definition, are invisible if one relies on a static definition of “this product or service (always) creates this much impact for a customer.”

  • In contrast, an impact performance mindset is grounded in the knowledge that each intervention has a different impact in each place and each moment in time. We covered how place-based investing can match the investment to context of place, people and culture in the Impact Investing Network's article and webinar.

  • Social interventions of all types exist in much too dynamic a context to blindly extrapolate from a single anecdote or study as broadly as we do. We would never assume that each supermarket, airline, or online marketplace has the same operational and financial ratios per unit sold. Yet we act as if the well-studied impact of one intervention in one place and one moment in time can represent all interventions of that type globally.

To qualify as an impact performance report, a report would have to, at a minimum:

  1. Be anchored in the impact priorities of the affected stakeholders (customers, employees, planet).

  2. Have at least some impact performance data gathered directly from these stakeholders.

  3. Have data that allows for comparison of the relative impact performance of different enterprises or impact funds engaged in the same or similar activities.

  • The spottiness of impact data is our industry’s most open secret. The question we must therefore answer is how best to solve it. Our objective should be to go directly to the stakeholders for whom these enterprises are meant to create better outcomes and, quite simply, ask them whether or not those outcomes are occurring. Even if this data is not perfect, even if self-reported data is often subjective, surely it is better to ask and get the data than it is not to ask at all. The need for impact investors to better engage with communities and stakeholders was highlighted in BlueMark's recent research.

  • The good news is that we are living in a period of massive acceleration and innovation in the collection of beneficiary-level outcomes data by some of the world’s leading impact investors and social enterprises. Flourish Ventures, Omidyar Network India, Global Partnerships, REGMIFA, BRAC, the Rockefeller Foundation, Ceniarth, and Solar Sister have all recently shared their experience with collecting client-level data. In a similar vein, last year we at 60 Decibels published a Why Off-Grid Energy Matters report that includes a true ranking of impact performance, aligned with the Impact Management Project (IMP), for 59 off-grid energy companies, all based on what their customers said about the impact they experience.

  • Too much impact capital, and too much effort, are being deployed while we keep settling for vague estimations of impact performance. We have a duty to the people being served by impact investments to push past “likely outcomes” and triangulated output data and work towards impact performance. Indeed, if we agree that our work is about these customers, we must collectively call foul every time their voices are absent in a report about their lives and their well-being. Without their direct feedback, these reports quietly reinforce the faulty and pervasive logic that it is possible to assert that one is having impact in client-facing investments without ever hearing directly from clients.

To collectively improve, we make the following three recommendations:

  1. Be wary of “impact performance” reports that don’t allow you to assess performance. “Performance” means that some organizations do better, and some do worse. There are leaders and laggards. If your performance rating methodology doesn’t allow you to distinguish in this way, it is, quite simply, not a performance rating.

  2. Be transparent about where your data came from. All impact reports should clearly signal where they do and do not have outcomes data, as well as explain where outcomes data are based on direct data-gathering as opposed to “impact math.” The field takes a step back each time we blur the line between operational metrics and outcomes data collected from stakeholders. If we, as impact investors, truly care about customers and believe that the impact on their lives matters as much as returns to shareholders, then our aspiration must be to hear directly from these customers when assessing impact.

  3. The impact measurement and management conversation must prioritize outcomes data. Until then, packaging and repackaging outputs with glossy reports featuring big numbers will continue to be both burdensome and futile. At best, these output-based reports serve to justify investment decisions and attract more capital. At worst, they perpetuate the dance of deceit at the expense of learning about what works to create better outcomes for customers.

Read the full article on SSIR.
