One of the most consequential weaknesses in CSR impact reporting is the conflation of correlation with causation. A programme operates in a community. Conditions improve. The report claims the programme caused the improvement. This pattern is so common it has become almost invisible — but it is the single fastest route to an accreditation failure under Principle 6.

Principle 6: Attribution, Contribution & Impact Logic. The report must responsibly address causality without over-claiming impact.

The Problem with Attribution

Attribution — the claim that a specific outcome was caused by a specific intervention — requires a standard of evidence that most CSR programmes cannot meet. True attribution demands a counterfactual: what would have happened in the absence of the intervention? In medical research, this is established through randomised controlled trials. In social programmes operating in complex, open systems, establishing a credible counterfactual is extraordinarily difficult.

Consider a rural livelihoods programme that reports household income increases of 34% over three years. During the same period, the national government introduced a price support scheme for the same crops, monsoon patterns were unusually favourable, and a new road connected the area to a larger market town. Each of these factors plausibly influenced household income. Claiming the CSR programme "caused" the 34% increase ignores all of them.

This is not a theoretical concern. In our review of first-cycle accreditation submissions, over 70% of reports made attribution claims that did not adequately address alternative explanations. It is the most common Principle 6 failure.

Contribution: The More Honest Framework

Contribution analysis asks a different question: not "did this intervention cause the observed change?" but "did this intervention make a meaningful difference, and through what mechanisms?" This is both a more honest question and, critically, a more useful one for learning and programme improvement.

The distinction matters profoundly for credibility. A report that claims "our programme increased incomes by 34%" invites scrutiny of that specific claim — and usually cannot survive it. A report that says "household incomes rose by 34%, of which our market linkage activities plausibly contributed 8–12 percentage points, based on comparison with non-intervention areas and adjustment for concurrent government schemes" makes a more modest claim that is far more defensible.
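How might such a range be arrived at? A minimal sketch of the arithmetic follows, using hypothetical figures for the comparison area and the government scheme adjustment (nothing below is drawn from a real programme); the point is simply that uncertainty is carried through as a range rather than collapsed into a single number.

```python
# Illustrative arithmetic only; every figure below is hypothetical.
# Deliberately simplified: a real analysis would also check how far the
# comparison area was itself exposed to the scheme, road, and monsoon effects.

observed_increase = 34.0      # % household income growth, intervention villages
comparison_increase = 20.0    # % growth in comparable non-intervention villages

# Additional exposure of intervention villages to the price-support scheme,
# estimated as a (low, high) range in percentage points because its reach
# in the intervention area is uncertain.
scheme_adjustment = (2.0, 6.0)

# Programme contribution, expressed as a range rather than a point estimate.
contribution_high = observed_increase - comparison_increase - scheme_adjustment[0]
contribution_low = observed_increase - comparison_increase - scheme_adjustment[1]

print(f"Plausible contribution: {contribution_low:.0f}-{contribution_high:.0f} "
      f"of the {observed_increase:.0f} percentage points observed")
# -> Plausible contribution: 8-12 of the 34 percentage points observed
```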

[Figure: "Attribution vs. Contribution: Causal Logic". The diagram shows the CSR intervention acting through its mechanism (e.g. market linkage) as one of several causal streams, alongside a government scheme, market conditions, and climate/monsoon, all feeding into the same observed outcome.]

Contribution analysis accounts for multiple causal streams flowing into the same outcome, rather than claiming a single intervention caused the change.

How Contribution Analysis Works

Contribution analysis, as developed by John Mayne, follows a structured process that builds a credible "contribution story" — an evidence-based narrative about how and why an intervention contributed to observed changes. The process involves six steps, though in practice they iterate:

  1. Set out the attribution problem. What is the specific causal claim to be assessed? What are the key questions?
  2. Develop the theory of change. Articulate the causal chain — the expected pathway from intervention to impact, including assumptions.
  3. Gather existing evidence. What do you already know about whether the theory of change is operating as expected?
  4. Assemble the contribution story. Build the narrative: which links in the causal chain are supported by evidence? Where are the gaps?
  5. Seek out additional evidence. Target data collection at the weakest links — the assumptions most at risk of being wrong.
  6. Revise and strengthen the story. Iterate until the contribution story is credible, with remaining uncertainties transparently acknowledged.

The output is not a statistical estimate of effect size (though it may incorporate quantitative evidence). It is a reasoned, evidence-based narrative about the intervention's contribution — honest about what it can and cannot claim.
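The process is qualitative, but its bookkeeping can be made concrete. The sketch below is not Mayne's formalism, only one possible way (in Python, with invented field names) to record a theory of change as a chain of causal links with evidence ratings, so that step 5's search for the weakest links becomes explicit:

```python
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    """One link in the theory of change, with its supporting evidence."""
    step: str                      # e.g. "training delivered -> practices adopted"
    assumptions: list[str]         # what must hold for this link to operate
    evidence_strength: int         # 0 (none) .. 3 (strong); a judgement call
    alternative_explanations: list[str] = field(default_factory=list)

@dataclass
class ContributionStory:
    claim: str                     # the causal claim being assessed (step 1)
    links: list[CausalLink]        # the theory of change (step 2)

    def weakest_links(self, n: int = 2) -> list[CausalLink]:
        """Steps 3-5: the links where additional evidence is most needed."""
        return sorted(self.links, key=lambda link: link.evidence_strength)[:n]

story = ContributionStory(
    claim="Market-linkage activities contributed to household income growth",
    links=[
        CausalLink("training delivered -> practices adopted", ["farmers attend"], 3),
        CausalLink("practices adopted -> yields improve", ["inputs available"], 2,
                   ["favourable monsoon"]),
        CausalLink("yields improve -> incomes rise", ["prices hold"], 1,
                   ["government price support", "new road to market town"]),
    ],
)

for link in story.weakest_links():
    print(f"Collect more evidence on: {link.step} (strength {link.evidence_strength})")
```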

When to Use Which Approach

Best suited for
  Attribution: simple, bounded interventions with clear control groups.
  Contribution: complex, multi-factor environments where isolation is impossible.

Evidence required
  Attribution: a counterfactual (control or comparison group) and statistical analysis.
  Contribution: a theory of change, multiple evidence streams, and consideration of alternative explanations.

Typical CSR applicability
  Attribution: rare; most CSR operates in open, complex systems.
  Contribution: high; fits the reality of most CSR interventions.

Claim strength
  Attribution: "The programme caused X."
  Contribution: "The programme plausibly contributed to X, alongside other factors."

Credibility risk
  Attribution: high if the counterfactual is weak; the entire claim collapses.
  Contribution: lower; acknowledging uncertainty strengthens trust.

Learning value
  Attribution: limited; tells you what happened but not always why.
  Contribution: high; examines mechanisms, assumptions, and context.

A Worked Example

Education: Digital Literacy Programme in Government Schools

A CSR programme provides tablets and teacher training to 200 government schools. After two years, standardised test scores in target schools improve by 12 percentage points more than the district average.

Attribution claim (problematic): "The programme improved learning outcomes by 12 percentage points." This ignores that the state education department simultaneously introduced a new curriculum, that several target schools received additional government funding for infrastructure, and that teacher turnover in target schools happened to be lower than the district average during this period.

Contribution claim (credible): "Test scores in target schools improved 12 percentage points above the district average. Our analysis suggests the tablet programme contributed to approximately 5–7 percentage points of this gap, based on comparison with schools that received the new curriculum but not the tablets, and adjustment for infrastructure quality and teacher stability. The remaining improvement is likely attributable to concurrent government investments and lower-than-average teacher turnover."

The contribution claim is more modest, but it is believable. It demonstrates that the report's authors understand the causal environment, have considered alternatives, and are making a responsible, evidence-informed claim. This is exactly what the Accreditation Committee looks for under Principle 6.
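The 5–7 point range in a claim like the one above would typically rest on exactly the kind of comparison-group arithmetic described. A deliberately simplified sketch, with hypothetical inputs standing in for the real school data:

```python
# Hypothetical decomposition of the 12-point gap; all inputs are illustrative.

target_gain_vs_district = 12.0   # pp above district average, target schools
curriculum_only_gain = 4.0       # pp above district average, comparison schools
                                 # that received the new curriculum but no tablets

# Adjustments for observed differences between target and comparison schools,
# each expressed as a (low, high) range in percentage points.
adjustments = {
    "additional infrastructure funding": (0.5, 1.5),
    "lower-than-average teacher turnover": (0.5, 1.5),
}

adj_low = sum(low for low, _ in adjustments.values())
adj_high = sum(high for _, high in adjustments.values())

# Programme contribution = gap over comparison schools, net of other differences.
contribution_high = target_gain_vs_district - curriculum_only_gain - adj_low
contribution_low = target_gain_vs_district - curriculum_only_gain - adj_high

print(f"Tablet programme contribution: roughly {contribution_low:.0f}-"
      f"{contribution_high:.0f} pp of the {target_gain_vs_district:.0f} pp gap")
# -> Tablet programme contribution: roughly 5-7 pp of the 12 pp gap
```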

Practical Guidance for Report Authors

For practitioners preparing reports for accreditation, the key shifts are:

Start from the theory of change, not from the results. If your theory of change identifies the mechanisms through which the programme was expected to work, you can assess whether those mechanisms actually functioned. This is far more credible than presenting outcomes and claiming credit for them.

Map the causal environment. Before writing the findings section, list every plausible factor — beyond your intervention — that could have influenced the observed outcomes. Government schemes, market conditions, weather, other NGO activity, demographic shifts. If you cannot name them, your contribution claim is incomplete.
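One lightweight way to keep this mapping honest (a suggested working convention, not a requirement of the accreditation framework) is a simple register of plausible factors and the evidence gathered on each, so that unexamined factors are visible at a glance:

```python
# A simple causal-environment register; entries and notes are illustrative.
causal_environment = [
    # (factor,                         plausible direction,  evidence gathered)
    ("government price-support scheme", "increases incomes",  "scheme records reviewed"),
    ("unusually favourable monsoon",    "increases incomes",  "rainfall data obtained"),
    ("new road to market town",         "increases incomes",  "no evidence yet"),
    ("other NGO activity in the block", "unclear",            "not yet checked"),
]

unaddressed = [factor for factor, _, evidence in causal_environment
               if evidence in ("no evidence yet", "not yet checked")]
print("Factors still unaddressed in the contribution story:", unaddressed)
```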

Use comparison, not control. You may not have a randomised control group, but you can often identify meaningful comparisons: non-intervention areas, pre-intervention baselines, state or national trends. These do not prove causation, but they strengthen contribution reasoning.

Acknowledge uncertainty explicitly. A report that says "we estimate our contribution at approximately X, with the following caveats" is dramatically more credible than one that claims precise impact without qualification. Uncertainty is not weakness — it is intellectual honesty.

"The most credible impact reports are not those that make the largest claims, but those that make the most honest ones. Over-claiming is not ambition — it is a failure of rigour."

What the Accreditation Committee Looks For

Under Principle 6, the committee evaluates whether the report addresses causality responsibly: whether alternative explanations for the observed outcomes are identified and weighed, whether claims are framed as contribution rather than unsupported attribution, whether comparison evidence is used where available, and whether remaining uncertainty is acknowledged transparently.

Reports that score well on every other principle but fail on Principle 6 — typically through over-claimed attribution — are routinely returned as "Accredited with Conditions." This principle cannot be treated as optional.

Further reading: Mayne, J. (2012). "Contribution analysis: Coming of age?" provides the foundational methodology. For practical application in the Indian context, see our annotated walkthroughs, particularly the healthcare and livelihoods examples where attribution challenges are most acute.