Opinion · 8 min read

The Cooperation Problem

By Byron Fuller

The environmental funding crisis is not a resource problem. It is not an awareness problem. It is a cooperation problem — and it has been solved, on paper, since 1984. The implementation is what has taken forty years.

The logic is familiar. The commons — clean air, stable coastlines, healthy oceans — belongs to everyone and is maintained by collective action; collective action fails when the individual cost of contributing exceeds the individual benefit of free-riding.

Donation appeals produce diminishing returns for exactly this reason. Each request asks the individual to bear a visible cost for an invisible benefit. Rational self-interest says: let someone else do it. This is not selfishness. It is the structural feature of one-shot games with diffuse outcomes. The mathematics is indifferent to your intentions.
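The free-rider logic above is just the one-shot Prisoner's Dilemma. A minimal sketch, using the canonical payoff values from Axelrod's tournament (temptation 5, reward 3, punishment 1, sucker 0), shows why defection dominates a single interaction regardless of intentions:

```python
# One-shot Prisoner's Dilemma with the canonical payoff ordering:
# (T)emptation > (R)eward > (P)unishment > (S)ucker, here 5 > 3 > 1 > 0.
T, R, P, S = 5, 3, 1, 0

# payoff[(my_move, their_move)] -> my payoff ("C" cooperate, "D" defect)
payoff = {
    ("C", "C"): R, ("C", "D"): S,
    ("D", "C"): T, ("D", "D"): P,
}

# Whatever the other player does, defecting pays strictly more:
for theirs in ("C", "D"):
    assert payoff[("D", theirs)] > payoff[("C", theirs)]

print("Defection strictly dominates in the one-shot game.")
```

Mutual cooperation (3, 3) beats mutual defection (1, 1), yet each player's dominant move is still to defect. That gap between the collective optimum and the individual optimum is the entire problem.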

The question that matters is not “how do we make people more generous?” People are already generous — remittances, volunteering, disaster relief, mutual aid networks all prove it. The question is: how do we structure the interaction so that cooperation is the rational choice?

Robert Axelrod answered this question in 1984, and the answer changed how political scientists, biologists, and economists think about collective action.

What Axelrod proved

Axelrod, a political scientist at the University of Michigan, organised a computer tournament. He invited game theorists from around the world to submit strategies for the iterated Prisoner’s Dilemma — a game where two players must repeatedly decide whether to cooperate or defect, knowing that mutual cooperation produces the best collective outcome but individual defection produces the best personal outcome at the other player’s expense.

The first tournament received fourteen entries. Some were elaborate. Some were devious. Some attempted to exploit opponents’ patterns. The winner was the shortest program submitted: Tit-for-Tat, written by Anatol Rapoport. It cooperates on the first move, then mirrors whatever the other player did last. If you cooperate, it cooperates. If you defect, it defects — once. Then it forgives and returns to cooperation.
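The strategy really is that short. A self-contained sketch of Tit-for-Tat and an iterated match, again with the canonical payoffs (the `always_defect` opponent is illustrative, not one of the tournament entries):

```python
# Tit-for-Tat: cooperate first, then mirror the opponent's last move.
T, R, P, S = 5, 3, 1, 0
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Score an iterated match between two strategies."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one sucker payoff, then mutual defection
```

Note what the scores show: Tit-for-Tat never beats its opponent in a single match. It wins tournaments because it elicits cooperation from anyone willing to reciprocate, while limiting its losses against anyone who is not.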

The result was not a fluke. Axelrod ran the tournament again with a larger field. Tit-for-Tat won again. He then subjected the results to evolutionary analysis — simulating thousands of generations where successful strategies replicate and unsuccessful ones die out. Tit-for-Tat dominated, and its success illuminated four conditions for sustained cooperation.
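The ecological analysis can be sketched in a few lines: strategies that score well against the current population grow, others shrink. This toy version uses a three-strategy field of my own choosing (not Axelrod's actual entrant pool), but it reproduces the qualitative result — unconditional defection thrives briefly on easy prey, then collapses once the prey is gone, and Tit-for-Tat ends with the largest share:

```python
# A toy version of Axelrod's "ecological" analysis: each generation,
# a strategy's share grows in proportion to its score against the mix.
T, R, P, S = 5, 3, 1, 0
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def tit_for_tat(mine, theirs):   return "C" if not theirs else theirs[-1]
def always_defect(mine, theirs): return "D"
def always_cooperate(mine, theirs): return "C"

def match_score(a, b, rounds=20):
    """Score for strategy a over an iterated match against b."""
    ha, hb, sa = [], [], 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        sa += PAYOFF[(ma, mb)][0]
        ha.append(ma)
        hb.append(mb)
    return sa

strategies = [tit_for_tat, always_defect, always_cooperate]
shares = [1 / 3, 1 / 3, 1 / 3]  # equal initial population shares

for generation in range(50):
    # Expected score of each strategy against the current population mix.
    fitness = [sum(share * match_score(s, opp)
                   for opp, share in zip(strategies, shares))
               for s in strategies]
    total = sum(f * w for f, w in zip(fitness, shares))
    shares = [w * f / total for w, f in zip(shares, fitness)]

for s, w in zip(strategies, shares):
    print(f"{s.__name__}: {w:.2f}")  # tit_for_tat largest; always_defect near zero
```

Unconditional cooperators survive here as a minority, shielded by the Tit-for-Tat majority — which is itself a preview of the "cooperation must be the norm" condition below.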

The game must be repeated. One-shot interactions favour defection. When players know they will encounter each other again, the calculus shifts — the future cost of retaliation outweighs the immediate gain from cheating.

Moves must be visible. Cooperation and defection must both be observable. Hidden actions undermine accountability. If nobody knows whether you contributed, there is no social cost to free-riding.

Cooperation must be the norm. When most players cooperate, the few who defect stand out and face consequences. When most players defect, cooperation becomes self-sacrificial. Critical mass matters.

Cooperation must be immediately rewarded. Delayed or uncertain payoffs weaken the incentive. The closer the reward is to the action, the stronger the cooperative equilibrium.
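The first and fourth conditions can be made quantitative. In Axelrod's framework, the "shadow of the future" is a weight w on the next round (the probability the game continues); against a Tit-for-Tat partner, permanent defection stops paying once w reaches (T-R)/(T-P). A sketch with the canonical payoffs (this checks permanent defection only; Axelrod's full stability condition also rules out alternating defection):

```python
# How much the future must matter for cooperation to hold, with
# canonical payoffs and discount weight w = chance of another round.
T, R, P, S = 5, 3, 1, 0

def cooperate_value(w):
    """Present value of cooperating forever against Tit-for-Tat."""
    return R / (1 - w)

def defect_value(w):
    """Present value of defecting forever against Tit-for-Tat:
    one temptation payoff, then discounted mutual punishment."""
    return T + w * P / (1 - w)

# Threshold: permanent defection stops paying once w >= (T-R)/(T-P).
threshold = (T - R) / (T - P)
print(threshold)                                  # 0.5
print(cooperate_value(0.9) > defect_value(0.9))   # True: long shadow of the future
print(cooperate_value(0.2) > defect_value(0.2))   # False: one-shot-like game
```

A donation drive where you never expect to interact again is the w near zero case; a platform you return to every week pushes w toward one.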

Axelrod documented all of this in The Evolution of Cooperation (1984), and the principles have been validated across domains far removed from computer tournaments.

The trenches and the Montreal Protocol

The most vivid example is one Axelrod himself explored: the “live and let live” system that emerged spontaneously in World War I trenches. Opposing units, stuck in static positions for months, developed tacit cooperation — firing at predictable times, aiming to miss, retaliating only when the other side escalated. This was not ordered by generals. It was not negotiated by diplomats. It emerged because the conditions were right: the game was repeated, moves were visible, cooperation was mutually beneficial, and defection was immediately punished.

The system was so robust that military commanders had to deliberately disrupt it — rotating units, ordering raids, demanding identifiable results from artillery — because spontaneous cooperation between enemies was undermining the war effort. Cooperation was the natural equilibrium. It took institutional effort to break it.

The same logic applies to environmental treaties. The Montreal Protocol on ozone-depleting substances succeeded because it met Axelrod’s conditions: repeated interaction (annual reviews), visible compliance (satellite monitoring), cooperative norms (near-universal ratification), and rapid feedback (ozone measurements). The Kyoto Protocol struggled because it lacked several: compliance was hard to verify, enforcement was weak, and the feedback cycle between emissions reduction and climate outcomes spanned decades.

Apply the framework to environmental funding

A traditional donation appeal is a one-shot game. You give once. You receive a thank-you email. The outcome is invisible, delayed, and disconnected from your action. Every condition for sustained cooperation is violated. The twelve-month audit lag is not a secondary problem; it is a direct violation of Axelrod’s fourth condition.

GreenSweep was designed — deliberately, structurally — to meet Axelrod’s conditions.

The game is repeated. Users vote regularly. Each visit is a new interaction. The platform is designed for habitual engagement, not one-time transactions.

Moves are visible. Every vote is counted and displayed. Every project’s funding progress is published in real time. Your participation — and its absence — is observable. The cryptographic audit trail at /proof means “observable” here is not rhetorical; it is signed.

Cooperation is the norm. Social proof is built into the interface. “2,400 people voted today.” “This project is 78% funded.” The visible majority sets the expectation.

Cooperation is immediately rewarded. When you vote, the funding counter updates. You see the impact of your action within the same session. The feedback loop is seconds, not months.

This is not a metaphor. It is a design specification. The platform architecture was built to produce the conditions under which cooperation becomes self-sustaining — not because people are guilted into it, not because it is tax-deductible, but because the structure of the interaction makes cooperation the rational, rewarding, visible default.

I spent a formative period at Harvard University studying under Rob Neugeboren and Rajiv Shankar, and the intellectual debt to that time is real. The bridge between Axelrod’s mathematical derivations and the practical design of systems that sustain cooperation is not obvious — it requires thinking carefully about what makes people act in concert over time, not just once. GreenSweep is, in many ways, an applied experiment in iterated cooperation.

The environmental commons does not need more generosity. It needs better game design.

For the mechanism that makes the game iterated rather than one-shot, see how it works. For the live ledger that keeps moves visible, see /transparency. For the structural argument that sets the shadow of the future to infinity, see the foundation that cannot change its mind.

game theory · cooperation · Axelrod · tragedy of the commons · voting · environmental funding · iterative games

Byron Fuller, Co-Founder

Byron leads GreenSweep’s go-to-market strategy and technology. He most recently built a 100+ person team in APAC deploying IoT technologies for clients including the Hong Kong MTR.

Dartmouth, UPenn, Harvard, Saïd Business School (Oxford)