4. Public Good Games Experiments Flashcards
General results of public goods games
About 40-60% of endowments are contributed in the first rounds, but contributions decay over time
What are possible reasons for why people contribute in public goods games?
-error, confusion
-strategic reasons, repeated game effects, reputation
-warm glow, altruism (unconditional cooperation)
-conditional cooperation
Which experiment is used to measure error as a reason for public good game contribution? Describe the set up
Keser 1996
Design with interior solution
Tokens 1-13: payoff from keeping > individual return from the public good
Tokens 14-20: payoff from keeping < individual return from the PG
NE: contribute 7 tokens, keep 13
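The interior-solution logic can be sketched numerically. The payoff numbers below are made up for illustration (not Keser's actual parameters): keeping a token pays a declining marginal amount while the public good pays a constant marginal return, so a payoff-maximiser keeps exactly the tokens whose private return beats the public one.

```python
# Illustrative interior-dominant-strategy design in the spirit of Keser 1996.
# Parameters are assumed for the sketch, not taken from the paper.

ENDOWMENT = 20
PG_RETURN = 15          # constant marginal payoff per token in the public good

def marginal_keep(k):
    """Assumed declining marginal payoff of the k-th token kept."""
    return 2 * (21 - k)  # token 1 pays 40, token 2 pays 38, ...

# A token is kept exactly when its marginal private payoff beats the PG return
kept = sum(1 for k in range(1, ENDOWMENT + 1) if marginal_keep(k) > PG_RETURN)
contributed = ENDOWMENT - kept
print(kept, contributed)  # 13 7 : keep 13 tokens, contribute 7
```

With these numbers tokens 1-13 pay more privately and tokens 14-20 pay more in the public good, reproducing the "contribute 7, keep 13" equilibrium.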
Results of Keser 1996
People over-contribute but by the end of 25 periods are very close to the NE. If errors were equally likely in both directions, they should on average cancel out; since deviations are systematically above the NE, contribution is unlikely to be due to errors alone
Describe set up of Andreoni 1995
Idea- subtract out the incentives for kindness, leaving confusion as the only explanation for cooperation
3 treatments
Regular- standard PG
Rank- subjects play the PG but are paid based on their rank, which transforms the game into a zero-sum game (no gains from cooperation); free riding remains the dominant strategy
RegRank- same as Regular, but subjects also receive information about their rank (rank serves as a measure of kindness)
Results of Andreoni 1995
On average 75% are cooperative; about half of these are confused about the incentives, while half understand free riding but choose to cooperate out of some form of kindness. The focus on learning in experimental research should shift to include studies of preferences for cooperation
Describe the set up of Andreoni 1988
Repeated PG game with partners and strangers matching, and a surprise restart after 10 rounds. An attempt to disentangle learning from strategic play
Results of Andreoni 1988
Strangers contribute more than partners. There is a jump in both treatments when the game is restarted, suggesting contributions are strategic. Note the results in the strangers treatment rest on only one independent observation
What did Croson 1996 (replication of Andreoni 1988) find?
Cooperation higher in partners and there is a jump in both after the restart
Describe the set up of Weimann 1994
PGG in which each player plays against 4 fictitious players whose "contributions" were set by the experimenter, to test whether people are conditional cooperators
Weimann 1994 results
In E6 the phantoms invested 89.75% in the PG; mean investment of real subjects was 53%, falling over rounds
In E7 the phantoms invested 15.75%; mean investment was 33.4%, falling over rounds
Definition of deception
Intentional misinformation of subjects, and the use of computers/confederates without revealing this to subjects
Definition of deliberate ambiguity
Withholding information about research hypotheses, the full range of experimental conditions, or some experimental details
What do Charness et al. 2022 consider grey areas of deception?
-surprise restart
-sub group rematching
-deliberate reliance on misinterpretation
-unexpected data use
-unknown/unpaid participation
Arguments against deception
-it can often be avoided
-use of deception slows down methodological innovation
-loss of control and loss of internal validity, contamination of subject pool
Quote from Hey 1998 on deception
If the subjects don’t believe what the experimenter tells them then the experimenter no longer knows what is being tested in an experiment
How can virtual players be used without deception?
Houser & Kurzban 2002 disentangle the roles of confusion and social preferences by replacing real players with pre-programmed computers and telling subjects this
Describe the set up of Croson 2000
-repeated linear PG games
-subjects are asked to estimate others contributions
-beliefs were incentivised: the more accurate the estimate, the higher the additional monetary reward
Results of Croson 2000
High correlation between beliefs and individual behaviour. Belief elicitation itself influences contribution behaviour: contributions are lower when beliefs are elicited
Pros and cons of paying accuracy of beliefs
+subjects have an incentive to take the task seriously and to state correct beliefs
-subjects might have an incentive to make behaviour predictable
-subjects might think about the decision situation differently
-hedging: subjects falsely state a pessimistic belief to insure against the risk of their true belief being wrong
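The hedging incentive can be made concrete with a small sketch. The quadratic scoring rule and all parameter values below are assumptions for illustration, not the payment rules actually used in these papers: because the game payoff rises with others' contributions, a low stated belief pays best in exactly the states where the game payoff is low, so it works as insurance.

```python
# Sketch of incentivised belief elicitation with an assumed quadratic
# scoring rule (illustrative, not the papers' actual payment scheme).

MAX_REWARD = 10
ENDOWMENT = 20

def belief_payment(stated, actual, scale=MAX_REWARD, max_error=20):
    """Pay more the closer the stated belief is to the actual average."""
    return scale * (1 - ((stated - actual) / max_error) ** 2)

def game_payoff(own_contribution, others_total, mpcr=0.4):
    """Linear PGG payoff with an assumed MPCR of 0.4."""
    return (ENDOWMENT - own_contribution) + mpcr * (own_contribution + others_total)

# A free rider who truly expects the 3 others to average 15 tokens can
# hedge by stating 5: total payoff becomes less variable across states.
for actual_avg in (5, 15):
    truthful = game_payoff(0, 3 * actual_avg) + belief_payment(15, actual_avg)
    hedged   = game_payoff(0, 3 * actual_avg) + belief_payment(5, actual_avg)
    print(actual_avg, round(truthful, 2), round(hedged, 2))
```

In this example the truthful report earns more when others indeed contribute a lot, while the pessimistic report earns more when they do not, so a risk-averse subject may misreport even though the scoring rule alone rewards accuracy.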
Set up and results of Gachter & Renner 2010
Impact of incentivised vs non-incentivised belief elicitation in a repeated PGG
-belief accuracy is slightly higher when beliefs are incentivised
-the distribution of beliefs as well as the relationship between contributions and beliefs are unaffected
-eliciting incentivised beliefs increases contribution levels relative to benchmark treatment without belief elicitation
Strategy method
Essentially a contribution table: subjects state their own contribution for every possible (average) contribution of the other player(s)
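A strategy-method schedule and the type classification used on such data can be sketched as follows. The 0-20 token table mirrors the Fischbacher, Gächter & Fehr 2001 design, but the classification rules here are simplified illustrations, not the papers' exact statistical criteria.

```python
# Sketch: classify a strategy-method contribution table (21 entries, one
# own contribution for each possible average contribution 0..20 of the
# others). Classification thresholds are illustrative simplifications.

def classify(schedule):
    """Classify a 21-entry conditional contribution table by its shape."""
    if all(c == 0 for c in schedule):
        return "free rider"
    diffs = [b - a for a, b in zip(schedule, schedule[1:])]
    if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
        return "conditional cooperator"
    peak = schedule.index(max(schedule))
    if 0 < peak < len(schedule) - 1:
        return "hump-shaped"
    return "other"

others_avg = list(range(21))
print(classify([0] * 21))                              # free rider
print(classify(others_avg))                            # conditional cooperator
print(classify([min(x, 20 - x) for x in others_avg]))  # hump-shaped
```

The three example schedules correspond to the main types reported in the literature: always zero, matching others one-for-one, and rising then falling in others' contributions.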
Pros and cons of strategy method
+data for all info sets
+more info about a player and their motivation
+lower costs
-decisions are “cold” - hypothetical
-subjects have to consider all possible subgames, not only those which actually arise in the game
Describe set up of Bayer, Renner & Sausgruber 2013
Two treatments
-standard PGG with no feedback on others' contributions or payoffs, which impedes herding
-learning treatment designed to isolate confusion and learning: subjects choose a number between 0-20 and learn only their own payoff after each period
Results of Bayer, Renner, Sausgruber 2013
-contribution patterns are qualitatively very similar
-contribution rates 54%/56% in standard/learning so confusion created no systematic upwards bias
-steeper decline in standard
-strong patterns of conditional cooperation in standard: 48% of subjects exhibit a positive correlation between their own contribution and the group members' last contributions (only 5% in learning). Note subjects must infer others' contributions from their own payoffs, so these subjects cannot simply be imitating others
Utility function of public good games
Ui = (ei - ci) + alpha * (sum of all contributions)
Where ei is i's endowment, ci is i's contribution, and alpha is the marginal per capita return (MPCR). Free riding (ci = 0) is dominant when alpha < 1; full contribution is efficient when n * alpha > 1
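A worked example of this payoff function, using the common but here merely illustrative parameterisation alpha = 0.4, n = 4, endowment 20, shows the dilemma directly: each token contributed costs 1 privately but returns only 0.4 to the contributor, while the group as a whole gains 1.6.

```python
# Worked example of the linear PGG payoff U_i = (e_i - c_i) + alpha * sum_j c_j.
# alpha = 0.4 and n = 4 are assumed illustrative values.

ALPHA, N, ENDOWMENT = 0.4, 4, 20

def payoff(own, others_sum):
    """Payoff of player i given own contribution and others' total."""
    return (ENDOWMENT - own) + ALPHA * (own + others_sum)

# Individually: contributing less always pays more, whatever others do
print(payoff(0, 30), payoff(20, 30))            # 32.0 20.0
# Collectively: everyone contributing beats everyone free riding
print(N * payoff(20, 3 * 20), N * payoff(0, 0))  # 128.0 80.0
```

Since alpha < 1, zero contribution is the dominant strategy, yet n * alpha = 1.6 > 1 makes full contribution socially efficient: the defining tension of the game.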
Possible downside to Keser 1996
The dominant strategy is to contribute 7/20 tokens, which leaves more room to make errors above this level than below it. A dominant strategy of 10/20 would be better
Critical remarks of Andreoni 1995
-A pure altruist might still want to contribute in Rank treatment since they would rather other people get a higher payoff than them.
-the Rank treatment might be overly complicated, which itself induces confusion
Critical remark of Andreoni 1988
Play was suspended after only 3 rounds following the restart. He claimed “had the budget for subjects been bigger, this would have been unnecessary. Such deceptive practices are, under less restrictive circumstances not recommended”
Critical remark of Croson 1996?
In the strangers treatment, groups of 4 were matched from a room of 12, so it is very likely that "strangers" are matched together again
What is perfect stranger matching?
Probability of being re-matched with same person is zero. Rules out reputational concerns but lab size restricts number of periods
Results of Fischbacher, Gachter & Fehr 2001
Free riders and conditional cooperators dominate but there are also hump shaped contributors.
Contributions stem from heterogeneity of preferences
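How preference heterogeneity generates the decay pattern can be sketched with a minimal simulation. The behavioural rules below are assumptions for illustration (not taken from any of the papers): conditional cooperators imperfectly match last round's group average while free riders contribute nothing, and the group average falls round by round.

```python
# Minimal simulation (assumed behavioural rules) of how a mix of free
# riders and imperfect conditional cooperators produces decaying
# contributions in a repeated PGG.

ENDOWMENT = 20
ROUNDS = 10

def simulate(n_conditional=3, n_free=1, match_rate=0.9):
    """Return the path of group-average contributions over ROUNDS."""
    contributions = [ENDOWMENT // 2] * n_conditional + [0] * n_free  # round 1
    path = []
    for _ in range(ROUNDS):
        avg = sum(contributions) / len(contributions)
        path.append(round(avg, 2))
        # conditional cooperators imperfectly match last round's average
        contributions = [match_rate * avg] * n_conditional + [0] * n_free
    return path

path = simulate()
print(path[0], path[-1])  # the group average declines toward zero
```

Even one free rider in the group drags the average down each round, and imperfect matching (match_rate < 1) accelerates the decline, which is the dynamic commonly attributed to conditional cooperation.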
Conclusions of Bayer, Renner & Sausgruber (2013)
-simple learning can’t generate the kind of contribution dynamics commonly attributed to the existence of conditional cooperators.
-cooperative behaviour and its decay observed in PGG is not a pure artefact of confusion and learning