Tit for Tat
Australian Broadcasting Corporation
Chris Meredith

The Slab




You scratch my back ...

Evolutionary biologists have had considerable trouble explaining the evolution of co-operative behaviour. The problem is that co-operation can always be exploited by selfish individuals who cheat. It seems that natural selection should always favour the cheats over the co-operators. Co-operation involves doing and receiving favours and this means that the opportunity to cheat and not return a favour is a very real possibility. Trivers (1971) tackled this problem and developed the theory of reciprocal altruism based on the idea that co-operation could evolve in species clever enough to discriminate between co-operators and cheats. The concept is summarised in the saying 'you scratch my back and I'll scratch yours'. Trivers' theory of reciprocal altruism is particularly successful in explaining human behaviour because reciprocal altruism is a major part of all human activities.

As a first means of eliciting reciprocity we use displays of generosity, gratitude, sympathy and sincerity. These 'guarantors' of reciprocity typically operate at the family, friend and local community levels. If they fail to generate appropriate reciprocity, we employ moralistic aggression in the form of sermons and lectures designed to bully all the cheats back into line. Moralistic aggression is the number one weapon of religions around the world. The strength and weakness of religions lies in their promise of 'reciprocation after death'. The sky is offered, but how can we tell if it is true? Religions have found that moralistic aggression of the hell-fire-and-damnation variety is needed to calm such doubts and keep the flow of altruism coming their way.

Trivers' theory of reciprocal altruism was an important advance in our understanding of the evolution of co-operation, but it was a 'special theory' rather than a 'general theory'. The discovery of how co-operative behaviour could evolve in species far less intelligent than humans came in a surprising way - from a detailed study of the well-known paradox 'The Prisoner's Dilemma'.

The prisoner's dilemma

The prisoner's dilemma refers to an imaginary situation in which two individuals are imprisoned and are accused of having co-operated to perform some crime. The two prisoners are held separately, and attempts are made to induce each one to implicate the other. If neither one does, both are set free. This is the co-operative strategy available to both prisoners. In order to tempt one or both to defect, each is told that a confession implicating the other will lead to his or her release and, as an added incentive, to a small reward. If both confess, each one is imprisoned. But if one individual implicates the other and not vice versa, then the implicated partner receives a harsher sentence than if each had implicated the other.

The prisoner's dilemma is that if they both think rationally then each one will decide that the best course of action is to implicate the other, even though they would both be better off trusting each other. Consider how one prisoner thinks. If his partner fails to implicate him, then he should implicate his partner and get the best possible pay-off. If his partner has implicated him, he should still 'cheat', since he suffers less than if he trusts his partner. However, the situation is more complicated than this analysis suggests. It is fairly obvious that the players' strategic decisions will also depend upon their likelihood of future encounters. If they know that they are destined never to meet again, defection is the only rational choice. Both individuals will cheat and both will end up relatively badly off. But if the prisoner's dilemma is repeated a number of times, then it may be advantageous to co-operate on the early moves and cheat only towards the end of the game. When people know the total number of games of prisoner's dilemma, they do indeed cheat more often in the final games.
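The one-shot logic is easy to check numerically. The short sketch below is only an illustration: the article's prison-sentence framing fixes no actual numbers, so it assumes the conventional payoff values later popularised by Axelrod (temptation 5, mutual co-operation 3, mutual defection 1, sucker's pay-off 0).

```python
# A minimal sketch of the one-shot prisoner's dilemma, assuming the
# conventional payoff values (T=5, R=3, P=1, S=0); the numbers are not
# taken from the article itself.

PAYOFF = {
    ('C', 'C'): 3,   # R: reward for mutual co-operation
    ('C', 'D'): 0,   # S: the sucker's pay-off
    ('D', 'C'): 5,   # T: temptation to defect
    ('D', 'D'): 1,   # P: punishment for mutual defection
}

for partner_move in ('C', 'D'):
    best_reply = max(('C', 'D'), key=lambda my_move: PAYOFF[(my_move, partner_move)])
    print(f"If my partner plays {partner_move}, my best reply is {best_reply}")

# Both lines print 'D': whatever the partner does, defection scores higher,
# yet mutual defection (1 each) leaves both worse off than mutual
# co-operation (3 each) - which is precisely the dilemma.
```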

Robert Axelrod was interested in finding a winning strategy for repeated prisoner's dilemma games. He conducted a computer tournament in which people were invited to submit strategies for playing 200 games of prisoner's dilemma (Axelrod and Hamilton, 1981). Fourteen game theorists in disciplines such as economics and mathematics submitted entries. These 14, and a totally random strategy, were paired with each other in a round robin tournament. Some of these strategies were highly intricate. But the result of the tournament was that the simplest of all strategies submitted attained the highest average score. This strategy, called TIT FOR TAT by its submitter Anatol Rapoport, had only two rules: on the first move, co-operate; on each succeeding move, do what your opponent did on the previous move. Thus, TIT FOR TAT was a strategy of co-operation based on reciprocity. By conceptualising reciprocal altruism as a series of prisoner's dilemmas we can see that TIT FOR TAT might be the Evolutionary Stable Strategy for our reciprocal altruism adaptation. It might even help to explain the evolution of co-operation in a more general way than Trivers' theory of reciprocal altruism.
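Rapoport's two rules are simple enough to state as a short program. The sketch below is not his original tournament entry; it assumes the same illustrative payoff values as the previous sketch, with an unconditional defector added purely for comparison.

```python
# A minimal sketch of TIT FOR TAT and a 200-move match, for illustration only.

PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Rule 1: co-operate on the first move. Rule 2: copy the opponent's last move."""
    if not opponent_history:
        return 'C'
    return opponent_history[-1]

def always_defect(opponent_history):
    """An unconditional cheat, included for comparison."""
    return 'D'

def play_match(strategy_a, strategy_b, rounds=200):
    """Score a repeated prisoner's dilemma between two strategies."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each player sees only the opponent's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, always_defect))   # (199, 204): exploited once, then mutual defection
print(play_match(tit_for_tat, tit_for_tat))     # (600, 600): unbroken co-operation
```

Against an unconditional cheat, TIT FOR TAT loses only the first move and then defects for the rest of the match; against another co-operator it co-operates throughout, which is how it accumulates a high average score across a varied field.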

 

TIT FOR TAT

The results of Axelrod's tournament were published and people were invited to submit programs for a second tournament. This was identical in form to the first, except that matches were not of exactly 200 games but of random length with a median of 200; this avoided the complication of programs that might have special cheating rules for the last game. This time there were 62 entries from six countries. Most of the contestants were computer hobbyists, but also present were professors of evolutionary biology, physics and computer science, as well as the disciplines represented earlier. Rapoport again submitted TIT FOR TAT and again it won with a leg in the air. Ultimately it displaced all other strategies and became the equivalent of an ESS for the prisoner's dilemma.
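The design change that mattered here was the unpredictable match length. The article does not describe the stopping rule in detail, so the sketch below simply ends each match with a small fixed probability per move, tuned so the median length is about 200; it reuses play_match from the previous sketch and is illustrative only.

```python
# A hedged sketch of a round-robin tournament with random-length matches.
import random
from itertools import combinations

STOP_PROB = 1 - 0.5 ** (1 / 200)   # per-move chance the match ends; median length ~200

def random_match_length():
    """Draw a match length with no predictable final game."""
    length = 1
    while random.random() > STOP_PROB:
        length += 1
    return length

def round_robin(strategies):
    """Pair every named strategy with every other and total the scores."""
    totals = {name: 0 for name in strategies}
    for (name_a, strat_a), (name_b, strat_b) in combinations(strategies.items(), 2):
        score_a, score_b = play_match(strat_a, strat_b, random_match_length())
        totals[name_a] += score_a
        totals[name_b] += score_b
    return totals

# Example: round_robin({'TIT FOR TAT': tit_for_tat, 'ALWAYS DEFECT': always_defect})
```

Because no program can know which move will be its last, rules that cheat on the final game gain nothing.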

From an analysis of the 3 million choices made in the second competition, four features of TIT FOR TAT emerged:

1. Never be the first to defect
2. Retaliate only after your partner has defected
3. Be prepared to forgive after carrying out just one act of retaliation
4. Adopt this strategy only if the probability of meeting the same player again exceeds 2/3 (a back-of-envelope check of this figure follows the list).
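The 2/3 figure in point 4 can be recovered from Axelrod's stability analysis. One standard statement of the condition is that co-operation based on reciprocity is stable only if the probability w of meeting the same player again satisfies w >= max((T-R)/(R-S), (T-R)/(T-P)); under the conventional payoffs assumed in the sketches above, this comes out at exactly 2/3. Treat the calculation below as a back-of-envelope check rather than a derivation from this article.

```python
# Back-of-envelope check of point 4, assuming the conventional payoff values.
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's pay-off

threshold = max((T - R) / (R - S), (T - R) / (T - P))
print(threshold)   # 0.666..., i.e. the 2/3 quoted in point 4
```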

These results provide a model for the evolution of co-operative behaviour. At first sight it might seem that the model is relevant only to higher animals which can distinguish between their various opponents. If so, TIT FOR TAT would simply be Trivers' theory of reciprocal altruism restated. But TIT FOR TAT is more than this and can be applied to animals that cannot recognise each other - as long as each individual starts co-operative encounters with very minor, low-cost moves and gradually escalates as reciprocation occurs.

Axelrod and Hamilton emphasise that a formal theory for the evolution of co-operation needs to answer three questions.

1. How can a co-operative strategy get an initial foothold in an environment which is predominantly non-co-operative?
2. What type of strategy can thrive in a varied environment composed of other individuals using a wide diversity of more or less sophisticated strategies?
3. Under what conditions can such a strategy, once fully established, resist invasion by mutant strategies (such as cheating)?

The studies of TIT FOR TAT answer these questions about initial viability, robustness and stability. Provided that the probability of future interaction between two individuals is sufficiently great, co-operation based on reciprocity can indeed get started in an asocial world, can flourish in a variegated environment and can defend itself once fully established.

According to Axelrod, TIT FOR TAT is successful because it is 'nice', 'provokable' and 'forgiving'. A nice strategy is one which is never first to defect. In a match between two nice strategies, both do well. A provokable strategy defects at once in response to defection. A forgiving strategy is one which readily returns to co-operation if its opponent does so; unforgiving strategies are likely to produce isolation and end co-operative encounters.

Since the appearance of TIT FOR TAT as a model for the evolution of co-operation, many strategies have been derived from it: TIT FOR TWO TATS, SUSPICIOUS TIT FOR TAT and ALWAYS DEFECT, to name just three. Under varying conditions all achieve some success, but none demonstrates the robustness of TIT FOR TAT. However, the real proof of this theory is in nature, where TIT FOR TAT is beginning to be identified.
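For readers who want to experiment, here are hedged sketches of two of those variants, written against the same interface as the tit_for_tat function above (ALWAYS DEFECT is simply the always_defect opponent sketched earlier). They follow the usual descriptions of these strategies rather than any particular tournament entry.

```python
def tit_for_two_tats(opponent_history):
    """More forgiving than TIT FOR TAT: retaliate only after two defections in a row."""
    return 'D' if opponent_history[-2:] == ['D', 'D'] else 'C'

def suspicious_tit_for_tat(opponent_history):
    """Like TIT FOR TAT, but opens with a defection instead of co-operation."""
    if not opponent_history:
        return 'D'
    return opponent_history[-1]
```

Feeding these into the round_robin sketch above is an easy way to explore how each variant fares in different mixtures of opponents.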

Forgiving fish

Possibly the most beautiful empirical test of the TIT FOR TAT model comes from Manfred Milinski's laboratory experiments with stickleback fish (Milinski, 1987). His experiment was based upon the observation that, during the early stages of an attack by a stalking pike, some minnows or three-spined sticklebacks leave their shoal to approach within 4-6 body lengths of the predator, for what has been called a 'predator inspection visit'. In the wild, sticklebacks often approach a stalking predator, probably to identify it accurately and gauge its readiness to attack. If the little fish do so together they can get closer to the predator and, should it attack, they might be better protected by being in a group and confusing the predator. Two fish engaged in such inspection behaviour can be regarded as co-operating if they either stay close together or take turns in leading the advance towards the predator. If one fish consistently lags behind, it may be regarded as a defector (gaining the advantages of inspection with less accompanying risk). There is, therefore, a series of choices to be made. Each time one fish swims closer, the companion can co-operate and go along with it, or defect. If it defects, it runs less risk of being eaten itself, and it may gain more information than the 'sucker' as it watches its fate.

Milinski gave sticklebacks, Gasterosteus aculeatus, the chance to alter their behaviour according to that of a companion. He put a stickleback in a tank from which it could see a large predatory cichlid - a fish that resembles the perch, a common predator of sticklebacks. Also in the tank was a mirror, angled to act either as a 'co-operating mirror' or as a 'defecting mirror'.

When the co-operating mirror was in place a stickleback had the illusion of a co-operating companion, but with a defecting mirror the companion lagged behind and eventually disappeared.

In this experiment, those fish with a co-operating mirror went closer to the cichlid and stayed there longer than the fish with a defecting mirror. Milinski observes that the sticklebacks acted as if they perceived that a companion was either following them or falling increasingly far behind. Other aspects of TIT FOR TAT seem to be fulfilled too. The fish often forgave its cowardly companion image, approaching the cichlid again and again. This is because at first the mirror companion moves forward too, irrespective of which mirror is in place. It eventually defects if the defecting mirror is in place, but since its first move was co-operative, it is forgiven for its previous defections - exactly what the theory of TIT FOR TAT predicted would happen.

It is beginning to appear that the strategy of TIT FOR TAT is every bit as robust in real life as it is in computer competitions. Laboratory tests of TIT FOR TAT have become a growth industry as the theory gains in stature. We can expect new revelations about its worth as a theory to explain the evolution of co-operative behaviour. But whatever the outcome of this debate, one fact remains unchallenged: TIT FOR TAT is a major regulator of human behaviour. It may be a Culturally Stable Strategy (CSS) - one that humans simply learned as a way of regulating our co-operative behaviour - or it may indeed be a very necessary, naturally selected co-operative Evolutionary Stable Strategy.



© 1998 Australian Broadcasting Corporation. All rights reserved.