
Ethical Dilemma and Game Theory: Does our morality drive our strategy?

Updated: July 14, 2023

Riya Jain, Aks Kuldeep Singh


Game theory is the study of strategic decision making: the analysis of how a rational agent's choices depend on the choices of other agents, how the distribution of strategies in a population evolves in various contexts, and how those distributions shape the outcomes of individual interactions.

The gap between words and actions looms large in the field of ethics. Instead of the lip service people sometimes pay to norms, game theory, and the empirical methods it inspires, looks at behavior. The questions that arise include: why do we think it's wrong to treat people merely as a means to an end? Why do we consider it good to give, irrespective of whether the gift is effective? And why do we consider lies of omission less immoral than lies of commission?


Game theory is based on assumptions about the preferences of players and their mutual expectations of each other's behavior. Game theorists make predictions about the outcomes of interactions based on these sets of assumptions. In contrast, ethics is a normative discipline informed by morality and attitudes.

Game theory involves applying mathematical rigor to ethical questions. This is not well received by everyone. According to Robert Solomon, for example, the use of game theory in ethics has been a disaster. He contends, citing Aristotle as evidence, that formal thinking does not mix well with the sensitivities required for ethics. According to Solomon, the "rational" agent of game theory is a monster, and thinking in terms of self-interest maximization, as opposed to doing the right thing without hesitation, is having one thought too many.

According to Binmore, however, game theory is ethically neutral because it is simply a theory of consistent choice. He observes that it analyzes the outcomes of people's actions based on their motives, i.e., preferences, without considering what these motives should be. As a result, game theory can be used to model the behavior of both altruistic and selfish agents.

In the prisoner's dilemma it isn't guaranteed that both players will always defect. When forming their payoffs, participants may care not only about the material outcome (the disutility of time in jail) but about something greater. A player may refuse to fink on the other even at the cost of a longer sentence. This does not falsify the prisoner's dilemma; it means that the game people play, once the raw payoffs are transformed into their preferences, is no longer a prisoner's dilemma. And perhaps no one has ever claimed that it is.


Morality is self-organizing, and evolution has programmed it to evolve. Examples of changing moralities can be traced back to historical civilizations that started out strong, grew softer over time, and were ultimately overrun by barbarians or colonizers.

In 1980, the political scientist Robert Axelrod hosted a tournament in which computer programs played a repeated prisoner's dilemma against one another. The prisoner's dilemma is a game in which two players choose between cooperation and defection. The premise is that two prisoners, held in separate cells, are accused of involvement in a joint crime. Each prisoner has two choices: to stay quiet or to fink.

Cooperation yields the greater collective benefit, but if we assume rationality (the basis of conventional game theory), each player will defect: whatever the other prisoner does, finking leaves you better off.
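As a minimal sketch, the dilemma can be written out with illustrative payoffs (the specific sentence lengths below are assumptions for illustration, not figures from the article); checking each case shows that finking is the better reply to either choice:

```python
# Hypothetical payoffs in years of jail time, negated so that higher is
# better. The exact numbers are illustrative, not canonical.
PAYOFFS = {
    ("quiet", "quiet"): (-1, -1),   # both cooperate: light sentences
    ("quiet", "fink"):  (-10, 0),   # sucker's payoff vs. free ride
    ("fink", "quiet"):  (0, -10),
    ("fink", "fink"):   (-5, -5),   # mutual defection
}

def best_response(opponent_move):
    """Return the move that maximizes our payoff against a fixed opponent move."""
    return max(["quiet", "fink"],
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Finking is a best response to either choice, so a rational player defects.
print(best_response("quiet"))  # fink
print(best_response("fink"))   # fink
```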

Axelrod's aim was to find out which strategy prevails in the long run, so he organized an iterated prisoner's dilemma. The best-performing strategy turned out to be 'tit for tat': a player cooperates in the first round and from then on mirrors whatever the opponent did in the previous round. So if one player stays quiet and the opponent defects in the first round, the first player will defect in the second round.

The 'tit for tat' strategy that emerged is a lot like the 'eye for an eye' way of thinking.
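The mirroring rule is simple enough to state in a few lines. This is a hypothetical sketch of the strategy as described above, not Axelrod's original tournament entry:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's previous move."""
    if not opponent_history:
        return "cooperate"
    return opponent_history[-1]

# Round 1: no history yet, so we cooperate.
print(tit_for_tat([]))          # cooperate
# The opponent defected last round, so we defect this round.
print(tit_for_tat(["defect"]))  # defect
```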

Axelrod emphasized that while 'tit for tat' emerged victorious in the long run, it is not necessarily the best strategy. His experiment assumed perfect information, which is unrealistic: no one can perfectly observe everything their opponent does, and there is always scope for misunderstanding. In real life, misunderstandings arise all the time and can escalate into conflict.

Also, if two players both use 'tit for tat', they can get stuck in a vicious cycle of punishing each other. Suppose player 1 cooperates in the first round and the opponent defects. Player 1, feeling wronged, defects in the next round to get back at player 2; player 2, mirroring player 1's earlier cooperation, cooperates, only to be met with that defection, and the single defection echoes back and forth between them. This situation arises far more often than one might think. The prevailing logic of 'if you did me wrong, I must pay you back' results in a long conflict that is hard to break out of.
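A short simulation makes the vicious cycle visible. Here the round-1 defection stands in for an insult or a misread move (an assumption for illustration); after it, two tit-for-tat players never settle back into mutual cooperation, they just pass the defection back and forth:

```python
def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "cooperate" if not opponent_history else opponent_history[-1]

h1, h2 = [], []  # move histories for player 1 and player 2
for rnd in range(6):
    m1 = tit_for_tat(h2)
    # Player 2 defects once in round 0 (a mistake or misunderstanding),
    # then plays tit for tat like player 1.
    m2 = "defect" if rnd == 0 else tit_for_tat(h1)
    h1.append(m1)
    h2.append(m2)

# The single defection echoes: the players alternate retaliation forever.
print(list(zip(h1, h2)))
```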

If we allow for errors to occur with some frequency, more generous strategies emerge. A player may tolerate a certain amount of unprovoked bad behavior from their opponent to avoid the vicious cycle of vendetta described earlier. It is fascinating to see how the Old Testament way of thinking (i.e. tit for tat) evolves into the New Testament way of thinking once strategies become more generous and account for misunderstandings between players.
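One way to sketch such a generous variant is to forgive a defection with some probability; the `forgiveness` parameter and its default value here are illustrative assumptions, not values from the article:

```python
import random

def generous_tit_for_tat(opponent_history, forgiveness=0.3, rng=random):
    """Like tit for tat, but forgive a defection with probability
    `forgiveness`, which can break the endless echo of retaliation."""
    if not opponent_history or opponent_history[-1] == "cooperate":
        return "cooperate"
    return "cooperate" if rng.random() < forgiveness else "defect"

# With forgiveness = 0 this collapses back to plain tit for tat;
# with forgiveness = 1 the player always turns the other cheek.
print(generous_tit_for_tat(["defect"], forgiveness=0.0))  # defect
print(generous_tit_for_tat(["defect"], forgiveness=1.0))  # cooperate
```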

But another problem arises: no scenario is perfect, and peace cannot be sustained forever. If everyone plays nice, there is an opening to take advantage by defecting; and once one player defects, the opponent may feel insulted and the cycle of vendetta starts again.


Evolutionary game theory suggests that the underlying assumption of rationality among players does not hold true. In many real-life situations, players do not choose the strategy that yields the highest utility but the one they are biased toward.

The study of game theory and morality remains nascent. As more experimental research takes place in game theory, we are finding more reasons to challenge (or to defend) the assumption that players are rational, and to ask whether rational decisions are supported by players' moral choices. This sheds light on whether players follow an ethical instinct in forming their strategies or simply pursue the options that give them the highest satisfaction. The concept of 'highest satisfaction' itself cannot be described in absolute terms, and what defines satisfaction for each individual remains abstract. However, with the amalgamation of psychology, behavioral economics, game theory, and the models of evolutionary game theory, we may one day understand the interlinkages between game theory, morality, and ethics.






