
EGTtools – Toolbox for Evolutionary Game Theory¶
EGTtools provides a centralized repository with analytical and numerical methods to study/model game theoretical problems under the Evolutionary Game Theory (EGT) framework.
This library is composed of two parts:
a set of analytical methods implemented in Python 3
a set of computational methods implemented in C++ (with Python 3 bindings)
The second type is used in cases where the state space is too big to solve analytically, so that model parameters must instead be estimated through Monte Carlo simulations. The C++ implementation provides optimized computational methods that can run in parallel in a reasonable time, while the Python bindings make these methods easily accessible to a wider range of researchers.
Installation¶
From PyPi¶
EGTtools can be installed from PyPI on Linux, macOS, and Windows:
pip install egttools
To update your installed version to the latest release, add -U (or --upgrade) to the command:
pip install -U egttools
Note
Currently, only the Linux build supports OpenMP parallelization for numerical simulations. This should be fine for most applications, since numerical simulations are computationally heavy and should be run on High Performance Computing (HPC) clusters, which normally run Linux distributions. We are investigating how to provide support for OpenMP on both Windows and macOS. In the meantime, if you really want to run numerical simulations on either of these two platforms, you should follow the compilation instructions below and try to link OpenMP for your platform yourself. Please, if you manage to do so, open an issue or a pull request with your solution.
Warning
The arm64 and universal2::arm64 wheels have not been tested upstream on CI, so please report any issues or bugs you may encounter.
Warning
For Apple M1 (arm64) you should install using pip install egttools --no-deps so that pip does not install the dependencies of the package. This is necessary since there is no SciPy wheel for the arm64 architecture available on PyPI yet.
To install the package dependencies you should create a virtual environment
with miniforge. Once you have miniforge installed you can do the
following (assuming that you are in the base miniforge environment):
conda create -n egtenv python=3.9
conda activate egtenv
conda install numpy
conda install scipy
conda install matplotlib
conda install networkx
Build from source¶
To build egttools from source you need:
A recent version of Linux (only tested on Ubuntu), macOS (Mojave or above), or Windows
CMake version 3.17 or higher
C++ 17
Eigen 3.3.*
Boost 1.80.*
Python 3.7 or higher
Warning
Boost is required in order for EGTtools to use multiprecision integers and floating point numbers with higher precision. You may still be able to compile EGTtools without Boost, but we highly recommend that you do not.
Once you have installed these libraries, you can follow the steps below.
To install all required packages run:
python -m venv egttools-env
source egttools-env/bin/activate
pip install -r requirements.txt
Or with anaconda:
conda env create -f environment.yml
conda activate egttools-env
Also, to make your virtual environment visible to Jupyter:
conda install ipykernel # or pip install ipykernel
python -m ipykernel install --user --name=egttools-env
You can build EGTtools by running:
pip install build
cd <path>
python -m build
Where <path> represents the path to the EGTtools folder. If you are running this from inside the EGTtools folder, then <path> is simply ./.
Finally, you can install EGTtools in development mode; this allows the installation to pick up new modifications to the package:
python -m pip install -e <path>
If you don’t want development mode, you can skip the option `-e`.
Python distributions¶
- Anaconda: If you use the Anaconda distribution of Python, you can use the same pip command in a terminal of the appropriate Anaconda environment, activated either through the Anaconda Navigator or the conda tool.
- PyPy: Recent versions of PyPy are supported by the pybind11 project and should thus also be supported by EGTtools.
- Other: For other distributions of Python, we expect that our package is compatible with the Python versions that are out there and that pip can handle the installation. If you are using yet another Python distribution, we are definitely interested in hearing about it, so that we can add it to this list!
Troubleshooting¶
It is possible that you run into problems when trying to install or use EGTtools. This may happen because you are running on a different platform or configuration than what we have listed, or simply because we have not considered your particular scenario/environment.
If this is the case, and you do run into problems, please create a GitHub issue, or write me a quick email. We would be very happy to solve these problems, so that future users can avoid them and we can expand the use of our library.
Pip version¶
If the standard way to install EGTtools results in an error or takes a long time, try updating pip to the latest version by running:
pip install --upgrade pip
If you do not have pip installed, you can follow these instructions to install it: https://pip.pypa.io/en/stable/installing/
Tutorials¶
The egttools library is structured in a way that allows the user to apply all the analytical and numerical methods implemented in the library to any Game, as long as it follows a common interface. It also provides several plotting functions and classes to help the user visualize the results of the models. The following figure shows, at a high level, the intended structure of egttools.
In the following links you may find more information and examples about how to use the analytical and numerical methods implemented in egttools; how to implement new Games; and how to implement new strategies or behaviors for the existing games:
How to create a new game¶
Implementing a new game from scratch: the AbstractGame class¶
The Game class is core to the EGTtools library, as it defines the environment in which strategic interactions take place. All games must extend the AbstractGame abstract class, which defines nine methods:
play
calculate_payoffs
calculate_fitness
__str__
nb_strategies
type
payoffs
payoff
save_payoffs
Below you can see a description of this class in Python and the purpose of each method:
from typing import List, Union

import numpy as np


class AbstractGame:
    def play(self, group_composition: Union[List[int], np.ndarray], game_payoffs: np.ndarray) -> None:
        """
        Calculates the payoff of each strategy inside the group.

        Parameters
        ----------
        group_composition: Union[List[int], np.ndarray]
            counts of each strategy inside the group.
        game_payoffs: np.ndarray
            container for the payoffs of each strategy
        """
        pass

    def calculate_payoffs(self) -> np.ndarray:
        """
        This method should set a numpy.ndarray called self.payoffs_ with the
        expected payoff of each strategy in each possible
        state of the game.
        """
        pass

    def calculate_fitness(self, strategy_index: int, pop_size: int, population_state: np.ndarray) -> float:
        """
        This method should return the fitness of the strategy
        with index `strategy_index` for the given `population_state`.
        """
        pass

    def __str__(self) -> str:
        """
        This method should return a string representation of the game.
        """
        pass

    def nb_strategies(self) -> int:
        """
        This method should return the number of strategies which can play the game.
        """
        pass

    def type(self) -> str:
        """
        This method should return a string representing the type of game.
        """
        pass

    def payoffs(self) -> np.ndarray:
        """
        This method should return the payoff matrix of the game,
        which gives the payoff of each strategy
        in each given context.
        """
        pass

    def payoff(self, strategy: int, group_configuration: List[int]) -> float:
        """
        This method should return the payoff of a strategy
        for a given `group_configuration`, which gives
        the counts of each strategy in the group.
        This method only needs to be implemented for N-player games.
        """
        pass

    def save_payoffs(self, file_name: str) -> None:
        """
        This method should implement a mechanism to save
        the payoff matrix and parameters of the game to permanent storage.
        """
        pass
Simplifying game implementation: AbstractTwoPLayerGame and AbstractNPlayerGame classes¶
However, in most scenarios the fitness of a strategy at a given population state is its expected payoff at that state. For this reason, egttools provides two other abstract classes to simplify the implementation of new games: egttools.games.AbstractTwoPLayerGame, for two-player games; and egttools.games.AbstractNPlayerGame, for N-player games. When using these abstract classes, you only need to implement two methods: play and calculate_payoffs.
Example: The N-player Stag-Hunt Game¶
Below you can find an example of how to implement the N-player Stag Hunt game from Pacheco et al. [1]:
from typing import List, Union

import numpy as np

from egttools import calculate_nb_states, sample_simplex
from egttools.games import AbstractNPlayerGame


class NPlayerStagHunt(AbstractNPlayerGame):
    def __init__(self, group_size, enhancement_factor, cooperation_threshold, cost):
        self.group_size_ = group_size  # N
        self.enhancement_factor_ = enhancement_factor  # F
        self.cooperation_threshold_ = cooperation_threshold  # M
        self.cost_ = cost  # c
        self.strategies = ['Defect', 'Cooperate']
        self.nb_strategies_ = 2
        # Total number of group configurations, and a container for the
        # expected payoffs (filled in by calculate_payoffs below)
        self.nb_group_configurations_ = calculate_nb_states(self.group_size_, self.nb_strategies_)
        self.payoffs_ = np.zeros(shape=(self.nb_strategies_, self.nb_group_configurations_), dtype=np.float64)
        super().__init__(self.nb_strategies_, self.group_size_)

    def play(self, group_composition: Union[List[int], np.ndarray], game_payoffs: np.ndarray) -> None:
        if group_composition[0] == 0:
            # No defectors in the group: all cooperators profit
            game_payoffs[0] = 0
            game_payoffs[1] = self.cost_ * (self.enhancement_factor_ - 1)
        elif group_composition[1] == 0:
            # No cooperators in the group: nobody gets anything
            game_payoffs[0] = 0
            game_payoffs[1] = 0
        else:
            # Defectors free-ride on the public good, which is only
            # produced when the cooperation threshold M is reached
            game_payoffs[0] = ((group_composition[1] * self.enhancement_factor_)
                               / self.group_size_) if group_composition[1] >= self.cooperation_threshold_ else 0
            game_payoffs[1] = game_payoffs[0] - self.cost_  # Cooperators pay the cost

    def calculate_payoffs(self) -> np.ndarray:
        payoffs_container = np.zeros(shape=(self.nb_strategies_,), dtype=np.float64)
        for i in range(self.nb_group_configurations_):
            # Get the group composition corresponding to index i
            group_composition = sample_simplex(i, self.group_size_, self.nb_strategies_)
            self.play(group_composition, payoffs_container)
            for strategy_index, strategy_payoff in enumerate(payoffs_container):
                self.payoffs_[strategy_index, i] = strategy_payoff
            # Reinitialize payoff vector
            payoffs_container[:] = 0
        return self.payoffs_
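Once defined, the game can be instantiated and its expected payoffs pre-computed before passing it to any analytical or numerical method (the parameter values below are purely illustrative):
game = NPlayerStagHunt(group_size=5, enhancement_factor=10, cooperation_threshold=3, cost=1.0)
payoff_matrix = game.calculate_payoffs()  # shape (nb_strategies, nb_group_configurations)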
What if you already have calculated a matrix of expected payoffs?¶
In case you have calculated a matrix of expected payoffs for all strategies and do not want to spend time implementing a new game, you can make use of the container classes egttools.games.Matrix2PlayerGameHolder for 2-player games and egttools.games.MatrixNPlayerGameHolder for N-player games.
Matrix2PlayerGameHolder: This class expects the number of strategies (nb_strategies) in the game and the matrix of expected payoffs as parameters. The payoff matrix must be square and have the shape (nb_strategies, nb_strategies), so that each entry gives the expected payoff of the row strategy against the column strategy.
MatrixNPlayerGameHolder: This class expects the number of strategies (nb_strategies), the size of the group (group_size), and the matrix of expected payoffs as parameters. In this case the payoff matrix must have the shape (nb_strategies, nb_group_configurations), where nb_group_configurations is the total number of combinations of strategies in the group, which can be obtained using egttools.calculate_nb_states(group_size, nb_strategies). Thus, each entry in the matrix must give the expected payoff of the row strategy in the group configuration given by the column index. You can obtain the group configuration from an index using egttools.sample_simplex(index, group_size, nb_strategies). When the row strategy is not present in the column group configuration, the payoff in this entry must be 0.
Note
You can find an example of how to use these classes here.
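For instance, a minimal sketch of the 2-player holder (assuming the constructor signature described above, i.e., Matrix2PlayerGameHolder(nb_strategies, payoff_matrix)) could look like this:
import numpy as np
from egttools.games import Matrix2PlayerGameHolder

# Pre-computed expected payoffs: entry (i, j) is the expected payoff of the
# row strategy i against the column strategy j (here, a Hawk-Dove game)
payoff_matrix = np.array([[-0.5, 2.],
                          [0., 0.]])

game = Matrix2PlayerGameHolder(2, payoff_matrix)
# The holder can now be passed to the analytical or numerical methods,
# e.g., egttools.analytical.PairwiseComparison(100, game)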
List of implemented games¶
- NormalFormGame: implements iterated normal form games (matrix games). This class expects as parameters the number of rounds of the game, a payoff matrix, and a list of strategies that will play the game. You can find more information on how to use implemented strategies or implement new ones here.
- PGG: implements a version of a Public Goods Game. This game expects the size of the group, the cost of cooperation, a multiplication factor, and the set of strategies that will play the game as parameters. You can find more information on how to use implemented strategies or implement new ones here.
- OneShotCRD: implements the one-shot collective risk dilemma described by Santos and Pacheco [2]. This game takes as parameters an endowment, which will be equal for all group members, the cost of cooperation, the risk of collective failure, the size of the group, and the minimum number of cooperators in a group required to reach the collective target.
- CRDGame: implements the collective risk dilemma proposed by Milinski et al. [3]. This game takes as parameters an endowment, which will be equal for all group members, a threshold, i.e., the collective target contributions, the number of rounds of the game, the size of the group, the risk of collective failure, an enhancement factor, which multiplies the payoffs of all members of the group when they reach the collective target, and a List with the strategies that will play the game. You can find more information on how to use implemented strategies or implement new ones here.
- NPlayerStagHunt: implements the N-player stag hunt game proposed in Pacheco et al. [1]. This game takes as parameters the size of the group, an enhancement factor, a cooperation threshold, i.e., the minimum number of cooperators required to provide the public good, and the cost of cooperation. The game is implemented so that the only strategies playing the game are Cooperate and Defect.
Create new behaviors for existing games¶
Recommendations on implementing strategies for a new game¶
Implementing new strategies for a Normal Form Game¶
Apply analytical methods¶
The replicator dynamics¶
The replicator equation represents the dynamics of competing individuals in an infinite population (\(Z\rightarrow \infty\)). It defines the rate at which the frequency of strategies in a population will change, i.e., it defines a gradient of selection [2]. It is often found in the form [4]
\(\dot{x}_i = x_i\left[f_i(\vec{x}) - \sum_{j=1}^{N}{x_{j}f_j(\vec{x})}\right],\)
where \(x_i\) represents the frequency of strategy \(i\) in the population, and \(f_i(\vec{x})=\Pi_i(\vec{x})\) its fitness (or expected payoff), given the current population state \(\vec{x}\). The term \(\sum_{j=1}^{N}{x_{j}f_j(\vec{x})}\) represents the average fitness of the population in state \(\vec{x}\).
EGTtools implements the replicator equation for 2-player and N-player games in the functions egttools.analytical.replicator_equation and egttools.analytical.replicator_equation_n_player. Both methods require a payoff matrix as argument. For 2-player games, this payoff matrix must be square and have as many rows (columns) as strategies in the population. For N-player games, the payoff matrix reads as the payoff of the row strategy at the group configuration given by the index of the column. The group configuration can be retrieved from the index using egttools.sample_simplex(index, group_size, nb_strategies), where group_size is the size of the group (N) and nb_strategies is the number of available strategies in the population.
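For instance, you can inspect the mapping between column indices and group configurations directly (a small illustrative sketch):
import egttools

group_size = 3  # N
nb_strategies = 2
# Total number of group configurations: binom(N + n_s - 1, n_s - 1) = 4 here
nb_states = egttools.calculate_nb_states(group_size, nb_strategies)
for index in range(nb_states):
    # Counts of each strategy in the group for this column index
    print(index, egttools.sample_simplex(index, group_size, nb_strategies))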
2-player games¶
For a 2-player Hawk-Dove game, you can calculate the gradients of selection, the roots, and their stability with:
import numpy as np
import egttools as egt
from egttools.analytical.utils import (calculate_gradients, find_roots, check_replicator_stability_pairwise_games, )

# Calculate gradient
payoffs = np.array([[-0.5, 2.], [0., 0.]])
x = np.linspace(0, 1, num=101, dtype=np.float64)
gradient_function = lambda x: egt.analytical.replicator_equation(x, payoffs)
gradients = calculate_gradients(np.array((x, 1 - x)).T, gradient_function)

# Find roots and stability
roots = find_roots(gradient_function, nb_strategies=2, nb_initial_random_points=10, method="hybr")
stability = check_replicator_stability_pairwise_games(roots, payoffs)
You can then plot the gradient of selection with:
egt.plotting.plot_gradients(gradients[:, 0], xlabel="frequency of hawks", roots=roots, stability=stability)

You can find other examples here.
N-player games¶
When \(N>2\), the game is played among groups of more than 2 players. In this case, to analyse the replicator dynamics we need to use egt.analytical.replicator_equation_n_player instead.
For example, for a 3-player Hawk-Dove game we have:
payoff_matrix = np.array([
[-0.5, 2. , 1. , 3. ],
[ 0. , 0. , 1. , 2. ]
])
gradient_function = lambda x: egt.analytical.replicator_equation_n_player(x, payoff_matrix, group_size=3)
gradients = calculate_gradients(np.array((x, 1 - x)).T, gradient_function)
# Plot the gradient
egt.plotting.plot_gradients(gradients[:, 0], xlabel="frequency of hawks")

Note
For the moment egttools only supports the analytical calculation of stability of the replicator equation for 2-player games. We plan to add support for N-player games in version 0.13.0.
Stochastic dynamics in finite populations: the pairwise comparison rule¶
The replicator dynamics assumes that populations have infinite size. This is convenient from a technical point of view, as it allows using relatively simple differential equations to model complex evolutionary processes. However, on many occasions, when modelling realistic problems, we cannot neglect the stochastic effects that come along when individuals interact in finite populations [5].
We now consider a finite population of \(Z\) individuals, who interact in groups of size \(N\in[2,Z]\), in which they engage in strategic interactions (or games). Each individual can adopt one of the \(n_s\) strategies. The success (or fitness) of an individual can be computed as the expected payoff of the game in a given state \(\vec{x}\).
We adopt a stochastic birth-death process combined with the pairwise comparison rule [1, 5, 6] to describe the social learning dynamics of each of the strategies in a finite population. At each time-step, a randomly chosen individual \(j\) adopting strategy \(\vec{e_{j}}\) has the opportunity to revise her strategy by imitating (or not) the strategy of a randomly selected member of the population \(i\). The imitation will occur with a probability \(p\) which increases with the fitness difference between \(j\) and \(i\). Here we adopt the Fermi function (see the equation below), which originates from statistical physics and provides a well-defined mapping \(\mathbb{R}^+ \rightarrow [0,1]\):
\(p = \left[1 + e^{\beta(f_j-f_i)}\right]^{-1}\)
Please also note that, since the population is finite, instead of using the frequencies of each strategy in the population (\(x_i\)) we use the absolute count \(k_i\), so that \(x_i \equiv k_i/Z\).
In the equation above, \(f_j\) (\(f_i\)) is the fitness of individual \(j\) (\(i\)) and \(\beta\), also known as the inverse temperature, controls the intensity of selection and the accuracy of the imitation process. For \(\beta \rightarrow 0\), individual fitness is but a small perturbation to random drift; for \(\beta \rightarrow\infty\) imitation becomes increasingly deterministic. Also, \(k_{i}\) represents again the total number of individuals adopting strategy \(i\). In addition, we consider that, with a mutation (or exploration) probability \(\mu\), individuals adopt a randomly chosen strategy, freely exploring the strategy space. Overall, this adaptive process defines a large-scale Markov chain, in which the transition probabilities between states are defined as a function of the fitness of the strategies in the population and their frequency. The complete characterization of this process becomes unfeasible as the number of possible population configurations scales with the population size and the number of strategies following \(\binom{Z+n_s-1}{n_s-1}\) [7].
The probability that the number \(k\) of participants adopting a cooperative strategy \(C\) would increase (\(T^+(k)\)) or decrease (\(T^-(k)\)) can be specified as [5]:
\(T^{\pm}(k) = \frac{k}{Z}\frac{Z-k}{Z}\left[1 + e^{\mp\beta(f_C(k)-f_D(k))}\right]^{-1}\)
Small Mutation Limit (SML)¶
The Markov chain described above can quickly become too complex for an analytical description as the number of strategies grows, even for small population sizes. However, in the limit where mutations are rare (\(\mu \rightarrow 0\)) it is possible to approximate the complete stochastic process by a Markov chain with a number of states given by the number of strategies. In this small mutation limit (SML) [6], when a new strategy appears through mutation, one of two outcomes occurs long before the occurrence of a new mutation: either the population faces the fixation of the newly introduced strategy, or the mutant strategy goes extinct.
Hence, there will be a maximum of two strategies present simultaneously in the population. This allows us to describe the behavioural dynamics in terms of a reduced Markov Chain of size \(n_s\), whose transitions are defined by the fixation probabilities \(\rho_{ji}\) of a single mutant with strategy \(i\) in a population of individuals adopting another strategy \(j\) [5, 8, 9].
How to use this model in EGTtools¶
All of these analytical equations are implemented in the class egttools.analytical.PairwiseComparison, which contains the following methods:
calculate_fixation_probability(invading_strategy_index, resident_strategy_index, beta): Calculates the fixation probability of a single mutant of an invading strategy with index invading_strategy_index in a population where all individuals adopt the strategy with index resident_strategy_index. The parameter beta gives the intensity of selection.
calculate_transition_and_fixation_matrix_sml(beta): Calculates the transition and fixation matrices assuming the SML. beta gives the intensity of selection.
calculate_gradient_of_selection(beta, state): Calculates the gradient of selection (without considering mutation) at a given population state. The state parameter must be an array of shape (nb_strategies,), which gives the count of individuals in the population adopting each strategy. This method returns an array indicating the gradient in each direction. In this stochastic model, the gradient of selection gives the most likely path of evolution of the population.
calculate_transition_matrix(beta, mu): Calculates the transition matrix of the Markov chain that defines the dynamics of the population. beta gives the intensity of selection and mu the mutation rate.
You can see an example of the SML here, and of the calculation of the full transition matrix here.
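As a minimal sketch of the SML workflow with this class (reusing the Hawk-Dove payoff matrix from the examples below; the parameter values are illustrative):
import numpy as np
import egttools as egt

A = np.array([[-0.5, 2.], [0., 0.]])  # Hawk-Dove payoff matrix
game = egt.games.NormalFormGame(1, A)  # one-shot normal form game
evolver = egt.analytical.PairwiseComparison(100, game)  # population size Z = 100

# Fixation probability of a single Hawk (index 0) in a population of Doves (index 1)
rho = evolver.calculate_fixation_probability(0, 1, 1)  # beta = 1

# Transition and fixation matrices of the reduced Markov chain under the SML
transition_matrix, fixation_probabilities = evolver.calculate_transition_and_fixation_matrix_sml(1)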
Note
Currently, egttools only implements the Moran process with the pairwise comparison rule. However, we plan to add other evolutionary game theoretical models soon, such as the frequency-dependent Moran process or the Wright-Fisher process.
Note
We will add support for multiple populations soon.
Apply numerical methods¶
Often the complexity of the problem (either due to a high number of strategies or a big population size) makes numerical simulations a requirement. egttools implements several methods to estimate the main indicators required to characterize a population through the stochastic evolutionary dynamics described here, in the egttools.numerical.PairwiseComparisonNumerical class. This class requires as parameters a game, which must inherit from the egttools.games.AbstractGame class, a population size, and a cache size. For the latter, you should take into account the memory (RAM) available on the machine where you will run the simulations, as this parameter determines the size of the cache where the computed fitness values will be stored, so that the simulation can run faster. The class implements the following methods:
estimate_fixation_probability(index_invading_strategy, index_resident_strategy, nb_runs, nb_generations, beta): Estimates the probability that one mutant of the strategy with index index_invading_strategy will fixate in a population where all members adopt the strategy with index index_resident_strategy. nb_runs specifies the number of (parallel) runs that shall be executed; the higher this number, the better the estimation. nb_generations indicates the total number of generations a simulation will run; the simulation will be stopped at that point even if the population has not converged to a monomorphic state. beta represents the intensity of selection.
estimate_stationary_distribution(nb_runs, nb_generations, transitory, beta, mu): Estimates the stationary distribution. This method will run nb_runs (parallel) simulations for nb_generations generations. After an initial transitory number of generations, this method will count the number of times each population state is visited. The final estimation will be an average over all the independent simulations. beta represents the intensity of selection and mu the mutation rate.
estimate_stationary_distribution_sparse(nb_runs, nb_generations, transitory, beta, mu): Same as above, but returns a sparse matrix, which is useful for very large state spaces.
estimate_strategy_distribution(nb_runs, nb_generations, transitory, beta, mu): When the state space is too large to be represented in a 64-bit integer, we can no longer estimate the stationary distribution with PairwiseComparisonNumerical. Instead, we can estimate the strategy distribution directly using this method.
evolve(nb_generations, beta, mu, init_state): This method will run a single simulation for nb_generations generations and return the final state of the population. init_state is a numpy array containing the initial counts of each strategy in the population.
run(nb_generations, beta, mu, init_state): Same as evolve, but instead of just the last state, it will return all the states the population went through.
run(nb_generations, transient, beta, mu, init_state): Same as above, but will not return the first transient generations. This is useful, as long simulations can occupy a lot of memory.
run(nb_generations, transient, beta, init_state): This version of run assumes that the mutation rate is 0.
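As a brief sketch of the basic workflow (the game, cache size, and parameter values are illustrative):
import numpy as np
import egttools as egt

A = np.array([[-0.5, 2.], [0., 0.]])
game = egt.games.NormalFormGame(1, A)
Z = 100
cache_size = 1000000
evolver = egt.numerical.PairwiseComparisonNumerical(Z, game, cache_size)

# Estimate the probability that a single mutant of strategy 0 fixates in a
# population where all other members adopt strategy 1 (beta = 1)
fixation = evolver.estimate_fixation_probability(0, 1, 100, int(1e6), 1)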
Estimate fixation probabilities¶
Estimate stationary distributions¶
Warning
This method should not be used for state spaces larger than the largest number that can be stored in a 64-bit integer (int64_t)!
Estimate strategy distributions¶
Run a single simulation¶
Evolve a population for a given number of rounds¶
Note
Although at the moment egttools.numerical only contains methods to study evolutionary dynamics in well-mixed populations, we plan to add support for simulations on complex networks in version 0.14.0, and for multi-level selection in version 0.15.0.
Visualizing results¶
Populations with 2 strategies¶
The gradient of selection and stability in infinite populations¶
The gradient of selection in finite populations¶
Plotting the stationary distribution¶
Populations with 3 strategies¶
The Simplex2D class¶
The gradient of selection and stability in infinite populations¶
The gradient of selection and stationary distribution in finite populations¶
Populations with more than 3 strategies¶
Note
We will add support for plotting 3D simplexes soon (a pyramid).
Utility functions¶
References¶
- 1
Jorge M Pacheco, Francisco C Santos, Max O Souza, and Brian Skyrms. Evolutionary dynamics of collective action in N-person stag hunt dilemmas. Proceedings of the Royal Society B: Biological Sciences, 276(1655):315–321, 2009. doi:10.1098/rspb.2008.1126.
- 2
Francisco C. Santos and Jorge M. Pacheco. Risk of collective failure provides an escape from the tragedy of the commons. Proceedings of the National Academy of Sciences, 108(26):10421–10425, 2011. doi:10.1073/pnas.1015648108.
- 3
Manfred Milinski, Ralf D Sommerfeld, Hans-Jürgen Krambeck, Floyd A Reed, and Jochem Marotzke. The collective-risk social dilemma and the prevention of simulated dangerous climate change. Proceedings of the National Academy of Sciences, 105(7):2291–2294, 2008.
- 4
Karl Sigmund. The Calculus of Selfishness. Princeton University Press, 2010. ISBN 978-0-691-14275-3. doi:10.1038/4641280a.
- 5
Arne Traulsen, Martin A Nowak, and Jorge M Pacheco. Stochastic dynamics of invasion and fixation. Physical Review E, 74(1):011909, 2006.
- 6
Drew Fudenberg and Lorens A. Imhof. Imitation processes with small mutations. Journal of Economic Theory, 131(1):251–262, 2006. doi:10.1016/j.jet.2005.04.006.
- 7
Vítor V. Vasconcelos, Fernando P. Santos, Francisco C. Santos, and Jorge M. Pacheco. Stochastic Dynamics through Hierarchically Embedded Markov Chains. Physical Review Letters, 118(5):058301, 2017. doi:10.1103/PhysRevLett.118.058301.
- 8
Warren J Ewens. Mathematical population genetics. Springer Science+Business Media, LLC, second edition, 2012. ISBN 978-1-4419-1898-7. doi:10.1007/978-0-387-21822-9.
- 9
S Karlin and H M A Taylor. A first course in stochastic processes. Academic Press, New York, 2nd edition, 1975.
Examples¶
The following examples give an idea of how to use EGTtools to model social dynamics:
Evolutionary dynamics of Hawk-Dove¶
[1]:
import numpy as np
# Plotting libraries
import matplotlib.pylab as plt
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables high resolution PNGs.
%config InlineBackend.figure_formats = {'png', 'svg'}
Evolutionary Dynamics of the Hawk-Dove Game in Infinite populations¶
Replicator equation¶
The replicator equation represents the dynamics of competing individuals in a population. It is normally found in the following form:
\(\dot{x}_i = x_i[f(x_i)-\sum_{j=1}^{n}x_jf(x_j)]\),
where \(x_i\) represents the frequency of strategy \(i\) in the population, and \(f(x_i)\) is the fitness of strategy \(i\). This differential equation gives the gradient of selection, i.e., the strength with which the frequency of a certain strategy will increase or decrease. It may also be expressed in a more convenient matrix form:
\(G(x_i) = \dot{x}_i = x_i[(Ax)_i - x^TAx],\)
where the matrix \(A\) is a payoff matrix with element \(A_{ij}\) representing the payoff obtained by strategy \(i\) against strategy \(j\).
[2]:
from egttools.analytical import replicator_equation
from egttools.analytical.utils import (calculate_gradients, find_roots, check_replicator_stability_pairwise_games,)
from egttools.plotting import plot_gradients
[3]:
# Payoff matrix
V = 2; D = 3; T = 1
A = np.array([
[ (V-D)/2, V],
[ 0 , (V/2) - T],
])
Now we can calculate the gradient of selection of the replicator equation and plot it
[4]:
nb_points = 101
strategy_i = np.linspace(0, 1, num=nb_points, dtype=np.float64)
[5]:
# Calculate gradient
gradient_function = lambda x: replicator_equation(x, A)
gradients = calculate_gradients(np.array((strategy_i, 1 - strategy_i)).T, gradient_function)
When we are analysing a pairwise game with the replicator equation, we can also calculate the roots of the system and their stability:
[6]:
roots = find_roots(gradient_function, 2, nb_initial_random_points=10, method="hybr")
stability = check_replicator_stability_pairwise_games(roots, A, atol_neg=1e-4, atol_pos=1e-4, atol_zero=1e-4)
[7]:
plot_gradients(gradients[:, 0], figsize=(5,4), fig_title="Hawk-Dove game replicator dynamics",
xlabel="frequency of hawks", roots=roots, stability=stability)
plt.show()
Evolutionary Dynamics of the Hawk-Dove Game in Finite populations¶
Now we are going to study the effect of having finite populations. In general, finite populations introduce stochastic effects in the dynamics, also known as random drift \(\sim 1/Z\), where \(Z\) is the size of the population. We can represent these dynamics by adapting the replicator equation, which considers that individuals are sampled from an infinite population, so that selecting a member of strategy \(j\) does not reduce its frequency in the population. When the population is finite, we may no longer assume sampling with replacement, i.e., when an individual of strategy \(j\) is sampled, the fraction of members of that strategy is reduced; instead we must sample without replacement.
Here the fitness of a strategy \(i\) against strategy \(j\) directly depends on the size of the population:
\(f_i(x_i, Z) = \frac{x_i - 1}{Z-1} * A_{ii} + \frac{Z - x_i}{Z-1} * A_{ij}\)
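This formula can be transcribed directly into Python (a sketch; A is the 2x2 payoff matrix and k_i the number of individuals adopting strategy i):
def fitness(i, j, k_i, Z, A):
    """Expected payoff of strategy i against strategy j in a finite
    population of size Z with k_i individuals adopting strategy i."""
    return ((k_i - 1) / (Z - 1)) * A[i, i] + ((Z - k_i) / (Z - 1)) * A[i, j]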
For the selection dynamics, we use a Moran process (or birth-death process) with pairwise comparison: at each step, 2 individuals, \(a\) and \(b\), are randomly sampled (without replacement) from the population, and their payoffs are compared. The Fermi equation gives the probability that individual \(a\) (selected for death) will copy the strategy of individual \(b\) (selected for birth):
\(p=[1 + e^{\beta(f_a-f_b)}]^{-1}\)
\(a\) will imitate \(b\) with probability \(p\); in any other case, the population state will not change. \(\beta\) indicates the selection strength, and in the limit \(\beta \xrightarrow{} 0\) all strategies are imitated with equal probability.
[8]:
from egttools.analytical import StochDynamics
[9]:
# Parameters and evolver
nb_strategies = 2; Z = 100; N = 2;
beta = 1
pop_states = np.arange(0, Z + 1, 1)
strategy_i = np.linspace(0, 1, num=Z + 1, dtype=np.float64)
evolver = StochDynamics(nb_strategies, A, Z)
<ipython-input-9-0ef21391ac1a>:6: DeprecationWarning: This class will soon be deprecated in favour of egttools.analytical.PairwiseComparison, which is implemented in c++ and runs faster, as well as, avoids precision issues that appeared in some limit cases.
evolver = StochDynamics(nb_strategies, A, Z)
[10]:
gradients = np.array([evolver.gradient_selection(x, 0, 1, beta)
for x in pop_states])
[11]:
plot_gradients(gradients, figsize=(4,4), fig_title="Hawk-Dove game stochastic dynamics",
marker_facecolor='white',
xlabel="frequency of hawks (k/Z)", marker="o", marker_size=20, marker_plot_freq=2)
plt.show()
[12]:
evolver.mu = 0
stationary_SML = evolver.calculate_stationary_distribution(beta)
print("time spent as Hawk: {} & time spent as Dove: {}".format(*stationary_SML))
time spent as Hawk: 1.0 & time spent as Dove: 0.0
/Users/eliasfernandez/PycharmProjects/EGTtools/src/egttools/analytical/sed_analytical.py:687: RuntimeWarning: Some of the entries in the transition matrix are close to 1 (with a tolerance of 1e-11). This could result in more than one eigenvalue of magnitude 1 (the Markov Chain is degenerate), so please be careful when analysing the results.
warn(
If we introduce mutations, i.e., there is a probability that individuals make an error and adopt a different strategy instead of imitating the best, the dynamics may change. In this case, at each time step, players are selected for death/birth and their fitness is compared.
However, now, with probability \(\mu\) agent \(a\) will adopt a random strategy from the strategy space, and with probability \(1-\mu\) it will imitate \(b\) with probability \(p\). Therefore the probability of imitating \(b\) is \(p_{eff} = (1-\mu)p\). In the limit \(\mu \xrightarrow{} 1\), all strategies are adopted with equal probability. When \(\mu \xrightarrow{} 0\) we recover the previous case, also known as the small mutation limit (SML).
When mutation is small, we may assume that only the states in which the whole population adopts a single strategy, also known as monomorphic states, are absorbing and stable. This is because, since there are no mutations, once the population reaches a monomorphic state it will never leave it. Such a simplification allows us to reduce the number of states of the system, taking the Hawk-Dove game as an example, from Z + 1 to only 2.
Moreover, mixed equilibria are no longer stable in the SML. This occurs because random drift, even if it takes an infinite amount of time (please note that we are not looking at fixation times), will drive the population to one of the monomorphic states. For this reason, the SML assumption is only reasonable when we know that there are no mixed stable attractors in the studied system, which is not the case in the Hawk-Dove game. This explains why the results of the stationary distribution differ from the previously calculated Nash equilibria.
Now we are going to calculate the stationary distribution again, but taking mutations into account.
[13]:
evolver.mu = 1e-3
stationary_with_mu = evolver.calculate_stationary_distribution(beta)
[14]:
fig, ax = plt.subplots(figsize=(5, 4))
fig.patch.set_facecolor('white')
lines = ax.plot(np.arange(0, Z+1)/Z, stationary_with_mu[::-1])
plt.setp(lines, linewidth=2.0)
ax.set_ylabel('stationary distribution',size=16)
ax.set_xlabel('$k/Z$',size=16)
ax.set_xlim(0, 1)
plt.show()
Numerical simulations¶
[1]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
[2]:
import egttools as egt
[3]:
egt.Random.init()
seed = egt.Random._seed
[4]:
# Payoff matrix
V = 2; D = 3; T = 1
A = np.array([
[ (V-D)/2, V],
[ 0 , (V/2) - T],
])
[5]:
A
[5]:
array([[-0.5, 2. ],
[ 0. , 0. ]])
Estimate numerically the stationary distribution¶
You can use egttools to estimate the stationary distribution numerically. All you need to specify is the number of independent runs (simulations) that will be executed and afterwards averaged to obtain the final estimation; the length of each run (the number of generations); the transitory period (a number of generations that will not be taken into account when computing the stationary distribution - this transitory period will depend on the problem being studied); the intensity of selection (\(\beta\)); and the probability of a mutation occurring (\(\mu\)).
[6]:
game = egt.games.NormalFormGame(1, A)
[7]:
Z = 100
x = np.arange(0, Z+1)/Z
[8]:
evolver = egt.numerical.PairwiseComparisonNumerical(Z, game, 1000000)
[9]:
Z = 100
x = np.arange(0, Z+1)/Z
evolver.pop_size = Z
[10]:
dist = evolver.estimate_stationary_distribution(10, int(1e6), int(1e3), 1, 1e-3)
[11]:
# We need to reverse, since in this case we are starting from the case
# where the number of Hawks is 100%, because of how we map states
fig, ax = plt.subplots(figsize=(5, 4))
fig.patch.set_facecolor('white')
lines = ax.plot(x, list(reversed(dist)))
plt.setp(lines, linewidth=2.0)
ax.set_ylabel('stationary distribution',size=16)
ax.set_xlabel('$k/Z$',size=16)
ax.set_xlim(0, 1)
plt.show()

We can also plot a single run¶
Note: when looking at single runs, beware that at the moment the outcome is returned in a dense array (this might change soon). This means that if you want to look at a large number of generations, you will need a lot of memory. In the future, we will add the option to return only the last \(t\) generations.
[12]:
output = evolver.run(int(1e6), 1, 1e-3, [0, Z])
[13]:
fig, ax = plt.subplots(figsize=(5, 4))
ax.plot(output[:, 0]/Z)
ax.set_ylabel('k/Z')
ax.set_xlabel('generation')
ax.set_xscale('log')
plt.show()

And we can also visualize what happens if we start several runs from random points in the simplex¶
In this case, since we have a 1-dimensional simplex (only 2 strategies), you simply need to generate random integers. When you study problems with more dimensions, you will need to make use of egttools.sample_simplex and egttools.calculate_nb_states to sample the simplex uniformly. All you need to do is draw random integers uniformly in the range [0, total_nb_states) and then use egttools.sample_simplex to obtain the state in discrete barycentric coordinates, as sketched below.
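A minimal sketch of that sampling procedure for a larger simplex (the values of Z and nb_strategies are illustrative):
import numpy as np
import egttools

Z = 100
nb_strategies = 3
total_nb_states = egttools.calculate_nb_states(Z, nb_strategies)

# Draw 10 random state indices and convert each to strategy counts
random_indices = np.random.randint(0, total_nb_states, size=10)
init_states = [egttools.sample_simplex(int(i), Z, nb_strategies) for i in random_indices]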
[14]:
init_states = np.random.randint(0, Z+1, size=10, dtype=np.uint64)
[15]:
output = []
for i in range(10):
output.append(evolver.run(int(1e6), 1, 1e-3,
[init_states[i], Z - init_states[i]]))
[16]:
# Plot each run's time series on the same axes
fig, ax = plt.subplots(figsize=(5, 4))
for run in output:
ax.plot(run[:, 0]/Z, color='gray', linewidth=.1, alpha=0.6)
ax.set_ylabel('k/Z')
ax.set_xlabel('generation')
ax.set_xscale('log')

And we can also compare how far our approximations are from the analytical results¶
As you will be able to see in the results below, the approximation (for this quite simple problem) is good enough for most values of \(\beta\). However, the biggest errors occur when \(\beta\) is low, so beware that you might need to increase the number of generations or the number of independent runs (or both) in this case. In the future, we plan to have an adaptive method that stops the simulation when the approximation is good enough (e.g., when the KL-divergence between estimations of the stationary distribution changes less than a certain tolerance after a certain number of generations).
[17]:
# We do this for different betas
betas = np.logspace(-4, 1, 50)
[18]:
stationary_points = []
for i in range(len(betas)):
stationary_points.append(evolver.estimate_stationary_distribution(30, int(1e6), int(1e3),
betas[i], 1e-3))
stationary_points = np.asarray(stationary_points)
[19]:
# Now we estimate the probability of Cooperation for each possible state
state_frequencies = np.arange(0, Z+1) / Z
coop_level = np.dot(state_frequencies, stationary_points.T)
[20]:
# Finally we do the same, but for the analytical results
from egttools.analytical import StochDynamics
[21]:
analytical_evolver = egt.analytical.StochDynamics(2, A, Z, mu=1e-3)
[22]:
stationary_points_analytical = []
for i in range(len(betas)):
stationary_points_analytical.append(analytical_evolver.calculate_stationary_distribution(betas[i]))
stationary_points_analytical = np.asarray(stationary_points_analytical)
[23]:
# Now we estimate the probability of Cooperation for each possible state
coop_level_analytical = np.dot(1 - state_frequencies, stationary_points_analytical.T)
[24]:
from sklearn.metrics import mean_squared_error
[25]:
mse = mean_squared_error(1 - coop_level_analytical, coop_level)
[26]:
import seaborn as sns
[27]:
# Finally, we plot and compare visually (and check how much error we get)
sns.set_context("notebook", font_scale=1.25, rc={"lines.linewidth": 3})
fig, ax = plt.subplots(figsize=(7, 5))
ax.scatter(betas, 1 - coop_level_analytical, marker='x', label="analytical")
ax.scatter(betas, coop_level, marker='o', label="numerical")
ax.text(0.01, 0.5, 'MSE = {0:.3e}'.format(mse), style='italic',
bbox={'facecolor': 'red', 'alpha': 0.5, 'pad': 10})
ax.legend(bbox_to_anchor=(0.5, 0., 0.5, 0.5))
ax.set_xlabel(r'$\beta$', fontsize=15)
ax.set_ylabel('Frequency of Doves', fontsize=15)
ax.set_xscale('log')
sns.despine()
plt.show()

Fixation probabilities and Invasion diagram¶
In this notebook we analyse the stochastic dynamics of (pairwise) social imitation under the small mutation limit (SML).
[1]:
import os
[2]:
import numpy as np
import matplotlib.pyplot as plt
import egttools as egt
%matplotlib inline
[3]:
egt.Random.init()
seed = egt.Random._seed
Payoff matrix for a Normal Form Game¶
Here we will analyse a Normal Form game, so we need to first define a payoff matrix
[4]:
T=4; R=2; P=1; S=0
A = np.array([
[P, T],
[S, R]
])
Select which strategies we want to analyse¶
We can add all the strategies to a list and pass it to the Game object
[5]:
strategies = [egt.behaviors.NormalForm.TwoActions.Cooperator(),
egt.behaviors.NormalForm.TwoActions.Defector(),
egt.behaviors.NormalForm.TwoActions.TFT(),
egt.behaviors.NormalForm.TwoActions.Pavlov(),
egt.behaviors.NormalForm.TwoActions.Random(),
egt.behaviors.NormalForm.TwoActions.GRIM()]
[6]:
strategy_labels = [strategy.type().replace("NFGStrategies::", '') for strategy in strategies]
[7]:
strategy_labels
[7]:
['AllC', 'AllD', 'TFT', 'Pavlov', 'Random', 'GRIM']
Instantiate the Normal Form Game¶
Now we can instantiate the Game and pass both the strategies and the payoff matrix
[8]:
game = egt.games.NormalFormGame(100, A, strategies)
Now we instantiate the PairwiseComparison class to perform analytical calculations¶
We pass the population size and the game object, which holds the expected payoffs calculated by the NormalFormGame class
[9]:
Z= 100; beta=1
evolver = egt.analytical.PairwiseComparison(Z, game)
[10]:
transition_matrix,fixation_probabilities = evolver.calculate_transition_and_fixation_matrix_sml(beta)
stationary_distribution = egt.utils.calculate_stationary_distribution(transition_matrix.transpose())
Plot invasion diagram¶
[11]:
fig, ax = plt.subplots(figsize=(5, 5), dpi=150)
G = egt.plotting.draw_invasion_diagram(strategy_labels,
1/Z, fixation_probabilities, stationary_distribution,
node_size=600,
font_size_node_labels=8,
font_size_edge_labels=8,
font_size_sd_labels=8,
edge_width=1,
min_strategy_frequency=0.00001,
ax=ax)
plt.axis('off')
plt.show() # display

Plot the evolutionary dynamics in a 2-Simplex¶
[1]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
[2]:
import egttools as egt
[3]:
from egttools.plotting.helpers import (xy_to_barycentric_coordinates,
barycentric_to_xy_coordinates,
find_roots_in_discrete_barycentric_coordinates,
calculate_stability,
)
from egttools.analytical.utils import (find_roots, check_replicator_stability_pairwise_games, )
from egttools.helpers.vectorized import vectorized_replicator_equation, vectorized_barycentric_to_xy_coordinates
Evolutionary dynamics in infinite populations¶
In infinite populations the dynamics are given by the replicator equation. In the following, we calculate the gradients for all the points in a grid and plot them in a 2-simplex using the Simplex2D class.
We will also plot the stationary points in the simplex, using black circles to represent stable equilibria and white circles to represent unstable ones. The arrows indicate the direction of the selective pressure.
Calculate gradients¶
[4]:
payoffs = np.array([[1, 0, 0],
[0, 2, 0],
[0, 0, 3]])
[5]:
simplex = egt.plotting.Simplex2D()
[6]:
v = np.asarray(xy_to_barycentric_coordinates(simplex.X, simplex.Y, simplex.corners))
[7]:
results = vectorized_replicator_equation(v, payoffs)
xy_results = vectorized_barycentric_to_xy_coordinates(results, simplex.corners)
Ux = xy_results[:, :, 0].astype(np.float64)
Uy = xy_results[:, :, 1].astype(np.float64)
[8]:
calculate_gradients = lambda u: egt.analytical.replicator_equation(u, payoffs)
roots = find_roots(gradient_function=calculate_gradients,
nb_strategies=payoffs.shape[0],
nb_initial_random_points=100)
roots_xy = [barycentric_to_xy_coordinates(root, corners=simplex.corners) for root in roots]
stability = check_replicator_stability_pairwise_games(roots, payoffs)
[9]:
type_labels = ['A', 'B', 'C']
[10]:
fig, ax = plt.subplots(figsize=(10,8))
plot = (simplex.add_axis(ax=ax)
.apply_simplex_boundaries_to_gradients(Ux, Uy)
.draw_triangle()
.draw_stationary_points(roots_xy, stability)
.add_vertex_labels(type_labels)
.draw_trajectory_from_roots(lambda u, t: egt.analytical.replicator_equation(u, payoffs),
roots,
stability,
trajectory_length=15,
linewidth=1,
step=0.01,
color='k', draw_arrow=True, arrowdirection='right', arrowsize=30, zorder=4, arrowstyle='fancy')
.draw_scatter_shadow(lambda u, t: egt.analytical.replicator_equation(u, payoffs), 300, color='gray', marker='.', s=0.1)
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05,1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
plt.show()

[11]:
fig, ax = plt.subplots(figsize=(10,8))
plot = (simplex.add_axis(ax=ax)
.apply_simplex_boundaries_to_gradients(Ux, Uy)
.draw_triangle()
.draw_gradients(zorder=0)
.add_colorbar()
.draw_stationary_points(roots_xy, stability)
.add_vertex_labels(type_labels)
.draw_trajectory_from_roots(lambda u, t: egt.analytical.replicator_equation(u, payoffs),
roots,
stability,
trajectory_length=15,
linewidth=1,
step=0.01,
color='k', draw_arrow=True, arrowdirection='right', arrowsize=30, zorder=4, arrowstyle='fancy')
.draw_scatter_shadow(lambda u, t: egt.analytical.replicator_equation(u, payoffs), 300, color='gray', marker='.', s=0.1, zorder=0)
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05, 1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
plt.show()

Finite populations¶
In finite populations we will model the dynamics using a Moran process. The calculation of the gradients is implemented in EGTtools in the StochDynamics class. In this case, since we want to calculate the gradient at any point in the simplex, we will use the full_gradient_selection method.
Calculate gradients¶
[12]:
Z = 100
beta = 1
mu = 1e-3
[13]:
simplex = egt.plotting.Simplex2D(discrete=True, size=Z, nb_points=Z+1)
[14]:
v = np.asarray(xy_to_barycentric_coordinates(simplex.X, simplex.Y, simplex.corners))
[15]:
v_int = np.floor(v * Z).astype(np.int64)
[16]:
evolver = egt.analytical.StochDynamics(3, payoffs, Z)
<ipython-input-16-ffdd52efb7d6>:1: DeprecationWarning: This class will soon be deprecated in favour of egttools.analytical.PairwiseComparison, which is implemented in c++ and runs faster, as well as, avoids precision issues that appeared in some limit cases.
evolver = egt.analytical.StochDynamics(3, payoffs, Z)
[17]:
result = np.asarray([[evolver.full_gradient_selection(v_int[:, i, j], beta) for j in range(v_int.shape[2])] for i in range(v_int.shape[1])]).swapaxes(0, 1).swapaxes(0, 2)
[18]:
xy_results = vectorized_barycentric_to_xy_coordinates(result, simplex.corners)
Ux = xy_results[:, :, 0].astype(np.float64)
Uy = xy_results[:, :, 1].astype(np.float64)
[19]:
calculate_gradients = lambda u: Z*evolver.full_gradient_selection(u, beta)
roots = find_roots_in_discrete_barycentric_coordinates(calculate_gradients, Z, nb_interior_points=5151, atol=1e-1)
roots_xy = [barycentric_to_xy_coordinates(x, simplex.corners) for x in roots]
stability = calculate_stability(roots, calculate_gradients)
Calculate stationary distribution¶
We can also plot the stationary distribution inside the simplex. It will give us an idea of where our population is going to spend most of the time.
[20]:
evolver.mu = 1e-3
sd = evolver.calculate_stationary_distribution(beta)
[21]:
fig, ax = plt.subplots(figsize=(15,10))
plot = (simplex.add_axis(ax=ax)
.apply_simplex_boundaries_to_gradients(Ux, Uy)
.draw_gradients(zorder=5)
.add_colorbar()
.draw_stationary_points(roots_xy, stability, zorder=11)
.add_vertex_labels(type_labels)
.draw_stationary_distribution(sd, vmax=0.0001, alpha=0.5, edgecolors='gray', cmap='binary', shading='gouraud', zorder=0)
.draw_trajectory_from_roots(lambda u, t: Z*evolver.full_gradient_selection_without_mutation(u, beta),
roots,
stability,
trajectory_length=30,
linewidth=1,
step=0.001,
color='k', draw_arrow=True, arrowdirection='right', arrowsize=30, zorder=10, arrowstyle='fancy')
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05,1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
plt.show()

[22]:
fig, ax = plt.subplots(figsize=(10,8))
plot = (simplex.add_axis(ax=ax)
.apply_simplex_boundaries_to_gradients(Ux, Uy)
.draw_stationary_points(roots_xy, stability)
.add_vertex_labels(type_labels)
.draw_stationary_distribution(sd, vmax=0.0001, alpha=0.5, edgecolors='gray', cmap='binary', shading='gouraud')
.draw_trajectory_from_roots(lambda u, t: Z*evolver.full_gradient_selection_without_mutation(u, beta),
roots,
stability,
trajectory_length=30,
linewidth=1,
step=0.001,
color='k', draw_arrow=True, arrowdirection='right', arrowsize=30, zorder=4, arrowstyle='fancy')
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05,1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
plt.show()

Simplified 2-Simplex plot¶
In the Plot the evolutionary dynamics in a 2-Simplex example, we show how to use the Simplex2D class to visualize dynamics on a 2-simplex using an arbitrary gradient function (although the examples were shown only for the replicator equation and the Moran process defined in a finite population of strategies). However, when we want to use the replicator_equation and StochDynamics provided in egttools, we can simplify the plotting process by using two utility functions: egttools.plotting.plot_replicator_dynamics_in_simplex and egttools.plotting.plot_pairwise_comparison_rule_dynamics_in_simplex.
[1]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
[2]:
from egttools.plotting.simplified import plot_replicator_dynamics_in_simplex, plot_pairwise_comparison_rule_dynamics_in_simplex_without_roots
from egttools.utils import calculate_stationary_distribution
Evolutionary dynamics in infinite populations¶
In infinite populations the dynamics are given by the replicator equation. In the following, we calculate the gradients for all the points in a grid and plot them in a 2-simplex using the Simplex2D class.
We will also plot the stationary points in the simplex, using black circles to represent stable equilibria and white circles to represent unstable ones. The arrows indicate the direction of the selective pressure.
Define payoff matrix¶
[3]:
payoffs = np.array([[1, 0, 0],
[0, 2, 0],
[0, 0, 3]])
[4]:
type_labels = ['A', 'B', 'C']
Plot the trajectories from the stationary points¶
[5]:
fig, ax = plt.subplots(figsize=(10,8))
simplex, gradient_function, roots, roots_xy, stability = plot_replicator_dynamics_in_simplex(payoffs,
nb_of_initial_points_for_root_search=100,
ax=ax)
plot = (simplex.draw_triangle()
.add_vertex_labels(type_labels)
.draw_stationary_points(roots_xy, stability)
.draw_trajectory_from_roots(gradient_function,
roots,
stability,
trajectory_length=15,
linewidth=1,
step=0.01,
color='k', draw_arrow=True, arrowdirection='right', arrowsize=30, zorder=4, arrowstyle='fancy')
.draw_scatter_shadow(gradient_function, 300, color='gray', marker='.', s=0.1)
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05,1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
plt.show()

Plot the gradient field using a streamplot¶
[6]:
fig, ax = plt.subplots(figsize=(10,8))
plot = (simplex.add_axis(ax=ax)
.draw_triangle()
.draw_gradients(zorder=0)
.add_colorbar()
.add_vertex_labels(type_labels)
.draw_stationary_points(roots_xy, stability)
.draw_trajectory_from_roots(gradient_function,
roots,
stability,
trajectory_length=15,
linewidth=1,
step=0.01,
color='k', draw_arrow=True, arrowdirection='right', arrowsize=30, zorder=4, arrowstyle='fancy')
.draw_scatter_shadow(gradient_function, 300, color='gray', marker='.', s=0.1, zorder=0)
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05,1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
plt.show()

Stochastic Dynamics in finite populations - Moran process with pairwise comparison¶
In finite populations we will model the dynamics using a Moran process with the pairwise comparison rule. In this example we use the simplified function plot_pairwise_comparison_rule_dynamics_in_simplex_without_roots, which sets up the simplex and the gradient function for us.
Define the population size and the intensity of selection¶
[7]:
Z = 100
beta = 1
mu = 1/Z
Plot the gradient field and the stationary distribution¶
We can also plot the stationary distribution inside the simplex. It will give us an idea of where our population is going to spend most of the time.
[8]:
fig, ax = plt.subplots(figsize=(15,10))
simplex, gradient_function, game, evolver = plot_pairwise_comparison_rule_dynamics_in_simplex_without_roots(payoff_matrix=payoffs, population_size=Z, beta=beta, ax=ax)
transitions = evolver.calculate_transition_matrix(beta=beta, mu=mu)
sd = calculate_stationary_distribution(transitions.transpose())
plot = (simplex.add_axis(ax=ax)
# .draw_triangle()
.draw_gradients(zorder=5)
.add_colorbar()
.add_vertex_labels(type_labels)
.draw_stationary_distribution(sd, alpha=1, edgecolors='gray', cmap='binary', shading='gouraud', zorder=0)
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05,1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
plt.show()

Examples of use of EGTtools¶
This notebook is aimed at displaying some of the main examples of use of Evolutionary Game Theoretical models and methods implemented in EGTtools.
The replicator dynamics¶
The replicator equation is one of the fundamental descriptors of evolutionary dynamics. It consists of the following set of coupled ordinary differential equations:
\(\dot{x}_i = x_i\left[f_i(\vec{x}) - \sum_{j=1}^{N}{x_{j}f_j(\vec{x})}\right]\)
This equation can also be expressed in the following vector form for symmetrical 2-player games:
\(\dot{x}_i = x_i\left[(Ax)_i - x^{T}Ax\right]\)
This last form is implemented in EGTtools in egttools.analytical.replicator_equation. This function takes as input a vector \(\mathbf{x}\) containing the frequencies of each of the strategies (or types) in the population, and a matrix \(A\) which contains the payoff of each strategy when interacting against any other strategy in the population. It will output the gradient of selection, i.e., the change in the population at the next time-step.
You can feed this function directly to SciPy's odeint function if you would like to integrate it and calculate the state of the population at a given point in time.
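For example (a sketch; the payoff matrix, initial condition, and time grid are illustrative):
import numpy as np
from scipy.integrate import odeint
from egttools.analytical import replicator_equation

A = np.array([[-0.5, 2.], [0., 0.]])  # Hawk-Dove payoffs
x0 = np.array([0.1, 0.9])  # initial frequencies of each strategy
ts = np.linspace(0, 50, 1000)  # time grid for the integration

# odeint expects f(x, t); the replicator equation is autonomous, so t is ignored
trajectory = odeint(lambda x, t: replicator_equation(x, A), x0, ts)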
You can also plot the gradient of selection to visually identify the roots of the dynamical system:
[1]:
import numpy as np
import egttools as egt
import matplotlib.pylab as plt
%matplotlib inline
[2]:
# Payoff matrix
V = 2; D = 3; T = 1
A = np.array([
[ (V-D)/2, V],
[ 0 , (V/2) - T],
])
We can calculate the gradients in the following way:
[3]:
nb_points = 101
strategy_i = np.linspace(0, 1, num=nb_points, dtype=np.float64)
[4]:
# Calculate gradient
gradient_function = lambda x: egt.analytical.replicator_equation(x, A)
gradients = egt.analytical.utils.calculate_gradients(np.array((strategy_i, 1 - strategy_i)).T,
gradient_function)
When we are analysing a pairwise game with the replicator equation, we can also calculate the roots of the system and their stability:
[7]:
roots = egt.analytical.utils.find_roots(gradient_function, 2,
nb_initial_random_points=100, atol=1e-7, tol_close_points=1e-4,
method="hybr")
stability = egt.analytical.utils.check_replicator_stability_pairwise_games(roots, A, atol_neg=1e-4, atol_pos=1e-4, atol_zero=1e-4)
[10]:
egt.plotting.indicators.plot_gradients(gradients[:, 0],
xlabel="frequency of hawks",
roots=roots, stability=stability)
[10]:
<AxesSubplot:xlabel='frequency of hawks', ylabel='gradient of selection (G)'>

We can repeat the same calculation using the numerical (C++) implementation of the replicator equation available in egttools.numerical:
[11]:
# Calculate gradient
gradient_function = lambda x: egt.numerical.numerical.replicator_equation(x, A)
gradients = egt.analytical.utils.calculate_gradients(np.array((strategy_i, 1 - strategy_i)).T,
gradient_function)
[14]:
roots = egt.analytical.utils.find_roots(gradient_function, 2,
nb_initial_random_points=100, atol=1e-7, tol_close_points=1e-4,
method="hybr")
stability = egt.analytical.utils.check_replicator_stability_pairwise_games(roots, A, atol_neg=1e-4, atol_pos=1e-4, atol_zero=1e-4)
[16]:
egt.plotting.indicators.plot_gradients(gradients[:, 0],
xlabel="frequency of hawks")
[16]:
<AxesSubplot:xlabel='frequency of hawks', ylabel='gradient of selection (G)'>

It is also possible to make the same plot without showing the stationary points. This is the recommended usage when plotting the gradient of stochastic dynamics in finite populations:
[7]:
egt.plotting.indicators.plot_gradients(gradients[:, 0], xlabel="frequency of hawks", marker='o', figsize=(6,5))
[7]:
<AxesSubplot:xlabel='frequency of hawks', ylabel='gradient of selection (G)'>

Replicator dynamics in a 2-simplex (3 strategies)¶
When we have 3 strategies, we can visualize the replicator dynamics over a 2-simplex (a triangle). This can be easily achieved in EGTtools through the Simplex2D class:
[8]:
from egttools.plotting.simplified import plot_replicator_dynamics_in_simplex
[9]:
# Payoff matrix
V = 2; D = 3; T = 1; F = 2; S = 1
A = np.array([
[ (V-D)/2, V, 0], # Hawk
[ 0 , (V/2) - T, F], # Dove
[ 0 , S, 0] # Human
])
type_labels = ['Hawk', 'Dove', 'Human']
[10]:
fig, ax = plt.subplots(figsize=(10,8))
simplex, gradient_function, roots, roots_xy, stability = plot_replicator_dynamics_in_simplex(A, ax=ax)
plot = (simplex.draw_triangle()
.add_vertex_labels(type_labels, epsilon_bottom=0.1)
.draw_stationary_points(roots_xy, stability)
.draw_gradients(zorder=0)
.add_colorbar()
.draw_scatter_shadow(gradient_function, 100, color='gray', marker='.', s=0.1)
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05,1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
[10]:
(-0.02, 0.9160254037844386)

Stochastic Dynamics in Finite Populations - Moran process with pairwise comparison rule¶
Finite populations introduce stochastic effects. We can represent these dynamics by adapting the replicator equation, which assumes that individuals are sampled from an infinite population, so that selecting a member of strategy \(j\) does not reduce its frequency in the population. When the population is finite, we may no longer assume sampling with replacement: when an individual of strategy \(j\) is sampled, the fraction of members of that strategy is reduced. Instead, we must sample without replacement.
Here the fitness of a strategy \(i\) against strategy \(j\) depends directly on the size of the population:
\(f_i(x_i, Z) = \frac{x_i - 1}{Z-1} A_{ii} + \frac{Z - x_i}{Z-1} A_{ij}\)
For the selection dynamics, we use a Moran process (or birth-death process) with pairwise comparison: at each step, two individuals, \(a\) and \(b\), are randomly sampled (without replacement) from the population, and their payoffs are compared. The Fermi equation gives the probability that individual \(a\) (selected for death) will copy the strategy of individual \(b\) (selected for birth):
\(p=[1 + e^{\beta(f_a-f_b)}]^{-1}\)
\(a\) will imitate \(b\) with probability \(p\); in any other case, the population state does not change. \(\beta\) indicates the selection strength, and in the limit \(\beta \xrightarrow{} 0\) all strategies are imitated with equal probability.
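As a minimal sketch, the two equations above can be evaluated directly with numpy (the helper names fitness and fermi, and the chosen parameter values, are illustrative):
import numpy as np

def fitness(i, j, x_i, Z, A):
    # Fitness of strategy i when there are x_i individuals of strategy i
    # and Z - x_i of strategy j (sampling without replacement, as above)
    return ((x_i - 1) / (Z - 1)) * A[i, i] + ((Z - x_i) / (Z - 1)) * A[i, j]

def fermi(beta, f_a, f_b):
    # Probability that individual a imitates individual b
    return 1.0 / (1.0 + np.exp(beta * (f_a - f_b)))

# Hawk-Dove payoffs, as in the cells below
V = 2; D = 3; T = 1
A = np.array([[(V - D) / 2, V], [0, (V / 2) - T]])

Z, beta, k = 100, 1, 30  # population size, selection strength, number of Hawks
f_hawk = fitness(0, 1, k, Z, A)
f_dove = fitness(1, 0, Z - k, Z, A)
print(fermi(beta, f_dove, f_hawk))  # probability that a Dove imitates a Hawk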
[11]:
Z = 100
beta = 1
mu = 1/Z
Stochastic dynamics in a 1-Simplex (only 2 strategies)¶
Here we do not plot the roots, since the system is discrete and the gradient might jump from a positive to a negative value without ever being exactly zero. Yet, we can still observe that the shape of the gradient is almost identical to that of the replicator dynamics.
[12]:
from egttools.analytical import PairwiseComparison
[13]:
# Payoff matrix
V = 2; D = 3; T = 1
A = np.array([
[ (V-D)/2, V],
[ 0 , (V/2) - T],
])
[14]:
# Parameters and evolver
nb_strategies = 2; Z = 100; N = 2;
beta = 1
pop_states = np.arange(0, Z + 1, 1)
game = egt.games.Matrix2PlayerGameHolder(nb_strategies, A)
evolver = PairwiseComparison(Z, game)
[15]:
gradients = np.array([evolver.calculate_gradient_of_selection(beta, np.array([x, Z-x])) for x in range(Z + 1)])
[16]:
egt.plotting.indicators.plot_gradients(gradients[:, 0], figsize=(6,5),
marker_facecolor='white',
xlabel="frequency of hawks (k/Z)", marker="o", marker_size=30, marker_plot_freq=2)
[16]:
<AxesSubplot:xlabel='frequency of hawks (k/Z)', ylabel='gradient of selection (G)'>

Stochastic dynamics in a 2-simplex (3 strategies)¶
Since it is not so simple to formally define the roots of a discrete system, we can make use of the stationary distribution of the Markov chain as an indication of where they are situated, and of how likely the system is to spend time in these equilibria.
[17]:
from egttools.plotting.simplified import plot_pairwise_comparison_rule_dynamics_in_simplex_without_roots
[18]:
# Payoff matrix
V = 2; D = 3; T = 1; F = 2; S = 1
A = np.array([
[ (V-D)/2, V, 0], # Hawk
[ 0 , (V/2) - T, F], # Dove
[ 0 , S, 0] # Human
])
type_labels = ['Hawk', 'Dove', 'Human']
[19]:
fig, ax = plt.subplots(figsize=(12,10))
simplex, gradient_function, game, evolver = plot_pairwise_comparison_rule_dynamics_in_simplex_without_roots(payoff_matrix=A,
group_size=2,
population_size=Z,
beta=beta,
ax=ax)
transitions = evolver.calculate_transition_matrix(beta=beta, mu=mu)
sd = egt.utils.calculate_stationary_distribution(transitions.transpose())
plot = (simplex
.draw_triangle()
.add_vertex_labels(type_labels, epsilon_bottom=0.1, epsilon_top=0.03)
.draw_stationary_distribution(sd, alpha=1, shrink=0.5,
edgecolors='gray', cmap='binary', shading='gouraud', zorder=0)
.draw_gradients(zorder=2, linewidth=1.5)
.add_colorbar(shrink=0.5)
)
ax.axis('off')
ax.set_aspect('equal')
plt.xlim((-.05,1.05))
plt.ylim((-.02, simplex.top_corner + 0.05))
[19]:
(-0.02, 0.9160254037844386)

egttools¶
The egttools package implements methods to study evolutionary dynamics.
Functions
Calculates the number of states (combinations) of the members of a group in a subgroup.
Converts a vector containing counts into an index.
Calculates the average frequency of each strategy available in the population, given the stationary distribution.
Transforms a state index into a vector.
Samples uniformly at random the unit simplex with nb_strategies dimensions.
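For illustration, a short sketch of the state-indexing helpers described above; the function names and signatures used here (calculate_nb_states, calculate_state, sample_simplex) are assumptions, so check the API reference of your installed version:
import numpy as np
import egttools as egt

group_size, nb_strategies = 10, 3

# Number of possible compositions of a group of size 10 with 3 strategies (assumed signature)
nb_states = egt.calculate_nb_states(group_size, nb_strategies)

# Convert a counts vector into its state index, and back again (assumed signatures)
index = egt.calculate_state(group_size, np.array([4, 3, 3]))
counts = egt.sample_simplex(index, group_size, nb_strategies)

print(nb_states, index, counts)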
Classes
Random seed generator.
API reference documentation for ….
API reference documentation for ….
Custom data structures used to store data from numerical simulations.
Helpful implementations of stochastic distributions.
API reference documentation for the ….
Set of helper functions that can be useful for obtaining analytical results, simulations, or for plotting.
The ….
API reference documentation for the ….
This Python module contains some utility functions to find saddle points and plot gradients in 2-player, 2-strategy games.
Citing EGTtools¶
You may cite this repository in the following way:
@misc{Fernandez2020,
author = {Fernández Domingos, Elias},
title = {EGTTools: Toolbox for Evolutionary Game Theory},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/Socrats/EGTTools}},
doi = {10.5281/zenodo.3687125}
}
Acknowledgements¶
I would like to thank Prof. Tom Lenaerts and Prof. Francisco C. Santos for their great help in building this library.
I would also like to thank Yannick Jadoul, author of Parselmouth, and Eugenio Bargiacchi, author of AIToolBox, for their great advice on the implementation of EGTtools.