_images/logo-full.png

EGTtools – Toolbox for Evolutionary Game Theory

EGTtools provides a centralized repository of analytical and numerical methods to study and model game-theoretical problems under the Evolutionary Game Theory (EGT) framework.

This library is composed of two parts:

  • a set of analytical methods implemented in Python 3

  • a set of computational methods implemented in C++ (with Python 3 bindings)

The second type is used in cases where the state space is too big to solve analytically, and the model parameters must therefore be estimated through Monte Carlo simulations. The C++ implementation provides optimized computational methods that can run in parallel in a reasonable time, while the Python bindings make these methods easily accessible to a larger range of researchers.
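
Both parts live under the same package; for example (using the module paths that appear in the examples below):

import egttools as egt                           # top-level package
from egttools.analytical import StochDynamics    # analytical methods (pure Python)
from egttools.numerical import PairwiseMoran     # numerical methods (C++ with Python bindings)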

Examples

The following examples give an idea of how to use EGTtools to model social dynamics:

Example of use

[1]:
import numpy as np
# Plotting libraries
import matplotlib.pylab as plt
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables high resolution PNGs.
%config InlineBackend.figure_formats = {'png', 'svg'}

Evolutionary Dynamics of the Hawk-Dove Game in Infinite populations

Replicator equation

The replicator equation represents the dynamics of competing individuals in a population. It is normally written in the following form:

\(\dot{x}_i = x_i\left[f(x_i)-\sum_{j=1}^{n}x_jf(x_j)\right]\),

where \(x_i\) represents the frequency of strategy \(i\) in the population, and \(f(x_i)\) is the fitness of strategy \(i\). This differential equation gives the gradient of selection, i.e., the strength with which the frequency of a given strategy will increase or decrease. It may also be expressed in a more convenient matrix form:

\(G(x_i) = \dot{x}_i = x_i[(Ax)_i - x^TAx]\)

where the matrix \(A\) is the payoff matrix, whose element \(A_{ij}\) gives the payoff of strategy \(i\) when playing against strategy \(j\).
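
As a quick illustration, this matrix form can be written directly in NumPy (a minimal sketch; the cells below use the equivalent replicator_equation provided by egttools):

import numpy as np

def replicator_gradient(x, A):
    # (Ax)_i is the expected payoff of strategy i, and x @ (A @ x)
    # is the average payoff in the population, x^T A x
    ax = A @ x
    return x * (ax - x @ ax)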

[2]:
from egttools.analytical import replicator_equation
from egttools.utils import find_saddle_type_and_gradient_direction
from egttools.plotting import plot_gradient
[3]:
nb_points = 101
strategy_i = np.linspace(0, 1, num=nb_points, dtype=np.float64)
strategy_j = 1 - strategy_i
states = np.array((strategy_i,strategy_j)).T

# Payoff matrix
V = 2; D = 3; T = 1
A = np.array([
        [ (V-D)/2, V],
        [ 0      , (V/2) - T],
    ])
[4]:
# Calculate gradient
G = np.array([replicator_equation(states[i], A)[0] for i in range(len(states))])
[5]:
# Find saddle points (where the gradient is 0)
epsilon = 1e-7
saddle_points_idx = np.where((G <= epsilon) & (G >= -epsilon))[0]
saddle_points = saddle_points_idx / (nb_points - 1)

# Now let's find which saddle points are absorbing/stable and which aren't
# we also annotate the gradient's direction between saddle points
saddle_type, gradient_direction = find_saddle_type_and_gradient_direction(G, saddle_points_idx)
[6]:
ax = plot_gradient(strategy_i,
                   G,
                   saddle_points,
                   saddle_type,
                   gradient_direction,
                   'Hawk-Dove game replicator dynamics',
                   xlabel='$x$')
plt.show()
_images/examples_hawk_dove_dynamics_9_0.svg

Evolutionary Dynamics of the Hawk-Dove Game in Finite populations

Now we are going to study the effect of having finite populations. In general, finite populations introduce stochastic effects in the dynamics, known as random drift, whose intensity scales as \(\sim 1/Z\), where \(Z\) is the size of the population. We can represent these dynamics by adapting the replicator equation, which assumes that individuals are sampled from an infinite population, so that selecting a member of strategy \(j\) does not reduce the frequency of that strategy in the population. When the population is finite, we may no longer assume sampling with replacement, i.e., when an individual of strategy \(j\) is sampled, the fraction of members of that strategy is reduced; instead, we must sample without replacement.

Here the fitness of a strategy \(i\) against a strategy \(j\) depends directly on the size of the population:

\(f_i(x_i, Z) = \frac{x_i - 1}{Z-1} A_{ii} + \frac{Z - x_i}{Z-1} A_{ij}\),

where \(x_i\) here denotes the number of individuals adopting strategy \(i\).

For the selection dynamics, we use a Moran process (or birth-death process) with pairwise comparison: at each step, two individuals, \(a\) and \(b\), are randomly sampled (without replacement) from the population, and their payoffs are compared. The Fermi equation gives the probability that individual \(a\) (selected for death) will copy the strategy of individual \(b\) (selected for birth):

\(p=[1 + e^{\beta(f_a-f_b)}]^{-1}\)

\(a\) will imitate \(b\) with probability \(p\); in any other case, the population state does not change. \(\beta\) indicates the selection strength, and in the limit \(\beta \xrightarrow{} 0\) all strategies are imitated with equal probability.
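
Both formulas are easy to sketch in plain NumPy, with \(x_i\) as the number of individuals playing strategy \(i\) (illustrative helpers, not the library implementation):

import numpy as np

def fitness(k, Z, A, i, j):
    # fitness of strategy i when k individuals play i and Z - k play j,
    # sampling opponents without replacement
    return ((k - 1) / (Z - 1)) * A[i, i] + ((Z - k) / (Z - 1)) * A[i, j]

def fermi(beta, f_a, f_b):
    # probability that individual a imitates individual b
    return 1.0 / (1.0 + np.exp(beta * (f_a - f_b)))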

[7]:
from egttools.analytical import StochDynamics
[8]:
# Parameters and evolver
nb_strategies = 2; Z = 100; N = 2;
beta = 1
pop_states = np.arange(0, Z + 1, 1)
strategy_i = np.linspace(0, 1, num=Z + 1, dtype=np.float64)
evolver = StochDynamics(nb_strategies, A, Z)
[9]:
gradients = np.array([evolver.gradient_selection(x, 0, 1, beta)
                      for x in pop_states])
[10]:
# Find saddle points (where the gradient is 0) - Define epsilon correctly, or you will get wrong matches
epsilon = 1e-3
saddle_points_idx = np.where((gradients <= epsilon) & (gradients >= -epsilon))[0]
saddle_points = saddle_points_idx / Z

# Now let's find which saddle points are absorbing/stable and which aren't
# we also annotate the gradient's direction between saddle points
saddle_type, gradient_direction = find_saddle_type_and_gradient_direction(gradients,
                                                                          saddle_points_idx)
[18]:
ax = plot_gradient(strategy_i,
                   gradients,
                   saddle_points,
                   saddle_type,
                   gradient_direction,
                   'Hawk-Dove game on Finite populations',
                   xlabel='$k/Z$')
plt.show()
_images/examples_hawk_dove_dynamics_16_0.svg
[12]:
evolver.mu = 0
stationary_SML = evolver.calculate_stationary_distribution(beta)
print("time spent as Hawk: {} & time spent as Dove: {}".format(*stationary_SML))
time spent as Hawk: 1.0 & time spent as Dove: 0.0

If we introduce mutations, i.e., a probability that individuals make an error and adopt a different strategy instead of imitating the best, the dynamics may change. In this case, at each time step, players are selected for death/birth and their fitness is compared.

However, now, with probability \(\mu\) agent \(a\) will adopt a random strategy from the strategy space, and with probability \(1-\mu\) it will imitate \(b\) with probability \(p\). Therefore, the effective probability of imitating \(b\) is \(p_{eff} = (1-\mu)p\). In the limit \(\mu \xrightarrow{} 1\), all strategies are adopted with equal probability. When \(\mu \xrightarrow{} 0\) we recover the previous case, also known as the small mutation limit (SML).
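
As an illustration, here is a hedged sketch of how mutation enters the probability that the number of Hawks increases by one in a given state (it reuses the illustrative fitness and fermi helpers sketched above, and assumes the random strategy is drawn uniformly from the full strategy space):

def prob_gain_hawk(k, Z, A, beta, mu):
    # a Dove is chosen to revise its strategy with probability (Z - k)/Z;
    # it then either mutates (probability mu, picking Hawk half of the time)
    # or, with probability 1 - mu, meets a Hawk among the other Z - 1
    # individuals and imitates it with the Fermi probability
    f_hawk = fitness(k, Z, A, 0, 1)
    f_dove = fitness(Z - k, Z, A, 1, 0)
    imitation = (k / (Z - 1)) * fermi(beta, f_dove, f_hawk)
    return ((Z - k) / Z) * (mu / 2 + (1 - mu) * imitation)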

When mutation is small, we may assume that only the states in which the whole population adopts a single strategy, known as monomorphic states, are absorbing and stable. This is because, in the absence of mutations, once the population reaches a monomorphic state it can never leave it; the dynamics then reduce to transitions between monomorphic states, governed by the fixation probability of a single mutant. Such a simplification allows us to reduce the number of states of the system, taking the Hawk-Dove game as an example, from \(Z + 1\) to only 2.

Moreover, mixed equilibria are no longer stable in the SML. This occurs because random drift, even if it takes an infinite amount of time (note that we are not looking at fixation times here), will drive the population to one of the monomorphic states. For this reason, the SML assumption is only reasonable when we know that there are no mixed stable attractors in the studied system, which is not the case in the Hawk-Dove game. This explains why the results of the stationary distribution differ from the previously calculated Nash equilibria.

Now we are going to calculate the stationary distribution again, but taking mutations into account.

[13]:
evolver.mu = 1e-3
stationary_with_mu = evolver.calculate_stationary_distribution(beta)
[15]:
fig, ax = plt.subplots(figsize=(5, 4))
fig.patch.set_facecolor('white')
lines = ax.plot(np.arange(0, Z+1)/Z, stationary_with_mu)
plt.setp(lines, linewidth=2.0)
ax.set_ylabel('stationary distribution',size=16)
ax.set_xlabel('$k/Z$',size=16)
ax.set_xlim(0, 1)
plt.show()
_images/examples_hawk_dove_dynamics_22_0.svg

Monte Carlo simulations of the Hawk-Dove game

[1]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os

# Only necessary on macOS with Anaconda, due to a known error with duplicated
# symbols when using OpenMP.
# This line can also be avoided by installing Anaconda's nomkl (conda install nomkl)
os.environ['KMP_DUPLICATE_LIB_OK']='True'
[2]:
import egttools as egt
[3]:
egt.numerical.Random.init()
seed = egt.numerical.Random.seed_
[4]:
# Payoff matrix
V = 2; D = 3; T = 1
A = np.array([
        [ (V-D)/2, V],
        [ 0      , (V/2) - T],
    ])
[71]:
A
[71]:
array([[-0.5,  2. ],
       [ 0. ,  0. ]])
[5]:
game = egt.numerical.games.NormalFormGame(1, A)
[6]:
Z = 100
x = np.arange(0, Z+1)/Z
[7]:
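# The arguments are presumably the population size, the game, and a cache
# size for already-computed fitness values (check the PairwiseMoran API
# reference to confirm)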
evolver = egt.numerical.PairwiseMoran(Z, game, 1000000)
[8]:
evolver.pop_size = Z
[9]:
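# Judging from the beta sweep later in this example, the arguments are:
# number of independent runs, generations per run, transitory generations
# to discard, intensity of selection (beta) and mutation rate (mu)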
dist = evolver.stationary_distribution(10, int(1e6), int(1e3), 1, 1e-3)
[10]:
# We need to reverse the distribution since, because of how we map states,
# the first index corresponds to the case where the number of Hawks is 100%
fig, ax = plt.subplots(figsize=(5, 4))
fig.patch.set_facecolor('white')
lines = ax.plot(x, list(reversed(dist)))
plt.setp(lines, linewidth=2.0)
ax.set_ylabel('stationary distribution',size=16)
ax.set_xlabel('$k/Z$',size=16)
ax.set_xlim(0, 1)
plt.show()
_images/examples_normal_form_game_mc_simulations_10_0.png

We can also plot a single run

[11]:
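# Hedged reading of the arguments: number of generations, beta, mutation
# rate, and the initial population state (a count per strategy)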
output = evolver.run(int(1e6), 1, 1e-3, [0, Z])
[12]:
fig, ax = plt.subplots(figsize=(5, 4))
ax.plot(output[:, 0]/Z)
ax.set_ylabel('k/Z')
ax.set_xlabel('generation')
ax.set_xscale('log')
plt.show()
_images/examples_normal_form_game_mc_simulations_13_0.png

And we can visualize what happens if we start several runs from random points in the simplex

[13]:
init_states = np.random.randint(0, Z+1, size=10, dtype=np.uint64)
[14]:
output = []
for i in range(10):
    output.append(evolver.run(int(1e6), 1, 1e-3,
                              [init_states[i], Z - init_states[i]]))
[70]:
# Plot all runs together in a single figure
fig, ax = plt.subplots(figsize=(5, 4))

for run in output:
    ax.plot(run[:, 0]/Z, color='gray', linewidth=.1, alpha=0.6)
ax.set_ylabel('k/Z')
ax.set_xlabel('generation')
ax.set_xscale('log')
_images/examples_normal_form_game_mc_simulations_17_0.png

And we can also compare how far our approximations are from the analytical results

[16]:
# We do this for different betas
betas = np.logspace(-4, 1, 50)
[17]:
stationary_points = []
for i in range(len(betas)):
    stationary_points.append(evolver.stationary_distribution(30, int(1e6), int(1e3),
                                                             betas[i], 1e-3))
stationary_points = np.asarray(stationary_points)
[18]:
# Now we estimate the average frequency of Doves (cooperation) for each beta
state_frequencies = np.arange(0, Z+1) / Z
coop_level = np.dot(state_frequencies, stationary_points.T)
[19]:
# Finally we do the same, but for the analytical results
from egttools.analytical import StochDynamics
[20]:
analytical_evolver = egt.analytical.StochDynamics(2, A, Z, mu=1e-3)
[21]:
stationary_points_analytical = []
for i in range(len(betas)):
    stationary_points_analytical.append(analytical_evolver.calculate_stationary_distribution(betas[i]))
stationary_points_analytical = np.asarray(stationary_points_analytical)
[22]:
# Now we estimate the average frequency of Doves for the analytical results
# (here state k counts Hawks, hence the 1 - state_frequencies)
coop_level_analytical = np.dot(1 - state_frequencies, stationary_points_analytical.T)
[25]:
from sklearn.metrics import mean_squared_error
[26]:
mse = mean_squared_error(coop_level_analytical, coop_level)
[74]:
# Finally, we plot and compare visually (and check how much error we get)
fig, ax = plt.subplots(figsize=(7, 5))
# ax.scatter(betas, coop_level, label="simulation")
ax.scatter(betas, coop_level_analytical, marker='x', label="analytical")
ax.scatter(betas, coop_level, marker='o', label="numerical")
ax.text(0.01, 0.535, 'MSE = {0:.3e}'.format(mse), style='italic',
        bbox={'facecolor': 'red', 'alpha': 0.5, 'pad': 10})
ax.legend()
ax.set_xlabel(r'$\beta$', fontsize=15)
ax.set_ylabel('Frequency of Doves', fontsize=15)
ax.set_xscale('log')
plt.show()
_images/examples_normal_form_game_mc_simulations_28_0.png

Projects using EGTtools

The EvoCRD project provides an example of using EGTtools to model dynamics in the Collective Risk Dilemma.

egttools

The egttools package implements methods to study evolutionary dynamics.

Functions

calculate_nb_states

Calculates the number of possible states (combinations) in which the members of a group can be distributed among strategies.

calculate_state

Overloaded function.

calculate_strategies_distribution

Calculates the average frequency of each strategy available in the population given the stationary distribution.

sample_simplex

Transforms a state index into a vector of strategy counts.
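
As a quick, hedged illustration of these state-indexing helpers (argument order assumed; check the signatures in the API reference):

import egttools as egt

Z, nb_strategies = 100, 2
nb_states = egt.calculate_nb_states(Z, nb_strategies)  # 101 states for 2 strategies
state = egt.sample_simplex(50, Z, nb_strategies)       # counts per strategy, e.g. [50, 50]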

Classes

Random

Random seed generator.

egttools.analytical

API reference documentation for the sed_analytical submodule.

egttools.behaviors

API reference documentation for behaviors submodule.

egttools.games

API reference documentation for the games submodule.

egttools.numerical

The numerical module contains optimized functions and classes to simulate evolutionary dynamics in large populations.

egttools.plotting

API reference documentation for the plotting submodule.

egttools.utils

This Python module contains utility functions to find saddle points and plot gradients in 2-player, 2-strategy games.

Citing EGTtools

You may cite this repository in the following way:

@misc{Fernandez2020,
  author = {Fernández Domingos, Elias},
  title = {EGTTools: Toolbox for Evolutionary Game Theory},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Socrats/EGTTools}},
  doi = {10.5281/zenodo.3687125}
}
