Agent-based modelling, Konstanz, 2024
7 May 2024
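We work with a population of 1000 variational learners. The cell that created it is not shown here, but a minimal sketch, mirroring the 20-learner comprehension used later in this lesson, would be:

```julia
# 1000 identical learners: initial p = 0.1; the remaining arguments are
# presumably the learning rate (0.01) and the evidence probabilities
# P1 = 0.4 and P2 = 0.1 discussed in the Answer below
pop = [VariationalLearner(0.1, 0.01, 0.4, 0.1) for i in 1:1000]
```

Evaluating this displays the resulting vector: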
```
1000-element Vector{VariationalLearner}:
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 ⋮
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
 VariationalLearner(0.1, 0.01, 0.4, 0.1)
```
The `rand` function can be used to pick random agents from the population (one possible call is sketched just below). What gets returned is an array of 100 numbers (a `100-element Vector{Float64}`).
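A minimal sketch of such a call, assuming that `interact!(speaker, listener)` makes the speaker address the listener and returns the listener's updated \(p\) (the matrix example further below suggests this):

```julia
# 100 interactions, each between a random speaker and a random listener;
# the comprehension collects whatever interact! returns
[interact!(rand(pop), rand(pop)) for t in 1:100]
```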
What are these numbers, and where do they come from?
To answer this, it helps to be able to compute averages. For this we can use the `mean` function. This is part of the `Statistics` module, so it first needs to be loaded with `using Statistics` (included in the sketch below). `mean` takes the average over an array of numbers. But `pop` is not an array of numbers – it is an array of `VariationalLearner` objects. ☹️ How can we obtain the average \(p\) over our `pop` object?
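One possible helper function, a sketch that assumes each learner's probability is stored in a field named `p`:

```julia
using Statistics   # provides mean

# average of all learners' p values in a population
# (a sketch; the course's actual definition may differ)
average_p(x::Array{VariationalLearner}) = mean([v.p for v in x])
```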
```
average_p (generic function with 1 method)
```
Note that we have typed the function's argument as `Array{VariationalLearner}`, which means an array of elements all of which are `VariationalLearner`s.

We can now simulate the population over time: repeat random interactions in a `for` block, recording the population average after each step, and wrap everything in a `begin ... end` block:
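A sketch of such a block, under the same assumptions as above:

```julia
history = begin
    averages = Float64[]                    # average p after each step
    for t in 1:100
        interact!(rand(pop), rand(pop))     # one random interaction
        push!(averages, average_p(pop))     # record the new average
    end
    averages
end
```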
```
100-element Vector{Float64}:
 0.10002888999999995
 0.10002788999999994
 0.10002688999999994
 0.10002588999999994
 0.10003488999999995
 0.10003388999999994
 0.10003288999999996
 0.10003188999999994
 0.10003088999999994
 0.10002989999999994
 0.10002889999999993
 0.10002790999999994
 0.10002690999999994
 ⋮
 0.10002051079999993
 0.10001951079999995
 0.10001851079999996
 0.10001751079999996
 0.10001651079999996
 0.10002551079999997
 0.10002451079999995
 0.10002353069999995
 0.10002253069999996
 0.10002154069999997
 0.10002054069999995
 0.10001955069999996
```
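We can plot this history; a minimal sketch, assuming the `Plots` package is installed:

```julia
using Plots

plot(history)   # the 100 recorded averages, one point per time step
```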
This draws each recorded average as a point (Plots also connects them with veeeeery tiny lines). This is a lot, and may slow your computer down.

In our simulation, we see the average value of \(p\) steadily going up with time. What do you predict will happen in the future, i.e. if we continued the simulation for, say, another million time steps?
Answer
We would expect the average to keep increasing, as the \(p\) of every speaker tends to increase over time. Why does it tend to keep increasing? Because of the way we initialized the model: we set the `P1` and `P2` values for each learner to `0.4` and `0.1`, meaning that there is always more evidence for grammar \(G_1\) than for grammar \(G_2\).
Of course, the average value of \(p\), just like each individual \(p\), cannot increase forever. They have a hard maximum at \(p = 1\), since probabilities cannot be greater than 1. In fact, the average \(p\) plateaus at 1 if we continue the simulation. (Try it!)
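One possible way of trying this, as a sketch:

```julia
# continue the simulation for many more steps, then inspect the average
for t in 1:1_000_000
    interact!(rand(pop), rand(pop))
end
average_p(pop)
```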
Next, let's arrange the interactions differently: at every time step `t`, for every learner `l` in the population, make a random speaker speak to `l`.

```julia
# a fresh, smaller population of 20 learners
pop = [VariationalLearner(0.1, 0.01, 0.4, 0.1) for i in 1:20]

# 100 time steps x 20 learners: at each step, every learner hears a
# random speaker; since interact! returns the listener's updated p,
# the comprehension yields a 100x20 matrix of learning trajectories
history = [interact!(rand(pop), l) for t in 1:100, l in pop]
```
```
100×20 Matrix{Float64}:
 0.099      0.109     0.099      0.099      …  0.099      0.099     0.109
 0.09801    0.10791   0.09801    0.10801       0.09801    0.10801   0.10791
 0.0970299  0.106831  0.0970299  0.10693       0.0970299  0.10693   0.106831
 0.0960596  0.105763  0.0960596  0.105861      0.0960596  0.105861  0.105763
 0.095099   0.104705  0.095099   0.104802      0.095099   0.114802  0.104705
 0.094148   0.103658  0.094148   0.103754   …  0.094148   0.113654  0.103658
 0.0932065  0.102621  0.0932065  0.102716      0.103207   0.112517  0.112621
 0.0922745  0.101595  0.0922745  0.101689      0.102174   0.111392  0.111495
 0.0913517  0.100579  0.0913517  0.100672      0.101153   0.110278  0.11038
 0.0904382  0.109573  0.0904382  0.0996657     0.100141   0.119176  0.109276
 0.0895338  0.108478  0.0895338  0.098669   …  0.10914    0.117984  0.108184
 0.0986385  0.107393  0.0986385  0.0976823     0.108048   0.116804  0.107102
 0.0976521  0.106319  0.0976521  0.0967055     0.106968   0.115636  0.106031
 ⋮                                          ⋱
 0.0682832  0.12873   0.129221   0.100333      0.108478   0.177992  0.109783
 0.0676004  0.127443  0.127929   0.0993302     0.107393   0.176212  0.108685
 0.0669244  0.136168  0.12665    0.108337   …  0.106319   0.174449  0.107598
 0.0662551  0.134807  0.125383   0.107253      0.105256   0.182705  0.106522
 0.0655926  0.133459  0.124129   0.106181      0.104203   0.180878  0.105457
 0.0649366  0.132124  0.122888   0.105119      0.103161   0.179069  0.114402
 0.0642873  0.140803  0.121659   0.104068      0.102129   0.177278  0.113258
 0.0636444  0.139395  0.130442   0.103027   …  0.101108   0.185506  0.112126
 0.073008   0.138001  0.129138   0.101997      0.100097   0.193651  0.111005
 0.0722779  0.136621  0.127847   0.100977      0.0990961  0.201714  0.119895
 0.0715551  0.145255  0.126568   0.0999673     0.0981051  0.209697  0.118696
 0.0708395  0.143802  0.125303   0.0989676     0.0971241  0.2176    0.127509
```
Luckily, `plot()` is rather clever and accepts the matrix as an argument:
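A sketch (each column of the matrix, that is, each learner's trajectory, becomes its own series):

```julia
# one line per learner; the legend is disabled since 20 entries
# would clutter the plot
plot(history, legend = false)
```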