Agent-based modelling, Konstanz, 2024
21 May 2024
Note

Today's lecture requires the following Julia packages: Agents, StatsBase and BenchmarkTools. It would be a good idea to install them now, if your system does not already have them.
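If any of them are missing, they can be installed through Julia's package manager:

```julia
using Pkg
Pkg.add(["Agents", "StatsBase", "BenchmarkTools"])
```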
In this lecture, we will refactor our code around the following ingredients:

- an abstract type `VariationalLearner`
- functions that take generic `VariationalLearner` objects as arguments, such as `speak`, `learn!` and `interact!`
- a new concrete type `GridVL` for `VariationalLearner` objects that live on a 2D grid

Important

From now on, I will use `SimpleVL` to refer to our original `VariationalLearner`, i.e. the type that lives in an unstructured population. `VariationalLearner` from now on will denote the supertype of all "variational learnery" things.
Think of an analogy: if we define a `sleep` function for `Mammal`, then `Human` and `Cat` inherit this function, and so we don't need to define one for them separately. In just the same way, defining `speak` once for the supertype `VariationalLearner` means that both `SimpleVL` and `GridVL` have access to this function.

Subtype relations are declared using the `<:` operator. Note that the `VariationalLearner` abstract type itself needs to inherit from `AbstractAgent`, so that our concrete learner types are also agents in the sense required by Agents.jl. Calling `speak` on a `SimpleVL` then just works, even though `speak` wasn't defined for `SimpleVL` specifically.

What if `Cat` needs to sleep differently from other `Mammal`s? Then we define a more specific method, `sleep(x::Cat)`; all other `Mammal`s will use the default function `sleep(x::Mammal)`. This is multiple dispatch: one and the same function (such as `sleep`) can have multiple definitions, known as methods, depending on the argument's type. Calling `sleep` on a `Human` will trigger the `sleep` method defined for `Mammal`, since no `sleep` method specific to `Human` has been defined.
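Here is a minimal sketch of the analogy (the types are hypothetical; note that a function called `sleep` already exists in Julia's Base, so we extend it explicitly):

```julia
# `sleep` is a Base function, so it must be imported to be extended
import Base: sleep

abstract type Mammal end

struct Human <: Mammal end
struct Cat   <: Mammal end

# default method, shared by all subtypes of Mammal
sleep(x::Mammal) = println("Zzz...")

# a more specific method just for Cat
sleep(x::Cat) = println("Purr...")

sleep(Human())   # prints "Zzz..."  (falls back to the Mammal method)
sleep(Cat())     # prints "Purr..." (the Cat-specific method wins)
```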
Putting all of this together, here is our refactored module:

```julia
module VL
# Agents.jl functionality
using Agents

# we need this package for the sample() function
using StatsBase

# we export the following types and functions
export VariationalLearner
export SimpleVL
export GridVL
export speak
export learn!
export interact!
export VL_step!

# abstract type; inherits from AbstractAgent so that Agents.jl
# can handle its concrete subtypes
abstract type VariationalLearner <: AbstractAgent end

# variational learner type on a 2D grid
@agent struct GridVL(GridAgent{2}) <: VariationalLearner
    p::Float64      # prob. of using G1
    gamma::Float64  # learning rate
    P1::Float64     # prob. of L1 \ L2
    P2::Float64     # prob. of L2 \ L1
end

# "simple" variational learner in an unstructured population
mutable struct SimpleVL <: VariationalLearner
    p::Float64      # prob. of using G1
    gamma::Float64  # learning rate
    P1::Float64     # prob. of L1 \ L2
    P2::Float64     # prob. of L2 \ L1
end

# makes variational learner x utter a string
function speak(x::VariationalLearner)
    g = sample(["G1", "G2"], Weights([x.p, 1 - x.p]))

    if g == "G1"
        return sample(["S1", "S12"], Weights([x.P1, 1 - x.P1]))
    else
        return sample(["S2", "S12"], Weights([x.P2, 1 - x.P2]))
    end
end

# makes variational learner x learn from input string s
function learn!(x::VariationalLearner, s::String)
    g = sample(["G1", "G2"], Weights([x.p, 1 - x.p]))

    if g == "G1" && s != "S2"
        x.p = x.p + x.gamma * (1 - x.p)
    elseif g == "G1" && s == "S2"
        x.p = x.p - x.gamma * x.p
    elseif g == "G2" && s != "S1"
        x.p = x.p - x.gamma * x.p
    elseif g == "G2" && s == "S1"
        x.p = x.p + x.gamma * (1 - x.p)
    end

    return x.p
end

# makes two variational learners interact, with one speaking
# and the other one learning
function interact!(x::VariationalLearner, y::VariationalLearner)
    s = speak(x)
    learn!(y, s)
end

# steps a model: agent learns from a random nearby agent
function VL_step!(agent, model)
    interlocutor = random_nearby_agent(agent, model)
    interact!(interlocutor, agent)
end
end # this closes the module
```
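To try the module out, we can let two simple learners interact; the parameter values below are arbitrary, for illustration only:

```julia
using .VL

# teacher mostly uses G1; learner starts out undecided
teacher = SimpleVL(0.9, 0.01, 0.4, 0.1)   # p, gamma, P1, P2
learner = SimpleVL(0.5, 0.01, 0.4, 0.1)

# repeated interactions: teacher speaks, learner updates its p
for t in 1:10_000
    interact!(teacher, learner)
end

learner.p   # should now be high, i.e. the learner favours G1
```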
How fast does our code run, and how much memory does it use? Questions like these can be answered with the `@benchmark` macro defined by BenchmarkTools.jl. Benchmarking a very cheap operation, for instance, produces a report like the following:

```
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  1.335 ns … 21.823 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     1.542 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   1.596 ns ±  0.523 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

 Memory estimate: 0 bytes, allocs estimate: 0.
```
Consider, for example, building an array of square roots by growing the array one element at a time:

```julia
@benchmark begin
    result = []   # empty array
    for x in 0:100_000
        append!(result, sqrt(x))   # put √x in array
    end
end
```
```
BenchmarkTools.Trial: 3134 samples with 1 evaluation.
 Range (min … max):  1.235 ms …   5.818 ms  ┊ GC (min … max): 0.00% … 52.82%
 Time  (median):     1.413 ms               ┊ GC (median):    0.00%
 Time  (mean ± σ):   1.592 ms ± 511.088 μs  ┊ GC (mean ± σ):  8.51% ± 14.51%

 Memory estimate: 3.35 MiB, allocs estimate: 100012.
```
Preallocating the array with `zeros` and filling it index by index is an order of magnitude faster:

```julia
@benchmark begin
    result = zeros(100_000 + 1)
    for x in 0:100_000
        result[x+1] = sqrt(x)   # put √x in array
    end
end
```
```
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  100.037 μs … 689.957 μs  ┊ GC (min … max): 0.00% … 49.59%
 Time  (median):     102.822 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   114.841 μs ±  44.015 μs  ┊ GC (mean ± σ):  3.94% ±  8.72%

 Memory estimate: 781.36 KiB, allocs estimate: 2.
```
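A third option is an array comprehension, which builds the whole array in a single expression; something along these lines produces the next result:

```julia
# comprehension: allocates the result array once, in one go
@benchmark [sqrt(x) for x in 0:100_000]
```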
```
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  78.327 μs … 654.640 μs  ┊ GC (min … max): 0.00% … 49.57%
 Time  (median):     79.472 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   87.674 μs ±  32.322 μs  ┊ GC (mean ± σ):  3.61% ±  8.48%

 Memory estimate: 781.36 KiB, allocs estimate: 2.
```
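Finally, broadcasting `sqrt` over the whole range does the same job even more tersely; again, something along these lines:

```julia
# broadcasting: apply sqrt elementwise over the range
@benchmark sqrt.(0:100_000)
```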
```
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  78.441 μs … 605.374 μs  ┊ GC (min … max): 0.00% … 48.05%
 Time  (median):     79.186 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   87.363 μs ±  31.767 μs  ┊ GC (mean ± σ):  3.71% ±  8.67%

 Memory estimate: 781.36 KiB, allocs estimate: 2.
```
| Procedure | Median time | Mem. estimate |
|---|---|---|
| Growing an array | ~1.4 ms | ~3.4 MiB |
| Adding to 0-array | ~0.1 ms | ~0.8 MiB |
| Array comprehension | ~80 µs | ~0.8 MiB |
| Broadcasting | ~80 µs | ~0.8 MiB |
When you create a model with `StandardABM`, Agents.jl will set up a new PRNG (pseudorandom number generator) by default. If other functions in your code (such as `speak` or `learn!`) utilize a different PRNG, you may run into problems, for example when trying to make simulations reproducible by seeding. To make everything share one and the same generator, pass `Random.default_rng()` as an argument to `StandardABM` when creating your model:
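For example (the grid dimensions and stepping function here are just for illustration):

```julia
using Agents, Random
using .VL

Random.seed!(42)   # seed the global PRNG for reproducibility

# the model now draws its random numbers from the same global PRNG
# that speak() and learn!() (via StatsBase's sample) also use
model = StandardABM(GridVL, GridSpace((10, 10));
                    agent_step! = VL_step!,
                    rng = Random.default_rng())

# agents would then be added with add_agent!(model, ...)
```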