mutable struct VariationalLearner
    p::Float64     # prob. of using G1
    gamma::Float64 # learning rate
    P1::Float64    # prob. of L1 \ L2
    P2::Float64    # prob. of L2 \ L1
end
Speaking and listening
Lecture
Update 7 May 2024: Fixed the buggy learn! function. Also added the missing link to the homework.
Plan
- Last week, we ran out of time
- Better go slowly and build a solid foundation rather than try to cover as much ground as possible
- Hence, today:
- Finish last week’s material
- Introduce a little bit of new material: implementing interactions between variational learners
Dropping the environment
- So far, we’ve been working with the abstraction of a LearningEnvironment
- We will now drop this and have two VariationalLearners interacting
- The probabilities P1 and P2 now need to be represented inside the learner
- Hence we define a mutable struct VariationalLearner with the fields p, gamma, P1 and P2 (see the definition above)
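For illustration, a learner is then created by passing the four field values in order; the parameter values here are the same ones used for the population later in this lecture:

```julia
# p = 0.1, gamma = 0.01, P1 = 0.4, P2 = 0.1
learner = VariationalLearner(0.1, 0.01, 0.4, 0.1)

learner.p         # read a field
learner.p = 0.2   # reassignment is allowed, since the struct is mutable
```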
Exercise
Write three functions:
- speak(x::VariationalLearner): takes a variational learner as argument and returns a string uttered by the learner
- learn!(x::VariationalLearner, s::String): makes variational learner x learn from string s
- interact!(x::VariationalLearner, y::VariationalLearner): makes x utter a string and y learn from that string
Answer (speak)
using StatsBase
function speak(x::VariationalLearner)
    g = sample(["G1", "G2"], Weights([x.p, 1 - x.p]))
    if g == "G1"
        return sample(["S1", "S12"], Weights([x.P1, 1 - x.P1]))
    else
        return sample(["S2", "S12"], Weights([x.P2, 1 - x.P2]))
    end
end
speak (generic function with 1 method)
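For example, calling speak a few times yields random utterances, so the exact results differ between runs; this usage sketch assumes the speak definition above:

```julia
learner = VariationalLearner(0.1, 0.01, 0.4, 0.1)

# five random utterances, each one of "S1", "S2" or "S12";
# with p = 0.1, the learner mostly speaks through G2
[speak(learner) for i in 1:5]
```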
Answer (learn!)
function learn!(x::VariationalLearner, s::String)
    g = sample(["G1", "G2"], Weights([x.p, 1 - x.p]))
    if g == "G1" && s != "S2"
        x.p = x.p + x.gamma * (1 - x.p)
    elseif g == "G1" && s == "S2"
        x.p = x.p - x.gamma * x.p
    elseif g == "G2" && s != "S1"
        x.p = x.p - x.gamma * x.p
    elseif g == "G2" && s == "S1"
        x.p = x.p + x.gamma * (1 - x.p)
    end
    return x.p
end
learn! (generic function with 1 method)
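The update rule can be sanity-checked: if a learner only ever hears the unambiguous string "S1", both branches that can fire reward G1, so p should climb towards 1. A quick sketch, assuming the learn! definition above (this check is not part of the original exercise):

```julia
learner = VariationalLearner(0.1, 0.01, 0.4, 0.1)

for t in 1:1000
    learn!(learner, "S1")   # unambiguous evidence for G1
end

learner.p   # close to 1 after 1000 updates
```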
Answer (interact!)
function interact!(x::VariationalLearner, y::VariationalLearner)
    s = speak(x)
    learn!(y, s)
end
interact! (generic function with 1 method)
Picking random agents
- rand() without arguments returns a random float between 0 and 1
- rand(x) with argument x returns a random element of x
- If we have a population of agents pop, then we can use rand(pop) to pick a random agent
- This is very useful for evolving an ABM
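For example (outputs are random, so exact values differ between runs):

```julia
rand()           # a random Float64 between 0 and 1
rand([1, 2, 3])  # a random element of the array: 1, 2 or 3
```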
Aside: for loops
- A for loop is used to repeat a code block a number of times
- Similar to array comprehensions; however, the result is not stored in an array

for i in 1:3
    println("Current number is " * string(i))
end
Current number is 1
Current number is 2
Current number is 3
A whole population
- Using a for loop and the functions we defined above, it is now very easy to iterate or evolve a population of agents:

pop = [VariationalLearner(0.1, 0.01, 0.4, 0.1) for i in 1:1000]

for t in 1:100
    x = rand(pop)
    y = rand(pop)
    interact!(x, y)
end
Exercise
Write the same thing using an array comprehension instead of a for loop.
Answer
pop = [VariationalLearner(0.1, 0.01, 0.4, 0.1) for i in 1:1000]

# the comprehension collects interact!'s return value, i.e. the
# listener's updated p, at every interaction
[interact!(rand(pop), rand(pop)) for t in 1:100]
100-element Vector{Float64}:
0.099
0.099
0.099
0.099
0.099
0.099
0.099
0.099
0.099
0.099
0.099
0.099
0.099
⋮
0.09801
0.099
0.10900000000000001
0.099
0.099
0.099
0.099
0.099
0.099
0.099
0.099
0.099
Next time
- Next week, we will learn how to summarize the state of an entire population
- This will allow us to track the population’s behaviour over time and hence model potential language change
- This week’s homework is all about consolidating the ideas we’ve looked at so far – the variational learner and basics of Julia