Hello,
For many models, MLE and MAP return results corresponding to local rather than global maxima. Sometimes switching optimizers or tweaking optimizer options can help. In my experience, a simple solution is to run the optimizer multiple times and select the maximum across attempts. Would it be possible to add a second method for MLE and MAP that allows users to specify the number of attempts? For example,
function maximum_likelihood(model::DynamicPPL.Model, n_reps::Integer, args...; kwargs...)
    # Run the optimizer n_reps times and keep the attempt with the highest log likelihood.
    mle = estimate_mode(model, MLE(), args...; kwargs...)
    for _ in 2:n_reps
        _mle = estimate_mode(model, MLE(), args...; kwargs...)
        mle = _mle.lp > mle.lp ? _mle : mle
    end
    return mle
end
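A MAP counterpart could follow the same pattern; this is just a rough sketch under the same assumptions as above (i.e. that estimate_mode returns a result exposing an lp field):
function maximum_a_posteriori(model::DynamicPPL.Model, n_reps::Integer, args...; kwargs...)
    # Keep the attempt with the highest log posterior across n_reps runs (sketch only).
    map_est = estimate_mode(model, MAP(), args...; kwargs...)
    for _ in 2:n_reps
        _map = estimate_mode(model, MAP(), args...; kwargs...)
        map_est = _map.lp > map_est.lp ? _map : map_est
    end
    return map_est
end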
Here is an example of the problem using a fairly simple model:
Example
using SequentialSamplingModels
using Random
using Turing
Random.seed!(100)
n_samples = 50
rts = rand(ShiftedLogNormal(ν=-1, σ=.8, τ=.3), n_samples)
@model function model(rts; min_rt = minimum(rts))
    ν ~ Normal(-1, 2)
    σ ~ truncated(Normal(.8, 2), 0, Inf)
    τ ~ Uniform(0, min_rt)
    rts ~ ShiftedLogNormal(ν, σ, τ)
    return (;ν, σ, τ)
end
lb = [-1,0,0]
ub = [10, 10, minimum(rts)]
# Run single-attempt MLE 10 times and collect the resulting log likelihoods.
lps = map(_ -> maximum_likelihood(model(rts); lb, ub).lp, 1:10)
# Generate a MAP estimate.
map_estimate = maximum_a_posteriori(model(rts); lb, ub)
Results
10-element Vector{Float64}:
-12.226295752323539
-12.226295752292035
-80.44912241263312
-12.226295752293689
-64.40215460329794
-12.22629575228706
-12.226295752286504
-84.43334167030277
-76.50164720237467
-55.60874854749243
As you can see, half of the time the algorithm landed on a local maximum.
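With a method like the one sketched above, the reproduction would reduce to a single call per estimator, e.g. (hypothetical signature, with the number of attempts as the second positional argument):
best_mle = maximum_likelihood(model(rts), 10; lb, ub)
best_map = maximum_a_posteriori(model(rts), 10; lb, ub)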
If this is something you are willing to support, I can submit a PR.