Draft pull request. Changes from all commits (48 commits):
cb08e5a  WIP SemImplyState (Mar 17, 2024)
a9161d0  minus2ll(): cleanup method signatures (alyst, Feb 2, 2025)
53741ee  fix chi2 (alyst, Jun 13, 2024)
4f86dba  fix RMSEA (Mar 19, 2024)
7eef27e  p_values(): use ccdf() (May 28, 2024)
aeead7f  FIML: SemFIMLPattern (alyst, Feb 4, 2025)
afa0605  declare cov matrices symmetric (alyst, Jun 13, 2024)
c6a3009  EM: optimizations (Mar 20, 2024)
d57c1c7  start_simple(SemEnsemble): simplify (Mar 20, 2024)
f43b535  RAM: reuse sigma array (Mar 23, 2024)
8ccb545  RAM: optional sparse Sigma matrix (Apr 1, 2024)
618c6b5  RAM: don't need to copy (I-A) (Mar 23, 2024)
855767e  ML: refactor to minimize allocs (alyst, Feb 4, 2025)
e58b3b8  add PackageExtensionCompat (Mar 12, 2024)
08d8c56  variance_params(SEMSpec) (Mar 26, 2024)
da83730  predict_latent_vars() (alyst, Dec 23, 2024)
07dbeff  lavaan_model() (Apr 1, 2024)
d42d3d5  EM: move code refs to docstring (Apr 10, 2024)
9502f07  EM MVN: decouple from SemObsMissing (alyst, Dec 22, 2024)
9a83189  test/fiml: set EM MVN rtol=1e-10 (alyst, Apr 14, 2024)
a95fa04  SemObsMissing: fix obs_mean() test (alyst, Aug 11, 2024)
331e2f9  MissingPattern: transpose data (Apr 17, 2024)
1a52cb2  EM MVN: report rel_error if not converged (Apr 17, 2024)
800cc5a  EM: max_nsamples_em opt to limit samples used (alyst, Jun 13, 2024)
74fc62b  EM: optimize mean handling (alyst, Aug 11, 2024)
9bdf384  test_grad/hess(): check that alt calls give same results (May 29, 2024)
07afbf3  start_simple(): code cleanup (alyst, Aug 11, 2024)
d614b5b  start_simple(): start vals for lat and obs means (Jul 9, 2024)
78ef453  observed_vars(RAMMatrices; order): rows/cols order (alyst, Dec 22, 2024)
a4d3eca  observed_var_indices(::RAMMatrices; order=:columns) (Sep 22, 2024)
99d5bc2  move sparse mtx utils to new file (alyst, Dec 22, 2024)
01d5d03  EM: min_eigval kw for regularization (alyst, Dec 22, 2024)
22173ae  fix batch_sym_inv_updates() ws (alyst, Dec 22, 2024)
edbce8a  reorder_observed_vars!(spec) method (alyst, Feb 2, 2025)
51c98e0  vech() and vechinds() functions (alyst, Dec 24, 2024)
18fd3cf  SemImplied/SemLossFun: drop meanstructure kwarg (alyst, Feb 4, 2025)
8c5a857  RAMMatrices(): ctor to replace params (May 27, 2024)
d4ebb8e  RAMSymbolic: rename _func to _eval! (alyst, Dec 24, 2024)
cc0bacf  md ws fixes (alyst, Dec 23, 2024)
cdc7ac1  use `@printf` to limit signif digits printed (alyst, Dec 24, 2024)
0faabc1  ML/FIML: workaround generic_matmul issue (alyst, Dec 24, 2024)
e8e8389  refactor Sem, SemEnsemble, SemLoss (alyst, Feb 3, 2025)
bfa8d36  ProxAlgo: fix doc typo (alyst, Dec 24, 2024)
e6c64f5  test/Proximal: move usings to the central file (alyst, Dec 24, 2024)
feeab50  tests: move usings in the top file (alyst, Dec 24, 2024)
57c99b0  remove multigroup2 tests (alyst, Dec 24, 2024)
0721165  tests: revert kwless ctors (alyst, Dec 24, 2024)
486984b  BlackBoxOptim.jl backend support (alyst, Dec 23, 2024)
5 changes: 5 additions & 0 deletions Project.toml
@@ -13,8 +13,10 @@ LineSearches = "d3d80556-e9d4-5f37-9878-2ab0fcc64255"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
NLSolversBase = "d41bc354-129a-5804-8e4c-c37616107c6c"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"
+PackageExtensionCompat = "65ce6f38-6b18-4e1d-a461-8949797d7930"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
+Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
@@ -46,9 +48,12 @@ Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
test = ["Test"]

[weakdeps]
+BlackBoxOptim = "a134a8b2-14d6-55f6-9291-3336d3ab0209"
NLopt = "76087f3c-5699-56af-9a33-bf431cd00edd"
+Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
ProximalAlgorithms = "140ffc9f-1907-541a-a177-7475e0a401e9"

[extensions]
SEMNLOptExt = "NLopt"
SEMProximalOptExt = "ProximalAlgorithms"
+SEMBlackBoxOptimExt = ["BlackBoxOptim", "Optimisers"]
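As a quick sketch of how the weak-dependency mechanism behaves (package names as in this Project.toml):

```julia
using StructuralEquationModels   # loads only the core package; extensions stay dormant
using BlackBoxOptim, Optimisers  # loading both trigger packages activates SEMBlackBoxOptimExt
```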
4 changes: 2 additions & 2 deletions docs/src/tutorials/concept.md
@@ -49,8 +49,8 @@ Available loss functions are
- [`SemRidge`](@ref): ridge regularization

## The optimizer part aka `SemOptimizer`
-The optimizer part of a model connects to the numerical optimization backend used to fit the model.
-It can be used to control options like the optimization algorithm, linesearch, stopping criteria, etc.
+The optimizer part of a model connects to the numerical optimization backend used to fit the model.
+It can be used to control options like the optimization algorithm, linesearch, stopping criteria, etc.
There are currently two available backends, [`SemOptimizerOptim`](@ref) connecting to the [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) backend, and [`SemOptimizerNLopt`](@ref) connecting to the [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) backend.
For more information about the available options see also the tutorials about [Using Optim.jl](@ref) and [Using NLopt.jl](@ref), as well as [Constrained optimization](@ref).
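For example, a minimal sketch of picking a backend (the `algorithm` keyword is described in the linked tutorials; treat the exact call as illustrative):

```julia
using StructuralEquationModels, Optim

# select the Optim.jl backend with a specific algorithm
my_optimizer = SemOptimizerOptim(algorithm = BFGS())
```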

2 changes: 1 addition & 1 deletion docs/src/tutorials/construction/build_by_parts.md
@@ -40,7 +40,7 @@ end

partable = ParameterTable(
    graph,
-    latent_vars = latent_vars,
+    latent_vars = latent_vars,
    observed_vars = observed_vars)
```
20 changes: 10 additions & 10 deletions docs/src/tutorials/fitting/fitting.md
@@ -7,19 +7,19 @@ model_fit = sem_fit(model)

# output

-Fitted Structural Equation Model
-===============================================
---------------------- Model -------------------
+Fitted Structural Equation Model
+===============================================
+--------------------- Model -------------------

-Structural Equation Model
-- Loss Functions
+Structural Equation Model
+- Loss Functions
   SemML
-- Fields
-   observed: SemObservedData
-   implied: RAM
-   optimizer: SemOptimizerOptim
+- Fields
+   observed: SemObservedData
+   implied: RAM
+   optimizer: SemOptimizerOptim

-------------- Optimization result -------------
+------------- Optimization result -------------

* Status: success
4 changes: 2 additions & 2 deletions docs/src/tutorials/meanstructure.md
@@ -40,7 +40,7 @@ end

partable = ParameterTable(
    graph,
-    latent_vars = latent_vars,
+    latent_vars = latent_vars,
    observed_vars = observed_vars)
```
@@ -78,7 +78,7 @@ end

partable = ParameterTable(
    graph,
-    latent_vars = latent_vars,
+    latent_vars = latent_vars,
    observed_vars = observed_vars)
```
6 changes: 3 additions & 3 deletions docs/src/tutorials/regularization/regularization.md
@@ -2,7 +2,7 @@

## Setup

-For ridge regularization, you can simply use `SemRidge` as an additional loss function
+For ridge regularization, you can simply use `SemRidge` as an additional loss function
(for example, a model with the loss functions `SemML` and `SemRidge` corresponds to ridge-regularized maximum likelihood estimation).

For lasso, elastic net and (far) beyond, we provide the `ProximalSEM` package. You can install it and load it alongside `StructuralEquationModels`:
@@ -39,7 +39,7 @@ using ProximalOperators

## `SemOptimizerProximal`

`ProximalSEM` provides a new "building block" for the optimizer part of a model, called `SemOptimizerProximal`.
-It connects our package to the [`ProximalAlgorithms.jl`](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl) optimization backend, providing so-called proximal optimization algorithms.
+It connects our package to the [`ProximalAlgorithms.jl`](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl) optimization backend, providing so-called proximal optimization algorithms.
Those can handle, amongst other things, various forms of regularization.
It can be used, for example, as follows.
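A sketch (the `operator_g` keyword and the `NormL1` operator from ProximalOperators are illustrative assumptions, not taken from this diff):

```julia
using ProximalOperators

λ = 0.02
optimizer = SemOptimizerProximal(operator_g = NormL1(λ))  # lasso-type penalty
```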
Expand Down Expand Up @@ -87,7 +87,7 @@ end

partable = ParameterTable(
graph,
latent_vars = latent_vars,
latent_vars = latent_vars,
observed_vars = observed_vars
)

Expand Down
12 changes: 6 additions & 6 deletions docs/src/tutorials/specification/graph_interface.md
@@ -1,6 +1,6 @@
# Graph interface

-## Workflow
+## Workflow
As discussed before, when using the graph interface, you can specify your model as a graph

```julia
Expand All @@ -17,7 +17,7 @@ latent_vars = ...

partable = ParameterTable(
graph,
latent_vars = latent_vars,
latent_vars = latent_vars,
observed_vars = observed_vars)

model = Sem(
Expand All @@ -32,7 +32,7 @@ In general, there are two different types of parameters: **directed** and **indi
We allow multiple variables on both sides of an arrow, for example `x → [y z]` or `[a b] → [c d]`. The later specifies element wise edges; that is its the same as `a → c; b → d`. If you want edges corresponding to the cross-product, we have the double lined arrow `[a b] ⇒ [c d]`, corresponding to `a → c; a → d; b → c; b → d`. The undirected arrows ↔ (element-wise) and ⇔ (crossproduct) behave the same way.

!!! note "Unicode symbols in julia"
-    The `→` symbol is a unicode symbol allowed in julia (among many others; see this [list](https://docs.julialang.org/en/v1/manual/unicode-input/)). You can enter it in the julia REPL or the vscode IDE by typing `\to` followed by hitting `tab`. Similarly,
+    The `→` symbol is a unicode symbol allowed in julia (among many others; see this [list](https://docs.julialang.org/en/v1/manual/unicode-input/)). You can enter it in the julia REPL or the vscode IDE by typing `\to` followed by hitting `tab`. Similarly,
- `←` = `\leftarrow`,
- `↔` = `\leftrightarrow`,
- `⇒` = `\Rightarrow`,
Expand All @@ -54,7 +54,7 @@ graph = @StenoGraph begin
ξ₃ ↔ fixed(1.0)*ξ₃
end
```
would
would
- fix the directed effects from `ξ₁` to `x1` and from `ξ₂` to `x2` to `1`
- leave the directed effect from `ξ₃` to `x7` free but instead restrict the variance of `ξ₃` to `1`
- give the effect from `ξ₁` to `x3` the label `:a` (which can be convenient later if you want to retrieve information from your model about that specific parameter)
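Put together, a sketch of how these constraints could appear in a graph specification (the surrounding lines are assumptions reconstructed from the bullets above):

```julia
graph = @StenoGraph begin
    # loadings of x1 and x2 fixed to 1; the effect on x3 labeled :a
    ξ₁ → fixed(1)*x1 + label(:a)*x3
    ξ₂ → fixed(1)*x2
    # loading of x7 stays free, but the variance of ξ₃ is fixed to 1
    ξ₃ → x7
    ξ₃ ↔ fixed(1.0)*ξ₃
end
```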
Expand All @@ -66,7 +66,7 @@ As you saw above and in the [A first model](@ref) example, the graph object need
```julia
partable = ParameterTable(
graph,
latent_vars = latent_vars,
latent_vars = latent_vars,
observed_vars = observed_vars)
```

Expand All @@ -85,7 +85,7 @@ The variable names (`:x1`) have to be symbols, the syntax `:something` creates a
_(latent_vars) ⇔ _(latent_vars)
end
```
creates undirected effects coresponding to
creates undirected effects coresponding to
1. the variances of all observed variables and
2. the variances plus covariances of all latent variables
So if you want to work with a subset of variables, simply specify a vector of symbols `somevars = [...]`, and inside the graph specification, refer to them as `_(somevars)`.
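For example (a minimal sketch):

```julia
somevars = [:x1, :x2, :x3]

graph = @StenoGraph begin
    # element-wise: the variances of x1, x2 and x3 only
    _(somevars) ↔ _(somevars)
end
```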
2 changes: 1 addition & 1 deletion docs/src/tutorials/specification/parameter_table.md
@@ -5,5 +5,5 @@ As lavaan also uses parameter tables to store model specifications, we are working…

## Convert from and to RAMMatrices

-To convert a RAMMatrices object to a ParameterTable, simply use `partable = ParameterTable(rammatrices)`.
+To convert a RAMMatrices object to a ParameterTable, simply use `partable = ParameterTable(rammatrices)`.
To convert an object of type `ParameterTable` to RAMMatrices, you can use `ram_matrices = RAMMatrices(partable)`.
26 changes: 13 additions & 13 deletions docs/src/tutorials/specification/ram_matrices.md
@@ -1,6 +1,6 @@
# RAMMatrices interface

-Models can also be specified by an object of type `RAMMatrices`.
+Models can also be specified by an object of type `RAMMatrices`.
The RAM (reticular action model) specification corresponds to three matrices: the `A` matrix containing all directed parameters, the `S` matrix containing all undirected parameters, and the `F` matrix filtering out latent variables from the model-implied covariance.

The model-implied covariance matrix for the observed variables of a SEM is then computed as
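```math
\Sigma = F(I-A)^{-1}S(I-A)^{-T}F^T
```

with `I` the identity matrix.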
@@ -56,9 +56,9 @@ A =[0 0 0 0 0 0 0 0 0 0 0 1.0 0 0
θ = Symbol.(:θ, 1:31)

spec = RAMMatrices(;
-    A = A,
-    S = S,
-    F = F,
+    A = A,
+    S = S,
+    F = F,
    params = θ,
    colnames = [:x1, :x2, :x3, :y1, :y2, :y3, :y4, :y5, :y6, :y7, :y8, :ind60, :dem60, :dem65]
)
Expand All @@ -71,9 +71,9 @@ model = Sem(

Let's look at this step by step:

First, we specify the `A`, `S` and `F`-Matrices.
For a free parameter, we write a `Symbol` like `:θ1` (or any other symbol we like) to the corresponding place in the respective matrix, to constrain parameters to be equal we just use the same `Symbol` in the respective entries.
To fix a parameter (as in the `A`-Matrix above), we just write down the number we want to fix it to.
First, we specify the `A`, `S` and `F`-Matrices.
For a free parameter, we write a `Symbol` like `:θ1` (or any other symbol we like) to the corresponding place in the respective matrix, to constrain parameters to be equal we just use the same `Symbol` in the respective entries.
To fix a parameter (as in the `A`-Matrix above), we just write down the number we want to fix it to.
All other entries are 0.
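A tiny sketch of the convention (hypothetical variables `x1`, `x2` and factor `ξ`, not part of the tutorial's example):

```julia
# rows/columns ordered as [x1, x2, ξ]:
# the loading of x1 on ξ is fixed to 1.0, the loading of x2 is the free parameter :λ
A = [0  0  1.0
     0  0  :λ
     0  0  0  ]
```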

Second, we specify a vector of symbols containing our parameters:

@@ -82,14 +82,14 @@
θ = Symbol.(:θ, 1:31)
```

-Third, we construct an object of type `RAMMatrices`, passing our matrices and parameters, as well as the column names of our matrices.
+Third, we construct an object of type `RAMMatrices`, passing our matrices and parameters, as well as the column names of our matrices.
Those are quite important, as they will be used to rearrange your data to match it to your `RAMMatrices` specification.

```julia
spec = RAMMatrices(;
-    A = A,
-    S = S,
-    F = F,
+    A = A,
+    S = S,
+    F = F,
    params = θ,
    colnames = [:x1, :x2, :x3, :y1, :y2, :y3, :y4, :y5, :y6, :y7, :y8, :ind60, :dem60, :dem65]
)
Expand All @@ -109,7 +109,7 @@ According to the RAM, model implied mean values of the observed variables are co
```math
\mu = F(I-A)^{-1}M
```
where `M` is a vector of mean parameters. To estimate the means of the observed variables in our example (and set the latent means to `0`), we would specify the model just as before but add
where `M` is a vector of mean parameters. To estimate the means of the observed variables in our example (and set the latent means to `0`), we would specify the model just as before but add

```julia
...
Expand All @@ -128,5 +128,5 @@ spec = RAMMatrices(;

## Convert from and to ParameterTables

-To convert a RAMMatrices object to a ParameterTable, simply use `partable = ParameterTable(ram_matrices)`.
+To convert a RAMMatrices object to a ParameterTable, simply use `partable = ParameterTable(ram_matrices)`.
To convert an object of type `ParameterTable` to RAMMatrices, you can use `ram_matrices = RAMMatrices(partable)`.
49 changes: 49 additions & 0 deletions ext/SEMBlackBoxOptimExt/AdamMutation.jl
@@ -0,0 +1,49 @@
# mutate by moving in the gradient direction
mutable struct AdamMutation{M <: AbstractSem, O, S} <: MutationOperator
    model::M
    optim::O
    opt_state::S
    params_fraction::Float64

    function AdamMutation(model::AbstractSem, params::AbstractDict)
        optim = RAdam(params[:AdamMutation_eta], params[:AdamMutation_beta])
        params_fraction = params[:AdamMutation_params_fraction]
        opt_state = Optimisers.init(optim, Vector{Float64}(undef, nparams(model)))

        new{typeof(model), typeof(optim), typeof(opt_state)}(
            model,
            optim,
            opt_state,
            params_fraction,
        )
    end
end

Base.show(io::IO, op::AdamMutation) =
    print(io, "AdamMutation(", op.optim, " state[3]=", op.opt_state[3], ")")

"""
Default parameters for `AdamMutation`.
"""
const AdamMutation_DefaultOptions = ParamsDict(
    :AdamMutation_eta => 1E-1,
    :AdamMutation_beta => (0.99, 0.999),
    :AdamMutation_params_fraction => 0.25,
)

function BlackBoxOptim.apply!(m::AdamMutation, v::AbstractVector{<:Real}, target_index::Int)
    # evaluate the objective and its gradient at the current candidate
    grad = similar(v)
    obj = SEM.evaluate!(0.0, grad, nothing, m.model, v)
    # zero out a random subset of the gradient, keeping roughly params_fraction of it
    @inbounds for i in eachindex(grad)
        (rand() > m.params_fraction) && (grad[i] = 0.0)
    end

    # take one RAdam step; re-initialize the state if it degenerated or the step is not finite
    m.opt_state, dv = Optimisers.apply!(m.optim, m.opt_state, v, grad)
    if (m.opt_state[3][1] <= 1E-20) || !isfinite(obj) || any(!isfinite, dv)
        m.opt_state = Optimisers.init(m.optim, v)
    else
        v .-= dv
    end

    return v
end
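To make the operator's role concrete, a hypothetical standalone application (the `model` and the candidate vector `start_params` are assumed context):

```julia
m = AdamMutation(model, AdamMutation_DefaultOptions)
v = copy(start_params)         # some candidate parameter vector
BlackBoxOptim.apply!(m, v, 1)  # moves v one sparsified RAdam step along the gradient
```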
89 changes: 89 additions & 0 deletions ext/SEMBlackBoxOptimExt/BlackBoxOptim.jl
@@ -0,0 +1,89 @@
############################################################################################
### connect to BlackBoxOptim.jl as backend
############################################################################################

"""
SEM optimizer that delegates model fitting to the global (derivative-free) algorithms
of BlackBoxOptim.jl. Default box constraints are set by `lower_bound`/`upper_bound`
(with a separate `variance_lower_bound` for variance parameters) and can be overridden
per parameter via the `lower_bounds`/`upper_bounds` dictionaries.
"""
struct SemOptimizerBlackBoxOptim <: SemOptimizer{:BlackBoxOptim}
    lower_bound::Float64 # default lower bound
    variance_lower_bound::Float64 # default variance lower bound
    lower_bounds::Union{Dict{Symbol, Float64}, Nothing}

    upper_bound::Float64 # default upper bound
    upper_bounds::Union{Dict{Symbol, Float64}, Nothing}
end

function SemOptimizerBlackBoxOptim(;
    lower_bound::Float64 = -1000.0,
    lower_bounds::Union{AbstractDict{Symbol, Float64}, Nothing} = nothing,
    variance_lower_bound::Float64 = 0.001,
    upper_bound::Float64 = 1000.0,
    upper_bounds::Union{AbstractDict{Symbol, Float64}, Nothing} = nothing,
    kwargs...,
)
    if variance_lower_bound < 0.0
        throw(ArgumentError("variance_lower_bound must be non-negative"))
    end
    return SemOptimizerBlackBoxOptim(
        lower_bound,
        variance_lower_bound,
        lower_bounds,
        upper_bound,
        upper_bounds,
    )
end

SEM.SemOptimizer{:BlackBoxOptim}(args...; kwargs...) =
    SemOptimizerBlackBoxOptim(args...; kwargs...)

# NOTE: `SemOptimizerBlackBoxOptim` declares no `algorithm`/`options` fields,
# so these accessors will error if called
SEM.algorithm(optimizer::SemOptimizerBlackBoxOptim) = optimizer.algorithm
SEM.options(optimizer::SemOptimizerBlackBoxOptim) = optimizer.options

struct SemModelBlackBoxOptimProblem{M <: AbstractSem} <:
       OptimizationProblem{ScalarFitnessScheme{true}}
    model::M
    fitness_scheme::ScalarFitnessScheme{true}
    search_space::ContinuousRectSearchSpace
end

# build the rectangular search space: variance parameters get their own default lower
# bound, and explicit per-parameter bounds override the defaults
function BlackBoxOptim.search_space(model::AbstractSem)
    optim = model.optimizer::SemOptimizerBlackBoxOptim
    varparams = Set(SEM.variance_params(model.implied.ram_matrices))
    return ContinuousRectSearchSpace(
        [
            begin
                def = in(p, varparams) ? optim.variance_lower_bound : optim.lower_bound
                isnothing(optim.lower_bounds) ? def : get(optim.lower_bounds, p, def)
            end for p in SEM.params(model)
        ],
        [
            begin
                def = optim.upper_bound
                isnothing(optim.upper_bounds) ? def : get(optim.upper_bounds, p, def)
            end for p in SEM.params(model)
        ],
    )
end

function SemModelBlackBoxOptimProblem(
    model::AbstractSem,
    optimizer::SemOptimizerBlackBoxOptim,
)
    SemModelBlackBoxOptimProblem(model, ScalarFitnessScheme{true}(), search_space(model))
end

# the fitness of a candidate parameter vector is the SEM objective evaluated at it
BlackBoxOptim.fitness(params::AbstractVector, wrapper::SemModelBlackBoxOptimProblem) =
    SEM.evaluate!(0.0, nothing, nothing, wrapper.model, params)

# sem_fit method
function SEM.sem_fit(
    optimizer::SemOptimizerBlackBoxOptim,
    model::AbstractSem,
    start_params::AbstractVector;
    MaxSteps::Integer = 50000,
    kwargs...,
)
    problem = SemModelBlackBoxOptimProblem(model, optimizer)
    res = bboptimize(problem; MaxSteps, kwargs...)
    return SemFit(best_fitness(res), best_candidate(res), nothing, model, res)
end
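A sketch of driving the new backend end-to-end (the `model` and `start_params` are assumed context; `MaxSteps` is the keyword defined above):

```julia
using StructuralEquationModels, BlackBoxOptim, Optimisers

opt = SemOptimizer{:BlackBoxOptim}()  # dispatches to SemOptimizerBlackBoxOptim()
fit = sem_fit(opt, model, start_params; MaxSteps = 20_000)
```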