
Inconsistent sensitivities with multiple parameters #272

@klamike

Description

I've been working on a project with DiffOpt where I need to query the sensitivity of variables with respect to parameters. I managed to make an MWE of the issue with just JuMP and DiffOpt, but I'm still not sure what the underlying problem is:


Consider the problem below, where for 0 < p < 1 there is exactly one optimal active set: the x ≥ p constraints are active and the bounds on x are inactive. Since the optimum is x* = p componentwise, I expect the W extracted below to be exactly the identity matrix (scaled by the perturbation).
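(Quick sanity check of that expectation, outside DiffOpt: since the optimum of the LP above is x* = clamp.(p, 0, 1), a finite-difference Jacobian of that closed-form solution map comes out as the identity. xstar below is just a throwaway helper for this check, not anything from DiffOpt.)

xstar(p) = clamp.(p, 0.0, 1.0)  # closed-form optimum: x*_i = p_i for 0 < p_i < 1
ε = 1e-6
p0 = fill(0.5, 5)
W_fd = hcat([(xstar(p0 .+ ε .* (1:5 .== i)) .- xstar(p0)) ./ ε for i in 1:5]...)
# W_fd ≈ the 5×5 identity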

Note I am using the new POI integration from #262.

For one parameter (P = 1), everything works as expected:

using JuMP, DiffOpt, Ipopt

P = 1

m = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer, with_parametric_opt_interface=true))
@variable(m, 0.0 ≤ x[1:P] ≤ 1.0)
@variable(m, p[1:P] ∈ Parameter.(0.5))
@constraint(m, x .≥ p)
@objective(m, Min, sum(x))
optimize!(m)
@assert is_solved_and_feasible(m)

W = zeros(length(x), length(p))
for i in 1:length(p)
    DiffOpt.empty_input_sensitivities!(m)
    δ = zeros(length(p))
    δ[i] = 1.0  # to compute sensitivity wrt p[i] 
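    # pass δ as the forward-mode perturbation of the parameters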
    MOI.set.(m, DiffOpt.ForwardConstraintSet(), ParameterRef.(p), Parameter.(δ))
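    # differentiate the optimal solution in the direction set above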
    DiffOpt.forward_differentiate!(m)
    for j in 1:length(x)
        W[j, i] = MOI.get(m, DiffOpt.ForwardVariablePrimal(), x[j])
    end
end

W
# 1×1 Matrix{Float64}:
#  1.0

But setting P = 2, we get:

# 2×2 Matrix{Float64}:
#   1.0  1.0
#  -0.0  1.0

where the first column seems correct, but there is an extra 1 in the first row of the second column. With larger P the same pattern continues, giving an upper-triangular matrix of ones, e.g. for P = 5:

# 5×5 Matrix{Float64}:
#  1.0  1.0  1.0  1.0  1.0
#  0.0  1.0  1.0  1.0  1.0
#  0.0  0.0  1.0  1.0  1.0
#  0.0  0.0  0.0  1.0  1.0
#  0.0  0.0  0.0  0.0  1.0

Interestingly, the same script using the new NLP backend from #260 does match what I expect! To force the use of the NLP backend, I switch to that branch and pass with_parametric_opt_interface=false:

m = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer, with_parametric_opt_interface=false))

and the result is as expected:

# 5×5 Matrix{Float64}:
#  1.0  0.0  0.0  0.0  0.0
#  0.0  1.0  0.0  0.0  0.0
#  0.0  0.0  1.0  0.0  0.0
#  0.0  0.0  0.0  1.0  0.0
#  0.0  0.0  0.0  0.0  1.0

I also tried switching to Clarabel and adding a conic constraint to force the ConicProgram backend - same output as QuadraticProgram:

ConicProgram version
using Clarabel

P = 2

m = Model(() -> DiffOpt.diff_optimizer(Clarabel.Optimizer, with_parametric_opt_interface=true))
@variable(m, 0.0 ≤ x[1:P] ≤ 1.0)
@variable(m, y)
@constraint(m, [y; 1; x[1]] ∈ MOI.ExponentialCone())
@variable(m, p[1:P] ∈ Parameter.(0.5))
@constraint(m, x .≥ p)
@objective(m, Min, sum(x))
optimize!(m)
@assert is_solved_and_feasible(m)

# ...same code to compute W...
# 2×2 Matrix{Float64}:
#  1.0          1.0
# -8.68782e-10  1.0

cc @andrewrosemberg
