OptimizationIpopt.jl
OptimizationIpopt.jl is a wrapper package that integrates Ipopt.jl with the Optimization.jl ecosystem. This allows you to use the powerful Ipopt (Interior Point OPTimizer) solver through Optimization.jl's unified interface.
Ipopt is a software package for large-scale nonlinear optimization designed to find (local) solutions of mathematical optimization problems of the form:
\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g_L \leq g(x) \leq g_U \\
& x_L \leq x \leq x_U
\end{aligned}
\]
where $f(x): \mathbb{R}^n \to \mathbb{R}$ is the objective function, $g(x): \mathbb{R}^n \to \mathbb{R}^m$ are the constraint functions, the vectors $g_L$ and $g_U$ are the lower and upper bounds on the constraints, and the vectors $x_L$ and $x_U$ are the bounds on the variables $x$.
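For example, the nonlinear constrained problem solved later on this page (minimize $x_1^2 + x_2^2$ subject to $x_1^2 + x_2^2 - 2x_1 = 0$ and $x_1 + x_2 \geq 1$, with no variable bounds) fits this template: an equality constraint is encoded by equal entries in $g_L$ and $g_U$, and an unbounded direction by $\pm\infty$:
\[
\begin{aligned}
f(x) &= x_1^2 + x_2^2, \qquad
g(x) = \begin{pmatrix} x_1^2 + x_2^2 - 2x_1 \\ x_1 + x_2 \end{pmatrix}, \\
g_L &= \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad
g_U = \begin{pmatrix} 0 \\ \infty \end{pmatrix}, \quad
x_L = \begin{pmatrix} -\infty \\ -\infty \end{pmatrix}, \quad
x_U = \begin{pmatrix} \infty \\ \infty \end{pmatrix}.
\end{aligned}
\]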
Installation: OptimizationIpopt.jl
To use this package, install the OptimizationIpopt package:
import Pkg;
Pkg.add("OptimizationIpopt");
Methods
OptimizationIpopt.jl provides the IpoptOptimizer algorithm, which wraps the Ipopt.jl solver for use with Optimization.jl. This is an interior-point algorithm that uses a filter line-search method and is particularly effective for:
- Large-scale nonlinear problems
- Problems with nonlinear constraints
- Problems requiring high accuracy solutions
Algorithm Requirements
IpoptOptimizer requires:
- Gradient information (via automatic differentiation or user-provided)
- Hessian information (can be approximated or provided)
- Constraint Jacobian (for constrained problems)
- Constraint Hessian (for constrained problems)
The algorithm supports (see the sketch below):
- Box constraints via lb and ub in the OptimizationProblem
- General nonlinear equality and inequality constraints via lcons and ucons
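Both requirements are usually satisfied by attaching an AD backend to the OptimizationFunction, as in the examples later on this page. A minimal sketch, assuming a Rosenbrock-type objective and a single inequality constraint (the names rosen and circle_cons are illustrative):
using Optimization, OptimizationIpopt
using Zygote
# Objective; the gradient and Hessian come from the AD backend below
rosen(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
# In-place constraint residual: res holds g(x)
function circle_cons(res, x, p)
    res[1] = x[1]^2 + x[2]^2
end
# AutoZygote supplies the derivative information Ipopt needs
optf = OptimizationFunction(rosen, AutoZygote(); cons = circle_cons)
prob = OptimizationProblem(optf, zeros(2);
    lb = [-2.0, -2.0], ub = [2.0, 2.0],  # box constraints
    lcons = [-Inf], ucons = [1.0])       # x[1]^2 + x[2]^2 <= 1
sol = solve(prob, IpoptOptimizer())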
Basic Usage
using Optimization, OptimizationIpopt
# Create optimizer with default settings
opt = IpoptOptimizer()
# Or configure Ipopt-specific options
opt = IpoptOptimizer(
acceptable_tol = 1e-8,
mu_strategy = "adaptive"
)
# Solve the problem (prob is an OptimizationProblem built as in the examples below)
sol = solve(prob, opt)
Options and Parameters
Common Interface Options
The following options can be passed as keyword arguments to solve and follow the common Optimization.jl interface:
- maxiters: Maximum number of iterations (overrides Ipopt's max_iter)
- maxtime: Maximum wall time in seconds (overrides Ipopt's max_wall_time)
- abstol: Absolute tolerance (not directly used by Ipopt)
- reltol: Convergence tolerance (overrides Ipopt's tol)
- verbose: Control output verbosity (overrides Ipopt's print_level)
  - false or 0: No output
  - true or 5: Standard output
  - Integer values 0-12: Different verbosity levels
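For example, a sketch of passing these options at the solve call (prob is any OptimizationProblem set up as in the examples below):
sol = solve(prob, IpoptOptimizer();
    maxiters = 2000,   # maps to Ipopt's max_iter
    maxtime = 60.0,    # maps to Ipopt's max_wall_time (in seconds)
    reltol = 1e-8,     # maps to Ipopt's tol
    verbose = false)   # maps to Ipopt's print_level = 0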
IpoptOptimizer Constructor Options
Ipopt-specific options are passed to the IpoptOptimizer constructor. The most commonly used options are available as struct fields:
Termination Options
- acceptable_tol::Float64 = 1e-6: Acceptable convergence tolerance (relative)
- acceptable_iter::Int = 15: Number of acceptable iterations before termination
- dual_inf_tol::Float64 = 1.0: Desired threshold for dual infeasibility
- constr_viol_tol::Float64 = 1e-4: Desired threshold for constraint violation
- compl_inf_tol::Float64 = 1e-4: Desired threshold for complementarity conditions
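A sketch of tightening the acceptable-point termination heuristic (the values here are illustrative, not recommendations):
opt = IpoptOptimizer(
    acceptable_tol = 1e-7,   # accept this looser tolerance...
    acceptable_iter = 10,    # ...after 10 consecutive acceptable iterations
    constr_viol_tol = 1e-6)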
Linear Solver Options
- linear_solver::String = "mumps": Linear solver to use
  - Default: "mumps" (included with Ipopt)
  - HSL solvers: "ma27", "ma57", "ma86", "ma97" (require separate installation)
  - Others: "pardiso", "spral" (require separate installation)
- linear_system_scaling::String = "none": Method for scaling the linear system. Use "mc19" for HSL solvers.
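For example, a sketch of selecting an HSL solver (this assumes the HSL library is installed and visible to Ipopt, per the note above):
opt = IpoptOptimizer(
    linear_solver = "ma57",          # requires separate HSL installation
    linear_system_scaling = "mc19")  # scaling method suggested for HSL solvers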
NLP Scaling Options
- nlp_scaling_method::String = "gradient-based": Scaling method for the NLP
  - Options: "none", "user-scaling", "gradient-based", "equilibration-based"
- nlp_scaling_max_gradient::Float64 = 100.0: Maximum gradient after scaling
Barrier Parameter Options
- mu_strategy::String = "monotone": Update strategy for the barrier parameter ("monotone", "adaptive")
- mu_init::Float64 = 0.1: Initial value for the barrier parameter
- mu_oracle::String = "quality-function": Oracle for the adaptive mu strategy
Hessian Options
- hessian_approximation::String = "exact": How to approximate the Hessian
  - "exact": Use the exact Hessian
  - "limited-memory": Use an L-BFGS approximation
- limited_memory_max_history::Int = 6: History size for L-BFGS
- limited_memory_update_type::String = "bfgs": Quasi-Newton update formula ("bfgs", "sr1")
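A sketch of switching to the quasi-Newton mode (a complete worked example appears in the Examples section below):
opt = IpoptOptimizer(
    hessian_approximation = "limited-memory",  # avoid exact Hessian evaluations
    limited_memory_max_history = 10)           # keep 10 L-BFGS correction pairs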
Line Search Options
- line_search_method::String = "filter": Line search method ("filter", "penalty")
- accept_every_trial_step::String = "no": Accept every trial step (disables line search)
Output Options
- print_timing_statistics::String = "no": Print detailed timing information
- print_info_string::String = "no": Print the algorithm info string
Warm Start Options
- warm_start_init_point::String = "no": Use a warm start from a previous solution
Restoration Phase Options
- expect_infeasible_problem::String = "no": Enable if the problem is expected to be infeasible
Additional Options Dictionary
For Ipopt options not available as struct fields, use the additional_options dictionary:
opt = IpoptOptimizer(
linear_solver = "ma57",
additional_options = Dict(
"derivative_test" => "first-order",
"derivative_test_tol" => 1e-4,
"fixed_variable_treatment" => "make_parameter",
"alpha_for_y" => "primal"
)
)
The full list of available options is documented in the Ipopt Options Reference.
Option Priority
Options follow this priority order (highest to lowest):
- Common interface arguments passed to solve (e.g., reltol, maxiters)
- Options in the additional_options dictionary
- Struct field values in IpoptOptimizer
Example with multiple option sources:
opt = IpoptOptimizer(
acceptable_tol = 1e-6, # Struct field
mu_strategy = "adaptive", # Struct field
linear_solver = "ma57", # Struct field (needs HSL)
print_timing_statistics = "yes", # Struct field
additional_options = Dict(
"alpha_for_y" => "primal", # Not a struct field
"max_iter" => 500 # Will be overridden by maxiters below
)
)
sol = solve(prob, opt;
maxiters = 1000, # Overrides max_iter in additional_options
reltol = 1e-8 # Sets Ipopt's tol
)
Examples
Basic Unconstrained Optimization
The Rosenbrock function can be minimized using IpoptOptimizer:
using Optimization, OptimizationIpopt
using Zygote
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
# Ipopt requires gradient information
optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p)
sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
0.9999999999999899
0.9999999999999792
Box-Constrained Optimization
Adding box constraints to limit the search space:
using Optimization, OptimizationIpopt
using Zygote
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
optfunc = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p;
lb = [-1.0, -1.0],
ub = [1.5, 1.5])
sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
0.9999999942690103
0.9999999885017182
Nonlinear Constrained Optimization
Solving problems with nonlinear equality and inequality constraints:
using Optimization, OptimizationIpopt
using Zygote
# Objective: minimize x[1]^2 + x[2]^2
objective(x, p) = x[1]^2 + x[2]^2
# Constraint: x[1]^2 + x[2]^2 - 2*x[1] = 0 (equality)
# and x[1] + x[2] >= 1 (inequality)
function constraints(res, x, p)
res[1] = x[1]^2 + x[2]^2 - 2*x[1] # equality constraint
res[2] = x[1] + x[2] # inequality constraint
end
x0 = [0.5, 0.5]
optfunc = OptimizationFunction(objective, AutoZygote(); cons = constraints)
# First constraint is equality (lcons = ucons = 0)
# Second constraint is inequality (lcons = 1, ucons = Inf)
prob = OptimizationProblem(optfunc, x0;
lcons = [0.0, 1.0],
ucons = [0.0, Inf])
sol = solve(prob, IpoptOptimizer())
retcode: Success
u: 2-element Vector{Float64}:
0.29289321506718563
0.7071067774417749
Using Limited-Memory BFGS Approximation
For large-scale problems where computing the exact Hessian is expensive:
using Optimization, OptimizationIpopt
using Zygote
# Large-scale problem
n = 100
rosenbrock_nd(x, p) = sum(p[2] * (x[i+1] - x[i]^2)^2 + (p[1] - x[i])^2 for i in 1:n-1)
x0 = zeros(n)
p = [1.0, 100.0]
# Using automatic differentiation for gradients only
optfunc = OptimizationFunction(rosenbrock_nd, AutoZygote())
prob = OptimizationProblem(optfunc, x0, p)
# Use L-BFGS approximation for Hessian
sol = solve(prob, IpoptOptimizer(
hessian_approximation = "limited-memory",
limited_memory_max_history = 10);
maxiters = 1000)
retcode: Success
u: 100-element Vector{Float64}:
1.0000000000127078
1.0000000000075149
0.9999999999811232
1.0000000000194003
0.9999999999711495
0.9999999999696987
1.0000000000091513
0.9999999999911049
0.9999999999979793
0.9999999999944585
⋮
0.9999999999661779
0.999999999996109
1.0000000000029101
1.0000000002029186
1.000000000298021
1.0000000005183187
1.0000000009206664
1.0000000019803021
1.0000000039715904
Portfolio Optimization Example
A practical example of portfolio optimization with constraints:
using Optimization, OptimizationIpopt
using Zygote
using LinearAlgebra
# Portfolio optimization: minimize risk subject to return constraint
n_assets = 5
μ = [0.05, 0.10, 0.15, 0.08, 0.12] # Expected returns
Σ = [0.05 0.01 0.02 0.01 0.00; # Covariance matrix
0.01 0.10 0.03 0.02 0.01;
0.02 0.03 0.15 0.02 0.03;
0.01 0.02 0.02 0.08 0.02;
0.00 0.01 0.03 0.02 0.06]
target_return = 0.10
# Objective: minimize portfolio variance
portfolio_risk(w, p) = dot(w, Σ * w)
# Constraints: sum of weights = 1, expected return >= target
function portfolio_constraints(res, w, p)
res[1] = sum(w) - 1.0 # Sum to 1 (equality)
res[2] = dot(μ, w) - target_return # Minimum return (inequality)
end
optfunc = OptimizationFunction(portfolio_risk, AutoZygote();
cons = portfolio_constraints)
w0 = fill(1.0/n_assets, n_assets)
prob = OptimizationProblem(optfunc, w0;
lb = zeros(n_assets), # No short selling
ub = ones(n_assets), # No single asset > 100%
lcons = [0.0, 0.0], # Equality and inequality constraints
ucons = [0.0, Inf])
sol = solve(prob, IpoptOptimizer();
reltol = 1e-8,
verbose = 5)
println("Optimal weights: ", sol.u)
println("Expected return: ", dot(μ, sol.u))
println("Portfolio variance: ", sol.objective)
This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.1.
Number of nonzeros in equality constraint Jacobian...: 5
Number of nonzeros in inequality constraint Jacobian.: 5
Number of nonzeros in Lagrangian Hessian.............: 15
Total number of variables............................: 5
variables with only lower bounds: 0
variables with lower and upper bounds: 5
variables with only upper bounds: 0
Total number of equality constraints.................: 1
Total number of inequality constraints...............: 1
inequality constraints with only lower bounds: 1
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 3.1200000e-02 0.00e+00 3.43e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 3.2177561e-02 0.00e+00 1.30e-02 -1.7 1.78e-02 - 9.93e-01 1.00e+00h 1
2 3.0759152e-02 0.00e+00 4.26e-02 -2.5 5.87e-02 - 9.79e-01 1.00e+00f 1
3 2.9846484e-02 0.00e+00 2.83e-08 -2.5 1.07e-01 - 1.00e+00 1.00e+00f 1
4 2.8068833e-02 0.00e+00 2.47e-02 -3.8 3.60e-02 - 8.65e-01 1.00e+00f 1
5 2.7401391e-02 0.00e+00 1.50e-09 -3.8 2.41e-02 - 1.00e+00 1.00e+00f 1
6 2.7234938e-02 0.00e+00 1.46e-04 -5.7 1.03e-02 - 9.79e-01 1.00e+00f 1
7 2.7232343e-02 0.00e+00 1.84e-11 -5.7 1.85e-03 - 1.00e+00 1.00e+00f 1
8 2.7230500e-02 0.00e+00 2.51e-14 -8.6 1.40e-04 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 8
(scaled) (unscaled)
Objective...............: 2.7230500043271131e-02 2.7230500043271131e-02
Dual infeasibility......: 2.5091040356528538e-14 2.5091040356528538e-14
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Variable bound violation: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 6.4320485361913048e-09 6.4320485361913048e-09
Overall NLP error.......: 6.4320485361913048e-09 6.4320485361913048e-09
Number of objective function evaluations = 9
Number of objective gradient evaluations = 9
Number of equality constraint evaluations = 9
Number of inequality constraint evaluations = 9
Number of equality constraint Jacobian evaluations = 9
Number of inequality constraint Jacobian evaluations = 9
Number of Lagrangian Hessian evaluations = 8
Total seconds in IPOPT = 3.890
EXIT: Optimal Solution Found.
Optimal weights: [0.23290714113956607, 0.16055161296302964, 0.09670863193102121, 0.08466825825418357, 0.42516435571219957]
Expected return: 0.09999999648873308
Portfolio variance: 0.02723050004327113
Tips and Best Practices
Scaling: Ipopt performs better when variables and constraints are well-scaled. Consider normalizing your problem if variables have very different magnitudes.
Initial Points: Provide good initial guesses when possible. Ipopt is a local optimizer and the solution quality depends on the starting point.
Hessian Approximation: For large problems or when Hessian computation is expensive, use hessian_approximation = "limited-memory" in the IpoptOptimizer constructor.
Linear Solver Selection: The choice of linear solver can significantly impact performance. For large problems, consider using HSL solvers (ma27, ma57, ma86, ma97). Note that HSL solvers require separate installation - see the Ipopt.jl documentation for setup instructions. The default MUMPS solver works well for small to medium problems.
Constraint Formulation: Ipopt handles equality constraints well. When possible, formulate constraints as equalities rather than pairs of inequalities.
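For instance, a sketch using the lcons/ucons convention from above (optfunc here is assumed to have a single constraint residual g):
# Equality constraint g(x) = 0: set equal lower and upper constraint bounds
prob = OptimizationProblem(optfunc, x0; lcons = [0.0], ucons = [0.0])
# Prefer this over encoding g(x) >= 0 and -g(x) >= 0 as two inequality rows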
Warm Starting: When solving a sequence of similar problems, use the solution from the previous problem as the initial point for the next. You can enable warm starting with IpoptOptimizer(warm_start_init_point = "yes").
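A sketch of this pattern, using remake (from SciMLBase, re-exported by Optimization.jl) to carry the previous solution forward; the perturbed parameters are illustrative:
opt = IpoptOptimizer(warm_start_init_point = "yes")
sol = solve(prob, opt)
# Re-solve a perturbed problem starting from the previous solution
prob2 = remake(prob; u0 = sol.u, p = [1.0, 90.0])  # hypothetical new parameters
sol2 = solve(prob2, opt)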
References
For more detailed information about Ipopt's algorithms and options, consult the Ipopt documentation, the Ipopt Options Reference linked above, and Wächter, A. and Biegler, L. T., "On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming," Mathematical Programming, 106(1):25-57, 2006.