MathOptInterface.jl
MathOptInterface is a Julia abstraction layer to interface with a variety of mathematical optimization solvers.
Installation: OptimizationMOI.jl
To use this package, install the OptimizationMOI package:
import Pkg;
Pkg.add("OptimizationMOI")

Details
As of now, the Optimization interface to MathOptInterface implements only the maxtime common keyword argument.
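As a sketch of how that keyword is used (assuming a previously constructed prob and the Ipopt solver introduced below):

```julia
using OptimizationMOI, Ipopt

# `maxtime` is the common Optimization.jl keyword; OptimizationMOI
# translates it into the solver's own time-limit setting.
sol = solve(prob, Ipopt.Optimizer(); maxtime = 60.0)  # stop after 60 seconds
```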
OptimizationMOI supports an argument mtkize which takes a Boolean (defaulting to false) that enables automatic symbolic expression generation. This allows using any AD backend with solvers or interfaces, such as AmplNLWriter, that require the expression graph of the objective and constraints. This always happens automatically in the case of the AutoModelingToolkit adtype.
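For illustration, mtkize is passed as a keyword to solve. This sketch assumes a prob built with a non-symbolic AD backend (e.g. AutoForwardDiff) and uses the Ipopt executable shipped by Ipopt_jll through AmplNLWriter, which consumes the expression graph:

```julia
using OptimizationMOI, AmplNLWriter, Ipopt_jll

# Symbolically trace the objective and constraints so the
# expression-graph-based AmplNLWriter interface can consume them.
sol = solve(prob, AmplNLWriter.Optimizer(Ipopt_jll.amplexe); mtkize = true)
```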
An optimizer which supports the MathOptInterface API can be called directly if no optimizer options need to be set.
For example, using the Ipopt.jl optimizer:
using OptimizationMOI, Ipopt
sol = solve(prob, Ipopt.Optimizer())

The optimizer options are handled in one of two ways. They can either be set via OptimizationMOI.MOI.OptimizerWithAttributes() or as keyword arguments to solve.
For example, using the Ipopt.jl optimizer:
using OptimizationMOI, Ipopt
opt = OptimizationMOI.MOI.OptimizerWithAttributes(Ipopt.Optimizer,
    "option_name" => option_value, ...)
sol = solve(prob, opt)
sol = solve(prob, Ipopt.Optimizer(); option_name = option_value, ...)

Optimizers
Ipopt.jl (MathOptInterface)
Ipopt.Optimizer - The full list of optimizer options can be found in the Ipopt Documentation
KNITRO.jl (MathOptInterface)
KNITRO.Optimizer - The full list of optimizer options can be found in the KNITRO Documentation
Juniper.jl (MathOptInterface)
Juniper.Optimizer - Juniper requires a nonlinear optimizer to be set via the nl_solver option, which must be a MathOptInterface-based optimizer. See the Juniper documentation for more detail.
using Optimization, OptimizationMOI, Juniper, Ipopt
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
_p = [1.0, 100.0]
f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = Optimization.OptimizationProblem(f, x0, _p)
opt = OptimizationMOI.MOI.OptimizerWithAttributes(Juniper.Optimizer,
    "nl_solver" => OptimizationMOI.MOI.OptimizerWithAttributes(Ipopt.Optimizer,
        "print_level" => 0))
sol = solve(prob, opt)

retcode: Success
u: 2-element Vector{Float64}:
0.9999999999999899
0.9999999999999792

Using Integer Constraints
The following shows how to use integer linear programming within Optimization. We will solve the classical Knapsack Problem using Juniper.jl.
Juniper requires a nonlinear optimizer to be set via the nl_solver option, which must be a MathOptInterface-based optimizer. See the Juniper documentation for more detail.

The integer domain is inferred based on the bounds of the variable:
- Setting the lower bound to zero and the upper bound to one corresponds to MOI.ZeroOne() or a binary decision variable
- Providing other or no bounds corresponds to MOI.Integer()
using Optimization, OptimizationMOI, Juniper, Ipopt

v = [1.0, 2.0, 4.0, 3.0]
w = [5.0, 4.0, 3.0, 2.0]
W = 4.0
u0 = [0.0, 0.0, 0.0, 1.0]
optfun = OptimizationFunction((u, p) -> -v'u, cons = (res, u, p) -> res .= w'u,
    Optimization.AutoForwardDiff())
optprob = OptimizationProblem(optfun, u0; lb = zero.(u0), ub = one.(u0),
int = ones(Bool, length(u0)),
lcons = [-Inf;], ucons = [W;])
nl_solver = OptimizationMOI.MOI.OptimizerWithAttributes(Ipopt.Optimizer,
"print_level" => 0)
minlp_solver = OptimizationMOI.MOI.OptimizerWithAttributes(Juniper.Optimizer,
"nl_solver" => nl_solver)
res = solve(optprob, minlp_solver)

retcode: Success
u: 4-element Vector{Float64}:
0.0
0.0
1.0
0.0
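The reported solution can be sanity-checked by hand: selecting only the third item respects the weight budget and yields the value the solver found (plain Julia, no solver required):

```julia
v = [1.0, 2.0, 4.0, 3.0]   # item values
w = [5.0, 4.0, 3.0, 2.0]   # item weights
W = 4.0                    # knapsack capacity
u = [0.0, 0.0, 1.0, 0.0]   # solution returned above

@assert w'u <= W           # weight constraint holds: 3.0 <= 4.0
@assert v'u == 4.0         # total value of the selected item
```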