Solving the Rosenbrock Problem in >10 Ways
This example demonstrates many different solvers to showcase the flexibility of Optimization.jl. It is a gauntlet of solvers intended to give a feel for common workflows of the package and to provide copy-pastable starting points.
This example uses many different solvers from across the Optimization.jl ecosystem. Each solver subpackage needs to be installed separately. For example, for details on the installation and usage of the OptimizationOptimJL.jl package, see the Optim.jl page.
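For instance, here is a minimal installation sketch covering a few of the subpackages used below (adjust the list to whichever solvers you actually run):

using Pkg
Pkg.add(["Optimization", "OptimizationOptimJL", "OptimizationBBO"])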
The objective of this exercise is to determine the $(x, y)$ value pair that minimizes the result of a Rosenbrock function $f$ with some parameter values $a$ and $b$. The Rosenbrock function is useful for testing because it is known a priori to have a global minimum at $(a, a^2)$.
\[f(x,\,y;\,a,\,b) = \left(a - x\right)^2 + b \left(y - x^2\right)^2\]
The Optimization.jl interface expects functions to be defined with a vector of optimization arguments $\bar{x}$ and a vector of parameters $\bar{p}$, i.e.:
\[f(\bar{x},\,\bar{p}) = \left(p_1 - x_1\right)^2 + p_2 \left(x_2 - x_1^2\right)^2\]
Parameters $a$ and $b$ are captured in a vector $\bar{p}$ and assigned some arbitrary values to produce a particular Rosenbrock function to be minimized.
\[\bar{p} = \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 1 \\ 100 \end{bmatrix}\]
The original $x$ and $y$ optimization variables are collected in a vector $\bar{x}$.
\[\bar{x} = \begin{bmatrix} x \\ y \end{bmatrix}\]
An initial estimate $\bar{x}_0$ of the location of the minimum is required to initialize the optimizer.
\[\bar{x}_0 = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}\]
An optimization problem can now be defined and solved to estimate the values for $\bar{x}$ that minimize the output of this function.
# Define the problem to solve
using Optimization, ForwardDiff, Zygote
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
_p = [1.0, 100.0]
f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
l1 = rosenbrock(x0, _p)
prob = OptimizationProblem(f, x0, _p)
OptimizationProblem. In-place: true
u0: 2-element Vector{Float64}:
0.0
0.0
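As a quick sanity check, the objective value l1 computed above can be verified by hand: at $\bar{x}_0 = (0, 0)$ with $\bar{p} = (1, 100)$, the Rosenbrock function evaluates to $(1 - 0)^2 + 100(0 - 0^2)^2 = 1$.

println(l1)  # prints 1.0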
Optim.jl Solvers
Start with some derivative-free optimizers
using OptimizationOptimJL
sol = solve(prob, SimulatedAnnealing())
prob = OptimizationProblem(f, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
sol = solve(prob, SAMIN())
l1 = rosenbrock(x0, _p)
prob = OptimizationProblem(rosenbrock, x0, _p) # rebuild without bounds; derivative-free NelderMead can take the raw objective
sol = solve(prob, NelderMead())
retcode: Success
u: 2-element Vector{Float64}:
0.9999634355313174
0.9999315506115275
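Beyond the minimizer, the returned solution object exposes the standard SciML solution fields, which are a quick way to inspect any of the results below:

sol.u         # the minimizer estimate
sol.objective # the objective value at sol.u
sol.retcode   # the solver's return code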
Now a gradient-based optimizer with forward-mode automatic differentiation
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, BFGS())
retcode: Success
u: 2-element Vector{Float64}:
0.9999999999373614
0.999999999868622
Now a second-order optimizer using Hessians generated by forward-mode automatic differentiation
sol = solve(prob, Newton())
retcode: Success
u: 2-element Vector{Float64}:
0.9999999999999994
0.9999999999999989
Now a second-order Hessian-free optimizer
sol = solve(prob, Optim.KrylovTrustRegion())
retcode: Success
u: 2-element Vector{Float64}:
0.999999999999108
0.9999999999981819
Now derivative-based optimizers with various constraints
cons = (res, x, p) -> res .= [x[1]^2 + x[2]^2]
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons)
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf])
sol = solve(prob, IPNewton()) # Note that -Inf < x[1]^2 + x[2]^2 < Inf is always true
prob = OptimizationProblem(optf, x0, _p, lcons = [-5.0], ucons = [10.0])
sol = solve(prob, IPNewton()) # Again, -5.0 < x[1]^2 + x[2]^2 < 10.0
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf],
lb = [-500.0, -500.0], ub = [50.0, 50.0])
sol = solve(prob, IPNewton())
prob = OptimizationProblem(optf, x0, _p, lcons = [0.5], ucons = [0.5],
lb = [-500.0, -500.0], ub = [50.0, 50.0])
sol = solve(prob, IPNewton())
# Notice now that x[1]^2 + x[2]^2 ≈ 0.5:
res = zeros(1)
cons(res, sol.u, _p)
println(res)
┌ Warning: The selected optimization algorithm requires second order derivatives, but `SecondOrder` ADtype was not provided.
│ So a `SecondOrder` with AutoForwardDiff() for both inner and outer will be created, this can be suboptimal and not work in some cases so
│ an explicit `SecondOrder` ADtype is recommended.
└ @ OptimizationBase ~/work/Optimization.jl/Optimization.jl/lib/OptimizationBase/src/cache.jl:49
[0.49999999999999994]
function con_c(res, x, p)
res .= [x[1]^2 + x[2]^2]
end
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = con_c)
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [0.25^2])
sol = solve(prob, IPNewton()) # -Inf < x[1]^2 + x[2]^2 ≤ 0.25^2; the constraint is active at the solution
retcode: Success
u: 2-element Vector{Float64}:
0.24327905408863862
0.05757865786675858
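As a quick verification sketch (not part of the original example), the solution's distance from the origin should be at most 0.25, and here the constraint binds:

using LinearAlgebra
norm(sol.u)  # ≈ 0.25, so x[1]^2 + x[2]^2 ≈ 0.25^2 and the constraint is active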
Evolutionary.jl Solvers
using OptimizationEvolutionary
sol = solve(prob, CMAES(μ = 40, λ = 100), abstol = 1e-15) # same circle constraint: x[1]^2 + x[2]^2 ≤ 0.25^2
retcode: Success
u: 2-element Vector{Float64}:
0.2433304637935718
0.057361009318221905
IPOPT through OptimizationMOI
using OptimizationMOI, Ipopt
function con2_c(res, x, p)
res .= [x[1]^2 + x[2]^2, x[2] * sin(x[1]) - x[1]]
end
optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote(); cons = con2_c)
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf, -Inf], ucons = [100.0, 100.0])
sol = solve(prob, Ipopt.Optimizer())
retcode: Success
u: 2-element Vector{Float64}:
0.9999999999080653
0.9999999998157655
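Ipopt prints a detailed iteration log by default. Solver-specific options can be set through MathOptInterface attributes; as a hedged sketch, the log can be silenced like so (print_level is an Ipopt option, passed via MOI.OptimizerWithAttributes):

opt = OptimizationMOI.MOI.OptimizerWithAttributes(Ipopt.Optimizer, "print_level" => 0)
sol = solve(prob, opt)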
Now let's switch over to OptimizationOptimisers with reverse-mode AD
import OptimizationOptimisers
optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, OptimizationOptimisers.Adam(0.05), maxiters = 1000, progress = false)
retcode: Default
u: 2-element Vector{Float64}:
0.999957450694706
0.9999163934196296
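Stochastic first-order methods like Adam run for a fixed budget, hence the required maxiters and the Default return code. Progress can be monitored with a callback; a minimal sketch, assuming Optimization.jl's (state, loss) callback signature, where returning true halts the run early:

callback = (state, loss) -> begin
    println("current loss: ", loss)
    false # return true to stop early
end
sol = solve(prob, OptimizationOptimisers.Adam(0.05), maxiters = 1000, callback = callback)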
Try out CMAEvolutionStrategy.jl's evolutionary methods
using OptimizationCMAEvolutionStrategy
sol = solve(prob, CMAEvolutionStrategyOpt())
retcode: Failure
u: 2-element Vector{Float64}:
0.9999999929021683
0.9999999925659593
Now try a few NLopt.jl solvers with symbolic differentiation via ModelingToolkit.jl
using OptimizationNLopt, ModelingToolkit
optf = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit())
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, Opt(:LN_BOBYQA, 2))
sol = solve(prob, Opt(:LD_LBFGS, 2))
retcode: Success
u: 2-element Vector{Float64}:
0.9999999999894374
0.9999999999844783
Add some box constraints and solve with a few NLopt.jl methods
prob = OptimizationProblem(optf, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
sol = solve(prob, Opt(:LD_LBFGS, 2))
sol = solve(prob, Opt(:G_MLSL_LDS, 2), local_method = Opt(:LD_LBFGS, 2), maxiters = 10000) # a global optimizer with random starts of local optimization; with ub = [0.8, 0.8] the constrained minimum sits on the boundary at (0.8, 0.8^2)
retcode: MaxIters
u: 2-element Vector{Float64}:
0.8
0.6400000000000001
BlackBoxOptim.jl Solvers
using OptimizationBBO
prob = Optimization.OptimizationProblem(rosenbrock, [0.0, 0.3], _p, lb = [-1.0, 0.2],
ub = [0.8, 0.43])
sol = solve(prob, BBO_adaptive_de_rand_1_bin()) # -1.0 ≤ x[1] ≤ 0.8, 0.2 ≤ x[2] ≤ 0.43
retcode: MaxIters
u: 2-element Vector{Float64}:
0.6577248384365078
0.43
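The MaxIters return code simply means BlackBoxOptim stopped at its default evaluation budget. The common Optimization.jl keyword arguments apply here as well; for example, a sketch with explicit iteration and wall-clock budgets:

sol = solve(prob, BBO_adaptive_de_rand_1_bin(), maxiters = 100_000, maxtime = 10.0)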
And this is only a small subset of what Optimization.jl has to offer!