# Solving the Rosenbrock Problem in >10 Ways

This tutorial demonstrates the flexibility of Optimization.jl by solving the same problem with many different solvers. It is a gauntlet of solvers meant to give a feel for common workflows of the package and to provide copy-pastable starting points.

Note

This example uses many different solvers from Optimization.jl. Each solver subpackage needs to be installed separately. For example, for details on the installation and usage of the OptimizationOptimJL.jl package, see the Optim.jl page.

## Define the problem to solve
```julia
using Optimization, ForwardDiff, Zygote, Test, Random

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
_p = [1.0, 100.0]

f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
l1 = rosenbrock(x0, _p)
prob = OptimizationProblem(f, x0, _p)
```
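
Before running any solvers, it helps to have a ground truth: for `p = [1.0, 100.0]`, the Rosenbrock function has its global minimum at `x = [1.0, 1.0]`, where the objective is exactly zero. A quick sanity check with the `Test` stdlib loaded above:

```julia
# The global minimum of the Rosenbrock function for p = [1.0, 100.0]
# sits at [1.0, 1.0] with an objective value of exactly 0.
@test rosenbrock([1.0, 1.0], _p) == 0.0
```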

## Optim.jl Solvers

```julia
using OptimizationOptimJL

sol = solve(prob, SimulatedAnnealing())
prob = OptimizationProblem(f, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
sol = solve(prob, SAMIN())

l1 = rosenbrock(x0, _p)
prob = OptimizationProblem(rosenbrock, x0, _p)
sol = solve(prob, NelderMead())
```
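
Each `solve` call returns a solution object whose `u` field holds the minimizer and whose `objective` field holds the final objective value, so results can be checked directly. A loose tolerance is used here since NelderMead is derivative-free:

```julia
# NelderMead should land near the true minimizer [1.0, 1.0] and
# improve on the starting objective l1.
@test sol.u ≈ [1.0, 1.0] atol = 1e-3
@test sol.objective < l1
```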

Now a gradient-based optimizer with forward-mode automatic differentiation:

```julia
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, BFGS())
```
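
Beyond the minimizer, the solution object exposes a return code, and, as with other SciML solution types, the raw result from the wrapped solver is kept in `sol.original`, which is handy for Optim.jl-specific diagnostics:

```julia
# retcode reports whether the solver considered the run successful;
# sol.original is the unmodified Optim.jl results object.
@show sol.retcode
@show sol.original
```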

Now a second-order optimizer using Hessians generated by forward-mode automatic differentiation:

```julia
sol = solve(prob, Newton())
```
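
The same problem can be re-solved from a different initial guess without rebuilding the `OptimizationFunction`; `remake` (re-exported from SciMLBase) copies the problem with the given fields replaced:

```julia
# Rebuild the problem with a new starting point and solve again.
prob2 = remake(prob, u0 = [0.5, 0.5])
sol = solve(prob2, Newton())
```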

Now a second-order Hessian-free optimizer:

```julia
sol = solve(prob, Optim.KrylovTrustRegion())
```

Now derivative-based optimizers with various constraints:

```julia
cons = (res, x, p) -> res .= [x[1]^2 + x[2]^2]
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons)
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, IPNewton())
```

```julia
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf])
sol = solve(prob, IPNewton())
```

```julia
prob = OptimizationProblem(optf, x0, _p, lcons = [-5.0], ucons = [10.0])
sol = solve(prob, IPNewton())
```

```julia
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [Inf],
                           lb = [-500.0, -500.0], ub = [50.0, 50.0])
sol = solve(prob, IPNewton())
```

```julia
function con2_c(res, x, p)
    res .= [x[1]^2 + x[2]^2, x[2] * sin(x[1]) - x[1]]
end

optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = con2_c)
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf, -Inf], ucons = [Inf, Inf])
sol = solve(prob, IPNewton())
```

```julia
cons_circ = (res, x, p) -> res .= [x[1]^2 + x[2]^2]
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons_circ)
prob = OptimizationProblem(optf, x0, _p, lcons = [-Inf], ucons = [0.25^2])
sol = solve(prob, IPNewton())
```
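
It is worth confirming that the circle constraint actually holds at the returned optimum by evaluating the in-place constraint function at `sol.u`:

```julia
# x[1]^2 + x[2]^2 must lie within [-Inf, 0.25^2]; allow a little
# floating-point slack.
res = zeros(1)
cons_circ(res, sol.u, _p)
@test res[1] <= 0.25^2 + 1e-8
```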

Now let's switch over to OptimizationOptimisers with reverse-mode AD:

```julia
using OptimizationOptimisers

optf = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, Adam(0.05), maxiters = 1000, progress = false)
```
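
Optimisers-style rules run for a fixed iteration budget, so it can be useful to watch the loss as the run proceeds. A minimal sketch, assuming the current callback convention in which the first argument is an `OptimizationState` and returning `true` halts the run:

```julia
# Print the objective each iteration; returning false keeps iterating.
callback = (state, loss) -> begin
    println("objective: ", loss)
    return false
end
sol = solve(prob, Adam(0.05), maxiters = 100, callback = callback)
```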

## Try out CMAEvolutionStrategy.jl's evolutionary methods

```julia
using OptimizationCMAEvolutionStrategy

sol = solve(prob, CMAEvolutionStrategyOpt())
```
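
CMA-ES is derivative-free, so it also composes naturally with box constraints. A hedged sketch, assuming OptimizationCMAEvolutionStrategy forwards `lb`/`ub` to the underlying solver:

```julia
# Same objective, now restricted to a box (bounds assumed to be
# passed through to CMAEvolutionStrategy.jl).
prob_box = OptimizationProblem(optf, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
sol = solve(prob_box, CMAEvolutionStrategyOpt())
```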

## Now try a few NLopt.jl solvers with symbolic differentiation via ModelingToolkit.jl

```julia
using OptimizationNLopt, ModelingToolkit

optf = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit())
prob = OptimizationProblem(optf, x0, _p)

sol = solve(prob, Opt(:LN_BOBYQA, 2))
sol = solve(prob, Opt(:LD_LBFGS, 2))
```
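
The raw `Opt(:ALGORITHM, n)` construction takes the algorithm symbol and the problem dimension. OptimizationNLopt also accepts NLopt's exported algorithm objects directly, which avoids spelling the dimension by hand; a sketch assuming that calling convention:

```julia
# Equivalent solves using NLopt's algorithm constructors (assumption:
# these are accepted directly, with the dimension inferred).
sol = solve(prob, NLopt.LN_BOBYQA())
sol = solve(prob, NLopt.LD_LBFGS())
```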

## Add some box constraints and solve with a few NLopt.jl methods

```julia
prob = OptimizationProblem(optf, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
sol = solve(prob, Opt(:LD_LBFGS, 2))
sol = solve(prob, Opt(:G_MLSL_LDS, 2), nstart = 2, local_method = Opt(:LD_LBFGS, 2),
            maxiters = 10000)
```

## Evolutionary.jl Solvers

```julia
using OptimizationEvolutionary

sol = solve(prob, CMAES(μ = 40, λ = 100), abstol = 1e-15)
```
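
Evolutionary methods are stochastic, so two runs generally return slightly different minimizers. Seeding the global RNG (`Random` is already loaded at the top) makes a run reproducible:

```julia
# Fix the RNG seed so repeated runs give identical results.
Random.seed!(42)
sol = solve(prob, CMAES(μ = 40, λ = 100), abstol = 1e-15)
```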

## BlackBoxOptim.jl Solvers

```julia
using OptimizationBBO

prob = Optimization.OptimizationProblem(rosenbrock, x0, _p, lb = [-1.0, -1.0], ub = [0.8, 0.8])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
```
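
Like the other wrappers, the BBO solvers accept the common `solve` keyword arguments, so iteration and wall-clock budgets can be set without touching BlackBoxOptim-specific options:

```julia
# Cap the search at 10_000 iterations or 10 seconds, whichever
# comes first.
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(),
            maxiters = 10000, maxtime = 10.0)
```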

And this is only a small subset of what Optimization.jl has to offer!