SimpleOptimization.jl

SimpleOptimization.jl provides lightweight loop-unrolled optimization algorithms for the SciML ecosystem. It is designed for small-scale optimization problems where low overhead is critical.

Installation

To use this package, install SimpleOptimization.jl with the Julia package manager:

import Pkg
Pkg.add("SimpleOptimization")

Methods

SimpleOptimization.SimpleBFGS (Type)
SimpleBFGS()

A lightweight, loop-unrolled BFGS optimization algorithm. This algorithm is designed for small-scale unconstrained optimization problems where low overhead is critical.

Description

SimpleBFGS implements the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method. It builds an approximation to the inverse Hessian matrix using gradient information, achieving superlinear convergence for smooth objective functions.
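
For reference, the standard BFGS update of the inverse-Hessian approximation H is

    H₊ = (I - ρ s yᵀ) H (I - ρ y sᵀ) + ρ s sᵀ,    ρ = 1 / (yᵀ s),

where s = x₊ - x is the step and y = ∇f(x₊) - ∇f(x) is the change in gradient. The update keeps H positive definite whenever yᵀ s > 0.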

Internally, it wraps SimpleBroyden from SimpleNonlinearSolve.jl to find the root of the gradient (i.e., the stationary point of the objective).
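
For intuition, that reduction can be written out directly against SimpleNonlinearSolve.jl. The following is a minimal sketch of the idea, not the package's internal code, using ForwardDiff for the gradient:

using SimpleNonlinearSolve, ForwardDiff

rosenbrock(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

# Stationarity condition ∇f(x) = 0, posed as a nonlinear system
grad(u, p) = ForwardDiff.gradient(rosenbrock, u)

prob = NonlinearProblem(grad, zeros(2))
sol = solve(prob, SimpleBroyden())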

Example

using SimpleOptimization, Optimization, ForwardDiff

rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
x0 = zeros(2)
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, x0)
sol = solve(prob, SimpleBFGS())
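
The returned solution object follows the common SciML solution interface:

sol.u          # final iterate (approximate minimizer)
sol.objective  # objective value at sol.u
sol.retcode    # termination status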
SimpleOptimization.SimpleLBFGS (Type)
SimpleLBFGS(; threshold::Union{Val, Int} = Val(10))

A lightweight, loop-unrolled Limited-memory BFGS (L-BFGS) optimization algorithm. This algorithm is designed for small-scale unconstrained optimization problems where low overhead is critical.

Arguments

  • threshold: The number of past iterations stored for approximating the inverse Hessian. Defaults to Val(10). It can be given either as a Val for compile-time specialization or as an Int (see the snippet below).
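
A quick illustration of the two accepted forms:

alg_static  = SimpleLBFGS(; threshold = Val(5))  # memory length fixed at compile time
alg_dynamic = SimpleLBFGS(; threshold = 5)       # memory length chosen at runtime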

Description

SimpleLBFGS uses a limited-memory approximation to the BFGS update, storing only the last threshold iterations of gradient information. This makes it memory-efficient for problems with many variables while still achieving superlinear convergence.
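
To make the limited-memory idea concrete, here is a minimal sketch of the classic L-BFGS two-loop recursion, which applies the implicit inverse-Hessian approximation to a gradient g using only the stored pairs. It illustrates the technique and is not the package's internal code:

using LinearAlgebra

# S and Y hold the last m step and gradient-difference vectors:
# S[i] = x_{k+1} - x_k,  Y[i] = ∇f(x_{k+1}) - ∇f(x_k)
function two_loop_direction(g, S, Y)
    m = length(S)
    α = zeros(m)
    q = copy(g)
    for i in m:-1:1                  # backward pass over stored pairs
        α[i] = dot(S[i], q) / dot(Y[i], S[i])
        q .-= α[i] .* Y[i]
    end
    γ = dot(S[end], Y[end]) / dot(Y[end], Y[end])
    r = γ .* q                       # initial scaling H₀ = γI
    for i in 1:m                     # forward pass
        β = dot(Y[i], r) / dot(Y[i], S[i])
        r .+= (α[i] - β) .* S[i]
    end
    return r                         # ≈ H * g, at O(m·n) cost
end

Where full BFGS stores a dense n×n inverse-Hessian approximation, this needs only 2m vectors of length n, i.e. O(m·n) memory.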

Internally, it wraps SimpleLimitedMemoryBroyden from SimpleNonlinearSolve.jl to find the root of the gradient (i.e., the stationary point of the objective).

Example

using SimpleOptimization, Optimization, ForwardDiff

rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
x0 = zeros(2)
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, x0)
sol = solve(prob, SimpleLBFGS())

Example

The Rosenbrock function can be optimized using SimpleBFGS as follows:

using SimpleOptimization, Optimization, ForwardDiff
rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = nothing
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, x0, p)
sol = solve(prob, SimpleBFGS())

retcode: MaxIters
u: 2-element Vector{Float64}:
 -0.6198370384991395
  0.3885188209825609

Note that the retcode of MaxIters indicates the solver stopped at its iteration limit rather than reporting convergence; u is the iterate at termination.