Finding Maxima and Minima of ODE Solutions

Setup

In this tutorial, we will show how to use Optimization.jl to find the maxima and minima of ODE solutions. Let's take a look at the double pendulum:

# Constants and setup
using OrdinaryDiffEq
initial = [0.01, 0.01, 0.01, 0.01]
tspan = (0.0, 100.0)

# Define the problem
function double_pendulum_hamiltonian(udot, u, p, t)
    α = u[1]
    lα = u[2]
    β = u[3]
    lβ = u[4]
    udot .= [2(lα - (1 + cos(β))lβ) / (3 - cos(2β)),
        -2sin(α) - sin(α + β),
        2(-(1 + cos(β))lα + (3 + 2cos(β))lβ) / (3 - cos(2β)),
        -sin(α + β) - 2sin(β) * (((lα - lβ)lβ) / (3 - cos(2β))) +
        2sin(2β) * ((lα^2 - 2(1 + cos(β))lα * lβ + (3 + 2cos(β))lβ^2) / (3 - cos(2β))^2)]
end
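As an aside, the broadcast assignment `udot .= [...]` builds a temporary 4-element array on every derivative call. A sketch of an allocation-free variant (same equations, just writing each component directly; the name `double_pendulum_hamiltonian!` is our own):

```julia
# Allocation-free variant: assign each component of udot directly
# instead of broadcasting a freshly constructed array into it.
function double_pendulum_hamiltonian!(udot, u, p, t)
    α, lα, β, lβ = u
    udot[1] = 2(lα - (1 + cos(β))lβ) / (3 - cos(2β))
    udot[2] = -2sin(α) - sin(α + β)
    udot[3] = 2(-(1 + cos(β))lα + (3 + 2cos(β))lβ) / (3 - cos(2β))
    udot[4] = -sin(α + β) - 2sin(β) * (((lα - lβ)lβ) / (3 - cos(2β))) +
              2sin(2β) * ((lα^2 - 2(1 + cos(β))lα * lβ + (3 + 2cos(β))lβ^2) / (3 - cos(2β))^2)
    return nothing
end
```

This avoids a heap allocation in the hot loop of the solver, which matters for long integrations.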

# Pass to solvers
poincare = ODEProblem(double_pendulum_hamiltonian, initial, tspan)
ODEProblem with uType Vector{Float64} and tType Float64. In-place: true
timespan: (0.0, 100.0)
u0: 4-element Vector{Float64}:
 0.01
 0.01
 0.01
 0.01
sol = solve(poincare, Tsit5())
retcode: Success
Interpolation: specialized 4th order "free" interpolation
t: 193-element Vector{Float64}:
   0.0
   0.08332584852065579
   0.24175300745938438
   0.4389533584376291
   0.6797301452523561
   0.964762990756671
   1.3179426345818903
   1.7031226293414234
   2.067850440442258
   2.471790033442068
   ⋮
  95.84572751485327
  96.35778725202667
  96.9291302913926
  97.44679130983907
  97.96248135692701
  98.51183049955068
  99.0608234876853
  99.58284175370028
 100.0
u: 193-element Vector{Vector{Float64}}:
 [0.01, 0.01, 0.01, 0.01]
 [0.009170687380405334, 0.006669000455384281, 0.012420525490765841, 0.008266408515192909]
 [0.007673275251908229, 0.00037461731659180626, 0.016442590263187704, 0.004636827445198725]
 [0.006125974386969531, -0.00730545037634504, 0.019967371147127152, -0.00033649811007591606]
 [0.004966110634454907, -0.016308516887941955, 0.021440659467349597, -0.006705037355788195]
 [0.004795568376382594, -0.026238104439258024, 0.01882432472008982, -0.013913365232392866]
 [0.0060546801759868134, -0.037124553877602676, 0.010055700596830796, -0.02103812861952017]
 [0.007900784515124328, -0.046676070191193235, -0.0026735827338297265, -0.02518303641688081]
 [0.008276510390197665, -0.052784334185385265, -0.012731547350727448, -0.025258040181386776]
 [0.005523496259016676, -0.05525250413162683, -0.016843881833516653, -0.02189896263998639]
 ⋮
 [-0.014886772678742981, 0.04233265138937294, 0.01362826417877648, 0.01802907800298128]
 [-0.008190329756021365, 0.05442262290581612, 0.009448089553574289, 0.017740072819351936]
 [0.004124618400934384, 0.05674881943069961, -0.0051540767334029, 0.01759697869976223]
 [0.01307968530383876, 0.048077115531537845, -0.01377064757661266, 0.018286646614435426]
 [0.015316049163607244, 0.031631086647704046, -0.008957068278558522, 0.017118425813261014]
 [0.011115526939324764, 0.009929157201689038, 0.007297372148518435, 0.010353434698455214]
 [0.005713892197924783, -0.011787357667090829, 0.020508041686825178, -0.002310410571356933]
 [0.004211436807344068, -0.029911132614685865, 0.018750485544995192, -0.015650663390414776]
 [0.005741239585228796, -0.04165385987261451, 0.007413270237967241, -0.023348978472272797]

In time, the solution looks like:

using Plots;
gr();
plot(sol, idxs = [(0, 3), (0, 4)], leg = false, plotdensity = 10000)
[Plot: time series of u[3] and u[4]]

while it has the well-known phase-space plot:

plot(sol, idxs = (3, 4), leg = false)
[Plot: phase space, u[3] vs u[4]]

Local Optimization

Let's find out what some of the local maxima and minima are. Optimization.jl can be used to minimize functions, and the solution type has a continuous interpolation which can serve as the objective. Let's look for the local optima of the 4th variable around t=20. Thus, our objective function is:

f(t, _) = sol(first(t), idxs = 4)
f (generic function with 1 method)
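The `idxs = 4` argument makes the interpolant return only the 4th component. As a quick sanity check that this matches indexing the full interpolated state (reusing `sol` from above):

```julia
# Both forms evaluate the same continuous interpolation;
# the idxs form just skips building the temporary state vector.
sol(20.0, idxs = 4) ≈ sol(20.0)[4]
```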

first(t) is the same as t[1]; it extracts the scalar from the length-1 array that the optimizer passes in. idxs = 4 is the same as sol(first(t))[4], but it performs the interpolation without building a temporary state vector and is thus faster. To find a local minimum, we solve the optimization problem whose loss function is f:

using Optimization, OptimizationNLopt, ForwardDiff
optf = OptimizationFunction(f, Optimization.AutoForwardDiff())
min_guess = 18.0
optprob = OptimizationProblem(optf, [min_guess], lb = [0.0], ub = [100.0])
opt = solve(optprob, NLopt.LD_LBFGS())
retcode: Success
u: 1-element Vector{Float64}:
 18.63212745038073

From this printout, we see that the local minimum occurs at t ≈ 18.63, where u[4] ≈ -2.79e-2. We can retrieve the minimizer in code via:

println(opt.u)
[18.63212745038073]
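The minimized objective value itself is also stored on the returned solution; in current Optimization.jl it is the `objective` field (assuming the local-optimization setup above has run):

```julia
# The objective value at the optimum, i.e. u[4] at t ≈ 18.63
println(opt.objective)
```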

To get the maximum, we just minimize the negative of the function:

fminus(t, _) = -sol(first(t), idxs = 4)

optf = OptimizationFunction(fminus, Optimization.AutoForwardDiff())
max_guess = 22.0
optprob2 = OptimizationProblem(optf, [max_guess], lb = [0.0], ub = [100.0])
opt2 = solve(optprob2, NLopt.LD_LBFGS())
retcode: Success
u: 1-element Vector{Float64}:
 23.29760651436087

Let's add the maxima and minima to the plots:

plot(sol, idxs = (0, 4), plotdensity = 10000)
scatter!([opt.u], [opt.objective], label = "Local Min")
scatter!([opt2.u], [-opt2.objective], label = "Local Max")
[Plot: u[4] over time with the local minimum and maximum marked]

Global Optimization

If we instead want to find global maxima and minima, we need to look somewhere else. There are many choices. A pure-Julia option is the BlackBoxOptim solvers accessible through Optimization.jl, but we will continue the story with the OptimizationNLopt methods. To do this, we simply swap in one of the global optimizers from the list. Let's try GN_ORIG_DIRECT_L:

gopt = solve(optprob, NLopt.GN_ORIG_DIRECT_L())
gopt2 = solve(optprob2, NLopt.GN_ORIG_DIRECT_L())

@show gopt.u, gopt2.u
([76.29050925238306], [47.460303383501625])
plot(sol, idxs = (0, 4), plotdensity = 10000)
scatter!([gopt.u], [gopt.objective], label = "Global Min")
scatter!([gopt2.u], [-gopt2.objective], label = "Global Max")
[Plot: u[4] over time with the global minimum and maximum marked]
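For completeness, the pure-Julia BlackBoxOptim route mentioned above would look roughly like this (a sketch assuming the OptimizationBBO wrapper package; these solvers are derivative-free and require the finite bounds our problems already have):

```julia
using OptimizationBBO
# Adaptive differential-evolution global search on the same bounded problems
bbopt = solve(optprob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters = 10_000)
bbopt2 = solve(optprob2, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters = 10_000)
@show bbopt.u, bbopt2.u
```

Since no gradients are used, the same `optprob` and `optprob2` definitions work unchanged.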