Finding Maxima and Minima of ODE Solutions

Setup

In this tutorial, we will show how to use Optimization.jl to find the maxima and minima of ODE solutions. Let's take a look at the double pendulum:

#Constants and setup
using OrdinaryDiffEq
initial = [0.01, 0.01, 0.01, 0.01]
tspan = (0.0, 100.0)

#Define the problem
function double_pendulum_hamiltonian(udot, u, p, t)
    α = u[1]
    lα = u[2]
    β = u[3]
    lβ = u[4]
    udot .= [2(lα - (1 + cos(β))lβ) / (3 - cos(2β)),
        -2sin(α) - sin(α + β),
        2(-(1 + cos(β))lα + (3 + 2cos(β))lβ) / (3 - cos(2β)),
        -sin(α + β) - 2sin(β) * (((lα - lβ)lβ) / (3 - cos(2β))) +
        2sin(2β) * ((lα^2 - 2(1 + cos(β))lα * lβ + (3 + 2cos(β))lβ^2) / (3 - cos(2β))^2)]
end

#Pass to solvers
poincare = ODEProblem(double_pendulum_hamiltonian, initial, tspan)
ODEProblem with uType Vector{Float64} and tType Float64. In-place: true
timespan: (0.0, 100.0)
u0: 4-element Vector{Float64}:
 0.01
 0.01
 0.01
 0.01
sol = solve(poincare, Tsit5())
retcode: Success
Interpolation: specialized 4th order "free" interpolation
t: 193-element Vector{Float64}:
   0.0
   0.08332584852065579
   0.24175300587841853
   0.4389533535703127
   0.6797301355043014
   0.9647629844957191
   1.3179426427778789
   1.7031226589934565
   2.0678504564408504
   2.4717900471019023
   ⋮
  95.8457265154543
  96.35778609875159
  96.92912892765023
  97.44679036707718
  97.96248017978454
  98.5118292417693
  99.06082273312332
  99.58284082787185
 100.0
u: 193-element Vector{Vector{Float64}}:
 [0.01, 0.01, 0.01, 0.01]
 [0.009170687380405334, 0.006669000455384281, 0.012420525490765841, 0.008266408515192909]
 [0.007673275265972504, 0.00037461737897660443, 0.016442590227730397, 0.004636827483318277]
 [0.00612597441923929, -0.007305450189721187, 0.019967371084231893, -0.00033649798308968556]
 [0.004966110662711129, -0.016308516533738052, 0.021440659476204722, -0.0067050370984004706]
 [0.0047955683664552265, -0.026238104231339165, 0.018824324827053686, -0.013913365084526933]
 [0.006054680216573235, -0.03712455410888036, 0.010055700343225196, -0.02103812875155613]
 [0.007900784624609717, -0.04667607081473081, -0.0026735836995593456, -0.02518303657189279]
 [0.008276510353924278, -0.052784334378941075, -0.012731547682294859, -0.02525804011008267]
 [0.005523496102562298, -0.05525250412790727, -0.016843881819771457, -0.021898962485296488]
 ⋮
 [-0.014886778911438246, 0.042332620369772894, 0.013628258608953175, 0.018029076762559125]
 [-0.008190351596162148, 0.054422605450369234, 0.00944811277121419, 0.017740074304871633]
 [0.004124588963417958, 0.05674882926735976, -0.005154041839395091, 0.01759697731920931]
 [0.013079674415302153, 0.04807713954653318, -0.013770643022910718, 0.01828664595114757]
 [0.01531605221618201, 0.031631130205698375, -0.008957094529053794, 0.017118433263410206]
 [0.0111155405283548, 0.009929208333814244, 0.007297331934230866, 0.010353457826205769]
 [0.005713897643562418, -0.011787329261525656, 0.020508032523607993, -0.002310390783308045]
 [0.0042114355418556605, -0.029911103571309838, 0.01875050255817474, -0.015650642104010543]
 [0.005741239574253372, -0.041653859884274974, 0.0074132702648349305, -0.023348978443051824]

In time, the solution looks like:

using Plots;
gr();
plot(sol, idxs = [(0, 3), (0, 4)], leg = false, plotdensity = 10000)
Example block output

while it has the well-known phase-space plot:

plot(sol, idxs = (3, 4), leg = false)
Example block output

Local Optimization

Let's find out what some of the local maxima and minima are. Optimization.jl can be used to minimize functions, and the solution type has a continuous interpolation which can be used as the objective. Let's look for the local optima of the 4th variable around t=20. Thus, our optimization function is:

f(t, _) = sol(first(t), idxs = 4)
f (generic function with 1 method)
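
As a quick base-Julia check of the first idiom used in f (no packages required):

```julia
# first on a one-element vector returns its only entry,
# i.e. the scalar that the optimizer's objective works with.
t = [18.0]
println(first(t) == t[1])   # prints: true
println(first(t))           # prints: 18.0
```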

first(t) is the same as t[1], which converts the length-1 array into a scalar. idxs = 4 is equivalent to sol(first(t))[4], but performs the calculation without allocating a temporary array and is thus faster. To find a local minimum, we can solve the optimization problem whose loss function is f:

using Optimization, OptimizationNLopt, ForwardDiff
optf = OptimizationFunction(f, Optimization.AutoForwardDiff())
min_guess = 18.0
optprob = OptimizationProblem(optf, [min_guess], lb = [0.0], ub = [100.0])
opt = solve(optprob, NLopt.LD_LBFGS())
retcode: Success
u: 1-element Vector{Float64}:
 18.632127451895908

From this printout, we see that the minimum is at t ≈ 18.63. The minimizer is stored in opt.u and the objective value in opt.objective; we can get the minimizer in code form via:

println(opt.u)
[18.632127451895908]

To get the maximum, we just minimize the negative of the function:

fminus(t, _) = -sol(first(t), idxs = 4)

optf = OptimizationFunction(fminus, Optimization.AutoForwardDiff())
max_guess = 22.0
optprob2 = OptimizationProblem(optf, [max_guess], lb = [0.0], ub = [100.0])
opt2 = solve(optprob2, NLopt.LD_LBFGS())
retcode: Success
u: 1-element Vector{Float64}:
 23.297606513035003
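
The negate-to-maximize trick works for any scalar objective. As a dependency-free sketch (base Julia only, with a brute-force grid standing in for NLopt), the maximizer of g is recovered as the minimizer of -g:

```julia
# The maximizer of g on a grid is exactly the minimizer of -g on the same grid.
g(t) = sin(t)
ts = range(0, 2pi, length = 10_001)
tmax = ts[argmin(-g.(ts))]   # ≈ pi/2, where sin attains its maximum
println(tmax)
```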

Let's add the maxima and minima to the plots:

plot(sol, idxs = (0, 4), plotdensity = 10000)
scatter!([opt.u], [opt.objective], label = "Local Min")
scatter!([opt2.u], [-opt2.objective], label = "Local Max")
Example block output

Global Optimization

If we instead want to find global maxima and minima, we need to look elsewhere. There are many choices for this. A pure-Julia option is the BlackBoxOptim solvers wrapped by Optimization.jl, but here we will continue with the OptimizationNLopt methods. To do this, we simply swap in one of the global optimizers from the list. Let's try GN_ORIG_DIRECT_L:

gopt = solve(optprob, NLopt.GN_ORIG_DIRECT_L())
gopt2 = solve(optprob2, NLopt.GN_ORIG_DIRECT_L())

@show gopt.u, gopt2.u
([76.29050923810225], [47.460303383855695])
plot(sol, idxs = (0, 4), plotdensity = 10000)
scatter!([gopt.u], [gopt.objective], label = "Global Min")
scatter!([gopt2.u], [-gopt2.objective], label = "Global Max")
Example block output
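
For the pure-Julia route mentioned above, here is a hedged sketch of the same solver swap using the OptimizationBBO wrapper (the package and solver constructor names follow the standard SciML wrappers, and a standalone multimodal function stands in for the pendulum objective):

```julia
# Hedged sketch: global minimization of a multimodal 1-D function with
# BlackBoxOptim via Optimization.jl (assumes OptimizationBBO is installed).
using Optimization, OptimizationBBO
h(t, p) = sin(first(t)) + 0.1first(t)   # many local minima on [0, 30]
prob = OptimizationProblem(h, [15.0], lb = [0.0], ub = [30.0])
gsol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters = 20_000)
println(gsol.u)   # global minimizer, near t ≈ 4.6
```

The same swap applies to optprob and optprob2 above: only the solver argument to solve changes.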