Finding Maxima and Minima of ODE Solutions
Setup
In this tutorial, we will show how to use Optimization.jl to find the maxima and minima of ODE solutions. Let's take a look at the double pendulum:
#Constants and setup
using OrdinaryDiffEq
initial = [0.01, 0.01, 0.01, 0.01]
tspan = (0.0, 100.0)
#Define the problem
function double_pendulum_hamiltonian(udot, u, p, t)
    #Unpack the state: angles α, β and their conjugate momenta lα, lβ
    α = u[1]
    lα = u[2]
    β = u[3]
    lβ = u[4]
    udot .= [2(lα - (1 + cos(β))lβ) / (3 - cos(2β)),
        -2sin(α) - sin(α + β),
        2(-(1 + cos(β))lα + (3 + 2cos(β))lβ) / (3 - cos(2β)),
        -sin(α + β) - 2sin(β) * (((lα - lβ)lβ) / (3 - cos(2β))) +
        2sin(2β) * ((lα^2 - 2(1 + cos(β))lα * lβ + (3 + 2cos(β))lβ^2) / (3 - cos(2β))^2)]
end
#Pass to solvers
poincare = ODEProblem(double_pendulum_hamiltonian, initial, tspan)
ODEProblem with uType Vector{Float64} and tType Float64. In-place: true
Non-trivial mass matrix: false
timespan: (0.0, 100.0)
u0: 4-element Vector{Float64}:
0.01
0.01
0.01
0.01
sol = solve(poincare, Tsit5())
retcode: Success
Interpolation: specialized 4th order "free" interpolation
t: 193-element Vector{Float64}:
0.0
0.08332584852065579
0.24175300745938438
0.4389533584376291
0.6797301452523561
0.964762990756671
1.3179426345818903
1.7031226293414234
2.067850440442258
2.471790033442068
⋮
95.84572622037093
96.35778580494008
96.92912862815517
97.44679020089208
97.962480003273
98.51182914418953
99.06082277412563
99.58284090343096
100.0
u: 193-element Vector{Vector{Float64}}:
[0.01, 0.01, 0.01, 0.01]
[0.009170687380405334, 0.006669000455384281, 0.012420525490765841, 0.008266408515192909]
[0.007673275251908229, 0.00037461731659180626, 0.016442590263187704, 0.004636827445198725]
[0.006125974386969531, -0.00730545037634504, 0.019967371147127152, -0.00033649811007591606]
[0.004966110634454907, -0.016308516887941955, 0.021440659467349597, -0.006705037355788195]
[0.004795568376382594, -0.026238104439258024, 0.01882432472008982, -0.013913365232392866]
[0.0060546801759868134, -0.037124553877602676, 0.010055700596830796, -0.02103812861952017]
[0.007900784515124328, -0.046676070191193235, -0.0026735827338297265, -0.02518303641688081]
[0.008276510390197665, -0.052784334185385265, -0.012731547350727448, -0.025258040181386776]
[0.005523496259016676, -0.05525250413162683, -0.016843881833516653, -0.02189896263998639]
⋮
[-0.014886780755074483, 0.0423326112113231, 0.013628256972596746, 0.018029076395286777]
[-0.008190357160823272, 0.054422601004052266, 0.00944811868771892, 0.01774007468154841]
[0.004124582499850814, 0.056748831427944076, -0.005154034179303506, 0.01759697701502946]
[0.013079672496321117, 0.04807714377971255, -0.013770642221163655, 0.018286645834196917]
[0.015316052673807829, 0.03163113673729952, -0.00895709846506657, 0.017118434380168503]
[0.01111554158399036, 0.009929212301288258, 0.007297328810987398, 0.010353459619077222]
[0.005713897353248815, -0.011787330805085164, 0.020508033007999964, -0.00231039185827179]
[0.00421143564846667, -0.0299111059432519, 0.018750501161658873, -0.015650643837007226]
[0.00574123957372723, -0.04165385988521964, 0.007413270266107195, -0.023348978440615988]
Over time, the solution looks like:
using Plots;
gr();
plot(sol, idxs = [(0, 3), (0, 4)], leg = false, plotdensity = 10000)
while it has the well-known phase-space plot:
plot(sol, idxs = (3, 4), leg = false)
Local Optimization
Let's find out what some of the local maxima and minima are. Optimization.jl can be used to minimize functions, and the solution type has a continuous interpolation which can be used as the objective. Let's look for the local optima of the 4th variable around t = 20. Thus, our optimization function is:
f(t, _) = sol(first(t), idxs = 4)
f (generic function with 1 method)
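For example, we can call it the way an optimizer would, with the time wrapped in a 1-element array (the test point 18.0 here is just an illustrative value):
#Evaluate the interpolated 4th variable at t = 18.0
f([18.0], nothing)  #equivalent to sol(18.0, idxs = 4)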
first(t) is the same as t[1], which transforms the array of size 1 into a number. idxs = 4 is the same as sol(first(t))[4], but it does the calculation without a temporary array and is thus faster. To find a local minimum, we can solve the optimization problem where the loss function is f:
using Optimization, OptimizationNLopt, ForwardDiff
optf = OptimizationFunction(f, Optimization.AutoForwardDiff())
min_guess = 18.0
optprob = OptimizationProblem(optf, [min_guess], lb = [0.0], ub = [100.0])
opt = solve(optprob, NLopt.LD_LBFGS())
retcode: Success
u: 1-element Vector{Float64}:
18.632127452169026
From this printout, we see that the minimum is at t = 18.63 and the value is -2.79e-2. We can get these in code form via:
println(opt.u)
[18.632127452169026]
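The objective value at the minimizer is stored on the solution object as well, in the objective field; a minimal sketch of reading it back:
#The value of the 4th variable at the local minimum
println(opt.objective)
#Equivalently, evaluate the interpolation at the minimizer directly
println(sol(first(opt.u), idxs = 4))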
To get the maximum, we just minimize the negative of the function:
fminus(t, _) = -sol(first(t), idxs = 4)
optf = OptimizationFunction(fminus, Optimization.AutoForwardDiff())
max_guess = 22.0
optprob2 = OptimizationProblem(optf, [max_guess], lb = [0.0], ub = [100.0])
opt2 = solve(optprob2, NLopt.LD_LBFGS())
retcode: Success
u: 1-element Vector{Float64}:
23.297606511715237
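Keep in mind that opt2 minimized the negative of the function, so the maximum value itself is the negation of the reported objective:
#Undo the sign flip to recover the local maximum
max_t = first(opt2.u)
max_val = -opt2.objective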
Let's add the maxima and minima to the plots:
plot(sol, idxs = (0, 4), plotdensity = 10000)
scatter!([opt.u], [opt.objective], label = "Local Min")
scatter!([opt2.u], [-opt2.objective], label = "Local Max")
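Because the gradient-based solver differentiated the interpolation with ForwardDiff above, we can reuse the same machinery for a sanity check: at an interior local optimum, the derivative of the interpolant should be approximately zero. A quick sketch:
#The derivative of the interpolation should vanish at an interior optimum
ForwardDiff.derivative(t -> sol(t, idxs = 4), first(opt.u))  #should be ≈ 0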
Global Optimization
If we instead want to find global maxima and minima, we need to look somewhere else. There are many choices for this. A pure-Julia option is the set of BlackBoxOptim solvers accessible through Optimization.jl, but we will continue the story with the OptimizationNLopt methods. To do this, we simply swap in one of the global optimizers from the list. Let's try GN_ORIG_DIRECT_L:
gopt = solve(optprob, NLopt.GN_ORIG_DIRECT_L())
gopt2 = solve(optprob2, NLopt.GN_ORIG_DIRECT_L())
@show gopt.u, gopt2.u
([76.29050924754415], [47.46030337854464])
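Since the bounds span the entire timespan, the global optima should be at least as good as the local ones found earlier. A quick sanity check, reusing the objective field:
#Global objective values should not be worse than the local ones
@show gopt.objective <= opt.objective
@show gopt2.objective <= opt2.objective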
plot(sol, idxs = (0, 4), plotdensity = 10000)
scatter!([gopt.u], [gopt.objective], label = "Global Min")
scatter!([gopt2.u], [-gopt2.objective], label = "Global Max")
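As mentioned above, a pure-Julia alternative is the BlackBoxOptim backend. A minimal sketch, assuming the OptimizationBBO package is installed (the solver choice and iteration cap are illustrative):
using OptimizationBBO
#BBO solvers are gradient-free, so no AD backend is needed
optf_bbo = OptimizationFunction(f)
optprob_bbo = OptimizationProblem(optf_bbo, [min_guess], lb = [0.0], ub = [100.0])
bbo_opt = solve(optprob_bbo, BBO_adaptive_de_rand_1_bin_radiused(), maxiters = 10000)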