Finding Maxima and Minima of ODE Solutions
Setup
In this tutorial, we show how to use Optimization.jl to find the maxima and minima of an ODE solution. Let's take a look at the double pendulum:
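For reference, the state is u = (α, lα, β, lβ), the two pendulum angles and their conjugate momenta, and the right-hand side implemented in the code below corresponds to the equations of motion:

```math
\begin{aligned}
\dot{\alpha} &= \frac{2\left(l_\alpha - (1+\cos\beta)\, l_\beta\right)}{3 - \cos 2\beta}, \\
\dot{l}_\alpha &= -2\sin\alpha - \sin(\alpha+\beta), \\
\dot{\beta} &= \frac{2\left(-(1+\cos\beta)\, l_\alpha + (3+2\cos\beta)\, l_\beta\right)}{3 - \cos 2\beta}, \\
\dot{l}_\beta &= -\sin(\alpha+\beta) - \frac{2\sin\beta\,(l_\alpha - l_\beta)\, l_\beta}{3 - \cos 2\beta}
  + \frac{2\sin 2\beta \left(l_\alpha^2 - 2(1+\cos\beta)\, l_\alpha l_\beta + (3+2\cos\beta)\, l_\beta^2\right)}{\left(3 - \cos 2\beta\right)^2}.
\end{aligned}
```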
```julia
# Constants and setup
import OrdinaryDiffEq as ODE
initial = [0.01, 0.01, 0.01, 0.01]
tspan = (0.0, 100.0)

# Define the problem
function double_pendulum_hamiltonian(udot, u, p, t)
    α, lα, β, lβ = u
    udot .= [2(lα - (1 + cos(β))lβ) / (3 - cos(2β)),
        -2sin(α) - sin(α + β),
        2(-(1 + cos(β))lα + (3 + 2cos(β))lβ) / (3 - cos(2β)),
        -sin(α + β) - 2sin(β) * (((lα - lβ)lβ) / (3 - cos(2β))) +
        2sin(2β) * ((lα^2 - 2(1 + cos(β))lα * lβ + (3 + 2cos(β))lβ^2) / (3 - cos(2β))^2)]
end

# Pass to solvers
poincare = ODE.ODEProblem(double_pendulum_hamiltonian, initial, tspan)
sol = ODE.solve(poincare, ODE.Tsit5())
```

The solve succeeds (`retcode: Success`), returning 193 adaptively chosen time points on `(0.0, 100.0)` together with a specialized 4th-order "free" interpolation, which is what we will optimize over. In time, the solution looks like:
```julia
import Plots
Plots.gr()
Plots.plot(sol, vars = [(0, 3), (0, 4)], leg = false, plotdensity = 10000)
```

while it has the well-known phase-space plot:
```julia
Plots.plot(sol, vars = (3, 4), leg = false)
```

Local Optimization
Let's find out what some of the local maxima and minima are. Optimization.jl can be used to minimize functions, and the solution type has a continuous interpolation which can serve as the objective. Let's look for the local optima of the 4th variable around t = 20. Thus, our objective function is:
```julia
f(t, _) = sol(first(t), idxs = 4)
```

`first(t)` is the same as `t[1]`; it turns the length-1 array that the optimizer passes in into a number. `idxs = 4` gives the same value as `sol(first(t))[4]`, but performs the interpolation without building a temporary array and is thus faster.
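As a quick sanity check of that claim (a throwaway snippet, not part of the original tutorial code; `t0` is just an arbitrary sample time):

```julia
# idxs = 4 matches indexing the full interpolated state,
# while avoiding the temporary 4-element array.
t0 = 20.0
sol(t0, idxs = 4) ≈ sol(t0)[4] # true
```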
To find a local minimum, we can solve the optimization problem where the loss function is `f`:

```julia
import Optimization as OPT, OptimizationNLopt as OptNL, ForwardDiff
optf = OPT.OptimizationFunction(f, OPT.AutoForwardDiff())
min_guess = 18.0
optprob = OPT.OptimizationProblem(optf, [min_guess], lb = [0.0], ub = [100.0])
opt = OPT.solve(optprob, OptNL.NLopt.LD_LBFGS())
```

This returns `retcode: Success` with `u = [18.632127452169023]`: the minimum is at t ≈ 18.63, where the value is about -2.79e-2. We can get these in code form via:
```julia
println(opt.u) # [18.632127452169023]
```
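The minimum value itself is also stored on the solution object; the plotting code below reads it via the same `minimum` accessor:

```julia
println(opt.minimum) # ≈ -2.79e-2, the local minimum value of sol(t, idxs = 4)
```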
To get the maximum, we just minimize the negative of the function:

```julia
fminus(t, _) = -sol(first(t), idxs = 4)
optf = OPT.OptimizationFunction(fminus, OPT.AutoForwardDiff())
min_guess = 22.0
optprob2 = OPT.OptimizationProblem(optf, [min_guess], lb = [0.0], ub = [100.0])
opt2 = OPT.solve(optprob2, OptNL.NLopt.LD_LBFGS())
```

This also returns `retcode: Success`, with `u = [23.29760651171523]`: the local maximum is at t ≈ 23.30.
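Since `fminus` is the negated interpolant, the maximum value itself is `-opt2.minimum`. A quick consistency check (again a throwaway snippet, not from the original code):

```julia
# The interpolated solution at the optimizer equals the negated objective value.
sol(opt2.u[1], idxs = 4) ≈ -opt2.minimum # true
```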
Let's add the maxima and minima to the plots:

```julia
Plots.plot(sol, vars = (0, 4), plotdensity = 10000)
Plots.scatter!([opt.u], [opt.minimum], label = "Local Min")
Plots.scatter!([opt2.u], [-opt2.minimum], label = "Local Max")
```

Global Optimization
If we instead want to find global maxima and minima, we need to look elsewhere. There are many choices here. One pure-Julia option is the BlackBoxOptim solvers, usable through the same Optimization.jl interface, but we will continue with the OptimizationNLopt methods.
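For completeness, here is a minimal sketch of that pure-Julia route, assuming the OptimizationBBO package is installed; `BBO_adaptive_de_rand_1_bin_radiuslimited` is one of its differential-evolution solvers, and it requires the finite `lb`/`ub` bounds we already placed on `optprob`:

```julia
import OptimizationBBO as OptBBO
# Global search over t ∈ [0, 100] with an adaptive differential-evolution method
goptbbo = OPT.solve(optprob, OptBBO.BBO_adaptive_de_rand_1_bin_radiuslimited();
    maxiters = 10_000)
```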
To use a global NLopt method, we simply swap in one of the global optimizers from its list. Let's try GN_ORIG_DIRECT_L:

```julia
gopt = OPT.solve(optprob, OptNL.NLopt.GN_ORIG_DIRECT_L())
gopt2 = OPT.solve(optprob2, OptNL.NLopt.GN_ORIG_DIRECT_L())
@show gopt.u, gopt2.u # ([76.29050924671797], [47.46030338460317])
```

Plotting the global optima on top of the trajectory:

```julia
Plots.plot(sol, vars = (0, 4), plotdensity = 10000)
Plots.scatter!([gopt.u], [gopt.minimum], label = "Global Min")
Plots.scatter!([gopt2.u], [-gopt2.minimum], label = "Global Max")
```
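To report the global optima numerically rather than graphically (mirroring the `scatter!` calls above):

```julia
println((gopt.u[1], gopt.minimum))    # location and value of the global minimum
println((gopt2.u[1], -gopt2.minimum)) # location and value of the global maximum
```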