Neural Second Order Ordinary Differential Equation
The neural ODE focuses on finding a neural network such that:
\[u^\prime = NN(u)\]
However, in physics-based modeling the key object is often not the velocity but the acceleration: knowing the acceleration tells you the force field, and thus the generating process for the dynamical system. What we want to do, then, is find the force, i.e.:
\[u^{\prime\prime} = NN(u)\]
(Note that for the output of the neural network to be an acceleration, it should be divided by the mass!)
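Under the hood, a second-order ODE is solved via the standard reduction to a first-order system: introduce the velocity \(v = u^\prime\) and evolve position and velocity together:

\[
\begin{aligned}
u^\prime &= v \\
v^\prime &= NN(u)
\end{aligned}
\]

This is why the solution object below carries both the derivative and the position components of the state.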
An example of training a neural network on a second-order ODE is as follows:
```julia
using DifferentialEquations, Flux, Optimization, OptimizationFlux,
      RecursiveArrayTools, Random

# Initial position, initial velocity, and the time span/saving grid
u0 = Float32[0.0; 2.0]
du0 = Float32[0.0; 0.0]
tspan = (0.0f0, 1.0f0)
t = range(tspan[1], tspan[2], length = 20)

# The neural network defines the force field u'' = NN(u)
model = Flux.Chain(Flux.Dense(2, 50, tanh), Flux.Dense(50, 2))
p, re = Flux.destructure(model)
ff(du, u, p, t) = re(p)(u)
prob = SecondOrderODEProblem{false}(ff, du0, u0, tspan, p)

# Solve the second-order ODE with the current parameters and
# return the solution as a matrix
function predict(p)
    Array(solve(prob, Tsit5(), p = p, saveat = t))
end

# Target trajectory to fit against
correct_pos = Float32.(transpose(hcat(collect(0:0.05:1)[2:end],
                                      collect(2:-0.05:1)[2:end])))

# Sum-of-squares loss; also return the prediction for the callback
function loss_n_ode(p)
    pred = predict(p)
    sum(abs2, correct_pos .- pred[1:2, :]), pred
end

data = Iterators.repeated((), 1000)
opt = ADAM(0.01)

l1 = loss_n_ode(p)

# Print the loss each iteration and stop early once it is small enough
callback = function (p, l, pred)
    println(l)
    l < 0.01
end

adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x, p) -> loss_n_ode(x), adtype)
optprob = Optimization.OptimizationProblem(optf, p)
res = Optimization.solve(optprob, opt; callback = callback, maxiters = 1000)
```

```
u: 252-element Vector{Float32}:
 -0.43591887
  0.3576625
 -0.40733948
 -5.4005423
  2.4831076
  0.153516
 -1.9599228
 -0.4100895
  8.220131
  3.5779502
  ⋮
  0.8941624
  0.34454745
  2.2267768
  0.06137892
  2.102718
  0.37736806
 -0.63214725
  0.115625426
  0.32148984
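To understand the `pred[1:2, :]` indexing above, note that the state of a `SecondOrderODEProblem` solution is an `ArrayPartition` whose derivative components come first, followed by the position components. Here is a minimal self-contained sketch of that layout using a harmonic oscillator (this toy problem is an illustration, not part of the training example above):

```julia
using DifferentialEquations

# Harmonic oscillator: u'' = -u with u(0) = 1, u'(0) = 0,
# so u(t) = cos(t) and u'(t) = -sin(t).
f(du, u, p, t) = -u
u0 = [1.0]
du0 = [0.0]
prob = SecondOrderODEProblem{false}(f, du0, u0, (0.0, 1.0))
sol = solve(prob, Tsit5(), saveat = 0.25)

# Array(sol) stacks the derivative block on top of the position block,
# so row 1 holds the velocity and row 2 holds the position.
A = Array(sol)
```

With this layout, selecting the first rows of the solution matrix slices out one block of the partitioned state, which is exactly what the loss function above does.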