Non-autonomous Linear ODE / Lie Group ODE Solvers
Non-autonomous linear ODE solvers focus on equations in the general form of
\[u^\prime = A(u,p,t)u\]
and utilize the Lie group structure of the solution to accelerate the numerical methods and capture certain properties in the solution process. One common simplification is for solvers to require state-independent operators, which implies the form:
\[u^\prime = A(t)u\]
Another type of solver is needed when the operators are state-dependent, i.e.
\[u^\prime = A(u)u\]
Others specifically require linearity, i.e.
\[u^\prime = Au\]
where $A$ is a constant operator.
Recommendations
It is recommended to always specialize on the properties of the operator as much as possible.
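As a rough illustration of what "specializing on the operator" looks like in practice, the sketch below shows how each of the three forms above is typically declared with the SciMLOperators.MatrixOperator constructor used throughout this page. The matrices and helper names (rotation!, pendulum!) are purely illustrative, not part of any API.
import DifferentialEquations as DE, SciMLOperators

# u' = A u: constant operator, no update function
A_const = SciMLOperators.MatrixOperator([0.0 1.0; -1.0 0.0])

# u' = A(t) u: state-independent operator, the update only reads t
rotation!(A, u, p, t) = (A .= [cos(t) -sin(t); sin(t) cos(t)]; nothing)
A_t = SciMLOperators.MatrixOperator(zeros(2, 2), update_func! = rotation!)

# u' = A(u) u: state-dependent operator, the update also reads u
pendulum!(A, u, p, t) = (A[2, 1] = sin(u[1]); A[1, 2] = -1; nothing)
A_u = SciMLOperators.MatrixOperator(zeros(2, 2), update_func! = pendulum!)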
Standard ODE Integrators
The standard ODE integrators also work on non-autonomous linear ODE problems, since these are handled automatically as general first-order ODEs with right-hand side $f(u,p,t) = A(u,p,t)u$. See the ODE solvers page for more details.
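For example (a sketch, not part of the original page), the constant-coefficient problem used with LinearExponential below can also be handed to a general-purpose solver such as Tsit5 by writing the right-hand side as an ordinary function:
import DifferentialEquations as DE

_A = [2 -1; -3 -5] / 5
f(u, p, t) = _A * u                      # generic form f(u, p, t) = A u
prob = DE.ODEProblem(f, [1.0, -1.0], (1.0, 6.0))
sol = DE.solve(prob, DE.Tsit5())         # standard adaptive explicit Runge-Kutta method
This works, but it does not exploit the linear structure; the specialized methods below do.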
Specialized OrdinaryDiffEq.jl Integrators
Unless otherwise specified, the OrdinaryDiffEq algorithms all come with a 3rd order Hermite polynomial interpolation. For algorithms denoted as having a “free” interpolation, no extra steps are required for the interpolation. For the non-free higher-order interpolating functions, the extra steps are computed lazily (i.e. not during the solve).
Note that all of these methods are fixed timestep unless otherwise specified.
Exponential Methods for Linear and Affine Problems
These methods require that $A$ is constant.
- LinearExponential - Exact solution formula for linear, time-independent problems.

Options:

- krylov - symbol. One of
  - :off (default) - cache the operator beforehand. Requires Matrix(A) method defined for the operator A.
  - :simple - uses simple Krylov approximations with fixed subspace size m.
  - :adaptive - uses adaptive Krylov approximations with internal timestepping.
- m - integer, default: 10. Controls the size of the Krylov subspace if krylov=:simple, and the initial subspace size if krylov=:adaptive.
- iop - integer, default: 0. If not zero, determines the length of the incomplete orthogonalization procedure (IOP) [1]. Note that if the linear operator/Jacobian is hermitian, then the Lanczos algorithm will always be used and the IOP setting is ignored.
import DifferentialEquations as DE, SciMLOperators
_A = [2 -1; -3 -5] / 5
A = SciMLOperators.MatrixOperator(_A)
prob = DE.ODEProblem(A, [1.0, -1.0], (1.0, 6.0))
sol = DE.solve(prob, DE.LinearExponential())
retcode: Success
Interpolation: 3rd order Hermite
t: 2-element Vector{Float64}:
1.0
6.0
u: 2-element Vector{Vector{Float64}}:
[1.0, -1.0]
[11.923375744980987, -4.833128734161084]
LinearExponential is exact, and thus it uses dt=tspan[2]-tspan[1] by default. The interpolation used is inexact (3rd order Hermite). Thus, values generated by the interpolation (via sol(t) or saveat) will be inexact, with increasing error as the size of the time span grows. To counteract this, directly set dt or use tstops instead of saveat. For more information, see this issue.
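For instance, building on the problem constructed above, the Krylov options documented earlier and explicit stop times can be requested as follows. This is a sketch; the particular option values (m = 20, the 0.5 spacing) are illustrative only.
# Adaptive Krylov approximation of the matrix exponential, with explicit stop times
sol = DE.solve(prob, DE.LinearExponential(krylov = :adaptive, m = 20),
    tstops = 1.0:0.5:6.0)

# Alternative: a fixed dt so the saved points are computed exactly rather than interpolated
sol = DE.solve(prob, DE.LinearExponential(), dt = 0.5)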
State-Independent Solvers
These methods require that $A$ depends only on the independent variable, i.e. $A(t)$.
- MagnusMidpoint - Second order Magnus Midpoint method.
- MagnusLeapfrog - Second order Magnus Leapfrog method.
- MagnusGauss4 - Fourth order Magnus method approximated using a two stage Gauss quadrature.
- MagnusGL4 - Fourth order Magnus method approximated using Gauss-Legendre quadrature.
- MagnusNC6 - Sixth order Magnus method approximated using Newton-Cotes quadrature.
- MagnusGL6 - Sixth order Magnus method approximated using Gauss-Legendre quadrature.
- MagnusNC8 - Eighth order Magnus method approximated using Newton-Cotes quadrature.
- MagnusGL8 - Eighth order Magnus method approximated using Gauss-Legendre quadrature.
Example:
function update_func(A, u, p, t)
A[1, 1] = cos(t)
A[2, 1] = sin(t)
A[1, 2] = -sin(t)
A[2, 2] = cos(t)
end
A = SciMLOperators.MatrixOperator(ones(2, 2), update_func! = update_func)
prob = DE.ODEProblem(A, ones(2), (1.0, 6.0))
sol = DE.solve(prob, DE.MagnusGL6(), dt = 1 / 10)
retcode: Success
Interpolation: 3rd order Hermite
t: 51-element Vector{Float64}:
1.0
1.1
1.2000000000000002
1.3000000000000003
1.4000000000000004
1.5000000000000004
1.6000000000000005
1.7000000000000006
1.8000000000000007
1.9000000000000008
⋮
5.199999999999998
5.299999999999998
5.399999999999998
5.499999999999997
5.599999999999997
5.699999999999997
5.799999999999996
5.899999999999996
6.0
u: 51-element Vector{Vector{Float64}}:
[1.0, 1.0]
[0.9560322602141308, 1.1380593390257319]
[0.8837222756549167, 1.2712953135220313]
[0.7836511349460553, 1.3924887039087297]
[0.6585912321071353, 1.4945430086037104]
[0.5134387044387003, 1.571248308402489]
[0.35484956094894243, 1.617994244496135]
[0.19061749478320592, 1.6323079788211152]
[0.02888576977499996, 1.61412035172606]
[-0.12267917859524083, 1.5657147744869353]
⋮
[0.16494715181316175, 0.1905076190653477]
[0.19016510949800758, 0.18488701621917158]
[0.2169166263088886, 0.17939687440026228]
[0.24554653627671563, 0.17419950752424804]
[0.27642938801560907, 0.16953149318836896]
[0.3099648774418325, 0.1657250462419898]
[0.34656684607451377, 0.1632335402833922]
[0.3866431012103115, 0.16266037043424691]
[0.4305628420420225, 0.1647892218429148]
The initial values for $A$ are irrelevant in this and similar cases, as the update_func immediately overwrites them. Starting with ones(2,2) is just a convenient way to get a mutable 2x2 matrix.
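If in-place mutation is not desired, SciMLOperators also accepts an out-of-place update that returns a new matrix. The following sketch assumes the update_func keyword of MatrixOperator (see the SciMLOperators documentation for details); the name rotation_update is hypothetical.
# Out-of-place variant: the update returns a fresh matrix instead of mutating A
rotation_update(A, u, p, t) = [cos(t) -sin(t); sin(t) cos(t)]
A = SciMLOperators.MatrixOperator(ones(2, 2), update_func = rotation_update)
prob = DE.ODEProblem(A, ones(2), (1.0, 6.0))
sol = DE.solve(prob, DE.MagnusGL6(), dt = 1 / 10)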
State-Dependent Solvers
These methods can be used when $A$ is dependent on the state variables, i.e. $A(u)$.
- CayleyEuler - First order method using Cayley transformations.
- LieEuler - First order Lie Euler method.
- RKMK2 - Second order Runge–Kutta–Munthe-Kaas method.
- RKMK4 - Fourth order Runge–Kutta–Munthe-Kaas method.
- LieRK4 - Fourth order Lie Runge-Kutta method.
- CG2 - Second order Crouch–Grossman method.
- CG4a - Fourth order Crouch-Grossman method.
- MagnusAdapt4 - Fourth order adaptive Magnus method.
Example:
function update_func(A, u, p, t)
A[1, 1] = 0
A[2, 1] = sin(u[1])
A[1, 2] = -1
A[2, 2] = 0
end
A = SciMLOperators.MatrixOperator(ones(2, 2), update_func! = update_func)
prob = DE.ODEProblem(A, ones(2), (0, 30.0))
sol = DE.solve(prob, DE.LieRK4(), dt = 1 / 4)
retcode: Success
Interpolation: 3rd order Hermite
t: 121-element Vector{Float64}:
0.0
0.25
0.5
0.75
1.0
1.25
1.5
1.75
2.0
2.25
⋮
28.0
28.25
28.5
28.75
29.0
29.25
29.5
29.75
30.0
u: 121-element Vector{Vector{Float64}}:
[1.0, 1.0]
[0.7273968579551914, 1.165794872747704]
[0.4243104556165744, 1.245921012686004]
[0.10953160297897349, 1.2654872693089263]
[-0.2070137554757648, 1.268154152637577]
[-0.5272224527692215, 1.3028146975278565]
[-0.8648991354908158, 1.4149945285244705]
[-1.2443994566875354, 1.6427671570858633]
[-1.6975604074674682, 2.0038463754183535]
[-2.253771131746041, 2.449176910606261]
⋮
[-2.666388875329117, 2.695483734288957]
[-3.3611719087644802, 2.7811209839661144]
[-4.00818756163553, 2.2967303664591125]
[-4.470316530373345, 1.3459658518794089]
[-4.6662629941102205, 0.2081263142861609]
[-4.572896781744218, -0.9494989022112765]
[-4.200152270252201, -1.9958503858923495]
[-3.605214015523253, -2.676737830734178]
[-2.910961832526005, -2.781287635056372]
The above example solves a non-stiff non-autonomous linear ODE with a state-dependent operator using the LieRK4 method. Similarly, a stiff non-autonomous linear ODE with a state-dependent operator can be solved using a specialized adaptive algorithm, such as MagnusAdapt4.
Example:
function update_func(A, u, p, t)
A[1, 1] = 0
A[2, 1] = 1
A[1, 2] = -2 * (1 - cos(u[2]) - u[2] * sin(u[2]))
A[2, 2] = 0
end
A = SciMLOperators.MatrixOperator(ones(2, 2), update_func! = update_func)
prob = DE.ODEProblem(A, ones(2), (30, 150.0))
sol = DE.solve(prob, DE.MagnusAdapt4())
retcode: Success
Interpolation: 3rd order Hermite
t: 592-element Vector{Float64}:
30.0
30.051554410975918
30.143272324356687
30.24752472070872
30.375696919265216
30.51085579574041
30.650905721943833
30.788186152068086
30.919523331292588
31.04078867897698
⋮
147.9627372871622
148.1386528112805
148.34620646242558
148.59559259387922
148.88475647118528
149.1635812507815
149.44956028353877
149.73645739558768
150.0
u: 592-element Vector{Vector{Float64}}:
[1.0, 1.0]
[1.0418828143197263, 1.052612101560573]
[1.1297712807976779, 1.152064966997934]
[1.2527691759961586, 1.2760334939949005]
[1.4404455425326401, 1.448198336035908]
[1.6784196445701856, 1.6585827741003034]
[1.941152392917019, 1.9121745862108426]
[2.136510769508699, 2.1934881696861144]
[2.1250337179679515, 2.4764214499238]
[1.7927261533453667, 2.717694253559194]
⋮
[1.3382784412748316, -1.3478234307411343]
[1.1224318076060493, -1.1324822613208805]
[0.954703113230035, -0.9183588340024612]
[0.8437841599121938, -0.6957608308469877]
[0.7910831357260508, -0.4608016455247957]
[0.7783133967858951, -0.242508280489208]
[0.7772133921074555, -0.02016783962082333]
[0.7777533720881296, 0.20284566489295414]
[0.7858794479965167, 0.40856434168898387]
Time and State-Dependent Operators
These methods can be used when $A$ depends on both time and state variables, i.e. $A(u,t)$.
- CG3 - Third order Crouch-Grossman method.
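No worked example is given here in the original page; the sketch below follows the same pattern as the earlier examples, with operator entries that are purely illustrative, and solves the resulting problem with CG3.
# A(u, t): the operator entries depend on both the state and the independent variable
function update_func(A, u, p, t)
    A[1, 1] = 0
    A[2, 1] = sin(u[1]) * cos(t)
    A[1, 2] = -1
    A[2, 2] = 0
end
A = SciMLOperators.MatrixOperator(ones(2, 2), update_func! = update_func)
prob = DE.ODEProblem(A, ones(2), (0.0, 10.0))
sol = DE.solve(prob, DE.CG3(), dt = 1 / 10)   # CG3 is fixed-timestep, so dt must be given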