# Adjoint Sensitivity Analysis of Continuous Functionals

The automatic differentiation tutorial demonstrated how to use AD packages like ForwardDiff.jl and Zygote.jl to compute derivatives of differential equation solutions with respect to initial conditions and parameters. The subsequent direct sensitivity analysis tutorial showed how to directly use the SciMLSensitivity.jl internals to define and solve the augmented differential equation systems which are used in the automatic differentiation process.

While these internal functions give more flexibility, the previous demonstration focused on a case which was also possible via automatic differentiation: discrete cost functionals. By a discrete cost functional we mean differentiation of a cost which uses only a finite number of time points. In the automatic differentiation case, these finite time points are the points returned by `solve`, i.e. those chosen by the `saveat` option in the `solve` call. In the direct adjoint sensitivity tooling, these were the time points chosen by the `ts` vector.
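For contrast, a discrete cost functional of this kind can be written as a finite sum over the $N$ saved time points:

$$G(u,p)=\sum_{i=1}^{N}g(u(t_{i},p),p)$$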

However, there is an expanded set of cost functionals supported by SciMLSensitivity.jl, continuous cost functionals, which are not possible through the automatic differentiation interfaces. In an abstract sense, a continuous cost functional is a total cost $G$ defined as the integral of the instantaneous cost $g$ over all time points. In other words, the total cost is defined as:

$$G(u,p)=G(u(\cdot,p))=\int_{t_{0}}^{T}g(u(t,p),p)dt$$

Notice that this cost function cannot be accurately computed using only estimates of $u$ at discrete time points. The purpose of this tutorial is to demonstrate how such cost functionals can be easily evaluated using the direct sensitivity analysis interfaces.
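To make the dependence on dense solution values explicit, differentiating under the integral sign (a standard forward-sensitivity identity, stated here for orientation) gives:

$$\frac{dG}{dp}=\int_{t_{0}}^{T}\left(\frac{\partial g}{\partial u}\,s(t)+\frac{\partial g}{\partial p}\right)dt,\qquad s(t)=\frac{\partial u(t,p)}{\partial p}$$

so the integrand must be evaluated at arbitrary quadrature nodes $t$, not just at the saved steps.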

## Example: Continuous Functionals with Forward Sensitivity Analysis via Interpolation

Evaluating continuous cost functionals with forward sensitivity analysis is rather straightforward, since one can simply use the fact that the solution from `ODEForwardSensitivityProblem` is continuous when `dense=true`. For example,

```julia
using OrdinaryDiffEq, SciMLSensitivity

function f(du, u, p, t)
    du[1] = dx = p[1]*u[1] - p[2]*u[1]*u[2]
    du[2] = dy = -p[3]*u[2] + u[1]*u[2]
end

p = [1.5, 1.0, 3.0]
prob = ODEForwardSensitivityProblem(f, [1.0; 1.0], (0.0, 10.0), p)
sol = solve(prob, DP8())
```
```
retcode: Success
Interpolation: specialized 7th order interpolation
t: 29-element Vector{Float64}:
  0.0
  0.0008156803234081559
  0.005709762263857092
  0.0350742539065507
  0.21126120376271237
  0.7310736576107115
  1.540222712617339
  1.8813610466661779
  2.152579392464959
  2.4063311841696478
  ⋮
  7.063355055637446
  7.725935939669195
  8.248979441432317
  8.558003582514011
  8.826370842927059
  9.171011754187145
  9.493946497917326
  9.834929283472757
 10.0
u: 29-element Vector{Vector{Float64}}:
 [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
 [1.000408588538797, 0.9983701355642965, 0.000816013510644824, 3.322154010506493e-7, -0.0008153483114736799, -3.320348233838799e-7, 3.3244140163739216e-7, -0.0008143507848015046]
 [1.0028915156848317, 0.9886535575433295, 0.005726241237749595, 1.6146661657253676e-5, -0.005693685298331232, -1.6085381411877104e-5, 1.622392391840203e-5, -0.005644946211107334]
 [1.0189121274128703, 0.9325571368196612, 0.035730565614057214, 0.0005806610224735032, -0.034509633035320036, -0.0005673264615114402, 0.0005982168608307318, -0.03270217921302392]
 [1.1547444753248808, 0.6650474487943194, 0.24252997273953153, 0.01620058797923608, -0.198498945739975, -0.014142604007354084, 0.019606496433572655, -0.13955624899729338]
 [1.9959026411822718, 0.30725094819300236, 1.3911332658550328, 0.12370946057242008, -0.7599570019860626, -0.07996649485520504, 0.22938885567774317, -0.20775306589835096]
 [5.30194567946053, 0.4275038176301537, 6.007173244666811, 1.378441778030569, -2.283870425183431, -0.623462248893163, 1.6987114297214083, -0.3708646913283792]
 [6.8702723310901295, 1.2712401591810842, 2.482449626359105, 6.439151700037799, -1.3710505647874744, -2.7849768648683777, 3.2715017511539424, -0.46731359361629526]
 [5.679446513843018, 3.3399703081159786, -12.310270279196507, 12.730860582111236, 2.90563205459914, -6.735838408409488, 2.6735629496567994, 0.8629101364446949]
 [2.8838282002507136, 4.57403857987998, -11.50192475754792, 1.5722474815418457, 3.149560282389645, -5.129125042632567, 0.2620888410181659, 1.5914280086423986]
 ⋮
 [1.4188559373191922, 0.45747117721990466, 4.0345730575991805, -1.6292340104788547, -0.02328531538568726, -0.22674039853297237, 0.9917652543963326, -0.638337264202254]
 [3.105286408605591, 0.25723901264027416, 12.030706505671914, 0.3657288649741977, -0.3594670814358994, -0.1562962149538488, 2.9860724103159155, -0.2135072543485681]
 [5.721950106631197, 0.5147791449780601, 19.322713832885665, 5.150407774561523, -0.8979267203135737, -0.4767887879979121, 5.4868266280623645, 0.4525830529452884]
 [6.9045644791116585, 1.4959711323293163, 0.8111460438513347, 21.257023017125366, -0.885529900291283, -1.837556476410295, 3.431317057907666, 3.210786998858488]
 [5.251408633416557, 3.677081894045012, -40.30711600763158, 31.383646964911545, 0.415505603414129, -4.822192363404264, -4.808988602487445, 6.282699534622149]
 [1.9684993733166822, 4.224446642146764, -19.72010623548457, -13.790594696972407, 0.6970930173849572, -4.432962960251689, -3.4673013009932574, -1.767994652510742]
 [1.0719720028424078, 2.5266939022673727, -4.294296713681639, -16.729797806817487, 0.3407022900162246, -2.2485501409033666, -0.8127688495258327, -3.42797696441485]
 [0.9574127494597999, 1.2680817248527774, 0.6626271434785644, -9.021542819507147, 0.21249816908491037, -1.0143392771168123, 0.21400382117613845, -2.2552162974026566]
 [1.026505547286025, 0.9095251254958595, 2.1626974566892803, -6.256489916332188, 0.1883893214244694, -0.6976152811241049, 0.5638188536930522, -1.7090441864606456]
```

gives a continuous solution `sol(t)` carrying the sensitivity components at each time point. This can then be used to define a continuous cost functional via a quadrature package such as Integrals.jl, though the derivative would need to be assembled by hand from the extra sensitivity terms.
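As a sketch of how this can look (this block is illustrative and not part of the original tutorial; the cost $g(u)=(u_1+u_2)^2/2$ and the use of QuadGK.jl are assumptions), one can integrate both the cost and its parameter gradient over the dense solution, reading the sensitivities at each quadrature node with `extract_local_sensitivities`:

```julia
using OrdinaryDiffEq, SciMLSensitivity, QuadGK

function f(du, u, p, t)
    du[1] = p[1]*u[1] - p[2]*u[1]*u[2]
    du[2] = -p[3]*u[2] + u[1]*u[2]
end

p = [1.5, 1.0, 3.0]
prob = ODEForwardSensitivityProblem(f, [1.0; 1.0], (0.0, 10.0), p)
sol = solve(prob, DP8())

# G(p) = ∫₀¹⁰ g(u(t,p)) dt with the illustrative cost g(u) = (u₁ + u₂)²/2
Gval, _ = quadgk(0.0, 10.0) do t
    u, _ = extract_local_sensitivities(sol, t)
    sum(u)^2 / 2
end

# dG/dpᵢ = ∫₀¹⁰ (∂g/∂u) ⋅ sᵢ(t) dt, where sᵢ(t) = ∂u(t)/∂pᵢ are the
# forward sensitivity components carried by the same dense solution
dGdp, _ = quadgk(0.0, 10.0) do t
    u, s = extract_local_sensitivities(sol, t)
    [sum(u) * sum(si) for si in s]
end
```

Here `extract_local_sensitivities(sol, t)` splits the augmented state at time `t` into the solution `u` and the per-parameter sensitivity vectors, which is what lets the quadrature query arbitrary times rather than just the saved steps.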

## Example: Continuous Adjoints on an Energy Functional

Continuous adjoints on a continuous functional are more automatic than forward mode. In this case we'd like to calculate the adjoint sensitivity of the scalar energy functional:

$$G(u,p)=\int_{0}^{T}\frac{\left(\sum_{i=1}^{n}u_{i}(t)\right)^{2}}{2}dt$$

which is:

```julia
g(u, p, t) = (sum(u) .^ 2) ./ 2
```

```
g (generic function with 1 method)
```

Notice that the gradient of this function with respect to the state `u` is:

```julia
function dg(out, u, p, t)
    out[1] = u[1] + u[2]
    out[2] = u[1] + u[2]
end
```

```
dg (generic function with 1 method)
```
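As a quick sanity check (this snippet is illustrative and not part of the tutorial), the hand-written gradient can be verified against ForwardDiff applied to the cost as a function of the state alone:

```julia
using ForwardDiff

# the instantaneous cost as a function of the state only
g_state(u) = sum(u)^2 / 2

u_test = [1.3, 0.7]

# hand-written gradient: each component is u₁ + u₂
out = [u_test[1] + u_test[2], u_test[1] + u_test[2]]

ForwardDiff.gradient(g_state, u_test) ≈ out  # both equal [2.0, 2.0]
```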

To get the adjoint sensitivities, we call:

```julia
prob = ODEProblem(f, [1.0; 1.0], (0.0, 10.0), p)
sol = solve(prob, DP8())
res = adjoint_sensitivities(sol, Vern9(), dgdu_continuous = dg, g = g,
                            abstol = 1e-8, reltol = 1e-8)
```

```
([-57.4304576527421, -14.286740920907548], [21.070509833784705 -101.3664068223125 63.16462173070765])
```

The first entry of the returned tuple is the gradient of $G$ with respect to the initial condition, and the second is the gradient with respect to the parameters. Notice that we can check this against automatic differentiation and numerical differentiation as follows:

```julia
using QuadGK, ForwardDiff, Calculus

function G(p)
    tmp_prob = remake(prob, p = p)
    sol = solve(tmp_prob, Vern9(), abstol = 1e-14, reltol = 1e-14)
    res, err = quadgk((t) -> (sum(sol(t)) .^ 2) ./ 2, 0.0, 10.0, atol = 1e-14, rtol = 1e-10)
    res
end

res2 = ForwardDiff.gradient(G, [1.5, 1.0, 3.0])
res3 = Calculus.gradient(G, [1.5, 1.0, 3.0])
```