ODE-Specialized Physics-Informed Neural Network (PINN) Solver
NeuralPDE.NNODE — Type

NNODE(chain, opt, init_params = nothing; strategy = nothing, autodiff = false,
      batch = true, param_estim = false, additional_loss = nothing,
      dataset = [], estim_collocate = false, kwargs...)
Algorithm for solving ordinary differential equations using a neural network. This is a specialization of the physics-informed neural network (PINN) used as a solver for a standard ODEProblem.

Note that NNODE only supports ODEs written in the out-of-place form, i.e. du = f(u, p, t), and not the in-place form f!(du, u, p, t). If the problem is not declared out-of-place, NNODE will exit with an error.
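For reference, a minimal sketch contrasting the two forms (the right-hand side and parameter values here are purely illustrative):

using NeuralPDE

# out-of-place: returns the derivative; this form is supported by NNODE
f(u, p, t) = -p[1] .* u
prob = ODEProblem(f, [1.0], (0.0, 1.0), [2.0])

# in-place: mutates du; this form is NOT supported by NNODE
function f!(du, u, p, t)
    du[1] = -p[1] * u[1]
end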
Positional Arguments
chain: A neural network architecture, defined as a Lux.AbstractLuxLayer or a Flux.Chain. A Flux.Chain will be converted to Lux using adapt(FromFluxAdaptor(), chain).
opt: The optimizer to train the neural network.
init_params: The initial parameters of the neural network. By default, this is nothing, which uses the random initialization provided by the neural network library.
Keyword Arguments
additional_loss: A function additional_loss(phi, θ), where phi is the neural network trial solution and θ are the weights of the neural network(s).
dataset: Either an empty Vector or a nested Vector of the form [x̂, t, W], where x̂ are the dependent-variable observations, t are the time points, and W are the quadrature weights for the domain. The dataset is used to compute an L2 loss against the data and also for the Data Quadrature loss function. For multiple dependent variables, there are multiple observation vectors, with the last two vectors in the dataset still being t and W. Empty by default, which assumes a forward problem is being solved.
autodiff: The switch between automatic and numerical differentiation for the PDE operators. The reverse mode of the loss function is always automatic differentiation (via Zygote); this switch only affects the derivative inside the loss function (the derivative with respect to time).
batch: The batch size for the loss computation. Defaults to true, meaning the neural network is applied to a row vector of time values t simultaneously, i.e. it is the batch size for the neural network evaluations; this requires a neural network compatible with batched data. false means the neural network is applied at individual time points, one at a time. This is not applicable to QuadratureTraining, where batch is passed in the strategy and is the number of points at which the integrand can be evaluated in parallel.
param_estim: Boolean indicating whether the parameters of the differential equation are learned along with the parameters of the neural network.
strategy: The training strategy used to choose the points for the evaluations. The default of nothing means that QuadratureTraining with QuadGK is used if no dt is given, and GridTraining with dt is used if dt is given. See the strategy sketch after the examples below.
estim_collocate: A boolean value indicating whether to use the Data Quadrature loss function. This is only relevant for ODE parameter estimation.
kwargs: Extra keyword arguments are splatted to the Optimization.jl solve call.
Examples
# Adding an extra L2 data loss against reference values (u_, t_); here analytical_func
# is a placeholder for whatever supplies the reference solution, and chain/opt are
# defined as in the example below.
u0 = [1.0, 1.0]
ts = [t for t in 1:100]
(u_, t_) = (analytical_func(ts), ts)
function additional_loss(phi, θ)
    return sum(sum(abs2, [phi(t, θ) for t in t_] .- u_)) / length(u_)
end
alg = NNODE(chain, opt, additional_loss = additional_loss)
using NeuralPDE, Lux, OptimizationOptimisers

f(u, p, t) = cos(2pi * t)
tspan = (0.0, 1.0)
u0 = 0.0
prob = ODEProblem(f, u0, tspan)
chain = Lux.Chain(Lux.Dense(1, 5, Lux.σ), Lux.Dense(5, 1))
opt = OptimizationOptimisers.Adam(0.1)
sol = solve(prob, NNODE(chain, opt), verbose = true, abstol = 1e-10, maxiters = 200)
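The collocation points can also be chosen explicitly through the strategy keyword. A minimal sketch, reusing prob, chain, and opt from the example above (the step size and point count are illustrative):

# grid collocation with a fixed step of 0.01 over tspan
sol_grid = solve(prob, NNODE(chain, opt; strategy = GridTraining(0.01)), maxiters = 200)

# 100 randomly sampled collocation points per loss evaluation
sol_rand = solve(prob, NNODE(chain, opt; strategy = StochasticTraining(100)), maxiters = 200)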
Solution Notes
Note that the solution is evaluated at fixed time points according to standard output handlers such as saveat and dt. However, the neural network is a fully continuous solution, so sol(t) is an accurate interpolation (up to the neural network training result). In addition, the OptimizationSolution is returned as sol.k for further analysis.
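For example, continuing from the NNODE example above (a rough sketch; the time values are arbitrary):

sol(0.37)            # evaluate the trained network solution at an arbitrary time in tspan
sol.(0.0:0.01:1.0)   # dense evaluation by broadcasting over a range of times
sol.k                # the underlying OptimizationSolution, e.g. sol.k.objective for the final loss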
References
Lagaris, Isaac E., Aristidis Likas, and Dimitrios I. Fotiadis. "Artificial neural networks for solving ordinary and partial differential equations." IEEE Transactions on Neural Networks 9, no. 5 (1998): 987-1000.
Bayesian inference with PINNs
NeuralPDE.BNNODE — Type

BNNODE(chain, kernel = HMC; strategy = nothing, draw_samples = 2000,
       priorsNNw = (0.0, 2.0), param = [nothing], l2std = [0.05],
       phystd = [0.05], phynewstd = (ode_params) -> [0.05], dataset = [],
       physdt = 1 / 20.0, MCMCargs = (; n_leapfrog = 30), nchains = 1,
       init_params = nothing,
       Adaptorkwargs = (; Adaptor = StanHMCAdaptor, targetacceptancerate = 0.8,
           Metric = DiagEuclideanMetric),
       Integratorkwargs = (Integrator = Leapfrog,), autodiff = false,
       estim_collocate = false, progress = false, verbose = false)
Algorithm for solving ordinary differential equations using a Bayesian neural network. This is a specialization of the physics-informed neural network (PINN) used as a solver for a standard ODEProblem.

Note that BNNODE only supports ODEs written in the out-of-place form, i.e. du = f(u, p, t), and not the in-place form f!(du, u, p, t). If the problem is not declared out-of-place, BNNODE will exit with an error.
Positional Arguments
chain: A neural network architecture, defined as a Lux.AbstractLuxLayer.
kernel: Choice of MCMC sampling algorithm. Defaults to AdvancedHMC.HMC.
Keyword Arguments
Refer to the keyword arguments of NeuralPDE.ahmc_bayesian_pinn_ode.
Example
using NeuralPDE, Lux, OrdinaryDiffEq, Distributions

linear = (u, p, t) -> -u / p[1] + exp(t / p[2]) * cos(t)
tspan = (0.0, 10.0)
u0 = 0.0
p = [5.0, -5.0]
prob = ODEProblem(linear, u0, tspan, p)
linear_analytic = (u0, p, t) -> exp(-t / 5) * (u0 + sin(t))  # analytic solution, for reference

# generate noisy observations and quadrature weights for the dataset
sol = solve(prob, Tsit5(); saveat = 0.05)
u = sol.u[1:100]
time = sol.t[1:100]
x̂ = u .+ (u .* 0.2) .* randn(size(u))
dataset = [x̂, time, 0.05 .* ones(length(time))]

chainlux = Lux.Chain(Lux.Dense(1, 6, tanh), Lux.Dense(6, 6, tanh), Lux.Dense(6, 1))

# forward solve
alg = BNNODE(chainlux; draw_samples = 2000, l2std = [0.05], phystd = [0.05],
    priorsNNw = (0.0, 3.0), progress = true)
sol_lux = solve(prob, alg)

# with parameter estimation
alg = BNNODE(chainlux; dataset, draw_samples = 2000, l2std = [0.05], phystd = [0.05],
    priorsNNw = (0.0, 10.0), param = [Normal(6.5, 0.5), Normal(-3, 0.5)],
    progress = true)
sol_lux_pestim = solve(prob, alg)
Solution Notes
Note that the solution is evaluated at fixed time points according to the strategy chosen; the ensemble solution is evaluated and given at steps of saveat. The dataset should only be provided when ODE parameter estimation is being done. The neural network is a fully continuous solution, so the BPINNsolution is an accurate interpolation (up to the neural network training result). In addition, the BPINNstats is returned as sol.fullsolution for further analysis.
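A rough sketch of inspecting the returned solution, continuing from the parameter-estimation example above. The ensemblesol and estimated_de_params field names are assumptions based on recent NeuralPDE releases; check the BPINNsolution documentation for your version:

sol_lux_pestim.fullsolution          # BPINNstats: MCMC chain, sampled parameters, statistics
sol_lux_pestim.ensemblesol[1]        # posterior-mean trajectory at the saveat points (assumed field name)
sol_lux_pestim.estimated_de_params   # posterior estimates of the ODE parameters (assumed field name)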
References
Liu Yang, Xuhui Meng, George Em Karniadakis. "B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and Inverse PDE Problems with Noisy Data."
Kevin Linka, Amelie Schäfer, Xuhui Meng, Zongren Zou, George Em Karniadakis, Ellen Kuhl. "Bayesian Physics Informed Neural Networks for real-world nonlinear dynamical systems."