# Investigating `symbolic_discretize` with the `PhysicsInformedNN` Discretizer for the 1-D Burgers' Equation
Let's consider the Burgers' equation:
\[\begin{gather*} ∂_t u + u ∂_x u - (0.01 / \pi) ∂_x^2 u = 0 \, , \quad x \in [-1, 1], t \in [0, 1] \, , \\ u(0, x) = - \sin(\pi x) \, , \\ u(t, -1) = u(t, 1) = 0 \, , \end{gather*}\]
which we solve with Physics-Informed Neural Networks. Here is an example using the low-level API:
```julia
using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, LineSearches
using DomainSets: Interval
using IntervalSets: leftendpoint, rightendpoint

@parameters t, x
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)
Dxx = Differential(x)^2

# 1-D viscous Burgers' equation
eq = Dt(u(t, x)) + u(t, x) * Dx(u(t, x)) - (0.01 / pi) * Dxx(u(t, x)) ~ 0

# Initial and boundary conditions
bcs = [u(0, x) ~ -sin(pi * x),
    u(t, -1) ~ 0.0,
    u(t, 1) ~ 0.0,
    u(t, -1) ~ u(t, 1)]

# Space and time domains
domains = [t ∈ Interval(0.0, 1.0),
    x ∈ Interval(-1.0, 1.0)]

# Neural network
chain = Chain(Dense(2, 16, σ), Dense(16, 16, σ), Dense(16, 1))
strategy = QuadratureTraining(; abstol = 1e-6, reltol = 1e-6, batch = 200)

indvars = [t, x]
depvars = [u(t, x)]
@named pde_system = PDESystem(eq, bcs, domains, indvars, depvars)
```
```julia
discretization = PhysicsInformedNN(chain, strategy)
sym_prob = symbolic_discretize(pde_system, discretization)

phi = sym_prob.phi
pde_loss_functions = sym_prob.loss_functions.pde_loss_functions
bc_loss_functions = sym_prob.loss_functions.bc_loss_functions
```
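`symbolic_discretize` exposes the ingredients of the training problem so they can be inspected and recombined by hand. For comparison, the higher-level `discretize` assembles the same pieces into an `OptimizationProblem` in a single call; a minimal sketch (the name `prob_highlevel` is chosen here to avoid clashing with the `prob` built manually below):

```julia
# High-level path: builds the OptimizationProblem directly from the PDESystem,
# skipping the manual assembly of loss functions shown in this tutorial
prob_highlevel = discretize(pde_system, discretization)
```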
```julia
callback = function (p, l)
    println("loss: ", l)
    println("pde_losses: ", map(l_ -> l_(p.u), pde_loss_functions))
    println("bcs_losses: ", map(l_ -> l_(p.u), bc_loss_functions))
    return false
end

loss_functions = [pde_loss_functions; bc_loss_functions]
loss_function(θ, p) = sum(map(l -> l(θ), loss_functions))
f_ = OptimizationFunction(loss_function, AutoZygote())
prob = OptimizationProblem(f_, sym_prob.flat_init_params)
```
```julia
res = solve(prob, BFGS(linesearch = BackTracking()); maxiters = 3000)
```

The solver returns `retcode: Success` along with the trained network parameters in `res.u`.

And some analysis:
```julia
using Plots

ts, xs = [leftendpoint(d.domain):0.01:rightendpoint(d.domain) for d in domains]

u_predict_contourf = reshape([first(phi([t, x], res.u)) for t in ts for x in xs],
    length(xs), length(ts))
plot(ts, xs, u_predict_contourf, linetype = :contourf, title = "predict")

u_predict = [[first(phi([t, x], res.u)) for x in xs] for t in ts]
# With the 0.01 time step, ts[11] = 0.1 and ts[51] = 0.5
p1 = plot(xs, u_predict[11], title = "t = 0.1");
p2 = plot(xs, u_predict[51], title = "t = 0.5");
p3 = plot(xs, u_predict[end], title = "t = 1");
plot(p1, p2, p3)
```
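Beyond whole slices, the trained surrogate can also be queried at individual points through `phi`; a short sketch, where the coordinates `(0.5, 0.0)` are arbitrary sample values chosen for illustration:

```julia
# Predicted solution at t = 0.5, x = 0.0; phi returns a length-1 output,
# so `first` extracts the scalar value
u_val = first(phi([0.5, 0.0], res.u))
println("u(0.5, 0.0) ≈ ", u_val)
```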