Investigating symbolic_discretize with the PhysicsInformedNN Discretizer for the 1-D Burgers' Equation

Let's consider the Burgers' equation:

\[\begin{gather*}
\partial_t u + u \, \partial_x u - (0.01 / \pi) \, \partial_x^2 u = 0 \,, \quad x \in [-1, 1], \; t \in [0, 1] \,, \\
u(0, x) = -\sin(\pi x) \,, \\
u(t, -1) = u(t, 1) = 0 \,,
\end{gather*}\]

and solve it with Physics-Informed Neural Networks. Here is an example using the low-level symbolic_discretize API:

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, LineSearches
using ModelingToolkit: Interval, infimum, supremum

@parameters t, x
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)
Dxx = Differential(x)^2

# 1-D Burgers' equation (two independent variables: t and x)
eq = Dt(u(t, x)) + u(t, x) * Dx(u(t, x)) - (0.01 / pi) * Dxx(u(t, x)) ~ 0

# Initial and boundary conditions
bcs = [u(0, x) ~ -sin(pi * x),
    u(t, -1) ~ 0.0,
    u(t, 1) ~ 0.0,
    u(t, -1) ~ u(t, 1)] # redundant given the Dirichlet conditions above, but harmless

# Space and time domains
domains = [t ∈ Interval(0.0, 1.0),
    x ∈ Interval(-1.0, 1.0)]

# Neural network: 2 inputs (t, x), one scalar output u(t, x)
chain = Chain(Dense(2, 16, σ), Dense(16, 16, σ), Dense(16, 1))
strategy = QuadratureTraining(; abstol = 1e-6, reltol = 1e-6, batch = 200)
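
# QuadratureTraining evaluates the integral losses with adaptive quadrature.
# Other NeuralPDE training strategies trade accuracy for speed; two alternatives
# (a sketch -- pick one and pass it to PhysicsInformedNN below instead):
# strategy = GridTraining(0.05)        # fixed grid with spacing 0.05
# strategy = StochasticTraining(256)   # 256 random sample points per loss evaluation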

indvars = [t, x]
depvars = [u(t, x)]
@named pde_system = PDESystem(eq, bcs, domains, indvars, depvars)

discretization = PhysicsInformedNN(chain, strategy)
sym_prob = symbolic_discretize(pde_system, discretization)

phi = sym_prob.phi
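
# phi is the trial solution: phi(coords, θ) evaluates the network at a point
# and returns a 1-element array, hence first(...). A quick sketch with the
# untrained initial parameters:
u0_pred = first(phi([0.0, 0.5], sym_prob.flat_init_params))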

pde_loss_functions = sym_prob.loss_functions.pde_loss_functions
bc_loss_functions = sym_prob.loss_functions.bc_loss_functions
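
# Each entry maps a flat parameter vector to a scalar residual -- one function
# per PDE and per boundary condition. For example, with the initial parameters:
pde_loss_functions[1](sym_prob.flat_init_params)
map(l -> l(sym_prob.flat_init_params), bc_loss_functions)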

# The callback receives the optimization state p and the current loss l;
# p.u holds the current parameter vector.
callback = function (p, l)
    println("loss: ", l)
    println("pde_losses: ", map(l_ -> l_(p.u), pde_loss_functions))
    println("bcs_losses: ", map(l_ -> l_(p.u), bc_loss_functions))
    return false
end

loss_functions = [pde_loss_functions; bc_loss_functions]

loss_function(θ, p) = sum(map(l -> l(θ), loss_functions))
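
# The total loss above is an unweighted sum. If the boundary terms dominate or
# lag behind, a common variant reweights them; a sketch with a hypothetical weight:
bc_weight = 10.0  # hypothetical, tune per problem
weighted_loss(θ, p) = sum(l -> l(θ), pde_loss_functions) +
                      bc_weight * sum(l -> l(θ), bc_loss_functions)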

f_ = OptimizationFunction(loss_function, Optimization.AutoZygote())
prob = OptimizationProblem(f_, sym_prob.flat_init_params)

res = solve(prob, BFGS(linesearch = BackTracking()); callback = callback, maxiters = 3000)
retcode: Success
u: ComponentVector{Float64}(layer_1 = (weight = [...], bias = [...]), layer_2 = (weight = [...], bias = [...]), layer_3 = (weight = [...], bias = [...]))
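
As a quick sanity check (a sketch, reusing phi from above), the trained network should approximately satisfy the boundary and initial conditions:

# Boundary values should be near 0; u(0, 0.5) should be near -sin(π/2) = -1.
first(phi([0.5, -1.0], res.u)), first(phi([0.5, 1.0], res.u))
first(phi([0.0, 0.5], res.u))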

And some analysis:

using Plots

ts, xs = [infimum(d.domain):0.01:supremum(d.domain) for d in domains]
# Evaluate the trained trial solution on the grid; x varies fastest,
# matching the (length(xs), length(ts)) reshape.
u_predict_contourf = reshape([first(phi([t, x], res.u)) for t in ts for x in xs],
    length(xs), length(ts))
plot(ts, xs, u_predict_contourf, linetype = :contourf, title = "predict")

u_predict = [[first(phi([t, x], res.u)) for x in xs] for t in ts]
p1 = plot(xs, u_predict[11], title = "t = 0.1"); # ts has step 0.01, so t = 0.1 is index 11
p2 = plot(xs, u_predict[51], title = "t = 0.5");
p3 = plot(xs, u_predict[end], title = "t = 1");
plot(p1, p2, p3)
(Plot: predicted solution u(x) at t = 0.1, t = 0.5, and t = 1.)
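
For reference, the t = 0 slice can be compared directly against the known initial condition (a sketch):

# The trained network at t = 0 should closely track -sin(πx).
plot(xs, [first(phi([0.0, x], res.u)) for x in xs], label = "predicted, t = 0")
plot!(xs, -sin.(pi .* xs), label = "-sin(πx)")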