Investigating symbolic_discretize with the PhysicsInformedNN Discretizer for the 1-D Burgers' Equation
Let's consider the Burgers' equation:
\[\begin{gather*} \partial_t u + u \, \partial_x u - (0.01 / \pi) \, \partial_x^2 u = 0 \, , \quad x \in [-1, 1], \; t \in [0, 1] \, , \\ u(0, x) = - \sin(\pi x) \, , \\ u(t, -1) = u(t, 1) = 0 \, , \end{gather*}\]
and solve it with Physics-Informed Neural Networks. Here is an example of using the low-level symbolic_discretize API:
using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, LineSearches
using ModelingToolkit: Interval, infimum, supremum
@parameters t, x
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)
Dxx = Differential(x)^2
# Burgers' equation: one spatial dimension, two independent variables (t and x)
eq = Dt(u(t, x)) + u(t, x) * Dx(u(t, x)) - (0.01 / pi) * Dxx(u(t, x)) ~ 0
# Initial and boundary conditions
bcs = [u(0, x) ~ -sin(pi * x),
u(t, -1) ~ 0.0,
u(t, 1) ~ 0.0,
u(t, -1) ~ u(t, 1)]
# Space and time domains
domains = [t ∈ Interval(0.0, 1.0),
x ∈ Interval(-1.0, 1.0)]
# Neural network
chain = Chain(Dense(2, 16, σ), Dense(16, 16, σ), Dense(16, 1))
strategy = QuadratureTraining(; abstol = 1e-6, reltol = 1e-6, batch = 200)
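# (Optional sketch, not part of the original tutorial): other NeuralPDE training
# strategies can be swapped in here, for example a fixed grid or random resampling:
#   strategy = GridTraining(0.05)        # evaluate residuals on a 0.05-spaced grid
#   strategy = StochasticTraining(256)   # sample 256 random points each iteration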
indvars = [t, x]
depvars = [u(t, x)]
@named pde_system = PDESystem(eq, bcs, domains, indvars, depvars)
discretization = PhysicsInformedNN(chain, strategy)
sym_prob = symbolic_discretize(pde_system, discretization)
phi = sym_prob.phi
pde_loss_functions = sym_prob.loss_functions.pde_loss_functions
bc_loss_functions = sym_prob.loss_functions.bc_loss_functions
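# Sanity check (illustrative, not in the original tutorial): each loss function maps a
# flat parameter vector to a scalar, so they can be evaluated at the initial parameters.
initial_pde_losses = map(l -> l(sym_prob.flat_init_params), pde_loss_functions)
initial_bc_losses = map(l -> l(sym_prob.flat_init_params), bc_loss_functions)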
callback = function (p, l)
println("loss: ", l)
println("pde_losses: ", map(l_ -> l_(p.u), pde_loss_functions))
println("bcs_losses: ", map(l_ -> l_(p.u), bc_loss_functions))
return false
end
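# Note: to print these diagnostics during optimization, pass the callback to `solve`
# via its `callback` keyword, e.g. solve(prob, ...; callback = callback).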
loss_functions = [pde_loss_functions; bc_loss_functions]
loss_function(θ, p) = sum(map(l -> l(θ), loss_functions))
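# A variation (hypothetical, not part of the original tutorial): weight the boundary
# losses relative to the PDE residual when combining them. This weighted version could
# be passed to OptimizationFunction below in place of loss_function.
bc_weight = 10.0  # hypothetical weight; tune for the problem at hand
weighted_loss_function(θ, p) = sum(l -> l(θ), pde_loss_functions) +
                               bc_weight * sum(l -> l(θ), bc_loss_functions)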
f_ = OptimizationFunction(loss_function, AutoZygote())
prob = OptimizationProblem(f_, sym_prob.flat_init_params)
res = solve(prob, BFGS(linesearch = BackTracking()); maxiters = 3000)
retcode: Success
u: ComponentVector{Float64}(layer_1 = (weight = [-0.15157729958281696 -1.5304399303804384; 1.9753980920528478 -4.126734717393059; … ; -0.7356965753241866 -1.3091224080733808; 1.9541589798682255 6.246827485159523], bias = [0.9667974690177452, -2.3677895284827852, -2.4283408085750176, 0.27767674098598333, 5.928295936713129, 1.8793150483997827, -0.652300141790219, -4.390989187825131, 2.56198196069863, -0.18582789497882082, -4.05607288768939, 6.3566821839407925, 1.7307559405722577, 0.3211537060424949, 0.7575493747204746, -1.7701221492785797]), layer_2 = (weight = [0.011384799268164651 1.1809215728918663 … 1.6829571202688007 -0.14938178418799267; -0.746534087456128 0.22103719442943734 … 0.4969928088026082 -0.4992240954831915; … ; 0.1963502376200682 0.4374831700782459 … 0.8123560358146109 0.13690437447109938; 1.7654733057320509 0.35142021638507925 … -2.8194675600538917 4.41182040158797], bias = [0.6250379062960976, 0.24680852140017692, 0.5127808010228657, 0.6011130422716648, -0.6795642916212078, -0.665176382672152, -0.8770874851691729, -2.091085503783298, 0.27974916411140416, -0.5138482496440949, -0.09192293057674925, 0.6202231855469239, -0.16313958330776276, -1.597314600035097, 0.1307415581044089, -0.03300130072650812]), layer_3 = (weight = [0.25894416403547177 -0.24536228152864156 … -4.074265610410057 3.9485908395552856], bias = [0.1990949396522396]))
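If the optimizer stalls before the losses are small enough, a common pattern is to continue training from the parameters it returned. The following is a minimal sketch, not part of the run above; it assumes remake (from the loaded SciML packages) is in scope:

prob_cont = remake(prob, u0 = res.u)
res = solve(prob_cont, BFGS(linesearch = BackTracking()); maxiters = 1000)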
And some analysis:
using Plots
ts, xs = [infimum(d.domain):0.01:supremum(d.domain) for d in domains]
u_predict_contourf = reshape([first(phi([t, x], res.u)) for t in ts for x in xs],
length(xs), length(ts))
plot(ts, xs, u_predict_contourf, linetype = :contourf, title = "predict")
u_predict = [[first(phi([t, x], res.u)) for x in xs] for t in ts]
p1 = plot(xs, u_predict[11], title = "t = 0.1");
p2 = plot(xs, u_predict[51], title = "t = 0.5");
p3 = plot(xs, u_predict[end], title = "t = 1");
plot(p1, p2, p3)
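Since u_predict holds one spatial slice per time step, it is also easy to animate how the steep front forms near x = 0. Here is a small sketch using Plots' @animate macro (the output file name burgers_pinn.gif is just an example):

anim = @animate for (i, t) in enumerate(ts)
    plot(xs, u_predict[i], ylims = (-1.1, 1.1), label = false,
        title = "t = $(round(t, digits = 2))")
end
gif(anim, "burgers_pinn.gif", fps = 10)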