The NeuralPDE discretize function allows for specifying an adaptive loss function strategy, which improves training performance by reweighting the equations as necessary to ensure the boundary conditions are well-satisfied, even in ill-conditioned scenarios. The following are the options for the adaptive_loss:

NeuralPDE.NonAdaptiveLoss — Type
NonAdaptiveLoss{T}(; pde_loss_weights = 1,
bc_loss_weights = 1,
additional_loss_weights = 1)

A way of specifying fixed weights for the components of the loss function in the total sum; the weights do not change during optimization.
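As a minimal sketch of how this might be used (assuming the usual `PhysicsInformedNN` constructor, which accepts an `adaptive_loss` keyword, and a placeholder Lux network and training strategy):

```julia
using NeuralPDE, Lux

# Weight the boundary conditions 10x more heavily than the PDE residual;
# with NonAdaptiveLoss these weights stay constant for the entire run.
adaptive_loss = NonAdaptiveLoss(; pde_loss_weights = 1.0,
                                  bc_loss_weights = 10.0)

# `chain` and `GridTraining(0.1)` are illustrative placeholders for your
# own network and training strategy.
chain = Lux.Chain(Dense(2, 16, tanh), Dense(16, 1))
discretization = PhysicsInformedNN(chain, GridTraining(0.1);
                                   adaptive_loss = adaptive_loss)
```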

NeuralPDE.GradientScaleAdaptiveLoss — Type
GradientScaleAdaptiveLoss(reweight_every;
weight_change_inertia = 0.9,
pde_loss_weights = 1,
bc_loss_weights = 1,
additional_loss_weights = 1)

A way of adaptively reweighting the components of the loss function in the total sum such that the BC_i loss weights are scaled by the exponential moving average of max(|∇pde_loss|) / mean(|∇bc_i_loss|).

Positional Arguments

• reweight_every: how often to reweight the BC loss functions, measured in iterations. Reweighting is somewhat expensive since it involves evaluating the gradient of each component loss function.

Keyword Arguments

• weight_change_inertia: a real number that represents the inertia of the exponential moving average of the BC weight changes.

References

Sifan Wang, Yujun Teng, Paris Perdikaris, "Understanding and mitigating gradient pathologies in physics-informed neural networks", https://arxiv.org/abs/2001.04536v1
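A short sketch of constructing this loss (the constructor arguments are from the signature above; the reweighting interval of 100 iterations is an arbitrary illustrative choice):

```julia
using NeuralPDE

# Recompute the gradient-based BC weights every 100 iterations; the
# exponential moving average keeps 90% of the previous weight
# (weight_change_inertia = 0.9, the documented default).
adaptive_loss = GradientScaleAdaptiveLoss(100; weight_change_inertia = 0.9,
                                          pde_loss_weights = 1.0,
                                          bc_loss_weights = 1.0)
```

This would then be passed to the discretizer the same way as any other adaptive loss, e.g. via the `adaptive_loss` keyword of `PhysicsInformedNN`.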

NeuralPDE.MiniMaxAdaptiveLoss — Type
MiniMaxAdaptiveLoss(reweight_every;
pde_loss_weights = 1,
bc_loss_weights = 1,
additional_loss_weights = 1)

A way of adaptively reweighting the components of the loss function in the total sum such that the loss weights are maximized by an internal optimiser, which leads to a behavior where loss functions that have not been satisfied get a greater weight.

Positional Arguments

• reweight_every: how often to reweight the PDE and BC loss functions, measured in iterations. Reweighting is cheap since it reuses the values of the loss functions generated during the main optimisation loop.

Keyword Arguments

• pde_max_optimiser: a Flux.Optimise.AbstractOptimiser that is used internally to maximize the weights of the PDE loss functions
• bc_max_optimiser: a Flux.Optimise.AbstractOptimiser that is used internally to maximize the weights of the BC loss functions

References

Levi McClenny, Ulisses Braga-Neto, "Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism", https://arxiv.org/abs/2009.04544
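A sketch of constructing the minimax loss (the interval of 100 iterations is an arbitrary illustrative choice; the internal maximizing optimisers are left at their defaults here since their default values are not stated in this section):

```julia
using NeuralPDE

# Re-solve the weights every 100 iterations: loss components that remain
# poorly satisfied are pushed toward larger weights by the internal
# maximizing optimisers.
adaptive_loss = MiniMaxAdaptiveLoss(100; pde_loss_weights = 1.0,
                                    bc_loss_weights = 1.0)
```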