SciMLBenchmarks.jl holds webpages, PDFs, and notebooks showing the benchmarks for the SciML Scientific Machine Learning Software ecosystem, including:
- Benchmarks of equation solver implementations
- Speed and robustness comparisons of methods for parameter estimation / inverse problems
- Training universal differential equations (and subsets like neural ODEs)
- Training of physics-informed neural networks (PINNs)
- Surrogate comparisons, including radial basis functions, neural operators (DeepONets, Fourier Neural Operators), and more
These benchmarks are meant to represent well-optimized coding style. Benchmarks are preferably run on the provided open benchmarking hardware for full reproducibility (though in some cases, such as with language barriers, this can be difficult). Each benchmark documents the compute devices used along with the package versions necessary for reproduction.
If any of the code from any of the languages can be improved, please open a pull request.
Static outputs in PDF, Markdown, and HTML reside in SciMLBenchmarksOutput.
- Multi-Language Wrapper Benchmarks
- ODE Solver Multi-Language Wrapper Package Work-Precision Benchmarks (MATLAB, SciPy, Julia, deSolve (R))
- Torchdiffeq vs DifferentialEquations.jl (/ DiffEqFlux.jl) Benchmarks
- torchdiffeq vs Julia DiffEqFlux Neural ODE Training Benchmark
- torchsde vs DifferentialEquations.jl / DiffEqFlux.jl
- JITCODE vs SciPy vs DifferentialEquations.jl on large network dynamics
- DifferentialEquations.jl vs MuJoCo and DiffTaichi
- DiffEqFlux.jl / DifferentialEquations.jl vs Jax on an epidemic model
- DifferentialEquations.jl vs SciPy vs NumbaLSODA on a stiff ODE
- DifferentialEquations.jl vs SciPy vs NumbaLSODA
- Brusselator Stiff Partial Differential Equation Benchmark: Julia DifferentialEquations.jl vs Python SciPy
- Non-stiff Ordinary Differential Equations (ODEs)
- Stiff Ordinary Differential Equations (ODEs)
- Differential-Algebraic Equations (DAEs)
- Method of Lines PDEs
- Filament PDE Discretization Work-Precision Diagrams
- Allen-Cahn Finite Difference Work-Precision Diagrams
- Allen-Cahn Pseudospectral Work-Precision Diagrams
- Burgers' Finite Difference Work-Precision Diagrams
- Burgers' Pseudospectral Work-Precision Diagrams
- KdV Finite Difference Work-Precision Diagrams
- KdV Pseudospectral Work-Precision Diagrams
- Kuramoto–Sivashinsky Finite Difference Work-Precision Diagrams
- Kuramoto–Sivashinsky Pseudospectral Work-Precision Diagrams
- Dynamical ODEs
- N-body problems
- Nonstiff SDEs
- Stiff SDEs
- Nonstiff DDEs
- Stiff DDEs
- Jump Equations
- Parameter Estimation
- Lorenz Equation Parameter Estimation by Optimization Methods
- Bayesian Lotka-Volterra Parameter Estimation
- Bayesian Lorenz Equation Estimation
- Bayesian FitzHugh-Nagumo Equation Estimation
- Lotka Volterra Equation Parameter Estimation by Optimization Methods
- FitzHugh-Nagumo Equation Parameter Estimation by Optimization Methods
- Physics-Informed Neural Network (Neural Network PDE Solver) Cost Function Benchmarks
- Allen-Cahn PDE Physics-Informed Neural Network (PINN) Loss Function Error vs Time Benchmarks
- Diffusion Equation Physics-Informed Neural Network (PINN) Loss Function Error vs Time Benchmarks
- Hamilton-Jacobi PDE Physics-Informed Neural Network (PINN) Loss Function Error vs Time Benchmarks
- Level Set PDE Physics-Informed Neural Network (PINN) Loss Function Error vs Time Benchmarks
- Nernst-Planck PDE Physics-Informed Neural Network (PINN) Loss Function Error vs Time Benchmarks
- Physics-Informed Neural Network (Neural Network PDE Solver) Optimizer Benchmarks
- Diffusion Equation Physics-Informed Neural Network (PINN) Optimizer Benchmarks
- 1D Nernst-Planck Equation Physics-Informed Neural Network (PINN) Optimizer Benchmarks
- Allen-Cahn Equation Physics-Informed Neural Network (PINN) Optimizer Benchmarks
- Burgers' Equation Physics-Informed Neural Network (PINN) Optimizer Benchmarks
- Hamilton-Jacobi Equation Physics-Informed Neural Network (PINN) Optimizer Benchmarks
- Poisson Equation Physics-Informed Neural Network (PINN) Optimizer Benchmarks
The following tests were developed for the paper Adaptive Methods for Stochastic Differential Equations via Natural Embeddings and Rejection Sampling with Memory. These notebooks track their latest developments.
The following is a quick summary of the benchmarks. These paint broad strokes over the set of tested equations, and some specific examples may differ.
- OrdinaryDiffEq.jl's methods are the most efficient by a good amount.
- The `Vern` methods tend to do the best in every benchmark of this category.
- At lower tolerances, `Tsit5` does well consistently.
- ARKODE and Hairer's `dop853` perform very similarly, but are both far less efficient than the `Vern` methods.
- The multistep methods, like `lsoda`, tend not to do very well.
- The ODEInterface multistep method `ddeabm` does not do as well as the other multistep methods.
- ODE.jl's methods are not able to consistently solve the problems.
- Fixed time step methods are less efficient than the adaptive methods.
- In this category, the best methods are much more problem dependent.
- For smaller problems:
  - `TRBDF2` tends to be the most efficient at high tolerances.
  - `Rodas5` tends to be the most efficient at low tolerances.
- For larger problems (Filament PDE):
  - `FBDF` does the best at all normal tolerances.
  - The ESDIRK methods like `KenCarp4` can come close.
- `radau` is always the most efficient when tolerances go to the low extreme.
- Fixed time step methods tend to diverge on every tested problem because the high stiffness results in divergence of the Newton solvers.
- ARKODE is very inconsistent and requires a lot of tweaking in order to not diverge on many of the tested problems. When it doesn't diverge, the similar algorithms in OrdinaryDiffEq.jl (`KenCarp4`) are much more efficient in most cases.
- ODE.jl and GeometricIntegrators.jl fail to converge on any of the tested problems.
- Higher order (generally order >= 6) symplectic integrators are much more efficient than their lower order counterparts.
- For high accuracy, using a symplectic integrator is not preferred; the other integrators achieve low enough error that they do not drift, so the symplectic methods' extra cost is unnecessary.
- In this class, the `DPRKN` methods are by far the most efficient. The `Vern` methods do well despite not being specific to the domain.
- For simple 1-dimensional SDEs at low accuracy, the `RKMil` methods can do well. Beyond that, they are simply outclassed.
- The `SRI` methods are very similar within-class on the simple SDEs.
- `SRA3` is the most efficient when applicable and the tolerances are low.
- Generally, only low accuracy is necessary to get to the sampling error of the mean.
- The adaptive method is very conservative with error estimates.
- The high order adaptive methods (`SRIW1`) generally do well on stiff problems.
- The "standard" low-order implicit methods, like `ImplicitRK`, do not do well on all stiff problems. Some exceptions apply to well-behaved problems like the Stochastic Heat Equation.
- The efficiency ranking tends to match the ODE tests, but the cutoff from low to high tolerance is lower.
- `Tsit5` does well in a large class of problems here.
- The `Vern` methods do well in low tolerance cases.
- The Rosenbrock methods, specifically `Rodas5`, perform well.
- Broadly, two different approaches have been used: Bayesian inference and optimization algorithms.
- In general, the optimization algorithms perform more accurately, but that can be attributed to the larger number of data points used in the optimization cases; the Bayesian approach tends to be the slower of the two, hence fewer data points are used, and its accuracy can increase when sufficient data is used.
- Among the available optimization algorithms, BBO from the BlackBoxOptim package and `GN_CRS2_LM` from the NLopt package perform best in the global case, while `LD_SLSQP`, `LN_BOBYQA`, and `LN_NELDERMEAD` from NLopt perform best in the local case.
- Another algorithm in use is QuadDIRECT; it gives very good results on the shorter problems but does not do very well on the longer ones.
- The choice of global versus local optimization makes a huge difference in the timings. BBO tends to find the correct solution in a global optimization setup. For local optimization, most methods in NLopt, like `:LN_BOBYQA`, solve the problem very fast but require a good initial condition.
- The different backend options available for the Bayesian methods offer some tradeoffs between time, accuracy, and control. Sufficiently high accuracy can be achieved with any of the backends by fine-tuning the step size, the constraints on the parameters, the tightness of the priors, and the number of iterations.
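To make the global-versus-local distinction above concrete, here is a hypothetical minimal sketch (not the benchmark code itself; the model, synthetic data, bounds, and initial guess are made up for illustration) of fitting Lotka-Volterra parameters with NLopt's derivative-free `LN_BOBYQA` through the Optimization.jl interface:

```julia
# Hypothetical sketch: local optimization for ODE parameter estimation.
using OrdinaryDiffEq, Optimization, OptimizationNLopt

function lotka!(du, u, p, t)
    du[1] = p[1]*u[1] - p[2]*u[1]*u[2]
    du[2] = -p[3]*u[2] + p[4]*u[1]*u[2]
end

ptrue = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka!, [1.0, 1.0], (0.0, 10.0), ptrue)
ts = 0.0:0.5:10.0
data = Array(solve(prob, Tsit5(), saveat = ts))  # synthetic "measurements"

# L2 loss between the data and a solve under candidate parameters p
function loss(p, _)
    sol = solve(remake(prob, p = p), Tsit5(), saveat = ts)
    return sum(abs2, Array(sol) .- data)
end

# Local optimization: fast, but requires a good initial guess near ptrue
optprob = OptimizationProblem(loss, [1.2, 0.8, 2.8, 0.8],
                              lb = fill(0.1, 4), ub = fill(5.0, 4))
sol = solve(optprob, NLopt.LN_BOBYQA())
```

Swapping the local solver for a global one (e.g. BBO from BlackBoxOptim) removes the dependence on a good initial guess at the cost of many more loss evaluations.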
To run the tutorials interactively via Jupyter notebooks and benchmark on your own machine:
- Run Weave for the file (or folder) you are interested in
- Activate the appropriate environment
- Open and run the notebook.
Note: Since notebooks default to looking for a Project.toml file at the same level or parent folder, you might need to move the notebook to the folder with the appropriate Project.toml.
```julia
]activate .
]instantiate
using SciMLBenchmarks
SciMLBenchmarks.weave_file("benchmarks/Jumps", "Diffusion_CTRW.jmd", [:notebook])
```

```julia
]activate benchmarks/Jumps
```

Then move `Diffusion_CTRW.ipynb` to "benchmarks/Jumps" and open the notebook.
All of the files are generated from the Weave.jl files in the `benchmarks` folder. To run the generation process, do for example:
```julia
]activate SciMLBenchmarks # Get all of the packages
using SciMLBenchmarks
SciMLBenchmarks.weave_file("NonStiffODE", "linear_wpd.jmd")
```
To generate all of the files in a folder, for example, run:
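A minimal sketch, assuming the `weave_folder` helper present in recent versions of the package (the folder name matches the earlier example):

```julia
using SciMLBenchmarks
SciMLBenchmarks.weave_folder("NonStiffODE")
```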
To generate all of the notebooks, do:
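A hedged sketch, assuming the `open_notebooks` helper is available in your version of the package (if not, weave each folder with the `[:notebook]` output target as shown above):

```julia
using SciMLBenchmarks
SciMLBenchmarks.open_notebooks()
```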
Each of the benchmarks displays the computer characteristics at the bottom of the benchmark. Since performance-necessary computations are normally performed on compute clusters, the official benchmarks use a workstation with an AMD EPYC 7502 32-Core Processor @ 2.50GHz to match the performance characteristics of a standard node in a high performance computing (HPC) cluster or cloud computing setup.
To see benchmark results before merging, click into the BuildKite build, open the Artifacts tab, and inspect the generated results.