Parallel Ensemble Simulations Interface
Performing Monte Carlo simulations, solving with a predetermined set of initial conditions, and GPU-parallelizing a parameter search all fall under the ensemble simulation interface. This interface allows one to declare a template AbstractSciMLProblem to parallelize, tweak the template to generate each of many trajectories, solve the trajectories in parallel batches, reduce the solutions down to specific answers, and compute summary statistics on the results.
Performing an Ensemble Simulation
Building a Problem
SciMLBase.EnsembleProblem — Type
struct EnsembleProblem{T, T2, T3, T4, T5} <: SciMLBase.AbstractEnsembleProblem
Defines a structure to manage an ensemble (batch) of problems. Each field controls how the ensemble behaves during simulation.
Arguments
- `prob`: The original base problem to replicate or modify.
- `prob_func`: A function that defines how to generate each subproblem.
- `output_func`: A function to post-process each individual simulation result.
- `reduction`: A function to combine results from all simulations.
- `u_init`: The initial container used to accumulate the results.
- `safetycopy`: Whether to copy the problem when creating subproblems (to avoid unintended modifications).
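As a minimal sketch (the problem and helper names here are illustrative, not part of the docstring above), an ensemble that randomizes the initial condition of a scalar ODE and keeps only each trajectory's final state could be built as:
using OrdinaryDiffEq

# Illustrative base problem: exponential growth starting at 0.5 on t ∈ (0, 1)
base_prob = ODEProblem((u, p, t) -> 1.01u, 0.5, (0.0, 1.0))

# prob_func: perturb the initial condition for trajectory i
prob_func(prob, i, repeat) = remake(prob, u0 = rand() * prob.u0)

# output_func: keep only the final state; `false` means "do not rerun this trajectory"
output_func(sol, i) = (last(sol), false)

enprob = EnsembleProblem(base_prob; prob_func = prob_func, output_func = output_func)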
Solving the Problem
SciMLBase.__solve — Method
sim = solve(enprob, alg, ensemblealg = EnsembleThreads(); kwargs...)
Solves the ensemble problem enprob with the algorithm alg using the ensembler ensemblealg. The keyword arguments take in the arguments for the common solver interface and will pass them to the solver. The ensemblealg is optional and defaults to EnsembleThreads(). The special keyword arguments to note are:
- trajectories: The number of simulations to run. This argument is required.
- batch_size: The size of the batches on which the reductions are applied. Defaults to trajectories.
- pmap_batch_size: The size of the pmap batches. Defaults to batch_size÷100 > 0 ? batch_size÷100 : 1.
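For example, assuming an ensemble problem enprob has already been constructed (as sketched above) and OrdinaryDiffEq is loaded, a call that runs 1000 trajectories in batches of 100 on threads might look like:
sim = solve(enprob, Tsit5(), EnsembleThreads(); trajectories = 1000, batch_size = 100)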
EnsembleAlgorithms
The choice of ensemble algorithm allows for control over how the multiple trajectories are handled. Currently, the ensemble algorithm types are:
SciMLBase.EnsembleSerial — Type
struct EnsembleSerial <: SciMLBase.BasicEnsembleAlgorithm
Basic ensemble solver which uses no parallelism and runs the problems in serial.
SciMLBase.EnsembleThreads — Type
struct EnsembleThreads <: SciMLBase.BasicEnsembleAlgorithm
The default. This uses multithreading. It's local (single computer, shared memory) parallelism only. Lowest parallelism overhead for small problems.
SciMLBase.EnsembleDistributed — Type
struct EnsembleDistributed <: SciMLBase.BasicEnsembleAlgorithm
Uses pmap internally. It will use as many processors as you have Julia processes. To add more processes, use addprocs(n). These processes can be placed onto multiple different machines in order to parallelize across an entire cluster via passwordless SSH. See Julia's documentation for more details.
Recommended for the case when each trajectory calculation isn't "too quick" (at least about a millisecond each), when the calculations of a given problem allocate memory, or when you have a very large ensemble. This can be true even on a single shared-memory system because distributed processes use separate garbage collectors and thus can be even faster than EnsembleThreads if the computation is complex enough.
SciMLBase.EnsembleSplitThreads — Type
struct EnsembleSplitThreads <: SciMLBase.BasicEnsembleAlgorithm
A mixture of distributed computing with threading. The optimal setup for this is to have one process on each node and then multithread within each node. This ensembler will simply use the node setup provided by the Julia Distributed processes, so it is recommended that you set up the processes in this fashion before using this ensembler. See Julia's Distributed documentation for more information.
DiffEq Only (ODEProblem, SDEProblem)
The following GPU-based ensemble algorithms are available only for DifferentialEquations.jl problem types, and the backend type is chosen according to the GPU vendor:
| GPU Manufacturer | GPU Kernel Language | Julia Support Package | Backend Type |
|---|---|---|---|
| NVIDIA | CUDA | CUDA.jl | CUDA.CUDABackend() |
| AMD | ROCm | AMDGPU.jl | AMDGPU.ROCBackend() |
| Intel | oneAPI | oneAPI.jl | oneAPI.oneAPIBackend() |
| Apple (M-Series) | Metal | Metal.jl | Metal.MetalBackend() |
- EnsembleGPUArray(): Requires installing and using DiffEqGPU. This uses a GPU for computing the ensemble with hyperparallelism. It will automatically recompile your Julia functions to the GPU. A standard GPU sees a 5x performance increase over a 16-core Xeon CPU. However, there are limitations on what functions can auto-compile in this fashion; please see DiffEqGPU for more details.
- EnsembleGPUKernel(): Requires installing and using DiffEqGPU. This uses a GPU for computing the ensemble with hyperparallelism by building a custom GPU kernel. This can have drastically less overhead (for example, achieving a 15x acceleration against Jax and PyTorch; see this paper for more details) but has limitations on what kinds of problems are compatible. See DiffEqGPU for more details.
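As a hedged sketch following the usage patterns in the DiffEqGPU documentation (this assumes an NVIDIA GPU with CUDA.jl; swap the backend type per the table above for other vendors), an EnsembleGPUKernel run over a toy out-of-place, StaticArrays-based ODE might look like:
using DiffEqGPU, OrdinaryDiffEq, StaticArrays, CUDA

# EnsembleGPUKernel requires out-of-place functions with static arrays (Float32 recommended)
f(u, p, t) = SA[p[1] * u[1]]
prob = ODEProblem(f, SA[0.5f0], (0.0f0, 1.0f0), SA[1.01f0])
prob_func = (prob, i, repeat) -> remake(prob, u0 = SA[rand(Float32)])
eprob = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)

sim = solve(eprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend());
    trajectories = 10_000, adaptive = false, dt = 0.01f0)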
Choosing an Ensembler
For example, EnsembleThreads() is invoked by:
solve(ensembleprob, alg, EnsembleThreads(); trajectories = 1000)
Solution Type
The resulting type is an EnsembleSimulation, which includes the array of solutions.
Plot Recipe
There is a plot recipe for an AbstractEnsembleSimulation which composes all of the plot recipes for the component solutions. The keyword arguments are passed along. A useful argument is linealpha, which changes the transparency of the plots. An additional argument is idxs, which allows you to choose which components of the solution to plot. For example, if the differential equation is a vector of 9 values, idxs = 1:2:9 will plot only the solutions of the odd components. Another additional argument is zcolors (an alias of marker_z), which allows you to pass a zcolor for each series. For details about zcolor, see the Series documentation for Plots.jl.
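For instance, assuming an ensemble solution sim from a 9-component system, plotting only the odd components with some transparency would look like:
using Plots
plot(sim, idxs = 1:2:9, linealpha = 0.3)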
Analyzing an Ensemble Experiment
Analysis tools are included for generating summary statistics and summary plots for an EnsembleSimulation.
To use this functionality, import the analysis module via:
using SciMLBase.EnsembleAnalysis
Time steps vs time points
For the summary statistics, there are two types. You can either summarize by time steps or by time points. Summarizing by time steps assumes that the i-th time step of every trajectory corresponds to the same time point, i.e. the integrator used a fixed dt or the values were saved using saveat. Summarizing by time points requires interpolating the solution.
SciMLBase.EnsembleAnalysis.get_timestep — Function
get_timestep(sim, i)
Returns an iterator of each simulation at time step i.
SciMLBase.EnsembleAnalysis.get_timepoint — Function
get_timepoint(sim, t)
Returns an iterator of each simulation at time point t.
SciMLBase.EnsembleAnalysis.componentwise_vectors_timestep — Function
componentwise_vectors_timestep(sim, i)
Returns a vector of each simulation at time step i.
SciMLBase.EnsembleAnalysis.componentwise_vectors_timepoint — Function
componentwise_vectors_timepoint(sim, t)
Returns a vector of each simulation at time point t.
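A small illustrative sketch, assuming an ensemble solution sim whose trajectories share compatible saved time steps:
using SciMLBase.EnsembleAnalysis

# State of every trajectory at the 3rd saved time step
step_states = collect(get_timestep(sim, 3))

# Every trajectory interpolated to t = 0.5
point_states = collect(get_timepoint(sim, 0.5))

# The same time point data rearranged component-wise
comp_vectors = componentwise_vectors_timepoint(sim, 0.5)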
Summary Statistics Functions
Single Time Statistics
The available functions for time steps are:
SciMLBase.EnsembleAnalysis.timestep_mean — Function
timestep_mean(sim, i)
Computes the mean of each component at time step i.
SciMLBase.EnsembleAnalysis.timestep_median — Function
timestep_median(sim, i)
Computes the median of each component at time step i.
SciMLBase.EnsembleAnalysis.timestep_quantile — Function
timestep_quantile(sim, q, i)
Computes the quantile q of each component at time step i.
SciMLBase.EnsembleAnalysis.timestep_meanvar — Function
timestep_meanvar(sim, i)
Computes the mean and variance of each component at time step i.
SciMLBase.EnsembleAnalysis.timestep_meancov — Function
timestep_meancov(sim, i, j)
Computes the mean at time steps i and j, and the covariance, for each component.
SciMLBase.EnsembleAnalysis.timestep_meancor — Function
timestep_meancor(sim, i, j)
Computes the mean at time steps i and j, and the correlation, for each component.
SciMLBase.EnsembleAnalysis.timestep_weighted_meancov — Function
timestep_weighted_meancov(sim, W, i, j)
Computes the mean at time steps i and j, and the covariance weighted by W, for each component.
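For example, assuming an ensemble solution sim saved on a fixed time grid:
m3 = timestep_mean(sim, 3)              # mean of each component at the 3rd time step
m3, v3 = timestep_meanvar(sim, 3)       # mean and variance at the 3rd time step
q95 = timestep_quantile(sim, 0.95, 3)   # 95th percentile of each component at the 3rd step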
The available functions for time points are:
SciMLBase.EnsembleAnalysis.timepoint_mean — Function
timepoint_mean(sim, t)
Computes the mean of each component at time t.
SciMLBase.EnsembleAnalysis.timepoint_median — Function
timepoint_median(sim, t)
Computes the median of each component at time t.
SciMLBase.EnsembleAnalysis.timepoint_quantile — Function
timepoint_quantile(sim, q, t)
Computes the quantile q of each component at time t.
SciMLBase.EnsembleAnalysis.timepoint_meanvar — Function
timepoint_meanvar(sim, t)
Computes the mean and variance of each component at time t.
SciMLBase.EnsembleAnalysis.timepoint_meancov — Function
timepoint_meancov(sim, t1, t2)
Computes the mean at times t1 and t2, and the covariance, for each component.
SciMLBase.EnsembleAnalysis.timepoint_meancor — Function
timepoint_meancor(sim, t1, t2)
Computes the mean at times t1 and t2, and the correlation, for each component.
SciMLBase.EnsembleAnalysis.timepoint_weighted_meancov — Function
timepoint_weighted_meancov(sim, W, t1, t2)
Computes the mean at times t1 and t2, and the covariance weighted by W, for each component.
Full Timeseries Statistics
Additionally, the following functions are provided for analyzing the full timeseries. The mean and meanvar versions return a DiffEqArray which can be directly plotted. The meancov and meancor versions return a matrix of tuples, where each tuple is (mean_t1, mean_t2, cov or cor).
The available functions for the time steps are:
- timeseries_steps_mean
- timeseries_steps_median
- timeseries_steps_quantile
- timeseries_steps_meanvar
- timeseries_steps_meancov
- timeseries_steps_meancor
- timeseries_steps_weighted_meancov
These mirror the corresponding time point functions below, but operate directly on the saved time steps.
The available functions for the time points are:
SciMLBase.EnsembleAnalysis.timeseries_point_mean — Function
timeseries_point_mean(sim, ts)
Computes the mean at each time point in ts.
SciMLBase.EnsembleAnalysis.timeseries_point_median — Function
timeseries_point_median(sim, ts)
Computes the median at each time point in ts.
SciMLBase.EnsembleAnalysis.timeseries_point_quantile — Function
timeseries_point_quantile(sim, q, ts)
Computes the quantile q at each time point in ts.
SciMLBase.EnsembleAnalysis.timeseries_point_meanvar — Function
timeseries_point_meanvar(sim, ts)
Computes the mean and variance at each time point in ts.
SciMLBase.EnsembleAnalysis.timeseries_point_meancov — Function
timeseries_point_meancov(sim, ts)
timeseries_point_meancov(sim, ts1, ts2)
Computes the covariance matrix and means at each time point in ts (or at each pair of points from ts1 and ts2).
SciMLBase.EnsembleAnalysis.timeseries_point_meancor — Function
timeseries_point_meancor(sim, ts)
timeseries_point_meancor(sim, ts1, ts2)
Computes the correlation matrix and means at each time point in ts (or at each pair of points from ts1 and ts2).
SciMLBase.EnsembleAnalysis.timeseries_point_weighted_meancov — Function
timeseries_point_weighted_meancov(sim, W, ts)
timeseries_point_weighted_meancov(sim, W, ts1, ts2)
Computes the weighted covariance matrix and means at each time point in ts (or at each pair of points from ts1 and ts2).
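For instance, a full-timeseries summary over a grid of time points can be computed and, since the meanvar version returns DiffEqArrays, plotted directly:
using Plots
ts = 0:0.1:1
m_series, v_series = timeseries_point_meanvar(sim, ts)
plot(m_series)   # the returned DiffEqArray has a plot recipe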
EnsembleSummary
SciMLBase.EnsembleSummary — Type
struct EnsembleSummary{T, N, Tt, S, S2, S3, S4, S5} <: SciMLBase.AbstractEnsembleSolution{T, N, S}
The EnsembleSummary type is included to help with analyzing the general summary statistics. Two constructors are provided:
EnsembleSummary(sim; quantiles = [0.05, 0.95])
EnsembleSummary(sim, ts; quantiles = [0.05, 0.95])
The first produces a (mean, var) summary at each time step. As with the summary statistics, this assumes that the time steps all correspond to the same time points. The second produces a (mean, var) summary at each time point t in ts. This requires the ability to interpolate the solution. The quantiles keyword determines the qlow and qhigh quantiles at each time point; it defaults to the 5% and 95% quantiles.
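For example, assuming an ensemble solution sim as above:
summ_steps = EnsembleSummary(sim)                                    # summarize at the saved time steps
summ_iqr = EnsembleSummary(sim, 0:0.1:1; quantiles = [0.25, 0.75])   # interpolate and use interquartile bands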
Plot Recipe
The EnsembleSummary comes with a plot recipe for visualizing the summary statistics. The extra keyword arguments are:
- idxs: The solution components to plot. Defaults to plotting all components.
- error_style: The style for plotting the error. Defaults to :ribbon. Other choices are :bars for error bars and :none for no error bars.
- ci_type: Defaults to :quantile, which uses the (qlow, qhigh) quantiles whose limits were determined when constructing the EnsembleSummary. A Gaussian CI of 1.96*(standard error of the mean) can be set using ci_type = :SEM.
One useful argument is fillalpha, which controls the transparency of the ribbon around the mean.
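For example, assuming a summary summ as above:
plot(summ; idxs = 1:2, fillalpha = 0.3)           # ribbons around the mean of components 1 and 2
plot(summ; error_style = :bars, ci_type = :SEM)   # error bars from the standard error of the mean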
Example 1: Solving an ODE With Different Initial Conditions
Random Initial Conditions
Let's test the sensitivity of the linear ODE to its initial condition. To do this, we would like to solve the linear ODE 100 times and plot what the trajectories look like. Let's start by opening up some extra processes so that the computation will be parallelized. Here we will choose to use distributed parallelism, which means that the required functions must be made available to all processes. This can be achieved with the @everywhere macro:
using Distributed
using OrdinaryDiffEq
using Plots
addprocs()
@everywhere using OrdinaryDiffEq
Now let's define the linear ODE, which is our base problem:
# Linear ODE which starts at 0.5 and solves from t=0.0 to t=1.0
prob = ODEProblem((u, p, t) -> 1.01u, 0.5, (0.0, 1.0))
For our ensemble simulation, we would like to change the initial condition around. This is done through the prob_func. This function takes in the base problem and modifies it to create the new problem that the trajectory actually solves. The prob_func has the signature prob_func(prob, i, repeat), where:
- prob is the base problem to be modified
- i is the unique trajectory index (1 to trajectories)
- repeat is the repeat iteration number (starts at 1, increments if output_func returned rerun = true)
Here, we will take the base problem, multiply the initial condition by rand(), and use that for calculating the trajectory:
@everywhere function prob_func(prob, i, repeat)
remake(prob, u0 = rand() * prob.u0)
end
Now we build and solve the EnsembleProblem with this base problem and prob_func:
ensemble_prob = EnsembleProblem(prob, prob_func = prob_func)
sim = solve(ensemble_prob, Tsit5(), EnsembleDistributed(), trajectories = 10)
We can use the plot recipe to plot what the 10 ODEs look like:
plot(sim, linealpha = 0.4)
We note that if we wanted to find out what the initial condition was for a given trajectory, we can retrieve it from the solution. sim[i] returns the ith solution object, sim[i].prob is the problem that specific trajectory solved, and sim[i].prob.u0 would then be the initial condition used in the ith trajectory.
Note: If the problem has callbacks, the functions for the condition and affect! must be named functions (not anonymous functions).
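For example, a hypothetical terminating callback intended for distributed ensembles would use named functions defined on every worker (the threshold here is purely illustrative):
@everywhere condition(u, t, integrator) = u - 1.0        # named condition: fires when u crosses 1
@everywhere affect!(integrator) = terminate!(integrator) # named affect!
@everywhere cb = ContinuousCallback(condition, affect!)

sim = solve(ensemble_prob, Tsit5(), EnsembleDistributed(); trajectories = 10, callback = cb)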
Using multithreading
The previous ensemble simulation can also be parallelized using a multithreading approach, which will make use of the different cores within a single computer. Because the memory is shared across the different threads, it is not necessary to use the @everywhere macro. Instead, the same problem can be implemented simply as:
using OrdinaryDiffEq
prob = ODEProblem((u, p, t) -> 1.01u, 0.5, (0.0, 1.0))
function prob_func(prob, i, repeat)
remake(prob, u0 = rand() * prob.u0)
end
ensemble_prob = EnsembleProblem(prob, prob_func = prob_func)
sim = solve(ensemble_prob, Tsit5(), EnsembleThreads(), trajectories = 10)
using Plots;
plot(sim);
The number of threads to be used has to be defined outside of Julia, in the environment variable JULIA_NUM_THREADS (see Julia's documentation for details).
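For example, starting Julia with julia --threads=4 (or with JULIA_NUM_THREADS=4 set in the environment) and then checking from within the session:
Threads.nthreads()   # number of threads available to EnsembleThreads()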
Pre-Determined Initial Conditions
Often, you may already know what initial conditions you want to use. This can be specified by the i argument of the prob_func. This i is the unique index of each trajectory. So, if we have trajectories = 100, then we have i as some index in 1:100, and it's different for each trajectory.
So, if we wanted to use a grid of evenly spaced initial conditions from 0 to 1, we could simply index a range:
initial_conditions = range(0, stop = 1, length = 100)
function prob_func(prob, i, repeat)
remake(prob, u0 = initial_conditions[i])
end
prob_func (generic function with 1 method)
Note that this snippet only defines the prob_func; no simulations are run until the ensemble problem is solved.
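To run the grid, we then build and solve the ensemble exactly as before (a short sketch continuing from the prob and prob_func above):
ensemble_prob = EnsembleProblem(prob, prob_func = prob_func)
sim = solve(ensemble_prob, Tsit5(), EnsembleThreads(), trajectories = 100)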
Example 2: Solving an SDE with Different Parameters
Now let's solve an SDE with varying parameters. We'll create a Lotka-Volterra system with multiplicative noise. Our Lotka-Volterra system will have as its drift component:
function f(du, u, p, t)
du[1] = p[1] * u[1] - p[2] * u[1] * u[2]
du[2] = -3 * u[2] + u[1] * u[2]
end
f (generic function with 1 method)
For our noise function, we will use multiplicative noise:
function g(du, u, p, t)
du[1] = p[3] * u[1]
du[2] = p[4] * u[2]
end
g (generic function with 1 method)
Now we build the SDE with these functions:
using StochasticDiffEq
p = [1.5, 1.0, 0.1, 0.1]
prob = SDEProblem(f, g, [1.0, 1.0], (0.0, 10.0), p)
SDEProblem with uType Vector{Float64} and tType Float64. In-place: true
Non-trivial mass matrix: false
timespan: (0.0, 10.0)
u0: 2-element Vector{Float64}:
1.0
1.0
This is the base problem for our study. What we would like to do with this experiment is keep the same parameters in the deterministic component each time, but vary the parameters for the amount of noise using 0.3rand(2) as our parameters. Once again, we do this with a prob_func, and here we modify the parameters in prob.p:
# `p` is a global variable, referencing it would be type unstable.
# Using a let block defines a small local scope in which we can
# capture that local `p` which isn't redefined anywhere in that local scope.
# This allows it to be type stable.
prob_func = let p = p
(prob, i, repeat) -> begin
x = 0.3rand(2)
remake(prob, p = [p[1], p[2], x[1], x[2]])
end
end
#1 (generic function with 1 method)
Now we solve the problem 10 times and plot all of the trajectories in phase space:
ensemble_prob = EnsembleProblem(prob, prob_func = prob_func)
sim = solve(ensemble_prob, SRIW1(), trajectories = 10)
using Plots;
plot(sim, linealpha = 0.6, color = :blue, idxs = (0, 1), title = "Phase Space Plot");
plot!(sim, linealpha = 0.6, color = :red, idxs = (0, 2), title = "Phase Space Plot")
We can then summarize this information with the mean/variance bounds using an EnsembleSummary plot. We will take the mean/quantile at every 0.1 time units and directly plot the summary:
summ = EnsembleSummary(sim, 0:0.1:10)
plot(summ, fillalpha = 0.5)
Note that here we used the quantile bounds, which default to [0.05, 0.95] in the EnsembleSummary constructor. We can change to standard error of the mean bounds using ci_type = :SEM in the plot recipe.
Example 3: Using the Reduction to Halt When Estimator is Within Tolerance
In this problem, we will solve the equation just as many times as needed to get the standard error of the mean for the final time point below our tolerance 0.5. Since we only care about the endpoint, we can tell the output_func to discard the rest of the data.
function output_func(sol, i)
    # Keep only the final value; `false` means this trajectory does not need a rerun
    last(sol), false
end
output_func (generic function with 1 method)
Our prob_func will simply randomize the initial condition:
using OrdinaryDiffEq
# Linear ODE which starts at 0.5 and solves from t=0.0 to t=1.0
prob = ODEProblem((u, p, t) -> 1.01u, 0.5, (0.0, 1.0))
function prob_func(prob, i, repeat)
remake(prob, u0 = rand() * prob.u0)
end
prob_func (generic function with 1 method)
Our reduction function will append the data from the current batch to the previous batch, and declare convergence if the standard error of the mean is calculated as sufficiently small:
using Statistics
function reduction(u, batch, I)
    u = append!(u, batch)                                # accumulate this batch's endpoints
    finished = (var(u) / sqrt(last(I))) / mean(u) < 0.5  # convergence criterion
    u, finished
end
reduction (generic function with 1 method)
Then we can define and solve the problem:
prob2 = EnsembleProblem(prob, prob_func = prob_func, output_func = output_func,
reduction = reduction, u_init = Vector{Float64}())
sim = solve(prob2, Tsit5(), trajectories = 10000, batch_size = 20)
EnsembleSolution Solution of length 20 with uType:
Float64
Since batch_size = 20, this means that every 20 simulations, it will take the batch, append the results to the previous batch, calculate (var(u)/sqrt(last(I)))/mean(u), and if that's small enough, exit the simulation. In this case, the simulation exits after only 20 simulations (i.e. after calculating the first batch). This can save a lot of time!
In addition to saving time by checking convergence, we can save memory by reducing between batches. For example, say we only care about the mean at the end once again. Instead of saving the solution at the end for each trajectory, we can instead save the running summation of the endpoints:
function reduction(u, batch, I)
u + sum(batch), false
end
prob2 = EnsembleProblem(prob, prob_func = prob_func, output_func = output_func,
reduction = reduction, u_init = 0.0)
sim2 = solve(prob2, Tsit5(), trajectories = 100, batch_size = 20)
EnsembleSolution Solution of length 1 with uType:
Float64
This will sum up the endpoints after every 20 solutions and save only the running sum. The final result will have sim2.u as simply a number, and thus sim2.u/100 would be the mean.
Example 4: Using the Analysis Tools
In this example, we will show how to analyze an EnsembleSolution. First, let's generate a 10-solution Monte Carlo experiment. For our problem, we will use a 4x2 system of linear stochastic differential equations:
function f(du, u, p, t)
for i in 1:length(u)
du[i] = 1.01 * u[i]
end
end
function σ(du, u, p, t)
for i in 1:length(u)
du[i] = 0.87 * u[i]
end
end
using StochasticDiffEq
prob = SDEProblem(f, σ, ones(4, 2) / 2, (0.0, 1.0)) #prob_sde_2Dlinear
SDEProblem with uType Matrix{Float64} and tType Float64. In-place: true
Non-trivial mass matrix: false
timespan: (0.0, 1.0)
u0: 4×2 Matrix{Float64}:
0.5 0.5
0.5 0.5
0.5 0.5
0.5 0.5
To solve this 10 times, we use the EnsembleProblem constructor and solve with trajectories = 10. Since we wish to compare values at the timesteps, we need to make sure the steps all hit the same times. We thus set adaptive = false and explicitly give a dt.
prob2 = EnsembleProblem(prob)
sim = solve(prob2, SRIW1(), dt = 1 // 2^(3), trajectories = 10, adaptive = false)
EnsembleSolution Solution of length 10 with uType:
RODESolution{Float64, 3, Vector{Matrix{Float64}}, …}
Note that if you don't do the timeseries_steps calculations, this code is compatible with adaptive timestepping. Using adaptivity is usually more efficient!
We can compute the mean and the variance at the 3rd timestep using:
using SciMLBase.EnsembleAnalysis
m, v = timestep_meanvar(sim, 3)
([0.6175071568158279 0.8330984155848069; 0.6405485051939539 0.7555917208935968; 0.7295876464502992 0.5467221336324132; 0.6519772783546165 0.5654660534187264], [0.10471137424678588 0.1831278261682711; 0.1061395850745103 0.08833510299208255; 0.13065607762015763 0.06431786008993498; 0.0308448968499812 0.061243850200474674])
or we can compute the mean and the variance at t = 0.5 using:
m, v = timepoint_meanvar(sim, 0.5)
([0.7785436711701275 0.957820905043207; 0.8092893743868665 0.9298035075615901; 0.93384788318319 0.8880220150595245; 0.8120083884597382 0.7611517009310337], [0.13450374310666324 0.38335894667135784; 0.29855110182274447 0.15184066065876825; 0.44260954093880867 0.3372469098502215; 0.32861265778894855 0.22680271985960848])
We can get a series for the mean and the variance at each time step using:
m_series, v_series = timeseries_steps_meanvar(sim)
(a pair of RecursiveArrayTools.DiffEqArrays: the per-step means and the per-step variances over t = 0.0:0.125:1.0; full numeric output omitted)
or at chosen values of t:
ts = 0:0.1:1
m_series = timeseries_point_mean(sim, ts)
t: 0.0:0.1:1.0
u: 11-element Vector{Matrix{Float64}}:
[0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5]
[0.5235974337497273 0.5775089130877087; 0.5669937777686646 0.5773701516190162; 0.6331849669366731 0.5016266700551619; 0.5600518138999246 0.5605186091944858]
[0.5823030109643604 0.7386135058947385; 0.6178259920007044 0.692040108345666; 0.7043450713385161 0.528846615207029; 0.6212122739627322 0.5695389366484787]
[0.6758427459023266 0.8870287494108835; 0.6889742152341192 0.823161915366638; 0.7774092252065974 0.622180815477666; 0.712900640173308 0.6034329687401279]
[0.7663856378596852 0.9659035811286405; 0.7711480991128672 0.9255744671732777; 0.866082851309474 0.7658994736083411; 0.8058302240130238 0.680537013563991]
[0.7785436711701275 0.957820905043207; 0.8092893743868667 0.9298035075615901; 0.9338478831831901 0.8880220150595244; 0.8120083884597383 0.7611517009310338]
[0.8466185875689781 1.0231850720902578; 0.8984961528345016 1.0694681445453955; 0.998607820629139 1.1257940416871999; 0.7891299555088834 0.7843240193378241]
[0.9640048112904939 1.170691713496318; 1.018395016506717 1.1608889258537423; 1.0977956408856062 1.3058041641557472; 0.7278226676357035 0.8101090978778298]
[1.079020697055437 1.2864625153530533; 1.114154326312843 1.305412564180675; 1.0816179275236795 1.5506474396905516; 0.7290343983058466 0.8839923105550195]
[1.1682300795569078 1.2946230547219697; 1.184529219456191 1.5510631620821518; 1.001043402099096 1.8357009802990896; 0.7996935022486062 0.9736757650356926]
[1.2364422734603449 1.157302799634412; 1.2818616094340527 1.892542211976043; 1.10780292415224 1.9891239527709288; 0.8527088154569229 0.9690783026484786]
Note that these mean and variance series can be directly plotted. We can compute covariance matrices similarly:
timeseries_steps_meancov(sim) # Use the time steps, assume fixed dt
timeseries_point_meancov(sim, 0:(1 // 2 ^ (3)):1, 0:(1 // 2 ^ (3)):1) # Use time points, interpolate
9×9 Matrix{Tuple{Matrix{Float64}, Matrix{Float64}, Matrix{Float64}}}:
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) … ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0])
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [0.529497 0.596886; 0.583742 0.596713; 0.666481 0.502033; 0.575065 0.575648], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [0.529497 0.596886; 0.583742 0.596713; 0.666481 0.502033; 0.575065 0.575648], [0.0454663 0.0425571; -0.00512637 0.0939586; 0.155076 -0.00555327; 0.018801 0.0130507])
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [0.617507 0.833098; 0.640549 0.755592; 0.729588 0.546722; 0.651977 0.565466], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [0.617507 0.833098; 0.640549 0.755592; 0.729588 0.546722; 0.651977 0.565466], [0.138786 0.351283; 0.0607251 0.0985; 0.241843 -0.00935257; 0.0664117 0.0793244])
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [0.763346 0.967924; 0.761613 0.924517; 0.849142 0.735369; 0.804286 0.660383], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [0.763346 0.967924; 0.761613 0.924517; 0.849142 0.735369; 0.804286 0.660383], [0.169035 0.501588; 0.0580782 0.342984; 0.470221 0.113593; 0.20552 0.132568])
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [0.778544 0.957821; 0.809289 0.929804; 0.933848 0.888022; 0.812008 0.761152], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [0.778544 0.957821; 0.809289 0.929804; 0.933848 0.888022; 0.812008 0.761152], [0.206917 0.548706; 0.161687 0.356173; 0.60426 0.199165; 0.324017 0.245717])
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [0.863637 1.03953; 0.920798 1.10438; 1.0148 1.18524; 0.78341 0.790117], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) … ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [0.863637 1.03953; 0.920798 1.10438; 1.0148 1.18524; 0.78341 0.790117], [0.281581 0.697618; 0.169973 0.56981; 0.853619 0.323489; 0.376617 0.278865])
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [1.03092 1.25814; 1.08346 1.19856; 1.15313 1.38618; 0.690764 0.823437], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [1.03092 1.25814; 1.08346 1.19856; 1.15313 1.38618; 0.690764 0.823437], [0.376365 0.994136; 0.231594 0.707497; 1.0958 0.625543; 0.252489 0.42012])
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [1.15118 1.32895; 1.1602 1.46569; 0.974354 1.79735; 0.78644 0.974825], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [1.15118 1.32895; 1.1602 1.46569; 0.974354 1.79735; 0.78644 0.974825], [0.656648 1.11237; 0.342289 0.95896; 0.826259 1.23539; 0.389454 0.537757])
([0.5 0.5; 0.5 0.5; 0.5 0.5; 0.5 0.5], [1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [0.0 0.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]) ([1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [1.23644 1.1573; 1.28186 1.89254; 1.1078 1.98912; 0.852709 0.969078], [0.83897 0.846265; 0.259886 1.6135; 0.906949 2.40431; 0.39595 0.557316])
For general analysis, we can build an EnsembleSummary type.
summ = EnsembleSummary(sim)
EnsembleSolution Solution of length 9 with uType:
Float64
will summarize at each time step, while
summ = EnsembleSummary(sim, 0.0:0.1:1.0)
EnsembleSolution Solution of length 11 with uType:
Float64
will summarize at the time points 0.0:0.1:1.0 using the interpolations. To visualize the results, we can plot it. Since there are 8 components to the differential equation, this can get messy, so let's only plot the 3rd component:
using Plots;
plot(summ; idxs = 3);
We can change to error bars instead of ribbons and plot two different indices:
plot(summ; idxs = (3, 5), error_style = :bars)
Or we can simply plot the mean of every component over time:
plot(summ; error_style = :none)