Models
Echo State Networks
ReservoirComputing.AdditiveEIESN — Type
AdditiveEIESN(in_dims, res_dims, out_dims, activation=tanh_fast;
input_activation=identity,
use_bias=true,
exc_recurrence_scale=0.9, inh_recurrence_scale=0.5, exc_output_scale=1.0,
inh_output_scale=1.0, init_reservoir=rand_sparse,
init_input=scaled_rand, init_bias=zeros32,
init_state=randn32,
readout_activation=identity,
state_modifiers=(),
kwargs...)

Excitatory-Inhibitory Echo State Network (EIESN) with additive input (Panahi et al., 2025).
This model wraps AdditiveEIESNCell, where the input is added linearly outside the non-linearity with optional bias terms.
Equations
\[\begin{aligned} \mathbf{x}(t) &= b_{\mathrm{ex}} \, \phi_{\mathrm{ex}}\!\left( a_{\mathrm{ex}} \mathbf{A} \mathbf{x}(t-1) + \mathbf{\beta}_{\mathrm{ex}}\right) - b_{\mathrm{inh}} \, \phi_{\mathrm{inh}}\!\left( a_{\mathrm{inh}} \mathbf{A} \mathbf{x}(t-1) + \mathbf{\beta}_{\mathrm{inh}}\right) + g\!\left( \mathbf{W}_{\mathrm{in}} \mathbf{u}(t) + \mathbf{\beta}_{\mathrm{in}}\right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for AdditiveEIESNCell). Can be a single function or a Tuple (excitatory, inhibitory). Default: tanh_fast.
Keyword arguments
- input_activation: The non-linear function $g$ applied to the input. Default: identity.
- use_bias: Enable/disable bias vectors. Default: true.
- exc_recurrence_scale: Excitatory recurrence scaling factor. Default: 0.9.
- inh_recurrence_scale: Inhibitory recurrence scaling factor. Default: 0.5.
- exc_output_scale: Excitatory output scaling factor. Default: 1.0.
- inh_output_scale: Inhibitory output scaling factor. Default: 1.0.
- init_reservoir: Initializer for the reservoir matrix. Default: rand_sparse.
- init_input: Initializer for the input matrix. Default: scaled_rand.
- init_bias: Initializer for the bias vectors. Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- readout_activation: Activation for the linear readout. Default: identity.
- state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- reservoir — parameters of the internal AdditiveEIESNCell.
- states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
- readout — parameters of LinearReadout.
States
- reservoir — states for the internal AdditiveEIESNCell (e.g. rng).
- states_modifiers — a Tuple with states for each modifier layer.
- readout — states for LinearReadout.
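A minimal construction sketch (not taken from the package's own examples); the sizes and keyword values below are arbitrary placeholders that only exercise the constructor documented above.

using ReservoirComputing

# Hypothetical sizes: 3 inputs, 200 reservoir units, 3 outputs.
model = AdditiveEIESN(3, 200, 3;
    input_activation = tanh,      # g in the equations above
    exc_recurrence_scale = 0.9,   # scales the excitatory recurrent branch (a_ex)
    inh_recurrence_scale = 0.5)   # scales the inhibitory recurrent branch (a_inh)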
ReservoirComputing.DeepESN — Type
DeepESN(in_dims, res_dims, out_dims,
activation=tanh; depth=2, leak_coefficient=1.0, init_reservoir=rand_sparse,
init_input=scaled_rand, init_bias=zeros32, init_state=randn32,
use_bias=false, state_modifiers=(), readout_activation=identity)

Deep Echo State Network (Gallicchio and Micheli, 2017).
DeepESN composes, for L = length(res_dims) layers:
- a sequence of stateful ESNCell with widths res_dims[ℓ],
- zero or more per-layer state_modifiers[ℓ] applied to the layer's state, and
- a final LinearReadout from the last layer's features to the output.
Equations
\[\begin{aligned} \mathbf{x}^{(1)}(t) &= (1-\alpha_1)\, \mathbf{x}^{(1)}(t-1) + \alpha_1\, \phi_1\!\left(\mathbf{W}^{(1)}_{\text{in}}\, \mathbf{u}(t) + \mathbf{W}^{(1)}_r\, \mathbf{x}^{(1)}(t-1) + \mathbf{b}^{(1)} \right), \\ \mathbf{u}^{(1)}(t) &= \mathrm{Mods}_1\!\left(\mathbf{x}^{(1)}(t)\right), \\ \mathbf{x}^{(\ell)}(t) &= (1-\alpha_\ell)\, \mathbf{x}^{(\ell)}(t-1) + \alpha_\ell\, \phi_\ell\!\left(\mathbf{W}^{(\ell)}_{\text{in}}\, \mathbf{u}^{(\ell-1)}(t) + \mathbf{W}^{(\ell)}_r\, \mathbf{x}^{(\ell)}(t-1) + \mathbf{b}^{(\ell)} \right), \quad \ell = 2,\dots,L, \\ \mathbf{u}^{(\ell)}(t) &= \mathrm{Mods}_\ell\!\left(\mathbf{x}^{(\ell)}(t)\right), \quad \ell = 2,\dots,L, \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{\text{out}}\, \mathbf{u}^{(L)}(t) + \mathbf{b}_{\text{out}} \right). \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Vector of reservoir (hidden) dimensions per layer; its length sets the depth L.
- out_dims: Output dimension.
- activation: Reservoir activation(s). Either a single function (broadcast to all layers) or a vector/tuple of length L. Default: tanh.
Keyword arguments
Per-layer reservoir options (passed to each ESNCell):
- leak_coefficient: Leak rate(s) α_ℓ ∈ (0,1]. Scalar or length-L collection. Default: 1.0.
- init_reservoir: Initializer(s) for W_res^{(ℓ)}. Scalar or length-L. Default: rand_sparse.
- init_input: Initializer(s) for W_in^{(ℓ)}. Scalar or length-L. Default: scaled_rand.
- init_bias: Initializer(s) for reservoir bias (used if use_bias[ℓ]=true). Scalar or length-L. Default: zeros32.
- init_state: Initializer(s) used when an external state is not provided. Scalar or length-L. Default: randn32.
- use_bias: Whether each reservoir uses a bias term. Boolean scalar or length-L. Default: false.
- depth: Depth of the DeepESN. If the reservoir size is given as a number instead of a vector, this parameter controls the depth of the model. Default: 2.
Composition:
- state_modifiers: Per-layer modifier(s) applied to each layer's state before it feeds into the next layer (and the readout for the last layer). Accepts nothing, a single layer, a vector/tuple of length L, or per-layer collections. Default: no modifiers.
- readout_activation: Activation for the final linear readout. Default: identity.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple) containing states for all cells, modifiers, and readout.
Parameters
- cells :: NTuple{L,NamedTuple} — parameters for each ESNCell, including:
  - input_matrix :: (res_dims[ℓ] × in_size[ℓ]) — W_in^{(ℓ)}
  - reservoir_matrix :: (res_dims[ℓ] × res_dims[ℓ]) — W_res^{(ℓ)}
  - bias :: (res_dims[ℓ],) — present only if use_bias[ℓ]=true
- states_modifiers :: NTuple{L,Tuple} — per-layer tuples of modifier parameters (empty tuples if none).
- readout — parameters of LinearReadout, typically:
  - weight :: (out_dims × res_dims[L]) — W_out
  - bias :: (out_dims,) — b_out (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
- cells :: NTuple{L,NamedTuple} — states for each ESNCell.
- states_modifiers :: NTuple{L,Tuple} — per-layer tuples of modifier states.
- readout — states for LinearReadout.
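A minimal construction sketch; the layer widths and leak rate below are placeholders, not recommended values.

using ReservoirComputing

# Hypothetical 3-layer stack: res_dims is a vector, so L = 3; the scalar
# leak_coefficient is broadcast to every layer.
model = DeepESN(2, [100, 80, 60], 1; leak_coefficient = 0.8)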
ReservoirComputing.DelayESN — Type
DelayESN(in_dims, res_dims, out_dims, activation=tanh;
num_input_delays=1, input_stride=1,
num_state_delays=1, state_stride=1,
leak_coefficient=1.0, init_reservoir=rand_sparse,
init_input=scaled_rand, init_bias=zeros32,
init_state=randn32, use_bias=false,
states_modifiers=(), readout_activation=identity)

Echo State Network with both input and state delays (Fleddermann et al., 2025).
DelayESN composes:
- an internal DelayLayer applied to the input signal,
- a stateful ESNCell (reservoir) receiving the augmented input,
- a second internal DelayLayer applied to the reservoir state,
- zero or more additional states_modifiers applied after the state delay, and
- a LinearReadout mapping the final feature vector to outputs.
At each time step, the input u(t) is expanded into a delay-coordinate vector u_d(t). This drives the reservoir to produce state x(t). Finally, x(t) is expanded into a state delay-coordinate vector x_d(t) before passing to modifiers and readout.
Equations
\[\begin{aligned} \mathbf{u}_{\mathrm{d}}(t) &= \begin{bmatrix} \mathbf{u}(t) \\ \mathbf{u}(t-s_{in}) \\ \vdots \\ \mathbf{u}\!\bigl(t-D_{in}s_{in}\bigr) \end{bmatrix}, \qquad D_{in}=\text{num\_input\_delays},\ \ s_{in}=\text{input\_stride}, \\ \mathbf{x}(t) &= (1-\alpha)\, \mathbf{x}(t-1) + \alpha\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}_{\mathrm{d}}(t) + \mathbf{W}_r\, \mathbf{x}(t-1) + \mathbf{b} \right), \\ \mathbf{x}_{\mathrm{d}}(t) &= \begin{bmatrix} \mathbf{x}(t) \\ \mathbf{x}(t-s_{st}) \\ \vdots \\ \mathbf{x}\!\bigl(t-D_{st}s_{st}\bigr) \end{bmatrix}, \qquad D_{st}=\text{num\_state\_delays},\ \ s_{st}=\text{state\_stride}, \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left( \mathbf{x}_{\mathrm{d}}(t)\right), \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for ESNCell). Default: tanh.
Keyword arguments
Reservoir (passed to ESNCell):
- leak_coefficient: Leak rate in (0, 1]. Default: 1.0.
- init_reservoir: Initializer for W_res. Default: rand_sparse.
- init_input: Initializer for W_in. Default: scaled_rand.
- init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- use_bias: Whether the reservoir uses a bias term. Default: false.
Input delay expansion:
- num_input_delays: Number of past input steps to include. The effective input to the reservoir has size (num_input_delays + 1) * in_dims. Default: 1.
- input_stride: Stride for the input delay buffer. Default: 1.
State delay expansion:
- num_state_delays: Number of past reservoir states to include. The readout receives a vector of size (num_state_delays + 1) * res_dims. Default: 1.
- state_stride: Stride for the state delay buffer. Default: 1.
Composition:
- states_modifiers: A layer or collection of layers applied to the delayed reservoir features before the readout. These run after the internal state DelayLayer. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
- readout_activation: Activation for the linear readout. Default: identity.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- input_delay — parameters for the input DelayLayer.
- reservoir — parameters of the internal ESNCell, including:
  - input_matrix :: (res_dims × ((num_input_delays + 1) * in_dims)) — W_in
  - reservoir_matrix :: (res_dims × res_dims) — W_res
  - bias :: (res_dims,) — present only if use_bias=true
- states_modifiers — a Tuple with parameters for the internal state DelayLayer and any user-provided modifier layers (may be empty).
- readout — parameters of LinearReadout, typically:
  - weight :: (out_dims × ((num_state_delays + 1) * res_dims)) — W_out
  - bias :: (out_dims,) — b_out (if the readout uses bias)
States
- input_delay — state for the input DelayLayer (buffer and clock).
- reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
- states_modifiers — a Tuple with states for the internal state DelayLayer (its delay buffer and clock) and each additional modifier layer.
- readout — states for LinearReadout (typically empty).
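A minimal construction sketch with placeholder sizes, illustrating how the delay settings change the shapes listed above.

using ReservoirComputing

# Hypothetical setup: 1 input, 100 reservoir units, 1 output.
model = DelayESN(1, 100, 1;
    num_input_delays = 2, input_stride = 1,   # reservoir sees (2 + 1) * 1 = 3 input entries
    num_state_delays = 1, state_stride = 1)   # readout sees (1 + 1) * 100 = 200 features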
ReservoirComputing.EIESN — Type
EIESN(in_dims, res_dims, out_dims, activation=tanh_fast;
use_bias=true,
exc_recurrence_scale=0.9, inh_recurrence_scale=0.5, exc_output_scale=1.0,
inh_output_scale=1.0, init_reservoir=rand_sparse,
init_input=scaled_rand, init_bias=zeros32,
init_state=randn32,
readout_activation=identity,
state_modifiers=(),
kwargs...)

Excitatory-Inhibitory Echo State Network (EIESN) (Panahi et al., 2025).
This model wraps EIESNCell.
Equations
\[\begin{aligned} \mathbf{x}(t) &= b_{\mathrm{ex}} \, \phi_{\mathrm{ex}}\!\left( \mathbf{W}_{\mathrm{in}} \mathbf{u}(t) + a_{\mathrm{ex}} \mathbf{A} \mathbf{x}(t-1) + \mathbf{\beta}_{\mathrm{ex}}\right) - b_{\mathrm{inh}} \, \phi_{\mathrm{inh}}\!\left( \mathbf{W}_{\mathrm{in}} \mathbf{u}(t) + a_{\mathrm{inh}} \mathbf{A} \mathbf{x}(t-1) + \mathbf{\beta}_{\mathrm{inh}}\right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for EIESNCell). Can be a single function or a Tuple (excitatory, inhibitory). Default: tanh_fast.
Keyword arguments
- use_bias: Enable/disable bias vectors inside the reservoir. Default: true.
- exc_recurrence_scale: Excitatory recurrence scaling factor. Default: 0.9.
- inh_recurrence_scale: Inhibitory recurrence scaling factor. Default: 0.5.
- exc_output_scale: Excitatory output scaling factor. Default: 1.0.
- inh_output_scale: Inhibitory output scaling factor. Default: 1.0.
- init_reservoir: Initializer for the reservoir matrix. Default: rand_sparse.
- init_input: Initializer for the input matrix. Default: scaled_rand.
- init_bias: Initializer for the bias vectors. Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- readout_activation: Activation for the linear readout. Default: identity.
- state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- reservoir — parameters of the internal EIESNCell.
- states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
- readout — parameters of LinearReadout.
States
- reservoir — states for the internal EIESNCell (e.g. rng).
- states_modifiers — a Tuple with states for each modifier layer.
- readout — states for LinearReadout.
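A minimal construction sketch with placeholder sizes; the activation is left at its default (tanh_fast), though a Tuple of (excitatory, inhibitory) activations could be passed instead.

using ReservoirComputing

# Hypothetical sizes: 3 inputs, 150 reservoir units, 3 outputs.
model = EIESN(3, 150, 3;
    exc_recurrence_scale = 0.9,   # a_ex in the equation above
    inh_recurrence_scale = 0.5,   # a_inh
    exc_output_scale = 1.0,       # b_ex
    inh_output_scale = 1.0)       # b_inh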
ReservoirComputing.ES2N — Type
ES2N(in_dims, res_dims, out_dims, activation=tanh;
proximity=1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
init_bias=zeros32, init_state=randn32, use_bias=False(),
state_modifiers=(), readout_activation=identity,
init_orthogonal=orthogonal)

Edge of Stability Echo State Network (ES2N) (Ceni and Gallicchio, 2025).
Equations
\[\begin{aligned} \mathbf{x}(t) &= (1-\beta)\, \mathbf{O}\, \mathbf{x}(t-1) + \beta\, \phi\!\left(\mathbf{W}_{\text{in}} \mathbf{u}(t) + \mathbf{W}_r \mathbf{x}(t-1) + \mathbf{b} \right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for ESNCell). Default: tanh.
Keyword arguments
- proximity: Proximity coefficient in (0,1]. Default: 1.0.
- init_reservoir: Initializer for W_res. Default: rand_sparse.
- init_input: Initializer for W_in. Default: scaled_rand.
- init_orthogonal: Initializer for O. Default: orthogonal.
- init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- use_bias: Whether the reservoir uses a bias term. Default: false.
- state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
- readout_activation: Activation for the linear readout. Default: identity.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- reservoir — parameters of the internal ES2NCell, including:
  - input_matrix :: (res_dims × in_dims) — W_in
  - reservoir_matrix :: (res_dims × res_dims) — W_res
  - orthogonal_matrix :: (res_dims × res_dims) — O
  - bias :: (res_dims,) — present only if use_bias=true
- states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
- readout — parameters of LinearReadout, typically:
  - weight :: (out_dims × res_dims) — W_out
  - bias :: (out_dims,) — b_out (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
- reservoir — states for the internal ES2NCell (e.g. rng used to sample initial hidden states).
- states_modifiers — a Tuple with states for each modifier layer.
- readout — states for LinearReadout.
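A minimal construction sketch with placeholder values.

using ReservoirComputing

# Hypothetical sizes; proximity below 1 mixes the orthogonal branch O x(t-1)
# with the non-linear branch, per the update equation above.
model = ES2N(2, 300, 2; proximity = 0.7)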
ReservoirComputing.ESN — Type
ESN(in_dims, res_dims, out_dims, activation=tanh;
leak_coefficient=1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
init_bias=zeros32, init_state=randn32, use_bias=false,
state_modifiers=(), readout_activation=identity)

Echo State Network (Jaeger and Haas, 2004).
ESN composes:
- a stateful ESNCell (reservoir),
- zero or more state_modifiers applied to the reservoir state, and
- a LinearReadout mapping reservoir features to outputs.
Equations
\[\begin{aligned} \mathbf{x}(t) &= (1-\alpha)\, \mathbf{x}(t-1) + \alpha\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}(t) + \mathbf{W}_r\, \mathbf{x}(t-1) + \mathbf{b} \right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for ESNCell). Default: tanh.
Keyword arguments
Reservoir (passed to ESNCell):
- leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
- init_reservoir: Initializer for W_res. Default: rand_sparse.
- init_input: Initializer for W_in. Default: scaled_rand.
- init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- use_bias: Whether the reservoir uses a bias term. Default: false.
Composition:
- state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
- readout_activation: Activation for the linear readout. Default: identity.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- reservoir — parameters of the internal ESNCell, including:
  - input_matrix :: (res_dims × in_dims) — W_in
  - reservoir_matrix :: (res_dims × res_dims) — W_res
  - bias :: (res_dims,) — present only if use_bias=true
- states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
- readout — parameters of LinearReadout, typically:
  - weight :: (out_dims × res_dims) — W_out
  - bias :: (out_dims,) — b_out (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
- reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
- states_modifiers — a Tuple with states for each modifier layer.
- readout — states for LinearReadout.
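A minimal construction sketch with placeholder sizes. The commented lines assume the model follows a LuxCore setup/apply convention, as suggested by the Parameters and States sections above; that usage pattern is an assumption here, not part of this docstring.

using ReservoirComputing

model = ESN(1, 100, 1; leak_coefficient = 0.9)

# Assumed LuxCore-style usage (an assumption, not documented above):
# using LuxCore, Random
# ps, st = LuxCore.setup(Random.default_rng(), model)
# y, st = model(x, ps, st)    # x :: (in_dims, batch)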
ReservoirComputing.EuSN — Type
EuSN(in_dims, res_dims, out_dims, activation=tanh;
leak_coefficient=1.0, diffusion = 1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
init_bias=zeros32, init_state=randn32, use_bias=false,
state_modifiers=(), readout_activation=identity)

Euler State Network (EuSN) (Gallicchio, 2024).
Equations
\[\begin{aligned} \mathbf{x}(t) &= \mathbf{x}(t-1) + \varepsilon\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}(t) + (\mathbf{W}_r - \gamma\, \mathbf{I})\, \mathbf{x}(t-1) + \mathbf{b} \right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for ESNCell). Default: tanh.
Keyword arguments
- leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
- diffusion: Diffusion coefficient in (0,1]. Default: 1.0.
- init_reservoir: Initializer for W_res. Default: rand_sparse.
- init_input: Initializer for W_in. Default: scaled_rand.
- init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- use_bias: Whether the reservoir uses a bias term. Default: false.
- state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
- readout_activation: Activation for the linear readout. Default: identity.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- reservoir — parameters of the internal ESNCell, including:
  - input_matrix :: (res_dims × in_dims) — W_in
  - reservoir_matrix :: (res_dims × res_dims) — W_res
  - bias :: (res_dims,) — present only if use_bias=true
- states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
- readout — parameters of LinearReadout, typically:
  - weight :: (out_dims × res_dims) — W_out
  - bias :: (out_dims,) — b_out (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
- reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
- states_modifiers — a Tuple with states for each modifier layer.
- readout — states for LinearReadout.
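A minimal construction sketch; the values below are placeholders, with leak_coefficient and diffusion set as documented above.

using ReservoirComputing

# Hypothetical sizes: 2 inputs, 200 reservoir units, 2 outputs.
model = EuSN(2, 200, 2; leak_coefficient = 0.1, diffusion = 0.5)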
ReservoirComputing.HybridESN — Type
HybridESN(km, km_dims, in_dims, res_dims, out_dims, [activation];
state_modifiers=(), readout_activation=identity,
include_collect=true, kwargs...)

Hybrid Echo State Network (Pathak et al., 2018).
HybridESN composes:
- a knowledge model km producing auxiliary features from the input,
- a stateful ESNCell that receives the concatenated input [km(x(t)); x(t)],
- zero or more state_modifiers applied to the reservoir state, and
- a LinearReadout mapping the combined features [km(x(t)); h*(t)] to the output.
Arguments
- km: Knowledge model applied to the input (e.g. a physical model, neural submodule, or differentiable function). May be a WrappedFunction or any callable layer.
- km_dims: Output dimension of the knowledge model km.
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for ESNCell). Default: tanh.
Keyword arguments
- leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
- init_reservoir: Initializer for W_res. Default: rand_sparse.
- init_input: Initializer for W_in. Default: scaled_rand.
- init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- use_bias: Whether the reservoir uses a bias term. Default: false.
- state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
- readout_activation: Activation for the linear readout. Default: identity.
- include_collect: Whether the readout should include collection mode. Default: true.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- knowledge_model — parameters of the knowledge model km.
- reservoir — parameters of the internal ESNCell, including:
  - input_matrix :: (res_dims × (in_dims + km_dims)) — W_in
  - reservoir_matrix :: (res_dims × res_dims) — W_res
  - bias :: (res_dims,) — present only if use_bias=true
- states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
- readout — parameters of LinearReadout, typically:
  - weight :: (out_dims × (res_dims + km_dims)) — W_out
  - bias :: (out_dims,) — b_out (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
Created by initialstates(rng, hesn):
- knowledge_model — states for the internal knowledge model.
- reservoir — states for the internal ESNCell.
- states_modifiers — a Tuple with states for each modifier layer.
- readout — states for LinearReadout.
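A minimal construction sketch. The knowledge model below is a toy placeholder (it just squares the input, so km_dims == in_dims); per the docstring it may need to be a WrappedFunction or callable layer rather than a bare function.

using ReservoirComputing

# Hypothetical toy knowledge model producing km_dims = 3 features from a 3-dimensional input.
km = x -> x .^ 2

# Positional arguments: km, km_dims, in_dims, res_dims, out_dims.
model = HybridESN(km, 3, 3, 300, 3; leak_coefficient = 0.9)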
ReservoirComputing.InputDelayESN — Type
InputDelayESN(in_dims, res_dims, out_dims, activation=tanh;
num_delays=1, stride=1, leak_coefficient=1.0,
init_reservoir=rand_sparse, init_input=scaled_rand,
init_bias=zeros32, init_state=randn32, use_bias=false,
states_modifiers=(), readout_activation=identity)

Echo State Network with input delays (Fleddermann et al., 2025).
InputDelayESN composes:
- an internal DelayLayer applied to the input signal to build tapped-delay features,
- a stateful ESNCell (reservoir) receiving the augmented input,
- zero or more states_modifiers applied to the reservoir state, and
- a LinearReadout mapping the modified reservoir state to outputs.
At each time step, the input u(t) is expanded into a delay-coordinate vector that stacks the current and num_delays past inputs. This augmented signal is then used to update the reservoir state x(t).
Equations
\[\begin{aligned} \mathbf{u}_{\mathrm{d}}(t) &= \begin{bmatrix} \mathbf{u}(t) \\ \mathbf{u}(t-s) \\ \vdots \\ \mathbf{u}\!\bigl(t-Ds\bigr) \end{bmatrix}, \qquad D=\text{num\_delays},\ \ s=\text{stride}, \\ \mathbf{x}(t) &= (1-\alpha)\, \mathbf{x}(t-1) + \alpha\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}_{\mathrm{d}}(t) + \mathbf{W}_r\, \mathbf{x}(t-1) + \mathbf{b} \right), \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right), \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for ESNCell). Default: tanh.
Keyword arguments
Reservoir (passed to ESNCell):
- leak_coefficient: Leak rate in (0, 1]. Default: 1.0.
- init_reservoir: Initializer for W_res. Default: rand_sparse.
- init_input: Initializer for W_in. Default: scaled_rand.
- init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- use_bias: Whether the reservoir uses a bias term. Default: false.
Delay expansion (on input):
- num_delays: Number of past input states to include in the tapped-delay vector. The DelayLayer output has (num_delays + 1) * in_dims entries. Default: 1.
- stride: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of stride. Default: 1.
Composition:
- states_modifiers: A layer or collection of layers applied to the reservoir state before the readout. These run after the internal DelayLayer. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
- readout_activation: Activation for the linear readout. Default: identity.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- input_delay — parameters for the internal DelayLayer.
- reservoir — parameters of the internal ESNCell, including:
  - input_matrix :: (res_dims × ((num_delays + 1) * in_dims)) — W_in
  - reservoir_matrix :: (res_dims × res_dims) — W_res
  - bias :: (res_dims,) — present only if use_bias=true
- states_modifiers — a Tuple with parameters for the user-provided modifier layers (may be empty).
- readout — parameters of LinearReadout, typically:
  - weight :: (out_dims × res_dims) — W_out
  - bias :: (out_dims,) — b_out (if the readout uses bias)
States
- input_delay — state for the internal DelayLayer (its delay buffer and clock).
- reservoir — states for the internal ESNCell (e.g. rng).
- states_modifiers — states for the user-provided modifier layers.
- readout — states for LinearReadout (typically empty).
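A minimal construction sketch with placeholder sizes.

using ReservoirComputing

# Hypothetical setup: stack the current input with 3 past inputs, sampled every 2 calls.
# The reservoir's W_in then has res_dims × (num_delays + 1) * in_dims = 100 × 4 entries.
model = InputDelayESN(1, 100, 1; num_delays = 3, stride = 2)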
ReservoirComputing.StateDelayESN — Type
StateDelayESN(in_dims, res_dims, out_dims, activation=tanh;
num_delays=1, stride=1, leak_coefficient=1.0,
init_reservoir=rand_sparse, init_input=scaled_rand,
init_bias=zeros32, init_state=randn32, use_bias=false,
state_modifiers=(), readout_activation=identity)

Echo State Network with state delays (Fleddermann et al., 2025).
StateDelayESN composes:
- a stateful ESNCell (reservoir),
- a DelayLayer applied to the reservoir state to build tapped-delay features,
- zero or more additional state_modifiers applied after the delay, and
- a LinearReadout mapping delayed reservoir features to outputs.
At each time step, the reservoir produces a state vector x(t) of length res_dims. The DelayLayer then constructs a feature vector that stacks x(t) together with num_delays past states, spaced according to stride, before passing it on to any further modifiers and the readout.
Equations
\[\begin{aligned} \mathbf{x}(t) &= (1-\alpha)\, \mathbf{x}(t-1) + \alpha\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}(t) + \mathbf{W}_r\, \mathbf{x}(t-1) + \mathbf{b} \right), \\ \mathbf{x}_{\mathrm{d}}(t) &= \begin{bmatrix} \mathbf{x}(t) \\ \mathbf{x}(t-s) \\ \vdots \\ \mathbf{x}\!\bigl(t-Ds\bigr) \end{bmatrix}, \qquad D=\text{num\_delays},\ \ s=\text{stride}, \\ \mathbf{z}(t) &= \psi\!\left(\mathrm{Mods}\!\left( \mathbf{x}_{\mathrm{d}}(t)\right)\right), \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- res_dims: Reservoir (hidden state) dimension.
- out_dims: Output dimension.
- activation: Reservoir activation (for ESNCell). Default: tanh.
Keyword arguments
Reservoir (passed to ESNCell):
- leak_coefficient: Leak rate in (0, 1]. Default: 1.0.
- init_reservoir: Initializer for W_res. Default: rand_sparse.
- init_input: Initializer for W_in. Default: scaled_rand.
- init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
- init_state: Initializer used when an external state is not provided. Default: randn32.
- use_bias: Whether the reservoir uses a bias term. Default: false.
Delay expansion:
- num_delays: Number of past reservoir states to include in the tapped-delay vector. The DelayLayer output has (num_delays + 1) * res_dims entries (current state plus num_delays past states). Default: 1.
- stride: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of stride. Default: 1.
Composition:
- state_modifiers: A layer or collection of layers applied to the delayed reservoir features before the readout. These run after the internal DelayLayer. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
- readout_activation: Activation for the linear readout. Default: identity.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output y :: (out_dims, batch).
- Updated layer state (NamedTuple).
Parameters
- reservoir — parameters of the internal ESNCell, including:
  - input_matrix :: (res_dims × in_dims) — W_in
  - reservoir_matrix :: (res_dims × res_dims) — W_res
  - bias :: (res_dims,) — present only if use_bias=true
- states_modifiers — a Tuple with parameters for the internal DelayLayer and any user-provided modifier layers (may be empty).
- readout — parameters of LinearReadout, typically:
  - weight :: (out_dims × ((num_delays + 1) * res_dims)) — W_out
  - bias :: (out_dims,) — b_out (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
- reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
- states_modifiers — a Tuple with states for the internal DelayLayer (its delay buffer and clock) and each additional modifier layer.
- readout — states for LinearReadout (typically empty).
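A minimal construction sketch with placeholder sizes.

using ReservoirComputing

# Hypothetical setup: the readout sees the current state plus 2 past states,
# i.e. a (num_delays + 1) * res_dims = 300-dimensional feature vector.
model = StateDelayESN(1, 100, 1; num_delays = 2)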
Next generation reservoir computing
ReservoirComputing.NGRC — Type
NGRC(in_dims, out_dims; num_delays=2, stride=1,
features=(), include_input=true, init_delay=zeros32,
readout_activation=identity, state_modifiers=(),
ro_dims=nothing)

Next Generation Reservoir Computing (Gauthier et al., 2021).
NGRC composes:
- a DelayLayer applied directly to the input, producing a vector containing the current input and a fixed number of past inputs,
- a NonlinearFeaturesLayer that applies user-provided functions to this delayed vector and concatenates the results, and
- a LinearReadout mapping the resulting feature vector to outputs.
Arguments
- in_dims: Input dimension.
- out_dims: Output dimension.
Keyword arguments
- num_delays: Number of past input vectors to include. The internal DelayLayer outputs a vector of length (num_delays + 1) * in_dims (current input plus num_delays past inputs). Default: 2.
- stride: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of stride. Default: 1.
- init_delay: Initializer (or tuple of initializers) for the delay history, passed to DelayLayer. Each initializer function is called as init(rng, in_dims, 1) to fill one delay column. Default: zeros32.
- features: A function or tuple of functions (f₁, f₂, ...) used by NonlinearFeaturesLayer. Each f is called as f(x) where x is the delayed input vector. When ro_dims is not provided, each f is assumed to return a vector of the same length as x. Default: empty tuple ().
- include_input: Whether to include the raw delayed input vector itself as the first block of the feature vector (passed to NonlinearFeaturesLayer). Default: true.
- state_modifiers: Extra layers applied after the NonlinearFeaturesLayer and before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty tuple ().
- readout_activation: Activation for the linear readout. Default: identity.
- ro_dims: Input dimension of the readout. If nothing (default), it is estimated under the assumption that each feature function returns a vector with the same length as the delayed input. In that case, ro_dims ≈ (num_delays + 1) * in_dims * n_blocks, where n_blocks is the number of concatenated vectors (the original delayed input if include_input=true plus one block per feature function). If your feature functions change the length (e.g. constant features, higher-order polynomial expansions with cross terms), you should pass ro_dims explicitly, as in the sketch at the end of this entry.
Inputs
x :: AbstractArray (in_dims, batch) or (in_dims,)
Returns
- Output y :: (out_dims, batch) (or (out_dims,) for vector input).
- Updated layer state (NamedTuple).
ReservoirComputing.polynomial_monomials — Function
polynomial_monomials(input_vector;
degrees = 1:2)

Generate all unordered polynomial monomials of the entries in input_vector for the given set of degrees.
For each d in degrees, this function produces all degree-d monomials of the form
- degree 1: x₁, x₂, …
- degree 2: x₁², x₁x₂, x₁x₃, x₂², …
- degree 3: x₁³, x₁²x₂, x₁x₂x₃, x₂³, …
where combinations are taken with repetition and in non-decreasing index order. This means that, for example, x₁x₂ and x₂x₁ are represented only once.
The returned vector is a flat list of all such products, in a deterministic order determined by the recursive enumeration.
Arguments
- input_vector: Input vector whose entries define the variables used to build monomials.
Keyword arguments
- degrees: An iterable of positive integers specifying which monomial degrees to generate. Each degree less than 1 is skipped. Default: 1:2.
Returns
- output_monomials: a vector of the same type as input_vector containing all generated monomials, concatenated across the requested degrees, in a deterministic order.
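A small worked call, following the enumeration described above (degree 1 first, then degree 2 in non-decreasing index order); treat the spelled-out ordering as illustrative.

using ReservoirComputing

v = [1.0, 2.0, 3.0]
m = ReservoirComputing.polynomial_monomials(v; degrees = 1:2)

# degree 1: x₁, x₂, x₃                      → 1.0, 2.0, 3.0
# degree 2: x₁², x₁x₂, x₁x₃, x₂², x₂x₃, x₃² → 1.0, 2.0, 3.0, 4.0, 6.0, 9.0
length(m)  # 3 + 6 = 9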
Utilities
ReservoirComputing.resetcarry! — Function
resetcarry!(rng, rc::ReservoirComputer, st; init_carry=nothing)
resetcarry!(rng, rc::ReservoirComputer, ps, st; init_carry=nothing)

Reset (or set) the hidden-state carry of a model in the echo state network family.
If an existing carry is present in st.cell.carry, its leading dimension is used to infer the state size. Otherwise the reservoir output size is taken from rc.reservoir.cell.out_dims. When init_carry=nothing, the carry is cleared; the initializer from the struct construction will then be used. When a function is provided, it is called to create a new initial hidden state.
Arguments
- rng: Random number generator (used if a new carry is sampled/created).
- rc: A reservoir computing network model.
- st: Current model states.
- ps: Optional model parameters. Returned unchanged.
Keyword arguments
- init_carry: Controls the initialization of the new carry.
  - nothing (default): remove/clear the carry (forces the cell to reinitialize from its own init_state on next use).
  - f: a function following the standard interface from WeightInitializers.jl.
Returns
- resetcarry!(rng, rc, st; ...) -> st′: Updated states with st′.cell.carry set to nothing or (h0,).
- resetcarry!(rng, rc, ps, st; ...) -> (ps, st′): Same as above, but also returns the unchanged ps for convenience.
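A short usage sketch, assuming esn, ps, and st already exist (e.g. from constructing a model above and setting it up); zeros32 stands in for any WeightInitializers.jl-style function.

using ReservoirComputing, Random

rng = Random.default_rng()

# Clear the carry so the cell reinitializes from its own init_state on the next call:
st = resetcarry!(rng, esn, st)

# Or install a fresh carry drawn from a WeightInitializers-style function (zeros32 here):
ps, st = resetcarry!(rng, esn, ps, st; init_carry = zeros32)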
Reservoir Computing with Cellular Automata
ReservoirComputing.RECA — Function
RECA(in_dims, out_dims, automaton;
input_encoding=RandomMapping(),
generations=8, state_modifiers=(),
readout_activation=identity)

Construct a cellular-automata reservoir model.
At each time step the input vector is randomly embedded into a Cellular Automaton (CA) lattice, the CA is evolved for generations steps, and the flattened evolution (excluding the initial row) is used as the reservoir state. A LinearReadout maps these features to out_dims.
Arguments
- in_dims: Number of input features (rows of training data).
- out_dims: Number of output features (rows of target data).
- automaton: A CA rule/object from CellularAutomata.jl (e.g. DCA(90), DCA(30), …).
Keyword Arguments
- input_encoding: Random embedding spec with fields permutations and expansion_size. Default: RandomMapping().
- generations: Number of CA generations to evolve per time step. Default: 8.
- state_modifiers: Optional tuple/vector of additional layers applied after the CA cell and before the readout (e.g. NLAT2(), Pad(1.0), custom transforms). Functions are wrapped automatically. Default: none.
- readout_activation: Activation applied by the readout. Default: identity.
The input encodings are the equivalent of the input matrices of the ESNs. These are the available encodings:
ReservoirComputing.RandomMapping — Type
RandomMapping(permutations, expansion_size)
RandomMapping(permutations; expansion_size=40)
RandomMapping(;permutations=8, expansion_size=40)

Specify the random input embedding used by the Cellular Automata reservoir. Each time step, the input vector of length in_dims is randomly placed into a larger 1D lattice of length expansion_size, and this is repeated for permutations independent lattices (blocks). The concatenation of these blocks forms the CA initial condition of length ca_size = expansion_size * permutations. The details of this implementation can be found in Nichele and Molund (2017).
Arguments
- permutations: number of independent random maps (blocks). Larger values increase feature diversity and ca_size proportionally.
- expansion_size: width of each block (the size of a single CA lattice). Larger values increase the spatial resolution and both ca_size and states_size.
Usage
This is a configuration object; it does not perform the mapping by itself. Create the concrete tables with create_encoding and pass them to RECACell:
using ReservoirComputing, CellularAutomata, Random
in_dims = 4
generations = 8
mapping = RandomMapping(permutations = 8, expansion_size = 40)
enc = ReservoirComputing.create_encoding(mapping, in_dims, generations) # → RandomMaps
cell = RECACell(DCA(90), enc)
rc = ReservoirChain(
StatefulLayer(cell),
LinearReadout(enc.states_size => in_dims; include_collect = true)
)

Or let RECA do this for you:
rc = RECA(4, 4, DCA(90);
    input_encoding = RandomMapping(permutations = 8, expansion_size = 40),
    generations = 8)