Models

Echo State Networks

ReservoirComputing.AdditiveEIESN — Type
AdditiveEIESN(in_dims, res_dims, out_dims, activation=tanh_fast;
        input_activation=identity,
        use_bias=true,
        exc_recurrence_scale=0.9, inh_recurrence_scale=0.5, exc_output_scale=1.0,
        inh_output_scale=1.0, init_reservoir=rand_sparse,
        init_input=scaled_rand, init_bias=zeros32,
        init_state=randn32,
        readout_activation=identity,
        state_modifiers=(),
        kwargs...)

Excitatory-Inhibitory Echo State Network (EIESN) with additive input (Panahi et al., 2025).

This model wraps AdditiveEIESNCell, in which the input term is added outside the excitatory and inhibitory non-linearities, with optional bias terms.

Equations

\[\begin{aligned} \mathbf{x}(t) &= b_{\mathrm{ex}} \, \phi_{\mathrm{ex}}\!\left( a_{\mathrm{ex}} \mathbf{A} \mathbf{x}(t-1) + \mathbf{\beta}_{\mathrm{ex}}\right) - b_{\mathrm{inh}} \, \phi_{\mathrm{inh}}\!\left( a_{\mathrm{inh}} \mathbf{A} \mathbf{x}(t-1) + \mathbf{\beta}_{\mathrm{inh}}\right) + g\!\left( \mathbf{W}_{\mathrm{in}} \mathbf{u}(t) + \mathbf{\beta}_{\mathrm{in}}\right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for AdditiveEIESNCell). Can be a single function or a tuple of (excitatory, inhibitory) activations. Default: tanh_fast.

Keyword arguments

  • input_activation: The non-linear function $g$ applied to the input. Default: identity.
  • use_bias: Enable/disable bias vectors. Default: true.
  • exc_recurrence_scale: Excitatory recurrence scaling factor. Default: 0.9.
  • inh_recurrence_scale: Inhibitory recurrence scaling factor. Default: 0.5.
  • exc_output_scale: Excitatory output scaling factor. Default: 1.0.
  • inh_output_scale: Inhibitory output scaling factor. Default: 1.0.
  • init_reservoir: Initializer for the reservoir matrix. Default: rand_sparse.
  • init_input: Initializer for the input matrix. Default: scaled_rand.
  • init_bias: Initializer for the bias vectors. Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • readout_activation: Activation for the linear readout. Default: identity.
  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal AdditiveEIESNCell.
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout.

States

  • reservoir — states for the internal AdditiveEIESNCell (e.g. rng).
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
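
Example

A minimal construction sketch; the dimensions, scales, and the choice of tanh for the input map g below are illustrative, not prescriptive:

using ReservoirComputing

# Additive-input EI reservoir: 3 inputs, 300 neurons, 3 outputs.
model = AdditiveEIESN(3, 300, 3;
    input_activation = tanh,
    exc_recurrence_scale = 0.9, inh_recurrence_scale = 0.5)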
source
ReservoirComputing.DeepESN — Type
DeepESN(in_dims, res_dims, out_dims,
        activation=tanh; depth=2, leak_coefficient=1.0, init_reservoir=rand_sparse,
        init_input=scaled_rand, init_bias=zeros32, init_state=randn32,
        use_bias=false, state_modifiers=(), readout_activation=identity)

Deep Echo State Network (Gallicchio and Micheli, 2017).

DeepESN composes, for L = length(res_dims) layers:

  1. a sequence of stateful ESNCell with widths res_dims[ℓ],
  2. zero or more per-layer state_modifiers[ℓ] applied to the layer's state, and
  3. a final LinearReadout from the last layer's features to the output.

Equations

\[\begin{aligned} \mathbf{x}^{(1)}(t) &= (1-\alpha_1)\, \mathbf{x}^{(1)}(t-1) + \alpha_1\, \phi_1\!\left(\mathbf{W}^{(1)}_{\text{in}}\, \mathbf{u}(t) + \mathbf{W}^{(1)}_r\, \mathbf{x}^{(1)}(t-1) + \mathbf{b}^{(1)} \right), \\ \mathbf{u}^{(1)}(t) &= \mathrm{Mods}_1\!\left(\mathbf{x}^{(1)}(t)\right), \\ \mathbf{x}^{(\ell)}(t) &= (1-\alpha_\ell)\, \mathbf{x}^{(\ell)}(t-1) + \alpha_\ell\, \phi_\ell\!\left(\mathbf{W}^{(\ell)}_{\text{in}}\, \mathbf{u}^{(\ell-1)}(t) + \mathbf{W}^{(\ell)}_r\, \mathbf{x}^{(\ell)}(t-1) + \mathbf{b}^{(\ell)} \right), \quad \ell = 2,\dots,L, \\ \mathbf{u}^{(\ell)}(t) &= \mathrm{Mods}_\ell\!\left(\mathbf{x}^{(\ell)}(t)\right), \quad \ell = 2,\dots,L, \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{\text{out}}\, \mathbf{u}^{(L)}(t) + \mathbf{b}_{\text{out}} \right). \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Vector of reservoir (hidden) dimensions per layer; its length sets the depth L.
  • out_dims: Output dimension.
  • activation: Reservoir activation(s). Either a single function (broadcast to all layers) or a vector/tuple of length L. Default: tanh.

Keyword arguments

Per-layer reservoir options (passed to each ESNCell):

  • leak_coefficient: Leak rate(s) α_ℓ ∈ (0,1]. Scalar or length-L collection. Default: 1.0.
  • init_reservoir: Initializer(s) for W_res^{(ℓ)}. Scalar or length-L. Default: rand_sparse.
  • init_input: Initializer(s) for W_in^{(ℓ)}. Scalar or length-L. Default: scaled_rand.
  • init_bias: Initializer(s) for reservoir bias (used iff use_bias[ℓ]=true). Scalar or length-L. Default: zeros32.
  • init_state: Initializer(s) used when an external state is not provided. Scalar or length-L. Default: randn32.
  • use_bias: Whether each reservoir uses a bias term. Boolean scalar or length-L. Default: false.
  • depth: Number of reservoir layers used when res_dims is given as a single integer instead of a vector; ignored when res_dims is a vector. Default: 2.

Composition:

  • state_modifiers: Per-layer modifier(s) applied to each layer’s state before it feeds into the next layer (and the readout for the last layer). Accepts nothing, a single layer, a vector/tuple of length L, or per-layer collections. Defaults to no modifiers.
  • readout_activation: Activation for the final linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple) containing states for all cells, modifiers, and readout.

Parameters

  • cells :: NTuple{L,NamedTuple} — parameters for each ESNCell, including:
    • input_matrix :: (res_dims[ℓ] × in_size[ℓ]) — W_in^{(ℓ)}
    • reservoir_matrix :: (res_dims[ℓ] × res_dims[ℓ]) — W_res^{(ℓ)}
    • bias :: (res_dims[ℓ],) — present only if use_bias[ℓ]=true
  • states_modifiers :: NTuple{L,Tuple} — per-layer tuples of modifier parameters (empty tuples if none).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims[L]) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • cells :: NTuple{L,NamedTuple} — states for each ESNCell.
  • states_modifiers :: NTuple{L,Tuple} — per-layer tuples of modifier states.
  • readout — states for LinearReadout.
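
Example

A minimal construction sketch; the widths and leak rates below are illustrative:

using ReservoirComputing

# Three reservoir layers with per-layer widths and leak rates.
model = DeepESN(3, [128, 96, 64], 3; leak_coefficient = [1.0, 0.9, 0.8])

# Uniform width, with the depth keyword instead of a vector.
model2 = DeepESN(3, 128, 3; depth = 3)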
source
ReservoirComputing.DelayESN — Type
DelayESN(in_dims, res_dims, out_dims, activation=tanh;
             num_input_delays=1, input_stride=1,
             num_state_delays=1, state_stride=1,
             leak_coefficient=1.0, init_reservoir=rand_sparse,
             init_input=scaled_rand, init_bias=zeros32,
             init_state=randn32, use_bias=false,
             states_modifiers=(), readout_activation=identity)

Echo State Network with both input and state delays (Fleddermann et al., 2025).

DelayESN composes:

  1. an internal DelayLayer applied to the input signal,
  2. a stateful ESNCell (reservoir) receiving the augmented input,
  3. a second internal DelayLayer applied to the reservoir state,
  4. zero or more additional states_modifiers applied after the state delay, and
  5. a LinearReadout mapping the final feature vector to outputs.

At each time step, the input u(t) is expanded into a delay-coordinate vector u_d(t). This drives the reservoir to produce state x(t). Finally, x(t) is expanded into a state delay-coordinate vector x_d(t) before passing to modifiers and readout.

Equations

\[\begin{aligned} \mathbf{u}_{\mathrm{d}}(t) &= \begin{bmatrix} \mathbf{u}(t) \\ \mathbf{u}(t-s_{in}) \\ \vdots \\ \mathbf{u}\!\bigl(t-D_{in}s_{in}\bigr) \end{bmatrix}, \qquad D_{in}=\text{num\_input\_delays},\ \ s_{in}=\text{input\_stride}, \\ \mathbf{x}(t) &= (1-\alpha)\, \mathbf{x}(t-1) + \alpha\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}_{\mathrm{d}}(t) + \mathbf{W}_r\, \mathbf{x}(t-1) + \mathbf{b} \right), \\ \mathbf{x}_{\mathrm{d}}(t) &= \begin{bmatrix} \mathbf{x}(t) \\ \mathbf{x}(t-s_{st}) \\ \vdots \\ \mathbf{x}\!\bigl(t-D_{st}s_{st}\bigr) \end{bmatrix}, \qquad D_{st}=\text{num\_state\_delays},\ \ s_{st}=\text{state\_stride}, \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left( \mathbf{x}_{\mathrm{d}}(t)\right), \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

Reservoir (passed to ESNCell):

  • leak_coefficient: Leak rate in (0, 1]. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used iff use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.

Input delay expansion:

  • num_input_delays: Number of past input steps to include. The effective input to the reservoir has size (num_input_delays + 1) * in_dims. Default: 1.
  • input_stride: Stride for the input delay buffer. Default: 1.

State delay expansion:

  • num_state_delays: Number of past reservoir states to include. The readout receives a vector of size (num_state_delays + 1) * res_dims. Default: 1.
  • state_stride: Stride for the state delay buffer. Default: 1.

Composition:

  • states_modifiers: A layer or collection of layers applied to the delayed reservoir features before the readout. These run after the internal state DelayLayer. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • input_delay — parameters for the input DelayLayer.
  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × ((num_input_delays + 1) * in_dims)) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for:
    1. the internal state DelayLayer, and
    2. any user-provided modifier layers (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × ((num_state_delays + 1) * res_dims)) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

States

  • input_delay — state for the input DelayLayer (buffer and clock).
  • reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for the internal state DelayLayer (its delay buffer and clock) and each additional modifier layer.
  • readout — states for LinearReadout (typically empty).
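
Example

A minimal construction sketch; the delay settings below are illustrative:

using ReservoirComputing

# Two extra input delays at stride 2, one extra state delay.
model = DelayESN(3, 200, 3;
    num_input_delays = 2, input_stride = 2,
    num_state_delays = 1, state_stride = 1)
# W_in is then (200 × (2 + 1) * 3) and the readout sees (1 + 1) * 200 features.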
source
ReservoirComputing.EIESN — Type
EIESN(in_dims, res_dims, out_dims, activation=tanh_fast;
    use_bias=true,
    exc_recurrence_scale=0.9, inh_recurrence_scale=0.5, exc_output_scale=1.0,
    inh_output_scale=1.0, init_reservoir=rand_sparse,
    init_input=scaled_rand, init_bias=zeros32,
    init_state=randn32,
    readout_activation=identity,
    state_modifiers=(),
    kwargs...)

Excitatory-Inhibitory Echo State Network (EIESN) (Panahi et al., 2025).

This model wraps EIESNCell.

Equations

\[\begin{aligned} \mathbf{x}(t) &= b_{\mathrm{ex}} \, \phi_{\mathrm{ex}}\!\left( \mathbf{W}_{\mathrm{in}} \mathbf{u}(t) + a_{\mathrm{ex}} \mathbf{A} \mathbf{x}(t-1) + \mathbf{\beta}_{\mathrm{ex}}\right) - b_{\mathrm{inh}} \, \phi_{\mathrm{inh}}\!\left( \mathbf{W}_{\mathrm{in}} \mathbf{u}(t) + a_{\mathrm{inh}} \mathbf{A} \mathbf{x}(t-1) + \mathbf{\beta}_{\mathrm{inh}}\right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for EIESNCell). Can be a single function or a tuple of (excitatory, inhibitory) activations. Default: tanh_fast.

Keyword arguments

  • use_bias: Enable/disable bias vectors inside the reservoir. Default: true.
  • exc_recurrence_scale: Excitatory recurrence scaling factor. Default: 0.9.
  • inh_recurrence_scale: Inhibitory recurrence scaling factor. Default: 0.5.
  • exc_output_scale: Excitatory output scaling factor. Default: 1.0.
  • inh_output_scale: Inhibitory output scaling factor. Default: 1.0.
  • init_reservoir: Initializer for the reservoir matrix. Default: rand_sparse.
  • init_input: Initializer for the input matrix. Default: scaled_rand.
  • init_bias: Initializer for the bias vectors. Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • readout_activation: Activation for the linear readout. Default: identity.
  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal EIESNCell.
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout.

States

  • reservoir — states for the internal EIESNCell (e.g. rng).
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
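
Example

A minimal construction sketch; the tuple activation and scales below are illustrative, and tanh_fast/sigmoid_fast are assumed to be in scope (e.g. via NNlib):

using ReservoirComputing

# Separate excitatory and inhibitory activations via a tuple.
model = EIESN(3, 300, 3, (tanh_fast, sigmoid_fast);
    exc_recurrence_scale = 0.9, inh_recurrence_scale = 0.5)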
source
ReservoirComputing.ES2N — Type
ES2N(in_dims, res_dims, out_dims, activation=tanh;
    proximity=1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
    init_bias=zeros32, init_state=randn32, use_bias=false,
    state_modifiers=(), readout_activation=identity,
    init_orthogonal=orthogonal)

Edge of Stability Echo State Network (ES2N) (Ceni and Gallicchio, 2025).

Equations

\[\begin{aligned} \mathbf{x}(t) &= (1-\beta)\, \mathbf{O}\, \mathbf{x}(t-1) + \beta\, \phi\!\left(\mathbf{W}_{\text{in}} \mathbf{u}(t) + \mathbf{W}_r \mathbf{x}(t-1) + \mathbf{b} \right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ES2NCell). Default: tanh.

Keyword arguments

  • proximity: Proximity coefficient β ∈ (0,1] weighting the nonlinear update against the orthogonal memory term. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_orthogonal: Initializer for O. Default: orthogonal.
  • init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.
  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal ES2NCell, including:
    • input_matrix :: (res_dims × in_dims) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • orthogonal_matrix :: (res_dims × res_dims) — O
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • reservoir — states for the internal ES2NCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
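
Example

A minimal construction sketch; the proximity value below is illustrative:

using ReservoirComputing

# Lower proximity keeps more of the orthogonal memory term O x(t-1).
model = ES2N(3, 300, 3; proximity = 0.3)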
source
ReservoirComputing.ESN — Type
ESN(in_dims, res_dims, out_dims, activation=tanh;
    leak_coefficient=1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
    init_bias=zeros32, init_state=randn32, use_bias=false,
    state_modifiers=(), readout_activation=identity)

Echo State Network (Jaeger and Haas, 2004).

ESN composes:

  1. a stateful ESNCell (reservoir),
  2. zero or more state_modifiers applied to the reservoir state, and
  3. a LinearReadout mapping reservoir features to outputs.

Equations

\[\begin{aligned} \mathbf{x}(t) &= (1-\alpha)\, \mathbf{x}(t-1) + \alpha\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}(t) + \mathbf{W}_r\, \mathbf{x}(t-1) + \mathbf{b} \right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

Reservoir (passed to ESNCell):

  • leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.

Composition:

  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × in_dims) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
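
Example

A minimal sketch of the construction and call pattern, assuming the LuxCore interface (setup producing (ps, st)) on which these layers are built; the dimensions and hyperparameters are illustrative:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
esn = ESN(3, 300, 3; leak_coefficient = 0.9)
ps, st = LuxCore.setup(rng, esn)

u = rand(Float32, 3, 1)   # one time step, batch of 1
y, st = esn(u, ps, st)    # y :: (3, 1); st carries the updated reservoir state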
source
ReservoirComputing.EuSN — Type
EuSN(in_dims, res_dims, out_dims, activation=tanh;
    leak_coefficient=1.0, diffusion = 1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
    init_bias=zeros32, init_state=randn32, use_bias=false,
    state_modifiers=(), readout_activation=identity)

Euler State Network (EuSN) (Gallicchio, 2024).

Equations

\[\begin{aligned} \mathbf{x}(t) &= \mathbf{x}(t-1) + \varepsilon\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}(t) + (\mathbf{W}_r - \gamma\, \mathbf{I})\, \mathbf{x}(t-1) + \mathbf{b} \right) \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right) \\ \mathbf{y}(t) &= \rho\!\left( \mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

  • leak_coefficient: Integration step size ε ∈ (0,1] in the Euler update above. Default: 1.0.
  • diffusion: Diffusion coefficient γ ∈ (0,1] (the recurrent matrix is shifted by −γ I). Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.
  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × in_dims) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
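
Example

A minimal construction sketch; the step size and diffusion values below are illustrative:

using ReservoirComputing

# Small Euler step size ε and diffusion γ.
model = EuSN(3, 300, 3; leak_coefficient = 0.1, diffusion = 0.01)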
source
ReservoirComputing.HybridESN — Type
HybridESN(km, km_dims, in_dims, res_dims, out_dims, [activation];
    state_modifiers=(), readout_activation=identity,
    include_collect=true, kwargs...)

Hybrid Echo State Network (Pathak et al., 2018).

HybridESN composes:

  1. a knowledge model km producing auxiliary features from the input,
  2. a stateful ESNCell that receives the concatenated input [km(x(t)); x(t)],
  3. zero or more state_modifiers applied to the reservoir state, and
  4. a LinearReadout mapping the combined features [km(x(t)); z(t)], where z(t) is the (possibly modified) reservoir state, to the output.

Arguments

  • km: Knowledge model applied to the input (e.g. a physical model, neural submodule, or differentiable function). May be a WrappedFunction or any callable layer.
  • km_dims: Output dimension of the knowledge model km.
  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

  • leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.
  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.
  • include_collect: Whether the readout should include collection mode. Default: true.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • knowledge_model — parameters of the knowledge model km.
  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × (in_dims + km_dims)) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × (res_dims + km_dims)) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

Created by initialstates(rng, hesn):

  • knowledge_model — states for the internal knowledge model.
  • reservoir — states for the internal ESNCell.
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
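
Example

A minimal construction sketch; the knowledge model below is a hypothetical stand-in for a physical prior, wrapped with WrappedFunction as mentioned above (a Lux-style layer wrapper):

using ReservoirComputing

prior(x) = 0.5f0 .* x   # hypothetical knowledge model; km_dims == in_dims == 3
model = HybridESN(WrappedFunction(prior), 3, 3, 300, 3)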
source
ReservoirComputing.InputDelayESN — Type
InputDelayESN(in_dims, res_dims, out_dims, activation=tanh;
              num_delays=1, stride=1, leak_coefficient=1.0,
              init_reservoir=rand_sparse, init_input=scaled_rand,
              init_bias=zeros32, init_state=randn32, use_bias=false,
              states_modifiers=(), readout_activation=identity)

Echo State Network with input delays (Fleddermann et al., 2025).

InputDelayESN composes:

  1. an internal DelayLayer applied to the input signal to build tapped-delay features,
  2. a stateful ESNCell (reservoir) receiving the augmented input,
  3. zero or more states_modifiers applied to the reservoir state, and
  4. a LinearReadout mapping the modified reservoir state to outputs.

At each time step, the input u(t) is expanded into a delay-coordinate vector that stacks the current and num_delays past inputs. This augmented signal is then used to update the reservoir state x(t).

Equations

\[\begin{aligned} \mathbf{u}_{\mathrm{d}}(t) &= \begin{bmatrix} \mathbf{u}(t) \\ \mathbf{u}(t-s) \\ \vdots \\ \mathbf{u}\!\bigl(t-Ds\bigr) \end{bmatrix}, \qquad D=\text{num\_delays},\ \ s=\text{stride}, \\ \mathbf{x}(t) &= (1-\alpha)\, \mathbf{x}(t-1) + \alpha\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}_{\mathrm{d}}(t) + \mathbf{W}_r\, \mathbf{x}(t-1) + \mathbf{b} \right), \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left(\mathbf{x}(t)\right), \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

Reservoir (passed to ESNCell):

  • leak_coefficient: Leak rate in (0, 1]. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used iff use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.

Delay expansion (on input):

  • num_delays: Number of past input states to include in the tapped-delay vector. The DelayLayer output has (num_delays + 1) * in_dims entries. Default: 1.
  • stride: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of stride. Default: 1.

Composition:

  • states_modifiers: A layer or collection of layers applied to the reservoir state before the readout. These run after the internal DelayLayer. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • input_delay — parameters for the internal DelayLayer.
  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × ((num_delays + 1) * in_dims)) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for the user-provided modifier layers (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

States

  • input_delay — state for the internal DelayLayer (its delay buffer and clock).
  • reservoir — states for the internal ESNCell (e.g. rng).
  • states_modifiers — states for the user-provided modifier layers.
  • readout — states for LinearReadout (typically empty).
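
Example

A minimal construction sketch; the delay settings below are illustrative:

using ReservoirComputing

# Feed the current input plus two past inputs to the reservoir.
model = InputDelayESN(3, 200, 3; num_delays = 2, stride = 1)
# W_in is then (200 × (2 + 1) * 3).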
source
ReservoirComputing.StateDelayESN — Type
StateDelayESN(in_dims, res_dims, out_dims, activation=tanh;
         num_delays=1, stride=1, leak_coefficient=1.0,
         init_reservoir=rand_sparse, init_input=scaled_rand,
         init_bias=zeros32, init_state=randn32, use_bias=false,
         state_modifiers=(), readout_activation=identity)

Echo State Network with state delays (Fleddermann et al., 2025).

StateDelayESN composes:

  1. a stateful ESNCell (reservoir),
  2. a DelayLayer applied to the reservoir state to build tapped-delay features,
  3. zero or more additional state_modifiers applied after the delay, and
  4. a LinearReadout mapping delayed reservoir features to outputs.

At each time step, the reservoir produces a state vector x(t) of length res_dims. The DelayLayer then constructs a feature vector that stacks x(t) together with num_delays past states, spaced according to stride, before passing it on to any further modifiers and the readout.

Equations

\[\begin{aligned} \mathbf{x}(t) &= (1-\alpha)\, \mathbf{x}(t-1) + \alpha\, \phi\!\left( \mathbf{W}_{\text{in}}\, \mathbf{u}(t) + \mathbf{W}_r\, \mathbf{x}(t-1) + \mathbf{b} \right), \\ \mathbf{x}_{\mathrm{d}}(t) &= \begin{bmatrix} \mathbf{x}(t) \\ \mathbf{x}(t-s) \\ \vdots \\ \mathbf{x}\!\bigl(t-Ds\bigr) \end{bmatrix}, \qquad D=\text{num\_delays},\ \ s=\text{stride}, \\ \mathbf{z}(t) &= \mathrm{Mods}\!\left( \mathbf{x}_{\mathrm{d}}(t)\right), \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{\text{out}}\, \mathbf{z}(t) + \mathbf{b}_{\text{out}} \right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

Reservoir (passed to ESNCell):

  • leak_coefficient: Leak rate in (0, 1]. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used iff use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.

Delay expansion:

  • num_delays: Number of past reservoir states to include in the tapped-delay vector. The DelayLayer output has (num_delays + 1) * res_dims entries (current state plus num_delays past states). Default: 1.
  • stride: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of stride. Default: 1.

Composition:

  • state_modifiers: A layer or collection of layers applied to the delayed reservoir features before the readout. These run after the internal DelayLayer. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × in_dims) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for:
    1. the internal DelayLayer, and
    2. any user-provided modifier layers (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × ((num_delays + 1) * res_dims)) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for the internal DelayLayer (its delay buffer and clock) and each additional modifier layer.
  • readout — states for LinearReadout (typically empty).
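
Example

A minimal construction sketch; the delay settings below are illustrative:

using ReservoirComputing

# Read out from the current state plus three past states.
model = StateDelayESN(3, 200, 3; num_delays = 3, stride = 1)
# The readout weight is then (3 × (3 + 1) * 200).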
source

Next Generation Reservoir Computing

ReservoirComputing.NGRC — Type
NGRC(in_dims, out_dims; num_delays=2, stride=1,
     features=(), include_input=true, init_delay=zeros32,
     readout_activation=identity, state_modifiers=(),
     ro_dims=nothing)

Next Generation Reservoir Computing (Gauthier et al., 2021).

NGRC composes:

  1. a DelayLayer applied directly to the input, producing a vector containing the current input and a fixed number of past inputs,
  2. a NonlinearFeaturesLayer that applies user-provided functions to this delayed vector and concatenates the results, and
  3. a LinearReadout mapping the resulting feature vector to outputs.

Arguments

  • in_dims: Input dimension.
  • out_dims: Output dimension.

Keyword arguments

  • num_delays: Number of past input vectors to include. The internal DelayLayer outputs a vector of length (num_delays + 1) * in_dims (current input plus num_delays past inputs). Default: 2.
  • stride: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of stride. Default: 1.
  • init_delay: Initializer (or tuple of initializers) for the delay history, passed to DelayLayer. Each initializer function is called as init(rng, in_dims, 1) to fill one delay column. Default: zeros32.
  • features: A function or tuple of functions (f₁, f₂, ...) used by NonlinearFeaturesLayer. Each f is called as f(x), where x is the delayed input vector. When ro_dims is not provided, each f is assumed to return a vector of the same length as x. Default: empty ().
  • include_input: Whether to include the raw delayed input vector itself as the first block of the feature vector (passed to NonlinearFeaturesLayer). Default: true.
  • state_modifiers: Extra layers applied after the NonlinearFeaturesLayer and before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.
  • ro_dims: Input dimension of the readout. If nothing (default), it is estimated under the assumption that each feature function returns a vector with the same length as the delayed input. In that case, ro_dims ≈ (num_delays + 1) * in_dims * n_blocks, where n_blocks is the number of concatenated vectors (original delayed input if include_input=true plus one block per feature function). If your feature functions change the length (e.g. constant features, higher-order polynomial expansions with cross terms), you should pass ro_dims explicitly.
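
A short construction sketch for the size bookkeeping above; the dimensions are illustrative, and quad is a user-defined feature function built from polynomial_monomials (documented below):

using ReservoirComputing

in_dims, num_delays = 2, 2
d = (num_delays + 1) * in_dims        # delayed vector length: 6
quad(x) = polynomial_monomials(x; degrees = 2:2)
n_quad = binomial(d + 1, 2)           # 21 distinct quadratic monomials in d variables

# quad changes the feature length, so ro_dims is passed explicitly:
model = NGRC(in_dims, in_dims; num_delays = num_delays,
    features = (quad,), ro_dims = d + n_quad)   # 6 linear + 21 quadratic = 27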

Inputs

  • x :: AbstractArray (in_dims, batch) or (in_dims,)

Returns

  • Output y :: (out_dims, batch) (or (out_dims,) for vector input).
  • Updated layer state (NamedTuple).
source
ReservoirComputing.polynomial_monomials — Function
polynomial_monomials(input_vector;
    degrees = 1:2)

Generate all unordered polynomial monomials of the entries in input_vector for the given set of degrees.

For each d in degrees, this function produces all degree-d monomials of the form

  • degree 1: x₁, x₂, …
  • degree 2: x₁², x₁x₂, x₁x₃, x₂², …
  • degree 3: x₁³, x₁²x₂, x₁x₂x₃, x₂³, …

where combinations are taken with repetition and in non-decreasing index order. This means that, for example, x₁x₂ and x₂x₁ are represented only once.

The returned vector is a flat list of all such products, in a deterministic order determined by the recursive enumeration.
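
For example, with a two-entry vector (the ordering shown follows the degree-by-degree description above):

using ReservoirComputing

polynomial_monomials([2.0, 3.0]; degrees = 1:2)
# -> [2.0, 3.0, 4.0, 6.0, 9.0]
#     x₁   x₂   x₁²  x₁x₂  x₂²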

Arguments

  • input_vector: Input vector whose entries define the variables used to build monomials.

Keyword arguments

  • degrees: An iterable of positive integers specifying which monomial degrees to generate. Degrees less than 1 are skipped. Default: 1:2.

Returns

  • output_monomials: a vector of the same type as input_vector containing all generated monomials, concatenated across the requested degrees, in a deterministic order.
source

Utilities

ReservoirComputing.resetcarry! — Function
resetcarry!(rng, rc::ReservoirComputer, st; init_carry=nothing)
resetcarry!(rng, rc::ReservoirComputer, ps, st; init_carry=nothing)

Reset (or set) the hidden-state carry of a model in the echo state network family.

If an existing carry is present in st.cell.carry, its leading dimension is used to infer the state size. Otherwise the reservoir output size is taken from rc.reservoir.cell.out_dims. When init_carry=nothing, the carry is cleared; the initializer from the struct construction will then be used. When a function is provided, it is called to create a new initial hidden state.

Arguments

  • rng: Random number generator (used if a new carry is sampled/created).
  • rc: A reservoir computing network model.
  • st: Current model states.
  • ps: Optional model parameters. Returned unchanged.

Keyword arguments

  • init_carry: Controls the initialization of the new carry.
    • nothing (default): remove/clear the carry (forces the cell to reinitialize from its own init_state on next use).
    • f: an initializer function following the WeightInitializers.jl convention, called to create the new initial hidden state.

Returns

  • resetcarry!(rng, rc, st; ...) -> st′: Updated states with st′.cell.carry set to nothing or (h0,).
  • resetcarry!(rng, rc, ps, st; ...) -> (ps, st′): Same as above, but also returns the unchanged ps for convenience.
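
Example

A minimal sketch, assuming the LuxCore interface (setup producing (ps, st)) and the zeros32 initializer from WeightInitializers.jl:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
esn = ESN(2, 50, 2)
ps, st = LuxCore.setup(rng, esn)

# ... drive the model through one sequence, then reset between sequences:
ps, st = resetcarry!(rng, esn, ps, st)                # clear; reinit from init_state on next use
st = resetcarry!(rng, esn, st; init_carry = zeros32)  # or set the carry explicitly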
source

Reservoir Computing with Cellular Automata

ReservoirComputing.RECA — Function
RECA(in_dims, out_dims, automaton;
    input_encoding=RandomMapping(),
    generations=8, state_modifiers=(),
    readout_activation=identity)

Construct a cellular-automata reservoir model.

At each time step the input vector is randomly embedded into a Cellular Automaton (CA) lattice, the CA is evolved for generations steps, and the flattened evolution (excluding the initial row) is used as the reservoir state. A LinearReadout maps these features to out_dims.

Note

This constructor is only available when the CellularAutomata.jl package is loaded.

Arguments

  • in_dims: Number of input features (rows of training data).
  • out_dims: Number of output features (rows of target data).
  • automaton: A CA rule/object from CellularAutomata.jl (e.g. DCA(90), DCA(30), …).

Keyword Arguments

  • input_encoding: Random embedding spec with fields permutations and expansion_size. Default is RandomMapping().
  • generations: Number of CA generations to evolve per time step. Default is 8.
  • state_modifiers: Optional tuple/vector of additional layers applied after the CA cell and before the readout (e.g., NLAT2(), Pad(1.0), custom transforms, etc.). Functions are wrapped automatically. Default is none.
  • readout_activation: Activation applied by the readout. Default is identity.
source

The input encodings are the equivalent of the input matrices of the ESNs. These are the available encodings:

ReservoirComputing.RandomMapping — Type
RandomMapping(permutations, expansion_size)
RandomMapping(permutations; expansion_size=40)
RandomMapping(;permutations=8, expansion_size=40)

Specify the random input embedding used by the cellular automata reservoir. At each time step, the input vector of length in_dims is randomly placed into a larger 1D lattice of length expansion_size; this is repeated for permutations independent lattices (blocks). The concatenation of these blocks forms the CA initial condition, of length ca_size = expansion_size * permutations. The details of this implementation can be found in Nichele and Molund (2017).

Arguments

  • permutations: number of independent random maps (blocks). Larger values increase feature diversity and ca_size proportionally.
  • expansion_size: width of each block (the size of a single CA lattice). Larger values increase the spatial resolution and both ca_size and states_size.

Usage

This is a configuration object; it does not perform the mapping by itself. Create the concrete tables with create_encoding and pass them to RECACell:

using ReservoirComputing, CellularAutomata, Random

in_dims = 4
generations = 8
mapping = RandomMapping(permutations = 8, expansion_size = 40)

enc = ReservoirComputing.create_encoding(mapping, in_dims, generations)  # → RandomMaps
cell = RECACell(DCA(90), enc)

rc = ReservoirChain(
    StatefulLayer(cell),
    LinearReadout(enc.states_size => in_dims; include_collect = true)
)

Or let RECA do this for you:

rc = RECA(4, 4, DCA(90);
    input_encoding = RandomMapping(permutations = 8, expansion_size = 40),
    generations = 8)
source