Models

Echo State Networks

ReservoirComputing.ES2NType
ES2N(in_dims, res_dims, out_dims, activation=tanh;
    proximity=1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
    init_bias=zeros32, init_state=randn32, use_bias=false,
    state_modifiers=(), readout_activation=identity,
    init_orthogonal=orthogonal)

Edge of Stability Echo State Network (ES2N) (Ceni and Gallicchio, 2025).

Equations

\[\begin{aligned} x(t) &= \beta\, \phi\!\left( \rho\, \mathbf{W}_r x(t-1) + \omega\, \mathbf{W}_{in} u(t) \right) + (1-\beta)\, \mathbf{O}\, x(t-1) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ES2NCell). Default: tanh.

Keyword arguments

  • proximity: Proximity coefficient β ∈ (0,1], weighting the nonlinear update against the orthogonal memory term in the equation above. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_orthogonal: Initializer for O. Default: orthogonal.
  • init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.
  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal ES2NCell, including:
    • input_matrix :: (res_dims × in_dims) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • orthogonal_matrix :: (res_dims × res_dims) — O
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • reservoir — states for the internal ES2NCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
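
Example

A minimal construction sketch, assuming the LuxCore.setup API for parameter/state initialization; the 2/100/2 sizes and random input are illustrative:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
# proximity < 1 blends the nonlinear update with the orthogonal memory term
model = ES2N(2, 100, 2; proximity = 0.7)
ps, st = LuxCore.setup(rng, model)

x = rand(Float32, 2, 8)     # (in_dims, batch)
y, st = model(x, ps, st)    # y :: (out_dims, batch); st carries the state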
source
ReservoirComputing.ESNType
ESN(in_dims, res_dims, out_dims, activation=tanh;
    leak_coefficient=1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
    init_bias=zeros32, init_state=randn32, use_bias=false,
    state_modifiers=(), readout_activation=identity)

Echo State Network (ESN): a reservoir (recurrent) layer followed by an optional sequence of state-modifier layers and a linear readout.

ESN composes:

  1. a stateful ESNCell (reservoir),
  2. zero or more state_modifiers applied to the reservoir state, and
  3. a LinearReadout mapping reservoir features to outputs.

Equations

For input \mathbf{x}(t) ∈ \mathbb{R}^{in\_dims}, reservoir state \mathbf{h}(t) ∈ \mathbb{R}^{res\_dims}, and output \mathbf{y}(t) ∈ \mathbb{R}^{out\_dims}:

\[\begin{aligned} \tilde{\mathbf{h}}(t) &= \phi\!\left(\mathbf{W}_{in}\,\mathbf{x}(t) + \mathbf{W}_{res}\,\mathbf{h}(t-1) + \mathbf{b}\right) \\ \mathbf{h}(t) &= (1-\alpha)\,\mathbf{h}(t-1) + \alpha\,\tilde{\mathbf{h}}(t) \\ \mathbf{z}(t) &= \psi\!\left(\mathrm{Mods}\big(\mathbf{h}(t)\big)\right) \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{out}\,\mathbf{z}(t) + \mathbf{b}_{out}\right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

Reservoir (passed to ESNCell):

  • leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.

Composition:

  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × in_dims) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
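
Example

A minimal single-step sketch, assuming the LuxCore.setup API; sizes are illustrative:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
esn = ESN(1, 200, 1; leak_coefficient = 0.9)
ps, st = LuxCore.setup(rng, esn)

x1 = rand(Float32, 1, 1)
y1, st = esn(x1, ps, st)    # st now carries the reservoir state h(1)
x2 = rand(Float32, 1, 1)
y2, st = esn(x2, ps, st)    # the second step continues from the carried state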
source
ReservoirComputing.EuSNType
EuSN(in_dims, res_dims, out_dims, activation=tanh;
    leak_coefficient=1.0, diffusion = 1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
    init_bias=zeros32, init_state=randn32, use_bias=false,
    state_modifiers=(), readout_activation=identity)

Euler State Network (EuSN) (Gallicchio, 2024).

Equations

\[\begin{aligned} \mathbf{h}(t) = \mathbf{h}(t-1) + \varepsilon \tanh\!\left((\mathbf{W}_h - \gamma \mathbf{I})\mathbf{h}(t-1) + \mathbf{W}_x \mathbf{x}(t) + \mathbf{b}\right) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

  • leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
  • diffusion: Diffusion coefficient γ ∈ (0,1] in the update above. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.
  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × in_dims) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
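
Example

A minimal construction sketch, assuming the LuxCore.setup API; the sizes and the small leak_coefficient/diffusion values are illustrative:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
eusn = EuSN(3, 150, 3; leak_coefficient = 0.1, diffusion = 0.1)
ps, st = LuxCore.setup(rng, eusn)
y, st = eusn(rand(Float32, 3, 4), ps, st)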
source
ReservoirComputing.DeepESNType
DeepESN(in_dims, res_dims, out_dims,
        activation=tanh; depth=2, leak_coefficient=1.0, init_reservoir=rand_sparse,
        init_input=scaled_rand, init_bias=zeros32, init_state=randn32,
        use_bias=false, state_modifiers=(), readout_activation=identity)

Deep Echo State Network (DeepESN): a stack of stateful ESNCell layers (optionally with per-layer state modifiers) followed by a linear readout.

DeepESN composes, for L = length(res_dims) layers:

  1. a sequence of stateful ESNCell with widths res_dims[ℓ],
  2. zero or more per-layer state_modifiers[ℓ] applied to the layer's state, and
  3. a final LinearReadout from the last layer's features to the output.

Equations

For input \mathbf{x}(t) ∈ \mathbb{R}^{in\_dims}, per-layer reservoir states \mathbf{h}^{(\ell)}(t) ∈ \mathbb{R}^{res\_dims[\ell]} (\ell = 1..L), and output \mathbf{y}(t) ∈ \mathbb{R}^{out\_dims}:

\[\begin{aligned} \tilde{\mathbf{h}}^{(1)}(t) &= \phi_1\!\left( \mathbf{W}^{(1)}_{in}\,\mathbf{x}(t) + \mathbf{W}^{(1)}_{res}\,\mathbf{h}^{(1)}(t-1) + \mathbf{b}^{(1)}\right) \\ \mathbf{h}^{(1)}(t) &= (1-\alpha_1)\,\mathbf{h}^{(1)}(t-1) + \alpha_1\,\tilde{\mathbf{h}}^{(1)}(t) \\ \mathbf{u}^{(1)}(t) &= \mathrm{Mods}_1\!\big(\mathbf{h}^{(1)}(t)\big) \\ \tilde{\mathbf{h}}^{(\ell)}(t) &= \phi_\ell\!\left( \mathbf{W}^{(\ell)}_{in}\,\mathbf{u}^{(\ell-1)}(t) + \mathbf{W}^{(\ell)}_{res}\,\mathbf{h}^{(\ell)}(t-1) + \mathbf{b}^{(\ell)}\right), \quad \ell=2..L \\ \mathbf{h}^{(\ell)}(t) &= (1-\alpha_\ell)\,\mathbf{h}^{(\ell)}(t-1) + \alpha_\ell\,\tilde{\mathbf{h}}^{(\ell)}(t), \quad \ell=2..L \\ \mathbf{u}^{(\ell)}(t) &= \mathrm{Mods}_\ell\!\big(\mathbf{h}^{(\ell)}(t)\big), \quad \ell=2..L \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{out}\,\mathbf{u}^{(L)}(t) + \mathbf{b}_{out}\right) \end{aligned}\]

Where

  • \mathbf{x}(t) ∈ ℝ^{in_dims × batch} — input at time t.

  • \mathbf{h}^{(\ell)}(t) ∈ ℝ^{res_dims[ℓ] × batch} — hidden state of layer ℓ.

  • \tilde{\mathbf{h}}^{(\ell)}(t) — candidate state before leaky mixing.

  • \mathbf{u}^{(\ell)}(t) — features after applying the ℓ-th state_modifiers (identity if none).

  • \mathbf{y}(t) ∈ ℝ^{out_dims × batch} — network output.

  • \mathbf{W}^{(\ell)}_{in} ∈ ℝ^{res_dims[ℓ] × in_size[ℓ]} — input matrix at layer ℓ (in_size[1]=in_dims, in_size[ℓ]=res_dims[ℓ-1] for ℓ>1).

  • \mathbf{W}^{(\ell)}_{res} ∈ ℝ^{res_dims[ℓ] × res_dims[ℓ]} — reservoir matrix at layer ℓ.

  • \mathbf{b}^{(\ell)} ∈ ℝ^{res_dims[ℓ] × 1} — reservoir bias (broadcast over batch), present iff use_bias[ℓ]=true.

  • \mathbf{W}_{out} ∈ ℝ^{out_dims × res_dims[L]} — readout matrix.

  • \mathbf{b}_{out} ∈ ℝ^{out_dims × 1} — readout bias (if used by the readout).

  • \phi_\ell — activation of layer ℓ (activation[ℓ], default tanh).

  • \alpha_\ell ∈ (0,1] — leak coefficient of layer ℓ (leak_coefficient[ℓ]).

  • \mathrm{Mods}_\ell(·) — composition of modifiers for layer ℓ (may be empty).

  • \rho — readout activation (readout_activation, default identity).

Arguments

  • in_dims: Input dimension.
  • res_dims: Vector of reservoir (hidden) dimensions per layer; its length sets the depth L.
  • out_dims: Output dimension.
  • activation: Reservoir activation(s). Either a single function (broadcast to all layers) or a vector/tuple of length L. Default: tanh.

Keyword arguments

Per-layer reservoir options (passed to each ESNCell):

  • leak_coefficient: Leak rate(s) α_ℓ ∈ (0,1]. Scalar or length-L collection. Default: 1.0.
  • init_reservoir: Initializer(s) for W_res^{(ℓ)}. Scalar or length-L. Default: rand_sparse.
  • init_input: Initializer(s) for W_in^{(ℓ)}. Scalar or length-L. Default: scaled_rand.
  • init_bias: Initializer(s) for reservoir bias (used iff use_bias[ℓ]=true). Scalar or length-L. Default: zeros32.
  • init_state: Initializer(s) used when an external state is not provided. Scalar or length-L. Default: randn32.
  • use_bias: Whether each reservoir uses a bias term. Boolean scalar or length-L. Default: false.
  • depth: Depth of the DeepESN. If the reservoir size is given as a number instead of a vector, this parameter controls the depth of the model. Default is 2.

Composition:

  • state_modifiers: Per-layer modifier(s) applied to each layer’s state before it feeds into the next layer (and the readout for the last layer). Accepts nothing, a single layer, a vector/tuple of length L, or per-layer collections. Defaults to no modifiers.
  • readout_activation: Activation for the final linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple) containing states for all cells, modifiers, and readout.

Parameters

  • cells :: NTuple{L,NamedTuple} — parameters for each ESNCell, including:
    • input_matrix :: (res_dims[ℓ] × in_size[ℓ]) — W_in^{(ℓ)}
    • reservoir_matrix :: (res_dims[ℓ] × res_dims[ℓ]) — W_res^{(ℓ)}
    • bias :: (res_dims[ℓ],) — present only if use_bias[ℓ]=true
  • states_modifiers :: NTuple{L,Tuple} — per-layer tuples of modifier parameters (empty tuples if none).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × res_dims[L]) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • cells :: NTuple{L,NamedTuple} — states for each ESNCell.
  • states_modifiers :: NTuple{L,Tuple} — per-layer tuples of modifier states.
  • readout — states for LinearReadout.
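
Example

A sketch of the two ways to set the depth, assuming the LuxCore.setup API; all sizes are illustrative:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
# explicit per-layer widths: depth L = 2, with per-layer leak rates
deep = DeepESN(2, [128, 64], 2; leak_coefficient = (1.0, 0.5))
# scalar width plus the depth keyword: three layers of 100 units each
deep3 = DeepESN(2, 100, 2; depth = 3)

ps, st = LuxCore.setup(rng, deep)
y, st = deep(rand(Float32, 2, 4), ps, st)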
source
ReservoirComputing.DelayESNType
DelayESN(in_dims, res_dims, out_dims, activation=tanh;
         num_delays=1, stride=1, leak_coefficient=1.0,
         init_reservoir=rand_sparse, init_input=scaled_rand,
         init_bias=zeros32, init_state=randn32, use_bias=false,
         state_modifiers=(), readout_activation=identity)

Echo State Network whose reservoir state is first passed through a delay feature expansion before the readout. This implements a state-delayed ESN (Fleddermann et al., 2025), where the readout sees the current reservoir state together with a fixed number of its past values.

DelayESN composes:

  1. a stateful ESNCell (reservoir),
  2. a DelayLayer applied to the reservoir state to build tapped-delay features,
  3. zero or more additional state_modifiers applied after the delay, and
  4. a LinearReadout mapping delayed reservoir features to outputs.

At each time step, the reservoir produces a state vector h(t) of length res_dims. The DelayLayer then constructs a feature vector that stacks h(t) together with num_delays past states, spaced according to stride, before passing it on to any further modifiers and the readout.

Arguments

  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

Reservoir (passed to ESNCell):

  • leak_coefficient: Leak rate in (0, 1]. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used iff use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.

Delay expansion:

  • num_delays: Number of past reservoir states to include in the tapped-delay vector. The DelayLayer output has (num_delays + 1) * res_dims entries (current state plus num_delays past states). Default: 1.
  • stride: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of stride. Default: 1.

Composition:

  • state_modifiers: A layer or collection of layers applied to the delayed reservoir features before the readout. These run after the internal DelayLayer. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × in_dims) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for:
    1. the internal DelayLayer, and
    2. any user-provided modifier layers (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × ((num_delays + 1) * res_dims)) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

  • reservoir — states for the internal ESNCell (e.g. rng used to sample initial hidden states).
  • states_modifiers — a Tuple with states for the internal DelayLayer (its delay buffer and clock) and each additional modifier layer.
  • readout — states for LinearReadout (typically empty).
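
Example

A sketch of the delay-expansion arithmetic, assuming the LuxCore.setup API; sizes are illustrative:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
# num_delays = 2 → the readout sees (2 + 1) * 100 = 300 delayed features
dly = DelayESN(1, 100, 1; num_delays = 2, stride = 1)
ps, st = LuxCore.setup(rng, dly)
y, st = dly(rand(Float32, 1, 1), ps, st)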
source
ReservoirComputing.HybridESNType
HybridESN(km, km_dims, in_dims, res_dims, out_dims, [activation];
    state_modifiers=(), readout_activation=identity,
    include_collect=true, kwargs...)

Hybrid Echo State Network (HybridESN): an Echo State Network augmented with a knowledge model whose outputs are concatenated to the ESN’s input and used throughout the reservoir and readout computations.

HybridESN composes:

  1. a knowledge model km producing auxiliary features from the input,
  2. a stateful ESNCell that receives the concatenated input [km(x(t)); x(t)],
  3. zero or more state_modifiers applied to the reservoir state, and
  4. a LinearReadout mapping the combined features [km(x(t)); h*(t)] to the output.

Arguments

  • km: Knowledge model applied to the input (e.g. a physical model, neural submodule, or differentiable function). May be a WrappedFunction or any callable layer.
  • km_dims: Output dimension of the knowledge model km.
  • in_dims: Input dimension.
  • res_dims: Reservoir (hidden state) dimension.
  • out_dims: Output dimension.
  • activation: Reservoir activation (for ESNCell). Default: tanh.

Keyword arguments

  • leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
  • init_reservoir: Initializer for W_res. Default: rand_sparse.
  • init_input: Initializer for W_in. Default: scaled_rand.
  • init_bias: Initializer for reservoir bias (used if use_bias=true). Default: zeros32.
  • init_state: Initializer used when an external state is not provided. Default: randn32.
  • use_bias: Whether the reservoir uses a bias term. Default: false.
  • state_modifiers: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.
  • include_collect: Whether the readout operates in collection mode (passed through to LinearReadout's include_collect). Default: true.

Inputs

  • x :: AbstractArray (in_dims, batch)

Returns

  • Output y :: (out_dims, batch).
  • Updated layer state (NamedTuple).

Parameters

  • knowledge_model — parameters of the knowledge model km.
  • reservoir — parameters of the internal ESNCell, including:
    • input_matrix :: (res_dims × (in_dims + km_dims)) — W_in
    • reservoir_matrix :: (res_dims × res_dims) — W_res
    • bias :: (res_dims,) — present only if use_bias=true
  • states_modifiers — a Tuple with parameters for each modifier layer (may be empty).
  • readout — parameters of LinearReadout, typically:
    • weight :: (out_dims × (res_dims + km_dims)) — W_out
    • bias :: (out_dims,) — b_out (if the readout uses bias)

Exact field names for modifiers/readout follow their respective layer definitions.

States

Created by initialstates(rng, hesn):

  • knowledge_model — states for the internal knowledge model.
  • reservoir — states for the internal ESNCell.
  • states_modifiers — a Tuple with states for each modifier layer.
  • readout — states for LinearReadout.
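
Example

A sketch with a hypothetical knowledge model: a hand-crafted analytic prior wrapped as a callable layer. WrappedFunction availability, km_dims = 2, and the other sizes are assumptions for illustration:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
# hypothetical prior: two summary features per input column
km = WrappedFunction(x -> vcat(sum(abs2, x; dims = 1), sum(x; dims = 1)))

hesn = HybridESN(km, 2, 3, 200, 3)
ps, st = LuxCore.setup(rng, hesn)
y, st = hesn(rand(Float32, 3, 4), ps, st)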
source

Next generation reservoir computing

ReservoirComputing.NGRCType
NGRC(in_dims, out_dims; num_delays=2, stride=1,
     features=(), include_input=true, init_delay=zeros32,
     readout_activation=identity, state_modifiers=(),
     ro_dims=nothing)

Next Generation Reservoir Computing (NGRC) / NVAR-style model (Gauthier et al., 2021): a tapped-delay embedding of the input, followed by user-defined nonlinear feature maps and a linear readout. This is a "reservoir-free" architecture where all dynamics come from explicit input delays rather than a recurrent state.

NGRC composes:

  1. a DelayLayer applied directly to the input, producing a vector containing the current input and a fixed number of past inputs,
  2. a NonlinearFeaturesLayer that applies user-provided functions to this delayed vector and concatenates the results, and
  3. a LinearReadout mapping the resulting feature vector to outputs.

Internally, NGRC is represented as a ReservoirComputer chaining the DelayLayer, the NonlinearFeaturesLayer, any state_modifiers, and the LinearReadout.

Arguments

  • in_dims: Input dimension.
  • out_dims: Output dimension.

Keyword arguments

  • num_delays: Number of past input vectors to include. The internal DelayLayer outputs a vector of length (num_delays + 1) * in_dims (current input plus num_delays past inputs). Default: 2.
  • stride: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of stride. Default: 1.
  • init_delay: Initializer (or tuple of initializers) for the delay history, passed to DelayLayer. Each initializer function is called as init(rng, in_dims, 1) to fill one delay column. Default: zeros32.
  • features: A function or tuple of functions (f₁, f₂, ...) used by NonlinearFeaturesLayer. Each f is called as f(x) where x is the delayed input vector. By default it is assumed that each f returns a vector of the same length as x when ro_dims is not provided. Default: empty ().
  • include_input: Whether to include the raw delayed input vector itself as the first block of the feature vector (passed to NonlinearFeaturesLayer). Default: true.
  • state_modifiers: Extra layers applied after the NonlinearFeaturesLayer and before the readout. Accepts a single layer, an AbstractVector, or a Tuple. Default: empty ().
  • readout_activation: Activation for the linear readout. Default: identity.
  • ro_dims: Input dimension of the readout. If nothing (default), it is estimated under the assumption that each feature function returns a vector with the same length as the delayed input. In that case, ro_dims ≈ (num_delays + 1) * in_dims * n_blocks, where n_blocks is the number of concatenated vectors (original delayed input if include_input=true plus one block per feature function). If your feature functions change the length (e.g. constant features, higher-order polynomial expansions with cross terms), you should pass ro_dims explicitly.

Inputs

  • x :: AbstractArray (in_dims, batch) or (in_dims,)

Returns

  • Output y :: (out_dims, batch) (or (out_dims,) for vector input).
  • Updated layer state (NamedTuple).
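
Example

A sketch pairing NGRC with polynomial_monomials (documented below). Since pure degree-2 monomials change the feature length, ro_dims is passed explicitly, following the arithmetic described above; all sizes are illustrative and the LuxCore.setup API is assumed:

using ReservoirComputing, Random
import LuxCore

in_dims, num_delays = 3, 2
d = (num_delays + 1) * in_dims          # delayed-input length: 9
quad = x -> polynomial_monomials(x; degrees = 2:2)

# include_input=true keeps the d raw delayed entries; the quadratic block
# adds binomial(d + 1, 2) = 45 monomials, so ro_dims = 9 + 45 = 54
ngrc = NGRC(in_dims, in_dims; num_delays = num_delays,
    features = (quad,), include_input = true,
    ro_dims = d + binomial(d + 1, 2))

ps, st = LuxCore.setup(Random.default_rng(), ngrc)
y, st = ngrc(rand(Float32, in_dims), ps, st)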
source
ReservoirComputing.polynomial_monomialsFunction
polynomial_monomials(input_vector;
    degrees = 1:2)

Generate all unordered polynomial monomials of the entries in input_vector for the given set of degrees.

For each d in degrees, this function produces all degree-d monomials of the form

  • degree 1: x₁, x₂, …
  • degree 2: x₁², x₁x₂, x₁x₃, x₂², …
  • degree 3: x₁³, x₁²x₂, x₁x₂x₃, x₂³, …

where combinations are taken with repetition and in non-decreasing index order. This means that, for example, x₁x₂ and x₂x₁ are represented only once.

The returned vector is a flat list of all such products, in a deterministic order determined by the recursive enumeration.

Arguments

  • input_vector: Input vector whose entries define the variables used to build monomials.

Keyword arguments

  • degrees: An iterable of positive integers specifying which monomial degrees to generate. Each degree less than 1 is skipped. Default: 1:2.

Returns

  • output_monomials: a vector of the same type as input_vector containing all generated monomials, concatenated across the requested degrees, in a deterministic order.
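
Example

A small worked example; the grouping by degree follows the description above, while the exact ordering is determined by the enumeration:

v = [2.0, 3.0]
m = polynomial_monomials(v; degrees = 1:2)
# degree 1: 2.0 (x₁), 3.0 (x₂)
# degree 2: 4.0 (x₁²), 6.0 (x₁x₂), 9.0 (x₂²)
length(m)   # 5 = 2 linear + 3 quadratic monomials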
source

Utilities

ReservoirComputing.resetcarry!Function
resetcarry!(rng, rc::ReservoirComputer, st; init_carry=nothing)
resetcarry!(rng, rc::ReservoirComputer, ps, st; init_carry=nothing)

Reset (or set) the hidden-state carry of a model in the echo state network family.

If an existing carry is present in st.cell.carry, its leading dimension is used to infer the state size. Otherwise the reservoir output size is taken from rc.reservoir.cell.out_dims. When init_carry=nothing, the carry is cleared and the cell falls back to the init_state initializer chosen at construction on its next call. When a function is provided, it is called to create the new initial hidden state.

Arguments

  • rng: Random number generator (used if a new carry is sampled/created).
  • rc: A reservoir computing network model.
  • ps: Optional model parameters (second method only). Returned unchanged.
  • st: Current model states.

Keyword arguments

  • init_carry: Controls the initialization of the new carry.
    • nothing (default): remove/clear the carry (forces the cell to reinitialize from its own init_state on next use).
    • f: a function following the WeightInitializers.jl initializer convention; it is called to create the new initial hidden state.

Returns

  • resetcarry!(rng, rc, st; ...) -> st′: Updated states with st′.cell.carry set to nothing or (h0,).
  • resetcarry!(rng, rc, ps, st; ...) -> (ps, st′): Same as above, but also returns the unchanged ps for convenience.
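
Example

A sketch for separating sequences, assuming the LuxCore.setup API; the ESN sizes are illustrative and zeros32 is the WeightInitializers.jl-style initializer used in the constructor defaults:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
esn = ESN(1, 100, 1)
ps, st = LuxCore.setup(rng, esn)
y, st = esn(rand(Float32, 1, 1), ps, st)

# clear the carry: the cell reinitializes from its init_state on the next call
st = resetcarry!(rng, esn, st)
# or set a deterministic zero start instead
st = resetcarry!(rng, esn, st; init_carry = zeros32)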
source

Reservoir Computing with Cellular Automata

ReservoirComputing.RECAFunction
RECA(in_dims, out_dims, automaton;
    input_encoding=RandomMapping(),
    generations=8, state_modifiers=(),
    readout_activation=identity)

Construct a cellular-automata reservoir model.

At each time step the input vector is randomly embedded into a Cellular Automaton (CA) lattice, the CA is evolved for generations steps, and the flattened evolution (excluding the initial row) is used as the reservoir state. A LinearReadout maps these features to out_dims.

Note

This constructor is only available when the CellularAutomata.jl package is loaded.

Arguments

  • in_dims: Number of input features (rows of training data).
  • out_dims: Number of output features (rows of target data).
  • automaton: A CA rule/object from CellularAutomata.jl (e.g. DCA(90), DCA(30), …).

Keyword Arguments

  • input_encoding: Random embedding spec with fields permutations and expansion_size. Default is RandomMapping().
  • generations: Number of CA generations to evolve per time step. Default is 8.
  • state_modifiers: Optional tuple/vector of additional layers applied after the CA cell and before the readout (e.g., NLAT2(), Pad(1.0), custom transforms, etc.). Functions are wrapped automatically. Default is none.
  • readout_activation: Activation applied by the readout. Default is identity.
source

The input encodings are the equivalent of the input matrices of ESNs. The following encodings are available:

ReservoirComputing.RandomMappingType
RandomMapping(permutations, expansion_size)
RandomMapping(permutations; expansion_size=40)
RandomMapping(;permutations=8, expansion_size=40)

Specify the random input embedding used by the Cellular Automata reservoir. At each time step, the input vector of length in_dims is randomly placed into a larger 1D lattice of length expansion_size, and this is repeated for permutations independent lattices (blocks). The concatenation of these blocks forms the CA initial condition of length ca_size = expansion_size * permutations. Details of this implementation can be found in Nichele and Molund (2017).

Arguments

  • permutations: number of independent random maps (blocks). Larger values increase feature diversity and ca_size proportionally.
  • expansion_size: width of each block (the size of a single CA lattice). Larger values increase the spatial resolution and both ca_size and states_size.

Usage

This is a configuration object; it does not perform the mapping by itself. Create the concrete tables with create_encoding and pass them to RECACell:

using ReservoirComputing, CellularAutomata, Random

in_dims = 4
generations = 8
mapping = RandomMapping(permutations = 8, expansion_size = 40)

enc = ReservoirComputing.create_encoding(mapping, in_dims, generations)  # → RandomMaps
cell = RECACell(DCA(90), enc)

rc = ReservoirChain(
    StatefulLayer(cell),
    LinearReadout(enc.states_size => in_dims; include_collect = true)
)

Or let RECA do this for you:

rc = RECA(4, 4, DCA(90);
    input_encoding = RandomMapping(permutations = 8, expansion_size = 40),
    generations = 8)
source