Layers

Base Layers

ReservoirComputing.ReservoirComputerType
ReservoirComputer(reservoir, states_modifiers, readout)

Generic reservoir-computing container that wires together:

  1. a reservoir (any Lux-compatible layer producing features),
  2. zero or more states_modifiers applied sequentially to the reservoir features,
  3. a readout layer (typically LinearReadout).

The container exposes a standard (x, ps, st) -> (y, st′) interface and utility functions to initialize parameters/states, stream sequences to collect features, and install trained readout weights.

Arguments

  • reservoir: a layer that consumes inputs and produces feature vectors.
  • states_modifiers: a tuple (or vector converted to Tuple) of layers applied after the reservoir (may be empty).
  • readout: the final trainable layer mapping features to outputs.

Inputs

  • x: input to the reservoir (shape determined by the reservoir).
  • ps: reservoir computing parameters.
  • st: reservoir computing states.

Returns

  • (y, st′) where y is the readout output and st′ contains the updated states of the reservoir, modifiers, and readout.
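
Example

A minimal sketch of assembling and calling a ReservoirComputer. It reuses the ESNCell, StatefulLayer, NLAT2, and LinearReadout layers documented on this page; initialparameters/initialstates are LuxCore's standard entry points, and the input shape is illustrative.

using ReservoirComputing, Random
rng = Random.default_rng()

rc = ReservoirComputer(
    StatefulLayer(ESNCell(3 => 300)),  # reservoir producing 300-dim features
    (NLAT2(),),                        # states_modifiers, applied in order
    LinearReadout(300 => 3)            # trainable readout
)

ps = initialparameters(rng, rc)
st = initialstates(rng, rc)
y, st = rc(rand(Float32, 3, 1), ps, st)  # standard (x, ps, st) -> (y, st′) call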
source
ReservoirComputing.ReservoirChainType
ReservoirChain(layers...; name=nothing)
ReservoirChain(xs::AbstractVector; name=nothing)
ReservoirChain(nt::NamedTuple; name=nothing)
ReservoirChain(; name=nothing, kwargs...)

A lightweight, Lux-compatible container that composes a sequence of layers and executes them in order. The implementation of ReservoirChain is equivalent to Lux's own Chain.

Construction

You can build a chain from:

  • Positional layers: ReservoirChain(l1, l2, ...)
  • A vector of layers: ReservoirChain([l1, l2, ...])
  • A named tuple of layers: ReservoirChain((; layer_a=l1, layer_b=l2))
  • Keywords (sugar for a named tuple): ReservoirChain(; layer_a=l1, layer_b=l2)

In all cases, function objects are automatically wrapped via WrappedFunction so they can participate like regular layers. If a LinearReadout with include_collect=true is present, the chain automatically inserts a Collect layer immediately before that readout.

Use name to optionally tag the chain instance.

Inputs

(x, ps, st) where:

  • x: input to the first layer.
  • ps: parameters as a named tuple with the same fields and order as the chain's layers.
  • st: states as a named tuple with the same fields and order as the chain's layers.

The call (c::ReservoirChain)(x, ps, st) forwards x through each layer: (x, ps_i, st_i) -> (x_next, st_i′) and returns the final output and the updated states for every layer.

Returns

  • (y, st′) where y is the output of the last layer and st′ is a named tuple collecting the updated states for each layer.

Parameters

  • A NamedTuple whose fields correspond 1:1 with the layers. Each field holds the parameters for that layer.
  • Field names are generated as :layer_1, :layer_2, ... when constructed positionally, or preserved when you pass a NamedTuple/keyword constructor.

States

  • A NamedTuple whose fields correspond 1:1 with the layers. Each field holds the state for that layer.

Layer access & indexing

  • c[i]: get the i-th layer (1-based).
  • c[indices]: return a new ReservoirChain formed by selecting a subset of layers.
  • getproperty(c, :layer_k): access layer k by its generated/explicit name.
  • length(c), firstindex(c), lastindex(c): standard collection interfaces.

Notes

  • Function wrapping: Any plain Function in the constructor is wrapped as WrappedFunction(f). Non-layer, non-function objects will error.
  • Auto-collect for readouts: When a LinearReadout has include_collect=true, the constructor expands it to (Collect(), readout) so that downstream tooling can capture features consistently.
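
Example

A short sketch of the constructor forms and layer access. The layer choices are illustrative; the plain function is wrapped as a WrappedFunction, and the readout triggers the auto-inserted Collect described in the notes above.

using ReservoirComputing

cell = StatefulLayer(ESNCell(2 => 50))
ro   = LinearReadout(50 => 2)

c1 = ReservoirChain(cell, x -> abs2.(x), ro)         # positional; the function is wrapped
c2 = ReservoirChain([cell, ro])                      # from a vector of layers
c3 = ReservoirChain((; reservoir=cell, readout=ro))  # named tuple preserves field names
c4 = ReservoirChain(; reservoir=cell, readout=ro)    # keyword sugar for the named tuple

c1[1]        # first layer (1-based indexing)
c3.readout   # access a layer by its explicit name
length(c1)   # layer count, including the Collect auto-inserted before the readout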
source
ReservoirComputing.CollectType
Collect()

Marker layer that passes data through unchanged but marks a feature checkpoint for collectstates. At each time step, whenever a Collect is encountered in the chain, the current vector is recorded as part of the feature vector used to train the readout. If multiple Collect layers exist, their vectors are concatenated with vcat in order of appearance.

Arguments

  • None.

Keyword arguments

  • None.

Inputs

  • x :: AbstractArray (d, batch) — the current tensor flowing through the chain.

Returns

  • (x, st) — the same tensor x and the unchanged state st.

Parameters

  • None.

States

  • None.

Notes

  • When used with a single Collect before a LinearReadout, training uses exactly the tensor right before the readout (e.g., the reservoir state).
  • With multiple Collect layers (e.g., after different submodules), the per-step features are vcat-ed in chain order to form one feature vector.
  • If the readout is constructed with include_collect=true, an implicit collection point is assumed immediately before the readout. Use an explicit Collect only when you want to control where/what is collected (or to stack multiple features).
Example

rc = ReservoirChain(
    StatefulLayer(ESNCell(3 => 300)),
    NLAT2(),
    Collect(), # <-- collect the 300-dim reservoir after NLAT2
    LinearReadout(300 => 3; include_collect=false) # <-- toggle off the default Collect()
)
source
ReservoirComputing.StatefulLayerType
StatefulLayer(cell::AbstractReservoirRecurrentCell)

A lightweight wrapper that makes a recurrent cell carry its hidden state from one step to the next.

Arguments

  • cell: AbstractReservoirRecurrentCell (e.g. ESNCell).

States

  • cell: internal states for the wrapped cell (e.g., RNG replicas, etc.).
  • carry: the per-sequence hidden state; initialized to nothing.
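
Example

A minimal sketch of the carry behavior, assuming the standard (x, ps, st) layer call: the first call creates a hidden state via the cell's init_state, and later calls reuse the state stored in carry.

using ReservoirComputing, Random
rng = Random.default_rng()

layer = StatefulLayer(ESNCell(3 => 100))
ps = initialparameters(rng, layer)
st = initialstates(rng, layer)   # st.carry === nothing before the first call

x = rand(Float32, 3, 1)
h1, st = layer(x, ps, st)        # fresh state created and stored in the carry
h2, st = layer(x, ps, st)        # update starts from the carried state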
source
ReservoirComputing.DelayLayerType
DelayLayer(input_dim; num_delays=2, stride=1, init_delay=zeros32)

Stateful delay layer that augments the current vector with a fixed number of time-delayed copies of itself. Intended to be used as a state_modifier in a ReservoirComputer, for example to build NVAR-style feature vectors.

At each call, the layer:

  1. Takes the current input vector h(t) of length input_dim.
  2. Produces an output vector that concatenates:
     • the current input h(t), and
     • num_delays previous inputs stored in an internal buffer.
  3. Updates its internal delay buffer with h(t) every stride calls.

Newly initialized buffers are filled with zeros, so at the beginning of a sequence, delayed entries correspond to zero padding.

Arguments

  • input_dim: Dimension of the input/state vector at each time step.

Keyword arguments

  • num_delays: Number of delayed copies to keep. The output will have (num_delays + 1) * input_dim entries: the current vector plus num_delays past vectors. Default is 2.
  • stride: Delay stride in layer calls. The internal buffer is updated only when clock % stride == 0. Default is 1.
  • init_delay: Initializer(s) for the delays. Must be either a single function (e.g. zeros32, randn32) or an NTuple of num_delays functions, e.g. (zeros32, randn32). If a single function fn is provided, it is automatically expanded into a num_delays-element tuple (fn, fn, ..., fn). Default is zeros32.

Inputs

  • h(t) :: AbstractVector (input_dim,)

Typically the current reservoir state at time t.

Returns

  • z :: AbstractVector ((num_delays + 1) * input_dim,)

Concatenation of the current input and its delayed copies.

Parameters

None

States

  • history: a matrix whose column j holds the j-th most recent stored input. On a stride update, the columns are shifted and the current input is placed in column 1.
  • clock: A counter that updates each call of the layer. The delay buffer is updated when clock % stride == 0.
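
Example

A small sketch of the shape bookkeeping described above: with num_delays=2 the output stacks three copies, and at the start of a sequence the delayed blocks are zero padding.

using ReservoirComputing, Random
rng = Random.default_rng()

dl = DelayLayer(3; num_delays=2)   # output length (2 + 1) * 3 = 9
ps = initialparameters(rng, dl)    # empty: no trainable parameters
st = initialstates(rng, dl)        # zero history buffer and a clock

z, st = dl(Float32[1, 2, 3], ps, st)
length(z)                          # 9: current vector plus two (zero) delayed copies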
source
ReservoirComputing.NonlinearFeaturesLayerType
NonlinearFeaturesLayer(features...; include_input=true)

Layer that builds a feature vector by applying one or more user-defined functions to a single input vector and concatenating the results. Intended to be used as a state_modifier (for example, after a DelayLayer) to construct NGRC/NVAR-style feature maps.

At each call, for an input vector x, the layer:

  1. Optionally includes x itself (if include_input=true).
  2. Applies each function in features to x.
  3. Returns the vertical concatenation of all results.

Arguments

  • features...: One or more functions f(x) that map a vector to a vector. Each function is called as f(inp) and must return an AbstractVector.

Keyword arguments

  • include_input: If true (default), the original input vector inp is included as the first block in the feature vector. If false, the output contains only the concatenation of features(inp).

Inputs

  • inp :: AbstractVector: the current feature vector, typically the output of a DelayLayer or a reservoir state.

Returns

  • out :: AbstractVector: concatenation of:
    • the original input inp (if include_input=true), and
    • the outputs of each function in features applied to inp.
  • The unchanged state st (this layer is stateless).

Parameters

  • None. NonlinearFeaturesLayer has no trainable parameters.

States

  • None. initialstates returns an empty NamedTuple.
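
Example

An NGRC-style sketch using a quadratic feature map; the squaring function is an illustrative choice, and any vector-to-vector function works.

using ReservoirComputing, Random
rng = Random.default_rng()

feats = NonlinearFeaturesLayer(x -> x .^ 2)   # [x; x.^2] since include_input=true
ps = initialparameters(rng, feats)
st = initialstates(rng, feats)                # empty NamedTuple: stateless

out, st = feats(Float32[1, 2, 3], ps, st)
out == Float32[1, 2, 3, 1, 4, 9]              # input block, then the feature block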
source

Readout Layers

ReservoirComputing.LinearReadoutType
LinearReadout(in_dims => out_dims, [activation];
        use_bias=false, include_collect=true)

Linear readout layer with optional bias and elementwise activation. Intended as the final, trainable mapping from collected features (e.g., reservoir state) to outputs. When include_collect=true, training will collect features immediately before this layer (logically inserting a Collect right before it).

Equation

\[\mathbf{y} = \psi\!\left(\mathbf{W}\,\mathbf{z} + \mathbf{b}\right)\]

Arguments

  • in_dims: Input/feature dimension (e.g., reservoir size).
  • out_dims: Output dimension (e.g., number of targets).
  • activation: Elementwise output nonlinearity. Default: identity.

Keyword arguments

  • use_bias: Include an additive bias vector b. Default: false.
  • include_collect: If true (default), training collects features immediately before this layer (as if a Collect were inserted right before it).

Parameters

  • weight :: (out_dims × in_dims)
  • bias :: (out_dims,) — present only if use_bias=true

States

  • None.

Notes

  • In ESN workflows, readout weights are typically replaced via ridge regression in train!, so how LinearReadout is initialized is inconsequential; the declared dimensions are likewise not enforced, since train! overwrites the weights.
  • If you set include_collect=false, make sure a Collect appears earlier in the chain. Otherwise training may operate on the post-readout signal, which is usually unintended.
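
Example

A minimal sketch of the two common configurations from the notes above; the dimensions are placeholders, since train! replaces the weights.

using ReservoirComputing

ro  = LinearReadout(300 => 3)                       # identity activation, no bias
rob = LinearReadout(300 => 3, tanh; use_bias=true)  # ψ = tanh with bias

# Explicit collection point, so the default Collect is switched off:
rc = ReservoirChain(
    StatefulLayer(ESNCell(3 => 300)),
    Collect(),
    LinearReadout(300 => 3; include_collect=false),
)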
source
ReservoirComputing.SVMReadoutType
SVMReadout(in_dims => out_dims;
    include_collect=true, kwargs...)

Readout layer based on support vector machines. Requires LIBSVM.jl.

Arguments

  • in_dims: Input/feature dimension (e.g., reservoir size).
  • out_dims: Output dimension (e.g., number of targets).

Keyword arguments

  • include_collect: If true (default), training collects features immediately before this layer (as if a Collect() were inserted right before it).
  • kwargs: keyword arguments forwarded to the underlying LIBSVM routines.
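
Example

A sketch assuming LIBSVM.jl is loaded so that the extension is available; any LIBSVM-specific options would be passed through the keyword arguments (their names depend on LIBSVM.jl, so check its documentation).

using ReservoirComputing, LIBSVM

svm = SVMReadout(300 => 1)   # implicit Collect before the readout by default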
source

Echo State Networks

ReservoirComputing.ESNCellType
ESNCell(in_dims => out_dims, [activation];
    use_bias=false, init_bias=rand32,
    init_reservoir=rand_sparse, init_input=scaled_rand,
    init_state=randn32, leak_coefficient=1.0)

Echo State Network (ESN) recurrent cell with optional leaky integration.

Equations

\[\begin{aligned} \tilde{\mathbf{h}}(t) &= \phi\!\left(\mathbf{W}_{in}\,\mathbf{x}(t) + \mathbf{W}_{res}\,\mathbf{h}(t-1) + \mathbf{b}\right) \\ \mathbf{h}(t) &= (1-\alpha)\,\mathbf{h}(t-1) + \alpha\,\tilde{\mathbf{h}}(t) \end{aligned}\]

Arguments

  • in_dims: Input dimension.
  • out_dims: Reservoir (hidden state) dimension.
  • activation: Activation function. Default: tanh.

Keyword arguments

  • use_bias: Whether to include a bias term. Default: false.
  • init_bias: Initializer for the bias. Used only if use_bias=true. Default is rand32.
  • init_reservoir: Initializer for the reservoir matrix W_res. Default is rand_sparse.
  • init_input: Initializer for the input matrix W_in. Default is scaled_rand.
  • init_state: Initializer for the hidden state when an external state is not provided. Default is randn32.
  • leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.

Inputs

  • Case 1: x :: AbstractArray (in_dims, batch). A fresh state is created via init_state; the call is forwarded to Case 2.
  • Case 2: (x, (h,)) where h :: AbstractArray (out_dims, batch). Computes the update and returns the new state.

In both cases, the forward returns ((h_new, (h_new,)), st_out) where st_out contains any updated internal state.

Returns

  • Output/hidden state h_new :: (out_dims, batch) and the state tuple (h_new,).
  • Updated layer state (NamedTuple).

Parameters

Created by initialparameters(rng, esn):

  • input_matrix :: (out_dims × in_dims): the input matrix W_in.
  • reservoir_matrix :: (out_dims × out_dims): the reservoir matrix W_res.
  • bias :: (out_dims,) — present only if use_bias=true

States

Created by initialstates(rng, esn):

  • rng: a replicated RNG used to sample initial hidden states when needed.
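
Example

A sketch of the two call cases described above, with an illustrative leaky configuration. For sequence processing, the cell is normally wrapped in a StatefulLayer, which manages the carry automatically.

using ReservoirComputing, Random
rng = Random.default_rng()

cell = ESNCell(2 => 100; leak_coefficient=0.9)   # α = 0.9 leaky integration
ps = initialparameters(rng, cell)
st = initialstates(rng, cell)

x = rand(Float32, 2, 1)                          # (in_dims, batch)
(h, carry), st = cell(x, ps, st)                 # Case 1: fresh state via init_state
(h2, carry2), st = cell((x, carry), ps, st)      # Case 2: explicit previous state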
source

Reservoir computing with cellular automata

ReservoirComputing.RECACellType
RECACell(automaton, enc::RandomMaps)

Cellular Automata (CA)–based reservoir recurrent cell. At each time step, the input vector is randomly embedded into a CA configuration, the CA is evolved for a fixed number of generations, and the flattened CA evolution is emitted as the reservoir state. The last CA configuration is carried to the next step. For more details, please refer to Nichele and Molund (2017) and Yilmaz (2014).

Arguments

  • automaton: A cellular automaton rule/object from CellularAutomata.jl (e.g., DCA(90), DCA(30), …).

  • enc: Precomputed random-mapping/encoding metadata given as a RandomMapping.

Inputs

  • Case A: a single input vector x with length in_dims. The cell internally uses the stored CA state (st.ca) as the previous configuration.

  • Case B: a tuple (x, (ca,)) where x is as above and ca has length enc.ca_size.

Computation

  1. Random embedding of x into a CA initial condition c₀ using enc.maps across enc.permutations blocks of length enc.expansion_size.

  2. CA evolution for G = enc.generations steps with the given automaton, producing an evolution matrix E ∈ ℝ^{(G+1) × ca_size} where E[1,:] = c₀ and E[t+1,:] = F(E[t,:]).

  3. Feature vector is the flattened stack of E[2:end, :] (dropping the initial row), shaped as a column vector of length enc.states_size.

  4. Carry is the final CA configuration E[end, :].

Returns

  • Output: (h, (caₙ,)) where
    • h has length enc.states_size (the CA features),
    • caₙ has length enc.ca_size (next carry).
  • Updated (unchanged) layer state; the cell is parameter-free.

Parameters & State

  • Parameters: none
  • State: (ca = zeros(Float32, enc.ca_size))
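
Example

A sketch assuming CellularAutomata.jl for the automaton; the RandomMapping constructor form and its values are assumptions for illustration (see the RandomMapping documentation for the exact interface).

using ReservoirComputing, CellularAutomata, Random
rng = Random.default_rng()

# Encoding metadata; the permutations/expansion_size values are illustrative and
# the keyword constructor form is an assumption:
enc  = RandomMapping(; permutations=8, expansion_size=40)
cell = RECACell(DCA(90), enc)

ps = initialparameters(rng, cell)   # empty: the cell is parameter-free
st = initialstates(rng, cell)       # (ca = zeros(Float32, enc.ca_size))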
source