Layers
Base Layers
ReservoirComputing.ReservoirComputer — Type
ReservoirComputer(reservoir, states_modifiers, readout)

Generic reservoir-computing container that wires together:
- a reservoir (any Lux-compatible layer producing features),
- zero or more states_modifiers applied sequentially to the reservoir features,
- a readout layer (typically LinearReadout).
The container exposes a standard (x, ps, st) -> (y, st′) interface and utility functions to initialize parameters/states, stream sequences to collect features, and install trained readout weights.
Arguments
- reservoir: a layer that consumes inputs and produces feature vectors.
- states_modifiers: a tuple (or a vector, converted to a Tuple) of layers applied after the reservoir (may be empty).
- readout: the final trainable layer mapping features to outputs.
Inputs
- x: input to the reservoir (shape determined by the reservoir).
- ps: reservoir computing parameters.
- st: reservoir computing states.
Returns
(y, st′) where y is the readout output and st′ contains the updated states of the reservoir, modifiers, and readout.
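A minimal end-to-end sketch, assuming LuxCore.setup-style initialization (NLAT2 is the nonlinear state modifier documented elsewhere in ReservoirComputing.jl):

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
rc = ReservoirComputer(
    StatefulLayer(ESNCell(3 => 100)),  # reservoir: 3 inputs -> 100 features
    (NLAT2(),),                        # states_modifiers, applied in order
    LinearReadout(100 => 3),           # trainable readout
)
ps, st = LuxCore.setup(rng, rc)
x = randn(Float32, 3, 1)
y, st = rc(x, ps, st)                  # standard (x, ps, st) -> (y, st′) call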
ReservoirComputing.ReservoirChain — Type
ReservoirChain(layers...; name=nothing)
ReservoirChain(xs::AbstractVector; name=nothing)
ReservoirChain(nt::NamedTuple; name=nothing)
ReservoirChain(; name=nothing, kwargs...)

A lightweight, Lux-compatible container that composes a sequence of layers and executes them in order. The implementation of ReservoirChain is equivalent to Lux's own Chain.
Construction
You can build a chain from:
- Positional layers: ReservoirChain(l1, l2, ...)
- A vector of layers: ReservoirChain([l1, l2, ...])
- A named tuple of layers: ReservoirChain((; layer_a=l1, layer_b=l2))
- Keywords (sugar for a named tuple): ReservoirChain(; layer_a=l1, layer_b=l2)
In all cases, function objects are automatically wrapped via WrappedFunction so they can participate like regular layers. If a LinearReadout with include_collect=true is present, the chain automatically inserts a Collect layer immediately before that readout.
Use name to optionally tag the chain instance.
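For illustration, the four forms below build equivalent chains (a sketch; include_collect=false keeps the readout from inserting its implicit Collect, so the layer count is the same in all four):

using ReservoirComputing

res = StatefulLayer(ESNCell(3 => 100))
ro  = LinearReadout(100 => 3; include_collect=false)

c1 = ReservoirChain(res, NLAT2(), ro)                             # positional
c2 = ReservoirChain([res, NLAT2(), ro])                           # vector
c3 = ReservoirChain((; reservoir=res, nlat=NLAT2(), readout=ro))  # named tuple
c4 = ReservoirChain(; reservoir=res, nlat=NLAT2(), readout=ro)    # keywords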
Inputs
(x, ps, st) where:
- x: input to the first layer.
- ps: parameters as a named tuple with the same fields and order as the chain's layers.
- st: states as a named tuple with the same fields and order as the chain's layers.
The call (c::ReservoirChain)(x, ps, st) forwards x through each layer: (x, ps_i, st_i) -> (x_next, st_i′) and returns the final output and the updated states for every layer.
Returns
(y, st′) where y is the output of the last layer and st′ is a named tuple collecting the updated states for each layer.
Parameters
- A NamedTuple whose fields correspond 1:1 with the layers. Each field holds the parameters for that layer.
- Field names are generated as :layer_1, :layer_2, ... when constructed positionally, or preserved when you pass a NamedTuple/keyword constructor.
States
- A NamedTuple whose fields correspond 1:1 with the layers. Each field holds the state for that layer.
Layer access & indexing
- c[i]: get the i-th layer (1-based).
- c[indices]: return a new ReservoirChain formed by selecting a subset of layers.
- getproperty(c, :layer_k): access layer k by its generated/explicit name.
- length(c), firstindex(c), lastindex(c): standard collection interfaces.
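A short sketch of these interfaces (include_collect=false avoids the implicit Collect, keeping exactly three layers):

using ReservoirComputing

c = ReservoirChain(StatefulLayer(ESNCell(3 => 100)), NLAT2(),
                   LinearReadout(100 => 3; include_collect=false))
c[1]          # the StatefulLayer
c[2:3]        # a new ReservoirChain with the last two layers
c.layer_2     # NLAT2, via the generated positional name
length(c)     # 3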
Notes
- Function wrapping: Any plain Function in the constructor is wrapped as WrappedFunction(f). Non-layer, non-function objects will error.
- Auto-collect for readouts: When a LinearReadout has include_collect=true, the constructor expands it to (Collect(), readout) so that downstream tooling can capture features consistently.
ReservoirComputing.Collect — Type
Collect()

Marker layer that passes data through unchanged but marks a feature checkpoint for collectstates. At each time step, whenever a Collect is encountered in the chain, the current vector is recorded as part of the feature vector used to train the readout. If multiple Collect layers exist, their vectors are concatenated with vcat in order of appearance.
Arguments
- None.
Keyword arguments
- None.
Inputs
x :: AbstractArray (d, batch) — the current tensor flowing through the chain.
Returns
(x, st) — the same tensor x and the unchanged state st.
Parameters
- None.
States
- None.
Notes
- When used with a single Collect before a LinearReadout, training uses exactly the tensor right before the readout (e.g., the reservoir state).
- With multiple Collect layers (e.g., after different submodules), the per-step features are vcat-ed in chain order to form one feature vector.
- If the readout is constructed with include_collect=true, an implicit collection point is assumed immediately before the readout. Use an explicit Collect only when you want to control where/what is collected (or to stack multiple features).
rc = ReservoirChain(
StatefulLayer(ESNCell(3 => 300)),
NLAT2(),
Collect(), # <-- collect the 300-dim reservoir after NLAT2
LinearReadout(300 => 3; include_collect=false) # <-- toggle off the default Collect()
)

ReservoirComputing.StatefulLayer — Type
StatefulLayer(cell::AbstractReservoirRecurrentCell)

A lightweight wrapper that makes a recurrent cell carry its hidden state from one step to the next.
Arguments
- cell: an AbstractReservoirRecurrentCell (e.g. ESNCell).
States
- cell: internal states for the wrapped cell (e.g., RNG replicas).
- carry: the per-sequence hidden state; initialized to nothing.
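A minimal usage sketch, assuming LuxCore.setup-style initialization:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
layer = StatefulLayer(ESNCell(3 => 100))
ps, st = LuxCore.setup(rng, layer)   # st.carry === nothing before the first call

x = randn(Float32, 3, 1)
h1, st = layer(x, ps, st)            # first step: a carry is created
h2, st = layer(x, ps, st)            # second step continues from st.carry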
ReservoirComputing.DelayLayer — Type
DelayLayer(input_dim; num_delays=2, stride=1, init_delay=zeros32)

Stateful delay layer that augments the current vector with a fixed number of time-delayed copies of itself. Intended to be used as a state_modifier in a ReservoirComputer, for example to build NVAR-style feature vectors.
At each call, the layer:
- Takes the current input vector h(t) of length input_dim.
- Produces an output vector that concatenates:
  - the current input h(t), and
  - the num_delays previous inputs stored in an internal buffer.
- Updates its internal delay buffer with h(t) every stride calls.
Newly initialized buffers are filled with zeros, so at the beginning of a sequence, delayed entries correspond to zero padding.
Arguments
- input_dim: Dimension of the input/state vector at each time step.
Keyword arguments
- num_delays: Number of delayed copies to keep. The output will have (num_delays + 1) * input_dim entries: the current vector plus num_delays past vectors. Default is 2.
- stride: Delay stride in layer calls. The internal buffer is updated only when clock % stride == 0. Default is 1.
- init_delay: Initializer(s) for the delays. Must be either a single function (e.g. zeros32, randn32) or an NTuple of num_delays functions, e.g. (zeros32, randn32). If a single function fn is provided, it is automatically expanded into a num_delays-element tuple (fn, fn, ..., fn). Default is zeros32.
Inputs
h(t) :: AbstractVector (input_dim,)
Typically the current reservoir state at time t.
Returns
z :: AbstractVector ((num_delays + 1) * input_dim,)
Concatenation of the current input and its delayed copies.
Parameters
None
States
- history: a matrix whose column j holds the j-th most recent stored input. On a stride update, the columns are shifted and the current input is placed in column 1.
- clock: a counter incremented on each call of the layer. The delay buffer is updated when clock % stride == 0.
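A minimal sketch, assuming LuxCore.setup-style initialization:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
delay = DelayLayer(3; num_delays=2)   # output length: (2 + 1) * 3 = 9
ps, st = LuxCore.setup(rng, delay)

h = Float32[1, 2, 3]
z, st = delay(h, ps, st)              # z has length 9; with the default zeros32
                                      # initializer the delayed blocks start as
                                      # zero padding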
ReservoirComputing.NonlinearFeaturesLayer — Type
NonlinearFeaturesLayer(features...; include_input=true)

Layer that builds a feature vector by applying one or more user-defined functions to a single input vector and concatenating the results. Intended to be used as a state_modifier (for example, after a DelayLayer) to construct NGRC/NVAR-style feature maps.
At each call, for an input vector x, the layer:
- Optionally includes x itself (if include_input=true).
- Applies each function in features to x.
- Returns the vertical concatenation of all results.
Arguments
- features...: One or more functions f(x) that map a vector to a vector. Each function is called as f(inp) and must return an AbstractVector.
Keyword arguments
- include_input: If true (default), the original input vector inp is included as the first block in the feature vector. If false, the output contains only the concatenation of the features applied to inp.
Inputs
inp :: AbstractVector: the current feature vector, typically the output of a DelayLayer or a reservoir state.
Returns
- out :: AbstractVector: concatenation of
  - the original input inp (if include_input=true), and
  - the outputs of each function in features applied to inp.
- The unchanged state st (this layer is stateless).
Parameters
- None. NonlinearFeaturesLayer has no trainable parameters.
States
- None. initialstates returns an empty NamedTuple.
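A minimal sketch building quadratic NVAR-style features, assuming LuxCore.setup-style initialization:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
feats = NonlinearFeaturesLayer(x -> x .^ 2; include_input=true)
ps, st = LuxCore.setup(rng, feats)

out, st = feats(Float32[1, 2], ps, st)  # vcat(input, input .^ 2)
# out == [1.0, 2.0, 1.0, 4.0]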
Readout Layers
ReservoirComputing.LinearReadout — Type
LinearReadout(in_dims => out_dims, [activation];
use_bias=false, include_collect=true)

Linear readout layer with optional bias and elementwise activation. Intended as the final, trainable mapping from collected features (e.g., reservoir state) to outputs. When include_collect=true, training will collect features immediately before this layer (logically inserting a Collect right before it).
Equation
\[\mathbf{y} = \psi\!\left(\mathbf{W}\,\mathbf{z} + \mathbf{b}\right)\]
Arguments
- in_dims: Input/feature dimension (e.g., reservoir size).
- out_dims: Output dimension (e.g., number of targets).
- activation: Elementwise output nonlinearity. Default: identity.
Keyword arguments
- use_bias: Include an additive bias vector b. Default: false.
- include_collect: If true (default), training collects features immediately before this layer (as if a Collect were inserted right before it).
Parameters
- weight :: (out_dims × in_dims)
- bias :: (out_dims,) — present only if use_bias=true
States
- None.
Notes
- In ESN workflows, readout weights are typically replaced via ridge regression in train!, so how LinearReadout is initialized is of no consequence; the declared dimensions are likewise ignored, since train! installs the fitted weights.
- If you set include_collect=false, make sure a Collect appears earlier in the chain. Otherwise training may operate on the post-readout signal, which is usually unintended.
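A minimal forward-pass sketch, assuming LuxCore.setup-style initialization (in an ESN workflow these random weights would later be replaced by train!):

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
readout = LinearReadout(4 => 2, tanh; use_bias=true)
ps, st = LuxCore.setup(rng, readout)

z = randn(Float32, 4)
y, st = readout(z, ps, st)   # y == tanh.(ps.weight * z .+ ps.bias)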
ReservoirComputing.SVMReadout — Type
SVMReadout(in_dims => out_dims;
include_collect=true, kwargs...)

Readout layer based on support vector machines. Requires LIBSVM.jl.
Arguments
- in_dims: Input/feature dimension (e.g., reservoir size).
- out_dims: Output dimension (e.g., number of targets).
Keyword arguments
- include_collect: If true (default), training collects features immediately before this layer (as if a Collect() were inserted right before it).
- kwargs...: keyword arguments forwarded to the underlying LIBSVM model.
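A minimal construction sketch; since SVMReadout requires LIBSVM.jl, that package must be loaded first:

using ReservoirComputing, LIBSVM

svm = SVMReadout(100 => 1)   # features => targets, include_collect=true by default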
Echo State Networks
ReservoirComputing.ESNCell — Type
ESNCell(in_dims => out_dims, [activation];
use_bias=false, init_bias=rand32,
init_reservoir=rand_sparse, init_input=scaled_rand,
init_state=randn32, leak_coefficient=1.0)

Echo State Network (ESN) recurrent cell with optional leaky integration.
Equations
\[\begin{aligned} \tilde{\mathbf{h}}(t) &= \phi\!\left(\mathbf{W}_{in}\,\mathbf{x}(t) + \mathbf{W}_{res}\,\mathbf{h}(t-1) + \mathbf{b}\right) \\ \mathbf{h}(t) &= (1-\alpha)\,\mathbf{h}(t-1) + \alpha\,\tilde{\mathbf{h}}(t) \end{aligned}\]
Arguments
- in_dims: Input dimension.
- out_dims: Reservoir (hidden state) dimension.
- activation: Activation function. Default: tanh.
Keyword arguments
- use_bias: Whether to include a bias term. Default: false.
- init_bias: Initializer for the bias. Used only if use_bias=true. Default is rand32.
- init_reservoir: Initializer for the reservoir matrix W_res. Default is rand_sparse.
- init_input: Initializer for the input matrix W_in. Default is scaled_rand.
- init_state: Initializer for the hidden state when an external state is not provided. Default is randn32.
- leak_coefficient: Leak rate α ∈ (0,1]. Default: 1.0.
Inputs
- Case 1: x :: AbstractArray (in_dims, batch). A fresh state is created via init_state; the call is forwarded to Case 2.
- Case 2: (x, (h,)) where h :: AbstractArray (out_dims, batch). Computes the update and returns the new state.
In both cases, the forward returns ((h_new, (h_new,)), st_out) where st_out contains any updated internal state.
Returns
- Output/hidden state h_new :: out_dims and state tuple (h_new,).
- Updated layer state (NamedTuple).
Parameters
Created by initialparameters(rng, esn):
- input_matrix :: (out_dims × in_dims) — W_in
- reservoir_matrix :: (out_dims × out_dims) — W_res
- bias :: (out_dims,) — present only if use_bias=true
States
Created by initialstates(rng, esn):
- rng: a replicated RNG used to sample initial hidden states when needed.
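A minimal sketch of the two call forms, assuming LuxCore.setup-style initialization:

using ReservoirComputing, Random
import LuxCore

rng = Random.default_rng()
cell = ESNCell(3 => 100; leak_coefficient=0.9)
ps, st = LuxCore.setup(rng, cell)

x = randn(Float32, 3, 1)
(h, carry), st = cell(x, ps, st)             # Case 1: fresh state via init_state
(h2, carry2), st = cell((x, carry), ps, st)  # Case 2: explicit state (h,)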
Reservoir computing with cellular automata
ReservoirComputing.RECACell — Type
RECACell(automaton, enc::RandomMaps)

Cellular Automata (CA)–based reservoir recurrent cell. At each time step, the input vector is randomly embedded into a CA configuration, the CA is evolved for a fixed number of generations, and the flattened CA evolution is emitted as the reservoir state. The last CA configuration is carried to the next step. For more details please refer to (Nichele and Molund, 2017) and (Yilmaz, 2014).
Arguments
- automaton: A cellular automaton rule/object from CellularAutomata.jl (e.g., DCA(90), DCA(30), …).
- enc: Precomputed random-mapping/encoding metadata, given as a RandomMaps object (the expanded form of a RandomMapping).
Inputs
- Case A: a single input vector x with length in_dims. The cell internally uses the stored CA state (st.ca) as the previous configuration.
- Case B: a tuple (x, (ca,)) where x is as above and ca has length enc.ca_size.
Computation
1. Random embedding of x into a CA initial condition c₀ using enc.maps across enc.permutations blocks of length enc.expansion_size.
2. CA evolution for G = enc.generations steps with the given automaton, producing an evolution matrix E ∈ ℝ^{(G+1) × ca_size} where E[1,:] = c₀ and E[t+1,:] = F(E[t,:]).
3. The feature vector is the flattened stack of E[2:end, :] (dropping the initial row), shaped as a column vector of length enc.states_size.
4. The carry is the final CA configuration E[end, :].
Returns
- Output: (h, (caₙ,)) where h has length enc.states_size (the CA features) and caₙ has length enc.ca_size (the next carry).
- Updated (unchanged) cell state (parameter-free layer state).
Parameters & State
- Parameters: none
- State: (ca = zeros(Float32, enc.ca_size))
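The CA evolution at the heart of steps 2-4 can be sketched with CellularAutomata.jl directly. This is an illustration only: the generation-count handling and the flattening order are assumptions, not the exact RECACell internals.

using CellularAutomata

ca_size, gens = 40, 8
c0 = rand(Bool, ca_size)                      # an (already embedded) CA configuration
ca = CellularAutomaton(DCA(90), c0, gens + 1) # assumed: the first row stores c0
E = ca.evolution                              # (gens + 1) × ca_size evolution matrix
h = vec(permutedims(E[2:end, :]))             # flattened features, dropping the initial row
carry = E[end, :]                             # final configuration, carried to the next step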