Models
Echo State Networks
ReservoirComputing.ESN — Type
ESN(in_dims, res_dims, out_dims, activation=tanh;
leak_coefficient=1.0, init_reservoir=rand_sparse, init_input=scaled_rand,
init_bias=zeros32, init_state=randn32, use_bias=false,
state_modifiers=(), readout_activation=identity)

Echo State Network (ESN): a reservoir (recurrent) layer followed by an optional sequence of state-modifier layers and a linear readout.
ESN composes:

- a stateful `ESNCell` (reservoir),
- zero or more `state_modifiers` applied to the reservoir state, and
- a `LinearReadout` mapping reservoir features to outputs.
Equations
For input \mathbf{x}(t) ∈ \mathbb{R}^{in\_dims}, reservoir state \mathbf{h}(t) ∈ \mathbb{R}^{res\_dims}, and output \mathbf{y}(t) ∈ \mathbb{R}^{out\_dims}:
\[\begin{aligned} \tilde{\mathbf{h}}(t) &= \phi\!\left(\mathbf{W}_{in}\,\mathbf{x}(t) + \mathbf{W}_{res}\,\mathbf{h}(t-1) + \mathbf{b}\right) \\ \mathbf{h}(t) &= (1-\alpha)\,\mathbf{h}(t-1) + \alpha\,\tilde{\mathbf{h}}(t) \\ \mathbf{z}(t) &= \psi\!\left(\mathrm{Mods}\big(\mathbf{h}(t)\big)\right) \\ \mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{out}\,\mathbf{z}(t) + \mathbf{b}_{out}\right) \end{aligned}\]
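For readers who prefer code, here is a minimal sketch of one ESN step written directly from the equations above; all names (`W_in`, `W_res`, `mods`, …) are placeholders for illustration, not the package's internal implementation:

```julia
# A single ESN step written out from the equations above (illustrative only).
function esn_step(x, h_prev; W_in, W_res, b, W_out, b_out, α = 1.0,
                  ϕ = tanh, mods = identity, ψ = identity, ρ = identity)
    h_cand = ϕ.(W_in * x .+ W_res * h_prev .+ b)   # candidate state h̃(t)
    h = (1 - α) .* h_prev .+ α .* h_cand           # leaky integration
    z = ψ(mods(h))                                 # state modifiers
    y = ρ.(W_out * z .+ b_out)                     # linear readout
    return y, h
end
```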
Arguments
- `in_dims`: Input dimension.
- `res_dims`: Reservoir (hidden state) dimension.
- `out_dims`: Output dimension.
- `activation`: Reservoir activation (for `ESNCell`). Default: `tanh`.
Keyword arguments
Reservoir (passed to ESNCell):
- `leak_coefficient`: Leak rate `α ∈ (0, 1]`. Default: `1.0`.
- `init_reservoir`: Initializer for `W_res`. Default: `rand_sparse`.
- `init_input`: Initializer for `W_in`. Default: `scaled_rand`.
- `init_bias`: Initializer for the reservoir bias (used if `use_bias=true`). Default: `zeros32`.
- `init_state`: Initializer used when an external state is not provided. Default: `randn32`.
- `use_bias`: Whether the reservoir uses a bias term. Default: `false`.
Composition:
- `state_modifiers`: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an `AbstractVector`, or a `Tuple`. Default: empty `()`.
- `readout_activation`: Activation for the linear readout. Default: `identity`.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output `y :: (out_dims, batch)`.
- Updated layer state (`NamedTuple`).
Parameters
- `reservoir` — parameters of the internal `ESNCell`, including:
  - `input_matrix :: (res_dims × in_dims)` — `W_in`
  - `reservoir_matrix :: (res_dims × res_dims)` — `W_res`
  - `bias :: (res_dims,)` — present only if `use_bias=true`
- `states_modifiers` — a `Tuple` with parameters for each modifier layer (may be empty).
- `readout` — parameters of `LinearReadout`, typically:
  - `weight :: (out_dims × res_dims)` — `W_out`
  - `bias :: (out_dims,)` — `b_out` (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
- `reservoir` — states for the internal `ESNCell` (e.g. the `rng` used to sample initial hidden states).
- `states_modifiers` — a `Tuple` with states for each modifier layer.
- `readout` — states for `LinearReadout`.
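A minimal construction and call sketch. It assumes the Lux-style layer interface suggested by the Parameters/States sections (parameters and states created externally and passed on each call); the sizes, hyperparameters, and the use of `LuxCore.setup` are illustrative assumptions:

```julia
using ReservoirComputing, Random
import LuxCore   # assumed: ESN follows the LuxCore layer interface

rng = Random.default_rng()
esn = ESN(3, 300, 3; leak_coefficient = 0.9)   # in_dims = 3, res_dims = 300, out_dims = 3

ps, st = LuxCore.setup(rng, esn)               # parameters/states as documented above
x = rand(Float32, 3, 16)                       # (in_dims, batch)
y, st = esn(x, ps, st)                         # y :: (out_dims, batch), updated state
```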
ReservoirComputing.DeepESN — Type
DeepESN(in_dims, res_dims, out_dims,
activation=tanh; depth=2, leak_coefficient=1.0, init_reservoir=rand_sparse,
init_input=scaled_rand, init_bias=zeros32, init_state=randn32,
use_bias=false, state_modifiers=(), readout_activation=identity)

Deep Echo State Network (DeepESN): a stack of stateful `ESNCell` layers (optionally with per-layer state modifiers) followed by a linear readout.
DeepESN composes, for `L = length(res_dims)` layers:

- a sequence of stateful `ESNCell`s with widths `res_dims[ℓ]`,
- zero or more per-layer `state_modifiers[ℓ]` applied to the layer's state, and
- a final `LinearReadout` from the last layer's features to the output.
Equations
For input \mathbf{x}(t) ∈ \mathbb{R}^{in\_dims}, per-layer reservoir states \mathbf{h}^{(\ell)}(t) ∈ \mathbb{R}^{res\_dims[\ell]} (\ell = 1..L), and output \mathbf{y}(t) ∈ \mathbb{R}^{out\_dims}:
\[\begin{aligned}
\tilde{\mathbf{h}}^{(1)}(t) &= \phi_1\!\left( \mathbf{W}^{(1)}_{in}\,\mathbf{x}(t) + \mathbf{W}^{(1)}_{res}\,\mathbf{h}^{(1)}(t-1) + \mathbf{b}^{(1)}\right) \\
\mathbf{h}^{(1)}(t) &= (1-\alpha_1)\,\mathbf{h}^{(1)}(t-1) + \alpha_1\,\tilde{\mathbf{h}}^{(1)}(t) \\
\mathbf{u}^{(1)}(t) &= \mathrm{Mods}_1\!\big(\mathbf{h}^{(1)}(t)\big) \\
\tilde{\mathbf{h}}^{(\ell)}(t) &= \phi_\ell\!\left( \mathbf{W}^{(\ell)}_{in}\,\mathbf{u}^{(\ell-1)}(t) + \mathbf{W}^{(\ell)}_{res}\,\mathbf{h}^{(\ell)}(t-1) + \mathbf{b}^{(\ell)}\right), \quad \ell=2..L \\
\mathbf{h}^{(\ell)}(t) &= (1-\alpha_\ell)\,\mathbf{h}^{(\ell)}(t-1) + \alpha_\ell\,\tilde{\mathbf{h}}^{(\ell)}(t), \quad \ell=2..L \\
\mathbf{u}^{(\ell)}(t) &= \mathrm{Mods}_\ell\!\big(\mathbf{h}^{(\ell)}(t)\big), \quad \ell=2..L \\
\mathbf{y}(t) &= \rho\!\left(\mathbf{W}_{out}\,\mathbf{u}^{(L)}(t) + \mathbf{b}_{out}\right)
\end{aligned}\]
Where
- \mathbf{x}(t) ∈ ℝ^{in_dims × batch} — input at time `t`.
- \mathbf{h}^{(\ell)}(t) ∈ ℝ^{res_dims[ℓ] × batch} — hidden state of layer `ℓ`.
- \tilde{\mathbf{h}}^{(\ell)}(t) — candidate state before leaky mixing.
- \mathbf{u}^{(\ell)}(t) — features after applying the `ℓ`-th `state_modifiers` (identity if none).
- \mathbf{y}(t) ∈ ℝ^{out_dims × batch} — network output.
- \mathbf{W}^{(\ell)}_{in} ∈ ℝ^{res_dims[ℓ] × in_size[ℓ]} — input matrix at layer `ℓ` (`in_size[1] = in_dims`, `in_size[ℓ] = res_dims[ℓ-1]` for `ℓ > 1`).
- \mathbf{W}^{(\ell)}_{res} ∈ ℝ^{res_dims[ℓ] × res_dims[ℓ]} — reservoir matrix at layer `ℓ`.
- \mathbf{b}^{(\ell)} ∈ ℝ^{res_dims[ℓ] × 1} — reservoir bias (broadcast over batch), present if `use_bias[ℓ]=true`.
- \mathbf{W}_{out} ∈ ℝ^{out_dims × res_dims[L]} — readout matrix.
- \mathbf{b}_{out} ∈ ℝ^{out_dims × 1} — readout bias (if used by the readout).
- \phi_\ell — activation of layer `ℓ` (`activation[ℓ]`, default `tanh`).
- \alpha_\ell ∈ (0, 1] — leak coefficient of layer `ℓ` (`leak_coefficient[ℓ]`).
- \mathrm{Mods}_\ell(·) — composition of modifiers for layer `ℓ` (may be empty).
- \rho — readout activation (`readout_activation`, default `identity`).
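To make the layer chaining concrete, here is a small sketch of one DeepESN step as a plain loop over layers; all arrays and functions are placeholders, not the package's internals:

```julia
# One DeepESN step written out from the equations above (illustrative only).
# Ws_in, Ws_res, bs, αs, ϕs, mods are per-layer collections; hs is a vector of
# the per-layer states h^{(ℓ)}(t-1).
function deep_esn_step(x, hs; Ws_in, Ws_res, bs, αs, ϕs, mods, W_out, b_out, ρ = identity)
    u = x                                          # input to the first layer
    new_hs = similar(hs)
    for ℓ in eachindex(hs)
        h_cand = ϕs[ℓ].(Ws_in[ℓ] * u .+ Ws_res[ℓ] * hs[ℓ] .+ bs[ℓ])
        new_hs[ℓ] = (1 - αs[ℓ]) .* hs[ℓ] .+ αs[ℓ] .* h_cand
        u = mods[ℓ](new_hs[ℓ])                     # u^{(ℓ)} feeds layer ℓ + 1
    end
    y = ρ.(W_out * u .+ b_out)                     # readout on u^{(L)}
    return y, new_hs
end
```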
Arguments
- `in_dims`: Input dimension.
- `res_dims`: Vector of reservoir (hidden) dimensions per layer; its length sets the depth `L`.
- `out_dims`: Output dimension.
- `activation`: Reservoir activation(s). Either a single function (broadcast to all layers) or a vector/tuple of length `L`. Default: `tanh`.
Keyword arguments
Per-layer reservoir options (passed to each ESNCell):
- `leak_coefficient`: Leak rate(s) `α_ℓ ∈ (0, 1]`. Scalar or length-`L` collection. Default: `1.0`.
- `init_reservoir`: Initializer(s) for `W_res^{(ℓ)}`. Scalar or length-`L`. Default: `rand_sparse`.
- `init_input`: Initializer(s) for `W_in^{(ℓ)}`. Scalar or length-`L`. Default: `scaled_rand`.
- `init_bias`: Initializer(s) for the reservoir bias (used if `use_bias[ℓ]=true`). Scalar or length-`L`. Default: `zeros32`.
- `init_state`: Initializer(s) used when an external state is not provided. Scalar or length-`L`. Default: `randn32`.
- `use_bias`: Whether each reservoir uses a bias term. Boolean scalar or length-`L`. Default: `false`.
- `depth`: Depth of the DeepESN. If the reservoir size is given as a single number instead of a vector, this parameter sets the number of layers. Default: `2`.
Composition:
- `state_modifiers`: Per-layer modifier(s) applied to each layer's state before it feeds into the next layer (and into the readout for the last layer). Accepts `nothing`, a single layer, a vector/tuple of length `L`, or per-layer collections. Default: no modifiers.
- `readout_activation`: Activation for the final linear readout. Default: `identity`.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output `y :: (out_dims, batch)`.
- Updated layer state (`NamedTuple`) containing states for all cells, modifiers, and the readout.
Parameters
- `cells :: NTuple{L, NamedTuple}` — parameters for each `ESNCell`, including:
  - `input_matrix :: (res_dims[ℓ] × in_size[ℓ])` — `W_in^{(ℓ)}`
  - `reservoir_matrix :: (res_dims[ℓ] × res_dims[ℓ])` — `W_res^{(ℓ)}`
  - `bias :: (res_dims[ℓ],)` — present only if `use_bias[ℓ]=true`
- `states_modifiers :: NTuple{L, Tuple}` — per-layer tuples of modifier parameters (empty tuples if none).
- `readout` — parameters of `LinearReadout`, typically:
  - `weight :: (out_dims × res_dims[L])` — `W_out`
  - `bias :: (out_dims,)` — `b_out` (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
- `cells :: NTuple{L, NamedTuple}` — states for each `ESNCell`.
- `states_modifiers :: NTuple{L, Tuple}` — per-layer tuples of modifier states.
- `readout` — states for `LinearReadout`.
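A minimal construction sketch under the same assumptions as the ESN example above (Lux-style interface via `LuxCore.setup`); the sizes are illustrative, and scalar keywords are broadcast to every layer as described in the keyword arguments:

```julia
using ReservoirComputing, Random
import LuxCore   # assumed LuxCore layer interface

rng = Random.default_rng()
# Three reservoirs of decreasing width; scalar keywords apply to every layer.
desn = DeepESN(3, [200, 150, 100], 3; leak_coefficient = 0.9)

ps, st = LuxCore.setup(rng, desn)
x = rand(Float32, 3, 16)                       # (in_dims, batch)
y, st = desn(x, ps, st)                        # y :: (out_dims, batch)
```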
ReservoirComputing.DelayESN — Type
DelayESN(in_dims, res_dims, out_dims, activation=tanh;
num_delays=1, stride=1, leak_coefficient=1.0,
init_reservoir=rand_sparse, init_input=scaled_rand,
init_bias=zeros32, init_state=randn32, use_bias=false,
state_modifiers=(), readout_activation=identity)

Echo State Network whose reservoir state is first passed through a delay feature expansion before the readout. This implements a state-delayed ESN (Fleddermann et al., 2025), where the readout sees the current reservoir state together with a fixed number of its past values.
DelayESN composes:

- a stateful `ESNCell` (reservoir),
- a `DelayLayer` applied to the reservoir state to build tapped-delay features,
- zero or more additional `state_modifiers` applied after the delay, and
- a `LinearReadout` mapping delayed reservoir features to outputs.
At each time step, the reservoir produces a state vector h(t) of length res_dims. The DelayLayer then constructs a feature vector that stacks h(t) together with num_delays past states, spaced according to stride, before passing it on to any further modifiers and the readout.
Arguments
- `in_dims`: Input dimension.
- `res_dims`: Reservoir (hidden state) dimension.
- `out_dims`: Output dimension.
- `activation`: Reservoir activation (for `ESNCell`). Default: `tanh`.
Keyword arguments
Reservoir (passed to ESNCell):
- `leak_coefficient`: Leak rate in `(0, 1]`. Default: `1.0`.
- `init_reservoir`: Initializer for `W_res`. Default: `rand_sparse`.
- `init_input`: Initializer for `W_in`. Default: `scaled_rand`.
- `init_bias`: Initializer for the reservoir bias (used if `use_bias=true`). Default: `zeros32`.
- `init_state`: Initializer used when an external state is not provided. Default: `randn32`.
- `use_bias`: Whether the reservoir uses a bias term. Default: `false`.
Delay expansion:
- `num_delays`: Number of past reservoir states to include in the tapped-delay vector. The `DelayLayer` output has `(num_delays + 1) * res_dims` entries (current state plus `num_delays` past states). Default: `1`.
- `stride`: Delay stride in layer calls. The delay buffer is updated only when the internal clock is a multiple of `stride`. Default: `1`.
Composition:
- `state_modifiers`: A layer or collection of layers applied to the delayed reservoir features before the readout. These run after the internal `DelayLayer`. Accepts a single layer, an `AbstractVector`, or a `Tuple`. Default: empty `()`.
- `readout_activation`: Activation for the linear readout. Default: `identity`.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output `y :: (out_dims, batch)`.
- Updated layer state (`NamedTuple`).
Parameters
- `reservoir` — parameters of the internal `ESNCell`, including:
  - `input_matrix :: (res_dims × in_dims)` — `W_in`
  - `reservoir_matrix :: (res_dims × res_dims)` — `W_res`
  - `bias :: (res_dims,)` — present only if `use_bias=true`
- `states_modifiers` — a `Tuple` with parameters for the internal `DelayLayer` and any user-provided modifier layers (may be empty).
- `readout` — parameters of `LinearReadout`, typically:
  - `weight :: (out_dims × ((num_delays + 1) * res_dims))` — `W_out`
  - `bias :: (out_dims,)` — `b_out` (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
- `reservoir` — states for the internal `ESNCell` (e.g. the `rng` used to sample initial hidden states).
- `states_modifiers` — a `Tuple` with states for the internal `DelayLayer` (its delay buffer and clock) and each additional modifier layer.
- `readout` — states for `LinearReadout` (typically empty).
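A minimal construction sketch, again assuming the Lux-style interface; the comment spells out the readout width implied by the delay expansion:

```julia
using ReservoirComputing, Random
import LuxCore   # assumed LuxCore layer interface

rng = Random.default_rng()
# Current state plus num_delays = 2 past states, sampled every stride = 3 calls:
# the readout sees (num_delays + 1) * res_dims = 3 * 300 = 900 features.
delay_esn = DelayESN(3, 300, 3; num_delays = 2, stride = 3)

ps, st = LuxCore.setup(rng, delay_esn)
x = rand(Float32, 3, 16)                       # (in_dims, batch)
y, st = delay_esn(x, ps, st)                   # y :: (out_dims, batch)
```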
ReservoirComputing.HybridESN — Type
HybridESN(km, km_dims, in_dims, res_dims, out_dims, [activation];
state_modifiers=(), readout_activation=identity,
include_collect=true, kwargs...)

Hybrid Echo State Network (HybridESN): an Echo State Network augmented with a knowledge model whose outputs are concatenated to the ESN's input and used throughout the reservoir and readout computations.
HybridESN composes:

- a knowledge model `km` producing auxiliary features from the input,
- a stateful `ESNCell` that receives the concatenated input `[km(x(t)); x(t)]`,
- zero or more `state_modifiers` applied to the reservoir state, and
- a `LinearReadout` mapping the combined features `[km(x(t)); h*(t)]` to the output.
Arguments
- `km`: Knowledge model applied to the input (e.g. a physical model, neural submodule, or differentiable function). May be a `WrappedFunction` or any callable layer.
- `km_dims`: Output dimension of the knowledge model `km`.
- `in_dims`: Input dimension.
- `res_dims`: Reservoir (hidden state) dimension.
- `out_dims`: Output dimension.
- `activation`: Reservoir activation (for `ESNCell`). Default: `tanh`.
Keyword arguments
- `leak_coefficient`: Leak rate `α ∈ (0, 1]`. Default: `1.0`.
- `init_reservoir`: Initializer for `W_res`. Default: `rand_sparse`.
- `init_input`: Initializer for `W_in`. Default: `scaled_rand`.
- `init_bias`: Initializer for the reservoir bias (used if `use_bias=true`). Default: `zeros32`.
- `init_state`: Initializer used when an external state is not provided. Default: `randn32`.
- `use_bias`: Whether the reservoir uses a bias term. Default: `false`.
- `state_modifiers`: A layer or collection of layers applied to the reservoir state before the readout. Accepts a single layer, an `AbstractVector`, or a `Tuple`. Default: empty `()`.
- `readout_activation`: Activation for the linear readout. Default: `identity`.
- `include_collect`: Whether the readout should include collection mode. Default: `true`.
Inputs
x :: AbstractArray (in_dims, batch)
Returns
- Output `y :: (out_dims, batch)`.
- Updated layer state (`NamedTuple`).
Parameters
- `knowledge_model` — parameters of the knowledge model `km`.
- `reservoir` — parameters of the internal `ESNCell`, including:
  - `input_matrix :: (res_dims × (in_dims + km_dims))` — `W_in`
  - `reservoir_matrix :: (res_dims × res_dims)` — `W_res`
  - `bias :: (res_dims,)` — present only if `use_bias=true`
- `states_modifiers` — a `Tuple` with parameters for each modifier layer (may be empty).
- `readout` — parameters of `LinearReadout`, typically:
  - `weight :: (out_dims × (res_dims + km_dims))` — `W_out`
  - `bias :: (out_dims,)` — `b_out` (if the readout uses bias)
Exact field names for modifiers/readout follow their respective layer definitions.
States
Created by `initialstates(rng, hesn)`:

- `knowledge_model` — states for the internal knowledge model.
- `reservoir` — states for the internal `ESNCell`.
- `states_modifiers` — a `Tuple` with states for each modifier layer.
- `readout` — states for `LinearReadout`.
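A minimal construction sketch with a toy knowledge model; the quadratic map stands in for a real physical prior, and whether a bare function is accepted directly (rather than wrapped in a `WrappedFunction`) is an assumption:

```julia
using ReservoirComputing, Random
import LuxCore   # assumed LuxCore layer interface

# Toy knowledge model: km_dims = 3 auxiliary features from a 3-dimensional input.
# In practice this would be a physical model; wrap it in a WrappedFunction if a
# bare function is not accepted by the constructor.
km = x -> abs2.(x)

rng = Random.default_rng()
hesn = HybridESN(km, 3, 3, 300, 3)             # km_dims, in_dims, res_dims, out_dims

ps, st = LuxCore.setup(rng, hesn)
x = rand(Float32, 3, 16)
y, st = hesn(x, ps, st)
```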
Utilities
ReservoirComputing.resetcarry! — Function
resetcarry!(rng, rc::ReservoirComputer, st; init_carry=nothing)
resetcarry!(rng, rc::ReservoirComputer, ps, st; init_carry=nothing)

Reset (or set) the hidden-state carry of a model in the echo state network family.
If an existing carry is present in `st.cell.carry`, its leading dimension is used to infer the state size. Otherwise the reservoir output size is taken from `rc.reservoir.cell.out_dims`. When `init_carry=nothing`, the carry is cleared and the initializer from the struct construction is used on the next call. When a function is provided, it is called to create a new initial hidden state.
Arguments
- `rng`: Random number generator (used if a new carry is sampled/created).
- `rc`: A reservoir computing network model.
- `st`: Current model states.
- `ps`: Optional model parameters. Returned unchanged.
Keyword arguments
- `init_carry`: Controls the initialization of the new carry.
  - `nothing` (default): remove/clear the carry (forces the cell to reinitialize from its own `init_state` on next use).
  - `f`: a function following the standard WeightInitializers.jl interface, called to create the new initial hidden state.
Returns
- `resetcarry!(rng, rc, st; ...) -> st′`: Updated states with `st′.cell.carry` set to `nothing` or `(h0,)`.
- `resetcarry!(rng, rc, ps, st; ...) -> (ps, st′)`: Same as above, but also returns the unchanged `ps` for convenience.
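A short usage sketch under the same assumptions as the ESN example above; `zeros32` is used here only as an example of a WeightInitializers-style initializer:

```julia
using ReservoirComputing, Random
import LuxCore   # assumed LuxCore layer interface

rng = Random.default_rng()
esn = ESN(3, 300, 3)
ps, st = LuxCore.setup(rng, esn)

# Clear the carry: the cell re-initializes from its own init_state on the next call.
st = resetcarry!(rng, esn, st)

# Or set a fresh carry explicitly with an initializer function.
ps, st = resetcarry!(rng, esn, ps, st; init_carry = zeros32)
```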
Reservoir Computing with Cellular Automata
ReservoirComputing.RECA — Function
RECA(in_dims, out_dims, automaton;
input_encoding=RandomMapping(),
generations=8, state_modifiers=(),
readout_activation=identity)

Construct a cellular-automata reservoir model.
At each time step the input vector is randomly embedded into a Cellular Automaton (CA) lattice, the CA is evolved for `generations` steps, and the flattened evolution (excluding the initial row) is used as the reservoir state. A `LinearReadout` maps these features to `out_dims`.
Arguments
- `in_dims`: Number of input features (rows of the training data).
- `out_dims`: Number of output features (rows of the target data).
- `automaton`: A CA rule/object from CellularAutomata.jl (e.g. `DCA(90)`, `DCA(30)`, …).
Keyword Arguments
- `input_encoding`: Random embedding spec with fields `permutations` and `expansion_size`. Default: `RandomMapping()`.
- `generations`: Number of CA generations to evolve per time step. Default: `8`.
- `state_modifiers`: Optional tuple/vector of additional layers applied after the CA cell and before the readout (e.g. `NLAT2()`, `Pad(1.0)`, custom transforms). Functions are wrapped automatically. Default: none.
- `readout_activation`: Activation applied by the readout. Default: `identity`.
The input encodings are the equivalent of the input matrices of the ESNs. These are the available encodings:
ReservoirComputing.RandomMapping — Type
RandomMapping(permutations, expansion_size)
RandomMapping(permutations; expansion_size=40)
RandomMapping(;permutations=8, expansion_size=40)

Specify the random input embedding used by the cellular automata reservoir. At each time step, the input vector of length `in_dims` is randomly placed into a larger 1D lattice of length `expansion_size`, and this is repeated for `permutations` independent lattices (blocks). The concatenation of these blocks forms the CA initial condition of length `ca_size = expansion_size * permutations`. The details of this implementation can be found in (Nichele and Molund, 2017).
Arguments
- `permutations`: Number of independent random maps (blocks). Larger values increase feature diversity and `ca_size` proportionally.
- `expansion_size`: Width of each block (the size of a single CA lattice). Larger values increase the spatial resolution and both `ca_size` and `states_size`.
Usage
This is a configuration object; it does not perform the mapping by itself. Create the concrete tables with `create_encoding` and pass them to `RECACell`:
using ReservoirComputing, CellularAutomata, Random
in_dims = 4
generations = 8
mapping = RandomMapping(permutations = 8, expansion_size = 40)
enc = ReservoirComputing.create_encoding(mapping, in_dims, generations) # → RandomMaps
cell = RECACell(DCA(90), enc)
rc = ReservoirChain(
StatefulLayer(cell),
LinearReadout(enc.states_size => in_dims; include_collect = true)
)

Or let RECA do this for you:
rc = RECA(4, 4, DCA(90);
    input_encoding = RandomMapping(permutations = 8, expansion_size = 40),
    generations = 8)