ESN Drivers

ReservoirComputing.RNN (Type)
RNN(activation_function, leaky_coefficient)
RNN(;activation_function=tanh, leaky_coefficient=1.0)

Returns a Recurrent Neural Network (RNN) initializer for echo state networks (ESN).

Arguments

  • activation_function: The activation function used in the RNN.
  • leaky_coefficient: The leaky coefficient used in the RNN.

Keyword Arguments

  • activation_function: The activation function used in the RNN. Defaults to tanh.
  • leaky_coefficient: The leaky coefficient used in the RNN. Defaults to 1.0.
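A minimal usage sketch (hypothetical: the data is a placeholder, and it assumes the ESN constructor documented elsewhere in this package, which takes the driver through its reservoir_driver keyword):

using ReservoirComputing

input_data = rand(3, 500)  # placeholder feature-by-time training matrix

# Leaky-integrator update: with leaky coefficient a, the state evolves roughly as
# x(t+1) = (1 - a) * x(t) + a * f(W * x(t) + W_in * u(t))
driver = RNN(; activation_function = tanh, leaky_coefficient = 0.9)
esn = ESN(input_data; reservoir_driver = driver)
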
ReservoirComputing.MRNN (Type)
MRNN(activation_function, leaky_coefficient, scaling_factor)
MRNN(;activation_function=[tanh, sigmoid], leaky_coefficient=1.0,
    scaling_factor=fill(leaky_coefficient, length(activation_function)))

Returns a Multiple RNN (MRNN) initializer for the echo state network (ESN), introduced by Lun et al. (2015).

Arguments

  • activation_function: A vector of activation functions used in the MRNN.
  • leaky_coefficient: The leaky coefficient used in the MRNN.
  • scaling_factor: A vector of scaling factors for combining activation functions.

Keyword Arguments

  • activation_function: A vector of activation functions used in the MRNN. Defaults to [tanh, sigmoid].
  • leaky_coefficient: The leaky coefficient used in the MRNN. Defaults to 1.0.
  • scaling_factor: A vector of scaling factors for combining activation functions. Defaults to an array of the same size as activation_function with all elements set to leaky_coefficient.

This function creates an MRNN object with the specified activation functions, leaky coefficient, and scaling factors, which can be used as a reservoir driver in the ESN.

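A minimal construction sketch (hypothetical: the parameter values are arbitrary, and sigmoid is taken from NNlib, a dependency of this package). Roughly, the driver blends the activation branches as x(t+1) = (1 − a) x(t) + Σᵢ sᵢ fᵢ(W x(t) + W_in u(t)), where a is the leaky coefficient and sᵢ are the scaling factors:

using ReservoirComputing
using NNlib: sigmoid

# Blend a tanh branch and a sigmoid branch with different weights.
driver = MRNN(; activation_function = [tanh, sigmoid],
    leaky_coefficient = 0.9,
    scaling_factor = [0.6, 0.3])
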
ReservoirComputing.GRU (Type)
GRU(;activation_function=[NNlib.sigmoid, NNlib.sigmoid, tanh],
    inner_layer = fill(DenseLayer(), 2),
    reservoir = fill(RandSparseReservoir(), 2),
    bias = fill(DenseLayer(), 2),
    variant = FullyGated())

Returns a Gated Recurrent Unit (GRU) reservoir driver for the echo state network (ESN). This driver is based on the GRU architecture (Cho et al., 2014).

Arguments

  • activation_function: An array of activation functions for the GRU layers. By default, it uses sigmoid for the update and reset gates and tanh for the candidate hidden state.
  • inner_layer: An array of inner layers used in the GRU architecture. By default, it uses two dense layers.
  • reservoir: An array of reservoir layers. By default, it uses two random sparse reservoirs.
  • bias: An array of bias layers for the GRU. By default, it uses two dense layers.
  • variant: The GRU variant to use. By default, it uses the "FullyGated" variant.
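A minimal usage sketch (hypothetical, with placeholder data; every component not passed explicitly falls back to the defaults listed above):

using ReservoirComputing
using NNlib: sigmoid

input_data = rand(3, 500)  # placeholder feature-by-time training matrix

driver = GRU(; activation_function = [sigmoid, sigmoid, tanh])
esn = ESN(input_data; reservoir_driver = driver)
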

The GRU driver also lets the user choose between the available variants:

ReservoirComputing.FullyGated (Type)
FullyGated()

Returns a Fully Gated Recurrent Unit (FullyGated) initializer, implementing the standard gated recurrent unit (Cho et al., 2014) as a driver for the echo state network (ESN).

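In the convention of Cho et al. (2014), the fully gated variant updates the reservoir state x(t) from the input u(t) as

z(t) = σ(W_z u(t) + U_z x(t−1) + b_z)          (update gate)
r(t) = σ(W_r u(t) + U_r x(t−1) + b_r)          (reset gate)
x̃(t) = tanh(W u(t) + U (r(t) ⊙ x(t−1)) + b)    (candidate state)
x(t) = z(t) ⊙ x(t−1) + (1 − z(t)) ⊙ x̃(t)

To select this variant explicitly (a hypothetical one-liner):

driver = GRU(; variant = FullyGated())
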

Please refer to the original papers for more detail about these architectures.

References