ESN Drivers
ReservoirComputing.RNN — Type

RNN(activation_function, leaky_coefficient)
RNN(;activation_function=tanh, leaky_coefficient=1.0)

Returns a Recurrent Neural Network (RNN) initializer for echo state networks (ESN).
Arguments

- activation_function: The activation function used in the RNN.
- leaky_coefficient: The leaky coefficient used in the RNN.
Keyword Arguments

- activation_function: The activation function used in the RNN. Defaults to tanh_fast.
- leaky_coefficient: The leaky coefficient used in the RNN. Defaults to 1.0.
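As a quick usage sketch, the snippet below builds an RNN driver and passes it to an ESN. The ESN constructor call, its reservoir_driver keyword, the reservoir size, and the toy data are assumptions made for illustration, not guarantees from this docstring:

```julia
using ReservoirComputing

# Toy input series: 2 features over 100 time steps (illustrative only).
data = rand(2, 100)

# RNN driver with a leak slower than the default of 1.0.
driver = RNN(; activation_function = tanh, leaky_coefficient = 0.9)

# Wire the driver into an ESN (reservoir_driver keyword assumed).
esn = ESN(data; reservoir = RandSparseReservoir(100),
    reservoir_driver = driver)
```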
ReservoirComputing.MRNN — Type

MRNN(activation_function, leaky_coefficient, scaling_factor)
MRNN(;activation_function=[tanh, sigmoid], leaky_coefficient=1.0,
    scaling_factor=fill(leaky_coefficient, length(activation_function)))

Returns a Multiple RNN (MRNN) initializer for the Echo State Network (ESN), introduced in Lun et al. (2015).
Arguments

- activation_function: A vector of activation functions used in the MRNN.
- leaky_coefficient: The leaky coefficient used in the MRNN.
- scaling_factor: A vector of scaling factors for combining activation functions.
Keyword Arguments

- activation_function: A vector of activation functions used in the MRNN. Defaults to [tanh, sigmoid].
- leaky_coefficient: The leaky coefficient used in the MRNN. Defaults to 1.0.
- scaling_factor: A vector of scaling factors for combining activation functions. Defaults to an array of the same size as activation_function with all elements set to leaky_coefficient.
This function creates an MRNN object with the specified activation functions, leaky coefficient, and scaling factors, which can be used as a reservoir driver in the ESN.
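For instance, a short sketch of an MRNN that mixes two activations with explicit scaling factors (the numeric values here are illustrative, not recommendations):

```julia
using ReservoirComputing
using NNlib: sigmoid

# Combine tanh and sigmoid states; each scaling factor weights
# the contribution of the matching activation function.
driver = MRNN(; activation_function = [tanh, sigmoid],
    leaky_coefficient = 0.8,
    scaling_factor = [0.5, 0.3])
```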
ReservoirComputing.GRU — Type

GRU(;activation_function=[NNlib.sigmoid, NNlib.sigmoid, tanh],
    inner_layer = fill(DenseLayer(), 2),
    reservoir = fill(RandSparseReservoir(), 2),
    bias = fill(DenseLayer(), 2),
    variant = FullyGated())

Returns a Gated Recurrent Unit (GRU) reservoir driver for the Echo State Network (ESN). This driver is based on the GRU architecture (Cho et al., 2014).
Arguments

- activation_function: An array of activation functions for the GRU layers. By default, it uses sigmoid activations for the update and reset gates and tanh for the hidden state.
- inner_layer: An array of inner layers used in the GRU architecture. By default, it uses two dense layers.
- reservoir: An array of reservoir layers. By default, it uses two random sparse reservoirs.
- bias: An array of bias layers for the GRU. By default, it uses two dense layers.
- variant: The GRU variant to use. By default, it uses the FullyGated variant.
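A construction sketch that spells out the documented defaults (only names from this docstring's own signature are used):

```julia
using ReservoirComputing
using NNlib: sigmoid

# GRU driver mirroring the documented default configuration.
driver = GRU(; activation_function = [sigmoid, sigmoid, tanh],
    inner_layer = fill(DenseLayer(), 2),
    reservoir = fill(RandSparseReservoir(), 2),
    bias = fill(DenseLayer(), 2),
    variant = FullyGated())
```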
The GRU driver also lets the user choose between the following variants:
ReservoirComputing.FullyGated — Type

FullyGated()

Returns a Fully Gated Recurrent Unit (FullyGated) initializer for the Echo State Network (ESN), i.e. the standard gated recurrent unit (Cho et al., 2014) used as a reservoir driver.
ReservoirComputing.Minimal — Type

Minimal()

Returns a minimal GRU initializer for the Echo State Network (ESN).
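To select the minimal variant instead, swap the variant keyword; in this sketch the remaining keywords keep their documented defaults:

```julia
using ReservoirComputing

# GRU driver with the minimal variant.
driver = GRU(; variant = Minimal())
```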
Please refer to the original papers for more detail about these architectures.
References
- Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H. and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation, arXiv preprint arXiv:1406.1078.
- Lun, S.-X.; Yao, X.-S.; Qi, H.-Y. and Hu, H.-F. (2015). A novel model of leaky integrator echo state network for time-series prediction. Neurocomputing 159, 58–66.