Altering States

In every ReservoirComputing.jl model it is possible to perform alterations on the states during the training stage. Depending on the modification chosen, this can improve prediction results, or more simply it can be used to reproduce results from the literature. The alterations fall into two categories: the first concerns padding or extending the states, the second concerns nonlinear algorithms applied to the states.

Padding and Extending States

Extending the states means appending the corresponding input values to them. If $\textbf{x}(t)$ is the reservoir state at time $t$ corresponding to the input $\textbf{u}(t)$, the extended state is written as $[\textbf{x}(t); \textbf{u}(t)]$, where $[;]$ denotes vertical concatenation. This procedure is used, for example, in Jaeger's Scholarpedia description of Echo State Networks. The extension of the states can be obtained in every ReservoirComputing.jl model through the keyword argument states_type, by passing the method ExtendedStates(). No argument is needed.
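The concatenation itself is a simple vertical vcat. A minimal plain-Julia sketch, independent of the library internals and using arbitrary example values:

```julia
# State extension: the input u(t) is appended below the state x(t).
x = [0.2, -0.5, 0.7]    # reservoir state x(t), size N = 3
u = [1.5]               # corresponding input u(t)
extended = vcat(x, u)   # [x(t); u(t)] -> [0.2, -0.5, 0.7, 1.5]
```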

Padding the states means appending a constant value, for example 1.0, to each state. Using the notation introduced before, we can define the padded states as $[\textbf{x}(t); 1.0]$. This approach is detailed in the seminal guide to Echo State Networks by Mantas Lukoševičius. Through the keyword argument states_type the user can pass the method PaddedStates(padding), where padding represents the value that will be concatenated to the states. By default the value is set to unity, so in most cases calling PaddedStates() will suffice.
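The padded state admits the same kind of one-line sketch:

```julia
# State padding: a constant (1.0 by default) is appended to x(t).
x = [0.2, -0.5, 0.7]     # reservoir state x(t)
padded = vcat(x, 1.0)    # [x(t); 1.0] -> [0.2, -0.5, 0.7, 1.0]
```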

Although not easily found in the literature, it is also possible to pad the extended states by using the method PaddedExtendedStates(padding), which also defaults to unity for the padding value.

Of course, it is also possible to leave the states unchanged by calling StandardStates(). This is also the default choice for the states.
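To show where the keyword fits, here is a hedged usage sketch. Only the states_type keyword and its four methods are documented above; the ESN constructor call and the toy data are assumptions and may differ between versions of ReservoirComputing.jl:

```julia
using ReservoirComputing

input_data = rand(3, 500)  # toy training data: 3 variables, 500 time steps

# Assumed constructor call; swap in StandardStates(), PaddedStates(),
# or PaddedExtendedStates() as needed.
esn = ESN(input_data; states_type = ExtendedStates())
```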

Nonlinear Algorithms

First introduced in [1] and expanded in [2], these algorithms are nonlinear combinations of the columns of the states matrix. There are three such algorithms implemented. Using the keyword argument nla_type it is possible to choose, in every ReservoirComputing.jl model, the specific nonlinear algorithm to use. The default value is set to NLADefault(), where no nonlinear algorithm is applied.
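As with states_type, the choice is made when the model is built. Again, only the nla_type keyword is documented here; the constructor call is an assumption:

```julia
# Hypothetical model construction selecting the T2 algorithm; any of
# NLADefault(), NLAT1(), NLAT2(), NLAT3() can be passed.
esn = ESN(input_data; nla_type = NLAT2())
```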

Following the nomenclature used in [2], the algorithms can be called as NLAT1(), NLAT2() and NLAT3(). To better explain what they do, let $\textbf{x}_{i,j}$ be the elements of the states matrix, with $i=1,\dots,T$, $j=1,\dots,N$, where $T$ is the length of the training data and $N$ is the reservoir size.

NLAT1

\[\tilde{\textbf{x}}_{i,j} = \begin{cases} \textbf{x}_{i,j} \times \textbf{x}_{i,j}, & \text{if } j \text{ is odd} \\ \textbf{x}_{i,j}, & \text{if } j \text{ is even} \end{cases}\]

NLAT2

\[\tilde{\textbf{x}}_{i,j} = \begin{cases} \textbf{x}_{i,j-1} \times \textbf{x}_{i,j-2}, & \text{if } j > 1 \text{ is odd} \\ \textbf{x}_{i,j}, & \text{if } j = 1 \text{ or } j \text{ is even} \end{cases}\]

NLAT3

\[\tilde{\textbf{x}}_{i,j} = \begin{cases} \textbf{x}_{i,j-1} \times \textbf{x}_{i,j+1}, & \text{if } j > 1 \text{ is odd} \\ \textbf{x}_{i,j}, & \text{if } j = 1 \text{ or } j \text{ is even} \end{cases}\]
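The three definitions translate directly into plain Julia. The sketch below is only a reading aid, not the library's internal implementation; it follows the index convention above ($i$ over time, $j$ over reservoir units), and the treatment of the last odd column in NLAT3 when $j+1 > N$ (left unchanged) is an assumption:

```julia
# Plain-Julia sketches of the three transforms. x is the T×N states
# matrix from the text: rows index time, columns index reservoir units.

# T1: square the entries of every odd column.
function nlat1(x::AbstractMatrix)
    y = copy(x)
    for j in 1:2:size(x, 2)
        y[:, j] .= x[:, j] .^ 2
    end
    return y
end

# T2: every odd column j > 1 becomes the product of columns j-1 and j-2.
function nlat2(x::AbstractMatrix)
    y = copy(x)
    for j in 3:2:size(x, 2)
        y[:, j] .= x[:, j - 1] .* x[:, j - 2]
    end
    return y
end

# T3: every odd column j > 1 becomes the product of columns j-1 and j+1
# (assumption: left unchanged when j + 1 exceeds N).
function nlat3(x::AbstractMatrix)
    y = copy(x)
    for j in 3:2:size(x, 2)
        j + 1 <= size(x, 2) && (y[:, j] .= x[:, j - 1] .* x[:, j + 1])
    end
    return y
end

nlat2(rand(100, 7))  # e.g. T = 100 time steps, N = 7 reservoir units
```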

  • [1] Pathak, Jaideep, et al. "Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.12 (2017): 121102.
  • [2] Chattopadhyay, Ashesh, Pedram Hassanzadeh, and Devika Subramanian. "Data-driven predictions of a multiscale Lorenz 96 chaotic system using machine-learning methods: reservoir computing, artificial neural network, and long short-term memory network." Nonlinear Processes in Geophysics 27.3 (2020): 373-389.