States Modifications
Padding and Extension
ReservoirComputing.StandardStates — Type
StandardStates()
When this struct is employed, the states of the reservoir are not modified.
Example
julia> states = StandardStates()
StandardStates()
julia> test_vec = zeros(Float32, 5)
5-element Vector{Float32}:
0.0
0.0
0.0
0.0
0.0
julia> new_vec = states(test_vec)
5-element Vector{Float32}:
0.0
0.0
0.0
0.0
0.0
julia> test_mat = zeros(Float32, 5, 5)
5×5 Matrix{Float32}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
julia> new_mat = states(test_mat)
5×5 Matrix{Float32}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
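On its own, StandardStates is an identity transformation; its purpose is to be passed to a model constructor. A minimal, hedged sketch of that usage (the three-argument ESN constructor and the states_type keyword are assumed from the package API and may differ between versions):

using ReservoirComputing

train_data = rand(Float32, 3, 500)  # 3 features, 500 time steps
# states_type selects one of the modifiers documented on this page
esn = ESN(train_data, 3, 100; states_type=StandardStates())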
ReservoirComputing.ExtendedStates — Type
ExtendedStates()
The ExtendedStates struct is used to extend the reservoir states by vertically concatenating the input data (during training) and the prediction data (during the prediction phase).
Example
julia> states = ExtendedStates()
ExtendedStates()
julia> test_vec = zeros(Float32, 5)
5-element Vector{Float32}:
0.0
0.0
0.0
0.0
0.0
julia> new_vec = states(test_vec, fill(3.0f0, 3))
8-element Vector{Float32}:
0.0
0.0
0.0
0.0
0.0
3.0
3.0
3.0
julia> test_mat = zeros(Float32, 5, 5)
5×5 Matrix{Float32}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
julia> new_mat = states(test_mat, fill(3.0f0, 3))
8×5 Matrix{Float32}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
3.0 3.0 3.0 3.0 3.0
3.0 3.0 3.0 3.0 3.0
3.0 3.0 3.0 3.0 3.0
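Functionally, the extension is a plain vertical concatenation. A Base Julia sketch of the same operation (illustrative names, not the package implementation):

# vector case: stack the input under the state vector
extend_vec(x::AbstractVector, inp::AbstractVector) = vcat(x, inp)
# matrix case: replicate the input across all state columns, as in the
# example above where fill(3.0f0, 3) becomes three rows of 3.0
extend_mat(x::AbstractMatrix, inp::AbstractVector) = vcat(x, repeat(inp, 1, size(x, 2)))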
ReservoirComputing.PaddedStates — Type
PaddedStates(padding)
PaddedStates(; padding=1.0)
Creates an instance of the PaddedStates struct with the specified padding value (default 1.0). The states of the reservoir are padded by vertically concatenating the padding value.
Example
julia> states = PaddedStates(1.0)
PaddedStates{Float64}(1.0)
julia> test_vec = zeros(Float32, 5)
5-element Vector{Float32}:
0.0
0.0
0.0
0.0
0.0
julia> new_vec = states(test_vec)
6-element Vector{Float32}:
0.0
0.0
0.0
0.0
0.0
1.0
julia> test_mat = zeros(Float32, 5, 5)
5×5 Matrix{Float32}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
julia> new_mat = states(test_mat)
6×5 Matrix{Float32}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0
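The padding reduces to appending one constant entry (or one constant row). A Base Julia sketch under that assumption, converting the padding to the state eltype as the Float32 output above suggests:

pad_vec(x::AbstractVector, p) = vcat(x, eltype(x)(p))
pad_mat(x::AbstractMatrix, p) = vcat(x, fill(eltype(x)(p), 1, size(x, 2)))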
ReservoirComputing.PaddedExtendedStates — Type
PaddedExtendedStates(padding)
PaddedExtendedStates(; padding=1.0)
Constructs a PaddedExtendedStates struct, which pads the reservoir states with a specified value (default 1.0) and extends them with the training or prediction data. In the result, the padding entry sits between the states and the appended data, as the example below shows.
Example
julia> states = PaddedExtendedStates(1.0)
PaddedExtendedStates{Float64}(1.0)
julia> test_vec = zeros(Float32, 5)
5-element Vector{Float32}:
0.0
0.0
0.0
0.0
0.0
julia> new_vec = states(test_vec, fill(3.0f0, 3))
9-element Vector{Float32}:
0.0
0.0
0.0
0.0
0.0
1.0
3.0
3.0
3.0
julia> test_mat = zeros(Float32, 5, 5)
5×5 Matrix{Float32}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
julia> new_mat = states(test_mat, fill(3.0f0, 3))
9×5 Matrix{Float32}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 1.0
3.0 3.0 3.0 3.0 3.0
3.0 3.0 3.0 3.0 3.0
3.0 3.0 3.0 3.0 3.0
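Note the ordering visible in the output above: the padding entry sits between the original states and the appended input. A sketch consistent with that output, reusing the pad_vec and extend_vec sketches from the previous entries (the composition is inferred from the example, not taken from the source):

padext_vec(x::AbstractVector, inp::AbstractVector, p) = extend_vec(pad_vec(x, p), inp)
# padext_vec(zeros(Float32, 5), fill(3.0f0, 3), 1.0) reproduces the 9-element result above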
Nonlinear Transformations
ReservoirComputing.NLADefault — Type
NLADefault()
NLADefault represents the default non-linear algorithm option. When used, it leaves the input array unchanged.
Example
julia> nlat = NLADefault()
NLADefault()
julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
0
1
2
3
4
5
6
7
8
9
julia> n_new = nlat(x_old)
10-element Vector{Int64}:
0
1
2
3
4
5
6
7
8
9
julia> mat_old = [1 2 3;
4 5 6;
7 8 9;
10 11 12;
13 14 15;
16 17 18;
19 20 21]
7×3 Matrix{Int64}:
1 2 3
4 5 6
7 8 9
10 11 12
13 14 15
16 17 18
19 20 21
julia> mat_new = nlat(mat_old)
7×3 Matrix{Int64}:
1 2 3
4 5 6
7 8 9
10 11 12
13 14 15
16 17 18
19 20 21
ReservoirComputing.NLAT1 — Type
NLAT1()
NLAT1 implements the T₁ transformation algorithm introduced in (Chattopadhyay et al., 2020) and (Pathak et al., 2017). The T₁ algorithm squares the elements of every odd-indexed row of the input array, leaving even-indexed rows unchanged.
\[\tilde{r}_{i,j} = \begin{cases} r_{i,j} \times r_{i,j}, & \text{if } j \text{ is odd}; \\ r_{i,j}, & \text{if } j \text{ is even}. \end{cases}\]
Example
julia> nlat = NLAT1()
NLAT1()
julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
0
1
2
3
4
5
6
7
8
9
julia> n_new = nlat(x_old)
10-element Vector{Int64}:
0
1
4
3
16
5
36
7
64
9
julia> mat_old = [1 2 3;
4 5 6;
7 8 9;
10 11 12;
13 14 15;
16 17 18;
19 20 21]
7×3 Matrix{Int64}:
1 2 3
4 5 6
7 8 9
10 11 12
13 14 15
16 17 18
19 20 21
julia> mat_new = nlat(mat_old)
7×3 Matrix{Int64}:
1 4 9
4 5 6
49 64 81
10 11 12
169 196 225
16 17 18
361 400 441
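The rule above can be spelled out in a few lines of Base Julia; this is an illustrative sketch for the vector case, not the package code:

# square the entries at odd indices, keep even indices as they are
t1_sketch(x::AbstractVector) = [isodd(i) ? v^2 : v for (i, v) in enumerate(x)]
t1_sketch([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])  # [0, 1, 4, 3, 16, 5, 36, 7, 64, 9], matching the example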
ReservoirComputing.NLAT2 — Type
NLAT2()
NLAT2 implements the T₂ transformation algorithm as defined in (Chattopadhyay et al., 2020). This transformation replaces each odd-indexed row after the first (rows 3, 5, 7, ...) with the product of its two preceding rows.
\[\tilde{r}_{i,j} = \begin{cases} r_{i,j-1} \times r_{i,j-2}, & \text{if } j > 1 \text{ is odd}; \\ r_{i,j}, & \text{if } j \text{ is 1 or even}. \end{cases}\]
Example
julia> nlat = NLAT2()
NLAT2()
julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
0
1
2
3
4
5
6
7
8
9
julia> n_new = nlat(x_old)
10-element Vector{Int64}:
0
1
0
3
6
5
20
7
42
9
julia> mat_old = [1 2 3;
4 5 6;
7 8 9;
10 11 12;
13 14 15;
16 17 18;
19 20 21]
7×3 Matrix{Int64}:
1 2 3
4 5 6
7 8 9
10 11 12
13 14 15
16 17 18
19 20 21
julia> mat_new = nlat(mat_old)
7×3 Matrix{Int64}:
1 2 3
4 5 6
4 10 18
10 11 12
70 88 108
16 17 18
19 20 21
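A corresponding Base Julia sketch of the T₂ rule (vector case, illustrative names):

function t2_sketch(x::AbstractVector)
    x_new = copy(x)
    for i in 3:2:length(x)  # odd indices greater than 1
        x_new[i] = x[i - 1] * x[i - 2]  # product of the two preceding entries
    end
    return x_new
end
t2_sketch([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])  # [0, 1, 0, 3, 6, 5, 20, 7, 42, 9], matching the example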
ReservoirComputing.NLAT3 — Type
NLAT3()
NLAT3 implements the T₃ transformation algorithm as detailed in (Chattopadhyay et al., 2020). This algorithm replaces each odd-indexed row after the first (rows 3, 5, 7, ...) with the product of the immediately preceding and immediately following rows; the last row is left unchanged when it has no following neighbor.
\[\tilde{r}_{i,j} = \begin{cases} r_{i,j-1} \times r_{i,j+1}, & \text{if } j > 1 \text{ is odd}; \\ r_{i,j}, & \text{if } j = 1 \text{ or even.} \end{cases}\]
Example
julia> nlat = NLAT3()
NLAT3()
julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
0
1
2
3
4
5
6
7
8
9
julia> n_new = nlat(x_old)
10-element Vector{Int64}:
0
1
3
3
15
5
35
7
63
9
julia> mat_old = [1 2 3;
4 5 6;
7 8 9;
10 11 12;
13 14 15;
16 17 18;
19 20 21]
7×3 Matrix{Int64}:
1 2 3
4 5 6
7 8 9
10 11 12
13 14 15
16 17 18
19 20 21
julia> mat_new = nlat(mat_old)
7×3 Matrix{Int64}:
1 2 3
4 5 6
40 55 72
10 11 12
160 187 216
16 17 18
19 20 21
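A Base Julia sketch of the T₃ rule; the loop stops before the last index, since the final row has no following neighbor and is left unchanged (as in the 7-row example above):

function t3_sketch(x::AbstractVector)
    x_new = copy(x)
    for i in 3:2:(length(x) - 1)  # odd indices with both neighbors in range
        x_new[i] = x[i - 1] * x[i + 1]  # product of the neighboring entries
    end
    return x_new
end
t3_sketch([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])  # [0, 1, 3, 3, 15, 5, 35, 7, 63, 9], matching the example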
ReservoirComputing.PartialSquare — Type
PartialSquare(eta)
Implements a partial squaring of the states as described in (Barbosa et al., 2021).
Equations
\[g(r_i) = \begin{cases} r_i^2, & \text{if } i \leq \eta_r N, \\ r_i, & \text{if } i > \eta_r N. \end{cases}\]
Examples
julia> ps = PartialSquare(0.6)
PartialSquare(0.6)
julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
0
1
2
3
4
5
6
7
8
9
julia> x_new = ps(x_old)
10-element Vector{Int64}:
0
1
4
9
16
25
6
7
8
9
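The eta parameter sets the fraction of the state vector that is squared. A vector-case sketch; flooring eta * N is an assumption consistent with the example above, where eta = 0.6 squares the first 6 of 10 entries:

function partial_square_sketch(x::AbstractVector, eta::Real)
    cutoff = floor(Int, eta * length(x))  # first ⌊eta * N⌋ entries are squared
    return [i <= cutoff ? v^2 : v for (i, v) in enumerate(x)]
end
partial_square_sketch([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 0.6)  # [0, 1, 4, 9, 16, 25, 6, 7, 8, 9]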
ReservoirComputing.ExtendedSquare — Type
ExtendedSquare()
Extension of the Lu initialization proposed in (Herteux and Räth, 2020). The state vector is extended with the squared elements of the initial state.
Equations
\[\vec{x} = \{x_1, x_2, \dots, x_N, x_1^2, x_2^2, \dots, x_N^2\}\]
Examples
julia> x_old = [1, 2, 3, 4, 5, 6, 7, 8, 9]
9-element Vector{Int64}:
1
2
3
4
5
6
7
8
9
julia> es = ExtendedSquare()
ExtendedSquare()
julia> x_new = es(x_old)
18-element Vector{Int64}:
1
2
3
4
5
6
7
8
9
1
4
9
16
25
36
49
64
81
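The equation maps directly onto a one-liner; an illustrative Base Julia equivalent:

extended_square_sketch(x::AbstractVector) = vcat(x, x .^ 2)  # original entries followed by their squares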
Internals
ReservoirComputing.create_states — Function
create_states(reservoir_driver::AbstractReservoirDriver, train_data, washout,
    reservoir_matrix, input_matrix, bias_vector)
Create and return the trained Echo State Network (ESN) states according to the specified reservoir driver.
Arguments
- reservoir_driver: The reservoir driver that determines how the ESN states evolve over time.
- train_data: The training data used to train the ESN.
- washout: The number of initial time steps to discard during training to allow the reservoir dynamics to wash out the initial conditions.
- reservoir_matrix: The reservoir matrix representing the dynamic, recurrent part of the ESN.
- input_matrix: The input matrix that defines the connections between input features and reservoir nodes.
- bias_vector: The bias vector added at each time step during the reservoir update.
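A hedged end-to-end sketch of calling create_states directly, following the documented signature. The matrix shapes (reservoir of size res_size × res_size, input of size res_size × in_size) and the RNN() driver are assumptions about the package's conventions; check the installed version before relying on them:

using ReservoirComputing

train_data = rand(Float32, 3, 100)  # 3 input features, 100 time steps
res_size = 50
W = rand(Float32, res_size, res_size) .- 0.5f0  # reservoir matrix (assumed shape)
W_in = rand(Float32, res_size, 3) .- 0.5f0      # input matrix (assumed shape)
b = zeros(Float32, res_size)                    # bias vector
states = ReservoirComputing.create_states(RNN(), train_data, 10, W, W_in, b)
# with washout = 10, states of size (res_size, 90) are expected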
References
- Barbosa, W. A.; Griffith, A.; Rowlands, G. E.; Govia, L. C.; Ribeill, G. J.; Nguyen, M.-H.; Ohki, T. A. and Gauthier, D. J. (2021). Symmetry-aware reservoir computing. Physical Review E 104.
- Chattopadhyay, A.; Hassanzadeh, P. and Subramanian, D. (2020). Data-driven predictions of a multiscale Lorenz 96 chaotic system using machine-learning methods: reservoir computing, artificial neural network, and long short-term memory network. Nonlinear Processes in Geophysics 27, 373–389.
- Herteux, J. and Räth, C. (2020). Breaking symmetries of the reservoir equations in echo state networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 30.
- Pathak, J.; Lu, Z.; Hunt, B. R.; Girvan, M. and Ott, E. (2017). Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data. Chaos: An Interdisciplinary Journal of Nonlinear Science 27.