States Modifications

Padding and Extension

ReservoirComputing.StandardStates - Type
StandardStates()

When this struct is employed, the states of the reservoir are not modified.

Example

julia> states = StandardStates()
StandardStates()

julia> test_vec = zeros(Float32, 5)
5-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0

julia> new_vec = states(test_vec)
5-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0

julia> test_mat = zeros(Float32, 5, 5)
5×5 Matrix{Float32}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0

julia> new_mat = states(test_mat)
5×5 Matrix{Float32}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
ReservoirComputing.ExtendedStates - Type
ExtendedStates()

The ExtendedStates struct is used to extend the reservoir states by vertically concatenating the input data (during training) and the prediction data (during the prediction phase).

Example

julia> states = ExtendedStates()
ExtendedStates()

julia> test_vec = zeros(Float32, 5)
5-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0

julia> new_vec = states(test_vec, fill(3.0f0, 3))
8-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0
 3.0
 3.0
 3.0

julia> test_mat = zeros(Float32, 5, 5)
5×5 Matrix{Float32}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0

julia> new_mat = states(test_mat, fill(3.0f0, 3))
8×5 Matrix{Float32}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 3.0  3.0  3.0  3.0  3.0
 3.0  3.0  3.0  3.0  3.0
 3.0  3.0  3.0  3.0  3.0
ReservoirComputing.PaddedStates - Type
PaddedStates(padding)
PaddedStates(;padding=1.0)

Creates an instance of the PaddedStates struct with the specified padding value (default 1.0). The reservoir states are padded by vertically concatenating the padding value.

Example

julia> states = PaddedStates(1.0)
PaddedStates{Float64}(1.0)

julia> test_vec = zeros(Float32, 5)
5-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0

julia> new_vec = states(test_vec)
6-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0
 1.0

julia> test_mat = zeros(Float32, 5, 5)
5×5 Matrix{Float32}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0

julia> new_mat = states(test_mat)
6×5 Matrix{Float32}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 1.0  1.0  1.0  1.0  1.0
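
For reference, the keyword constructor from the signature above is equivalent to the positional form used in this example; a minimal sketch:

    states_kw = PaddedStates(; padding = 2.0)   # same modifier, padding with 2.0 instead of 1.0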
ReservoirComputing.PaddedExtendedStates - Type
PaddedExtendedStates(padding)
PaddedExtendedStates(;padding=1.0)

Constructs a PaddedExtendedStates struct, which pads the reservoir states with a specified value (defaulting to 1.0) and extends them with the training or prediction data; see the example below for the resulting layout.

Example

julia> states = PaddedExtendedStates(1.0)
PaddedExtendedStates{Float64}(1.0)

julia> test_vec = zeros(Float32, 5)
5-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0

julia> new_vec = states(test_vec, fill(3.0f0, 3))
9-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0
 1.0
 3.0
 3.0
 3.0

julia> test_mat = zeros(Float32, 5, 5)
5×5 Matrix{Float32}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0

julia> new_mat = states(test_mat, fill(3.0f0, 3))
9×5 Matrix{Float32}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 1.0  1.0  1.0  1.0  1.0
 3.0  3.0  3.0  3.0  3.0
 3.0  3.0  3.0  3.0  3.0
 3.0  3.0  3.0  3.0  3.0

Non-Linear Transformations

ReservoirComputing.NLADefault - Type
NLADefault()

NLADefault represents the default non-linear algorithm option. When used, it leaves the input array unchanged.

Example

julia> nlat = NLADefault()
NLADefault()

julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
 0
 1
 2
 3
 4
 5
 6
 7
 8
 9

julia> n_new = nlat(x_old)
10-element Vector{Int64}:
 0
 1
 2
 3
 4
 5
 6
 7
 8
 9

julia> mat_old = [1 2 3;
                  4 5 6;
                  7 8 9;
                  10 11 12;
                  13 14 15;
                  16 17 18;
                  19 20 21]
7×3 Matrix{Int64}:
  1   2   3
  4   5   6
  7   8   9
 10  11  12
 13  14  15
 16  17  18
 19  20  21

julia> mat_new = nlat(mat_old)
7×3 Matrix{Int64}:
  1   2   3
  4   5   6
  7   8   9
 10  11  12
 13  14  15
 16  17  18
 19  20  21
ReservoirComputing.NLAT1 - Type
NLAT1()

NLAT1 implements the T₁ transformation algorithm introduced in (Chattopadhyay et al., 2020) and (Pathak et al., 2017). The T₁ algorithm squares every other row of the input array, starting from the first (i.e., the odd-indexed rows), and leaves the remaining rows unchanged.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j} \times r_{i,j}, & \text{if } j \text{ is odd}; \\ r_{i,j}, & \text{if } j \text{ is even}. \end{cases}\]

Example

julia> nlat = NLAT1()
NLAT1()

julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
 0
 1
 2
 3
 4
 5
 6
 7
 8
 9

julia> n_new = nlat(x_old)
10-element Vector{Int64}:
  0
  1
  4
  3
 16
  5
 36
  7
 64
  9

julia> mat_old = [1  2  3;
                   4  5  6;
                   7  8  9;
                  10 11 12;
                  13 14 15;
                  16 17 18;
                  19 20 21]
7×3 Matrix{Int64}:
  1   2   3
  4   5   6
  7   8   9
 10  11  12
 13  14  15
 16  17  18
 19  20  21

julia> mat_new = nlat(mat_old)
7×3 Matrix{Int64}:
   1    4    9
   4    5    6
  49   64   81
  10   11   12
 169  196  225
  16   17   18
 361  400  441
ReservoirComputing.NLAT2 - Type
NLAT2()

NLAT2 implements the T₂ transformation algorithm as defined in (Chattopadhyay et al., 2020). This transformation modifies the reservoir states by replacing each odd-indexed row after the first with the product of the two preceding rows.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j-1} \times r_{i,j-2}, & \text{if } j > 1 \text{ is odd}; \\ r_{i,j}, & \text{if } j \text{ is 1 or even}. \end{cases}\]

Example

julia> nlat = NLAT2()
NLAT2()

julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
 0
 1
 2
 3
 4
 5
 6
 7
 8
 9

julia> n_new = nlat(x_old)
10-element Vector{Int64}:
  0
  1
  0
  3
  6
  5
 20
  7
 42
  9

julia> mat_old = [1  2  3;
                   4  5  6;
                   7  8  9;
                  10 11 12;
                  13 14 15;
                  16 17 18;
                  19 20 21]
7×3 Matrix{Int64}:
  1   2   3
  4   5   6
  7   8   9
 10  11  12
 13  14  15
 16  17  18
 19  20  21

julia> mat_new = nlat(mat_old)
7×3 Matrix{Int64}:
  1   2    3
  4   5    6
  4  10   18
 10  11   12
 70  88  108
 16  17   18
 19  20   21
ReservoirComputing.NLAT3 - Type
NLAT3()

Implements the T₃ transformation algorithm as detailed in (Chattopadhyay et al., 2020). This algorithm modifies the reservoir states by replacing each odd-indexed row after the first with the product of the immediately preceding and immediately following rows.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j-1} \times r_{i,j+1}, & \text{if } j > 1 \text{ is odd}; \\ r_{i,j}, & \text{if } j = 1 \text{ or even.} \end{cases}\]

Example

julia> nlat = NLAT3()
NLAT3()

julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
 0
 1
 2
 3
 4
 5
 6
 7
 8
 9

julia> n_new = nlat(x_old)
10-element Vector{Int64}:
  0
  1
  3
  3
 15
  5
 35
  7
 63
  9

julia> mat_old = [1  2  3;
                   4  5  6;
                   7  8  9;
                  10 11 12;
                  13 14 15;
                  16 17 18;
                  19 20 21]
7×3 Matrix{Int64}:
  1   2   3
  4   5   6
  7   8   9
 10  11  12
 13  14  15
 16  17  18
 19  20  21

julia> mat_new = nlat(mat_old)
7×3 Matrix{Int64}:
   1    2    3
   4    5    6
  40   55   72
  10   11   12
 160  187  216
  16   17   18
  19   20   21
ReservoirComputing.PartialSquare - Type
PartialSquare(eta)

Implements a partial squaring of the states as described in (Barbosa et al., 2021).

Equations

\[g(r_i) = \begin{cases} r_i^2, & \text{if } i \leq \eta_r N, \\ r_i, & \text{if } i > \eta_r N. \end{cases}\]

Examples

julia> ps = PartialSquare(0.6)
PartialSquare(0.6)


julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
10-element Vector{Int64}:
 0
 1
 2
 3
 4
 5
 6
 7
 8
 9

julia> x_new = ps(x_old)
10-element Vector{Int64}:
  0
  1
  4
  9
 16
 25
  6
  7
  8
  9
ReservoirComputing.ExtendedSquare - Type
ExtendedSquare()

Extension of the Lu initialization proposed in (Herteux and Räth, 2020). The state vector is extended by appending the squared elements of the original state.

Equations

\[\vec{x} = \{x_1, x_2, \dots, x_N, x_1^2, x_2^2, \dots, x_N^2\}\]

Examples

julia> x_old = [1, 2, 3, 4, 5, 6, 7, 8, 9]
9-element Vector{Int64}:
 1
 2
 3
 4
 5
 6
 7
 8
 9

julia> es = ExtendedSquare()
ExtendedSquare()

julia> x_new = es(x_old)
18-element Vector{Int64}:
  1
  2
  3
  4
  5
  6
  7
  8
  9
  1
  4
  9
 16
 25
 36
 49
 64
 81
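
The non-linear transformations above and the state modifiers from the Padding and Extension section can also be composed by hand, since each is a plain callable. The sketch below applies NLAT1 to a small state vector and then pads the result with PaddedStates; it only illustrates the call signatures shown in the examples above, and the order in which an ESN applies these steps internally may differ.

    nlat = NLAT1()
    pad = PaddedStates(1.0)

    x = Float32[0.5, -0.3, 0.2, 0.8]
    x_nla = nlat(x)        # odd-indexed entries squared, as in the NLAT1 example
    x_full = pad(x_nla)    # padding value appended at the end, as in the PaddedStates example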

Internals

ReservoirComputing.create_states - Function
create_states(reservoir_driver::AbstractReservoirDriver, train_data, washout,
    reservoir_matrix, input_matrix, bias_vector)

Create and return the trained Echo State Network (ESN) states according to the specified reservoir driver.

Arguments

  • reservoir_driver: The reservoir driver that determines how the ESN states evolve over time.
  • train_data: The training data used to train the ESN.
  • washout: The number of initial time steps to discard during training to allow the reservoir dynamics to wash out the initial conditions.
  • reservoir_matrix: The reservoir matrix representing the dynamic, recurrent part of the ESN.
  • input_matrix: The input matrix that defines the connections between input features and reservoir nodes.
  • bias_vector: The bias vector to be added at each time step during the reservoir update.
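
A minimal usage sketch is given below. It builds the matrices by hand, uses the package's RNN() driver (assumed here to be the default reservoir driver), and calls the function with the signature documented above; since create_states is internal, the module-qualified name is used. The expected result is a state matrix with one column per retained time step.

    using ReservoirComputing, Random

    res_size, in_size, train_len, washout = 20, 3, 100, 10
    rng = MersenneTwister(17)

    train_data = rand(rng, Float32, in_size, train_len)            # features × time steps
    reservoir_matrix = 0.1f0 .* randn(rng, Float32, res_size, res_size)
    input_matrix = randn(rng, Float32, res_size, in_size)
    bias_vector = zeros(Float32, res_size)

    states = ReservoirComputing.create_states(RNN(), train_data, washout,
        reservoir_matrix, input_matrix, bias_vector)
    # expected size: res_size × (train_len - washout)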

References