Surrogate
Every surrogate has a different constructor depending on the parameters it needs, but all of them implement the interface defined in SurrogatesBase.jl. In a nutshell, they support:
update!(::AbstractDeterministicSurrogate, x_new, y_new)
AbstractDeterministicSurrogate(value)
The first function adds a sample point to the surrogate, thus changing the internal coefficients. The second one calculates the approximation at value.
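For example, here is a minimal sketch of the interface in action. The test function and sample points are made up, and LinearSurrogate (documented below) stands in for any surrogate type; update! is assumed to be re-exported by Surrogates from SurrogatesBase.jl.

```julia
using Surrogates

f = x -> 2x + 1                       # made-up 1-D function to approximate
lb, ub = 0.0, 10.0                    # lower and upper bounds
x = [1.0, 3.0, 5.0, 7.0, 9.0]         # sampled locations
y = f.(x)                             # sampled values

my_sur = LinearSurrogate(x, y, lb, ub)
my_sur(4.0)                           # approximation of f at 4.0
update!(my_sur, 4.0, f(4.0))          # add a sample point and refit the coefficients
```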
- Linear surrogate
Surrogates.LinearSurrogate — Method
LinearSurrogate(x, y, lb, ub)
Builds a linear surrogate using GLM.jl
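A sketch of the N-dimensional case, assuming the method that takes sample points as tuples and bounds as vectors (the test function is made up):

```julia
using Surrogates

g = x -> 3x[1] - 2x[2] + 1                    # made-up 2-D function
lb, ub = [0.0, 0.0], [10.0, 10.0]
x = sample(20, lb, ub, SobolSample())         # quasi-random sample of tuples
y = g.(x)

linear_sur = LinearSurrogate(x, y, lb, ub)
linear_sur((4.0, 7.0))                        # approximate g at a new point
```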
- Radial basis function surrogate
Surrogates.RadialBasis — Method
RadialBasis(x, y, lb, ub, rad::RadialFunction, scale_factor::Float = 1.0)
Constructor for RadialBasis surrogate, of the form
$f(x) = \sum_{i=1}^{N} w_i \phi(|x - \bold{c}_i|) + \bold{v}^{\mathrm{T}} [\,1; \bold{x}\,]$
where $w_i$ are the weights of polyharmonic splines $\phi(x)$ and $\bold{v}$ are coefficients of a polynomial term.
References: https://en.wikipedia.org/wiki/Polyharmonic_spline
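A minimal 1-D sketch, assuming a release where the radial kernel and scale_factor can be left at their defaults (the test function is made up):

```julia
using Surrogates

h = x -> x * sin(x)                    # made-up nonlinear test function
lb, ub = 0.0, 10.0
x = sample(25, lb, ub, SobolSample())
y = h.(x)

rbf = RadialBasis(x, y, lb, ub)        # default kernel and scale factor assumed
rbf(4.3)                               # approximate h(4.3)
update!(rbf, 4.3, h(4.3))              # refine the surrogate with a new sample
```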
- Kriging surrogate
Surrogates.Kriging — Method
Kriging(x, y, lb, ub; p = collect(one.(x[1])), theta = collect(one.(x[1])))
Constructor for Kriging surrogate.
- (x,y): sampled points
- p: array of values 0<=p<2 modeling the smoothness of the function being approximated in the i-th variable. low p -> rough, high p -> smooth
- theta: array of values > 0 modeling how quickly the function changes in the i-th variable.
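A 2-D sketch; the p and theta values are illustrative, with one entry per input variable:

```julia
using Surrogates

f2 = x -> log(x[1]) * exp(x[2])               # made-up 2-D test function
lb, ub = [1.0, 0.0], [10.0, 1.0]
x = sample(40, lb, ub, SobolSample())
y = f2.(x)

krig = Kriging(x, y, lb, ub; p = [1.9, 1.9], theta = [0.5, 0.5])
krig((5.0, 0.5))                              # approximate f2 at a new point
```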
- Lobachevsky surrogate
Surrogates.LobachevskySurrogate — Method
LobachevskySurrogate(x, y, alpha, n::Int, lb, ub, sparse = false)
Build the Lobachevsky surrogate with parameters alpha and n.
Surrogates.lobachevsky_integral — Method
lobachevsky_integral(loba::LobachevskySurrogate, lb, ub)
Calculates the integral of the Lobachevsky surrogate, which has a closed form.
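A 1-D sketch following the positional signature above; the alpha and n values are illustrative:

```julia
using Surrogates

f3 = x -> x * exp(-x)                          # made-up test function
lb, ub = 0.0, 5.0
x = sample(30, lb, ub, SobolSample())
y = f3.(x)

loba = LobachevskySurrogate(x, y, 2.0, 6, lb, ub)   # alpha = 2.0, n = 6
loba(2.5)                                      # approximate f3(2.5)
lobachevsky_integral(loba, lb, ub)             # closed-form integral over [lb, ub]
```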
- Support vector machine surrogate, requires using LIBSVM and using SurrogatesSVM
SVMSurrogate(x, y, lb::Number, ub::Number)
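A 1-D sketch (the test function is made up):

```julia
using Surrogates, LIBSVM, SurrogatesSVM

f4 = x -> x^2                                  # made-up test function
lb, ub = 0.0, 2.0
x = sample(20, lb, ub, SobolSample())
y = f4.(x)

svm_sur = SVMSurrogate(x, y, lb, ub)
svm_sur(1.3)                                   # approximate f4(1.3)
```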
- Random forest surrogate, requires using XGBoost and using SurrogatesRandomForest
RandomForestSurrogate(x, y, lb, ub; num_round::Int = 1)
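A 1-D sketch; num_round is the number of boosting rounds passed to XGBoost (the value is illustrative):

```julia
using Surrogates, XGBoost, SurrogatesRandomForest

f5 = x -> sin(x) + x                           # made-up test function
lb, ub = 0.0, 10.0
x = sample(30, lb, ub, SobolSample())
y = f5.(x)

rf = RandomForestSurrogate(x, y, lb, ub; num_round = 2)
rf(5.5)                                        # approximate f5(5.5)
```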
- Neural network surrogate, requires using Flux and using SurrogatesFlux
NeuralSurrogate(x, y, lb, ub; model = Chain(Dense(length(x[1]), 1), first), loss = (x, y) -> Flux.mse(model(x), y), opt = Descent(0.01), n_echos::Int = 1)
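A 1-D sketch that keeps the default model, loss, and opt from the signature above and only raises the number of training epochs (the value is illustrative):

```julia
using Surrogates, Flux, SurrogatesFlux

f6 = x -> x^2                                  # made-up test function
lb, ub = 0.0, 5.0
x = sample(40, lb, ub, SobolSample())
y = f6.(x)

neural = NeuralSurrogate(x, y, lb, ub; n_echos = 10)
neural(2.2)                                    # approximate f6(2.2)
```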
Creating another surrogate
It's great that you want to add another surrogate to the library! You will need to:
- Define a new mutable struct and a constructor function
- Define update!(your_surrogate, x_new, y_new)
- Define your_surrogate(value) for the approximation
Example
mutable struct NewSurrogate{X, Y, L, U, C, A, B} <: AbstractDeterministicSurrogate
x::X
y::Y
lb::L
ub::U
coeff::C
alpha::A
beta::B
end
function NewSurrogate(x, y, lb, ub, parameters)
...
return NewSurrogate(x, y, lb, ub, calculated_coeff, alpha, beta)
end
function update!(s::NewSurrogate, x_new, y_new)
...
end
function (s::NewSurrogate)(value)
return s.coeff * value + s.alpha
end
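To make the recipe concrete, here is a complete, deliberately trivial sketch: a hypothetical MeanSurrogate that always predicts the mean of the observed values. The type name and implementation are purely illustrative and not part of the library:

```julia
using SurrogatesBase
import SurrogatesBase: update!

# Hypothetical minimal surrogate: its only "coefficient" is the mean of the observed y values.
mutable struct MeanSurrogate{X, Y, L, U} <: AbstractDeterministicSurrogate
    x::X
    y::Y
    lb::L
    ub::U
    mean_y::Float64
end

MeanSurrogate(x, y, lb, ub) = MeanSurrogate(x, y, lb, ub, sum(y) / length(y))

function update!(s::MeanSurrogate, x_new, y_new)
    push!(s.x, x_new)                       # store the new sample
    push!(s.y, y_new)
    s.mean_y = sum(s.y) / length(s.y)       # refit the "coefficient"
    return s
end

(s::MeanSurrogate)(value) = s.mean_y        # constant approximation everywhere

# Usage
s = MeanSurrogate([1.0, 2.0], [10.0, 20.0], 0.0, 5.0)
s(3.0)                 # 15.0
update!(s, 3.0, 30.0)
s(3.0)                 # 20.0
```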