API
FiniteDiff — Module
Fast non-allocating calculations of gradients, Jacobians, and Hessians with sparsity support.
Derivatives
FiniteDiff.finite_difference_derivative — Function
FiniteDiff.finite_difference_derivative(
f, x::T,
fdtype::Type{T1}=Val{:central},
returntype::Type{T2}=eltype(x),
f_x::Union{Nothing,T}=nothing)
Single-point derivative of scalar->scalar maps.
FiniteDiff.finite_difference_derivative(
f,
x :: AbstractArray{<:Number},
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(x), # return type of f
fx :: Union{Nothing,AbstractArray{<:Number}} = nothing,
epsilon :: Union{Nothing,AbstractArray{<:Real}} = nothing;
[epsilon_factor])
Compute the derivative df of a scalar-valued map f at a collection of points x.
Cache-less.
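As a quick illustration (a minimal sketch; the returned values are finite difference approximations, so they only approximately equal the analytic derivatives):

using FiniteDiff

# Derivative of sin at a single point; the exact value is cos(1.0)
FiniteDiff.finite_difference_derivative(sin, 1.0)

# Derivative of sin at a collection of points; the exact values are cos.(xs)
xs = [0.0, 0.5, 1.0]
FiniteDiff.finite_difference_derivative(sin, xs)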
FiniteDiff.finite_difference_derivative! — Function
FiniteDiff.finite_difference_derivative!(
df :: AbstractArray{<:Number},
f,
x :: AbstractArray{<:Number},
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(x),
fx :: Union{Nothing,AbstractArray{<:Number}} = nothing,
epsilon :: Union{Nothing,AbstractArray{<:Real}} = nothing;
[epsilon_factor])
Compute the derivative df of a scalar-valued map f at a collection of points x.
Cache-less but non-allocating if fx and epsilon are supplied (fx must be f(x)).
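A sketch of the non-allocating call, assuming fx holds f evaluated at each point (needed for forward differencing) and epsilon is a same-sized workspace:

using FiniteDiff

f(x) = sin(x)
xs = collect(range(0.0, 1.0; length = 5))
df = similar(xs)
fx = f.(xs)            # f evaluated at each point; required for forward differencing
epsilon = similar(xs)  # workspace for the step sizes

FiniteDiff.finite_difference_derivative!(df, f, xs, Val{:forward}, eltype(xs), fx, epsilon)
# df ≈ cos.(xs)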
FiniteDiff.finite_difference_derivative!(
df::AbstractArray{<:Number},
f,
x::AbstractArray{<:Number},
cache::DerivativeCache{T1,T2,fdtype,returntype};
[epsilon_factor])
Compute the derivative df of a scalar-valued map f at a collection of points x.
Cached.
FiniteDiff.DerivativeCache — Type
FiniteDiff.DerivativeCache(
x :: AbstractArray{<:Number},
fx :: Union{Nothing,AbstractArray{<:Number}} = nothing,
epsilon :: Union{Nothing,AbstractArray{<:Real}} = nothing,
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(x))
This allocates either fx or epsilon if they are nothing and they are needed. fx is the current call of f(x); it is required for forward differencing but otherwise unnecessary.
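For example, the cache can be allocated once and reused across calls (a minimal sketch using the default central differencing, which does not need fx):

using FiniteDiff

f(x) = sin(x)
xs = collect(range(0.0, 1.0; length = 5))
df = similar(xs)

cache = FiniteDiff.DerivativeCache(xs)              # allocates the epsilon workspace internally
FiniteDiff.finite_difference_derivative!(df, f, xs, cache)
# df ≈ cos.(xs)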
Gradients
FiniteDiff.finite_difference_gradient — Function
FiniteDiff.finite_difference_gradient(
f,
x,
fdtype::Type{T1}=Val{:central},
returntype::Type{T2}=eltype(x),
inplace::Type{Val{T3}}=Val{true};
[epsilon_factor])
Gradients are either a vector->scalar map f(x), or a scalar->vector map f(fx,x) if inplace=Val{true} and fx=f(x) if inplace=Val{false}.
Cache-less.
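For example, for a vector->scalar map (a minimal sketch; values are approximate):

using FiniteDiff

f(x) = x[1]^2 + x[2]^2
x = [1.0, 3.0]
FiniteDiff.finite_difference_gradient(f, x)   # ≈ [2.0, 6.0]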
FiniteDiff.finite_difference_gradient!(
df::AbstractArray{<:Number},
f,
x::AbstractArray{<:Number},
cache::GradientCache;
[epsilon_factor])
Gradients are either a vector->scalar map f(x), or a scalar->vector map f(fx,x) if inplace=Val{true} and fx=f(x) if inplace=Val{false}.
Cached.
FiniteDiff.finite_difference_gradient! — Function
FiniteDiff.finite_difference_gradient!(
df,
f,
x,
fdtype::Type{T1}=Val{:central},
returntype::Type{T2}=eltype(df),
inplace::Type{Val{T3}}=Val{true};
[epsilon_factor])
Gradients are either a vector->scalar map f(x), or a scalar->vector map f(fx,x) if inplace=Val{true} and fx=f(x) if inplace=Val{false}.
Cache-less.
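The in-place form writes into a preallocated df (a minimal sketch; values are approximate):

using FiniteDiff

f(x) = x[1]^2 + x[2]^2
x = [1.0, 3.0]
df = similar(x)
FiniteDiff.finite_difference_gradient!(df, f, x)
# df ≈ [2.0, 6.0]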
FiniteDiff.GradientCache — Type
FiniteDiff.GradientCache(
df :: Union{<:Number,AbstractArray{<:Number}},
x :: Union{<:Number, AbstractArray{<:Number}},
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(df),
inplace :: Type{Val{T3}} = Val{true})
Allocating Cache Constructor
FiniteDiff.GradientCache(
fx :: Union{Nothing,<:Number,AbstractArray{<:Number}},
c1 :: Union{Nothing,AbstractArray{<:Number}},
c2 :: Union{Nothing,AbstractArray{<:Number}},
c3 :: Union{Nothing,AbstractArray{<:Number}},
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(fx),
inplace :: Type{Val{T3}} = Val{true})
Non-Allocating Cache Constructor
Arguments
fx: Cached function call.
c1, c2, c3: (Non-aliased) caches for the input vector.
fdtype = Val(:central): Method for computing the finite difference.
returntype = eltype(fx): Element type for the returned function value.
inplace = Val(false): Whether the function is computed in-place or not.
Output
The output is a GradientCache struct.
julia> x = [1.0, 3.0]
2-element Vector{Float64}:
1.0
3.0
julia> _f = x -> x[1] + x[2]
#13 (generic function with 1 method)
julia> fx = _f(x)
4.0
julia> gradcache = GradientCache(copy(x), copy(x), copy(x), fx)
GradientCache{Float64, Vector{Float64}, Vector{Float64}, Vector{Float64}, Val{:central}(), Float64, Val{false}()}(4.0, [1.0, 3.0], [1.0, 3.0], [1.0, 3.0])
Gradients are either a vector->scalar map f(x), or a scalar->vector map f(fx,x) if inplace=Val{true} and fx=f(x) if inplace=Val{false}.
Note that fx is a cached function call of f. If you provide fx, it will be used in the forward differencing method to skip a function call. It is up to you to update cache.fx before every call to FiniteDiff.finite_difference_gradient!. If fx is immutable, e.g. a scalar or a StaticArray, cache.fx should be updated using @set from Setfield.jl. A good use of this is when you already have a cache array holding the output fx: you can alias it into the differencing algorithm here.
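Continuing the example above, the cache can then be reused across gradient evaluations (a minimal sketch; the gradient of x[1] + x[2] is [1.0, 1.0]):

df = similar(x)
FiniteDiff.finite_difference_gradient!(df, _f, x, gradcache)
# df ≈ [1.0, 1.0]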
Jacobians
FiniteDiff.finite_difference_jacobian — Function
FiniteDiff.finite_difference_jacobian(
f,
x :: AbstractArray{<:Number},
fdtype :: Type{T1}=Val{:central},
returntype :: Type{T2}=eltype(x),
relstep=default_relstep(fdtype, eltype(x)),
absstep=relstep,
colorvec = 1:length(x),
sparsity = nothing,
jac_prototype = nothing)
Cache-less.
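For example, for an out-of-place map fx = f(x) (a minimal sketch; values are approximate):

using FiniteDiff

f(x) = [x[1]^2 + x[2], 3x[2]]
x = [1.0, 2.0]
FiniteDiff.finite_difference_jacobian(f, x)
# ≈ [2.0 1.0; 0.0 3.0]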
FiniteDiff.finite_difference_jacobian(
f,
x,
cache::JacobianCache;
relstep=default_relstep(fdtype, eltype(x)),
absstep=relstep,
colorvec = cache.colorvec,
sparsity = cache.sparsity,
jac_prototype = nothing)
Cached.
FiniteDiff.finite_difference_jacobian! — Function
finite_difference_jacobian!(
J::AbstractMatrix,
f,
x::AbstractArray{<:Number},
fdtype :: Type{T1}=Val{:forward},
returntype :: Type{T2}=eltype(x),
f_in :: Union{T2,Nothing}=nothing;
relstep=default_relstep(fdtype, eltype(x)),
absstep=relstep,
colorvec = 1:length(x),
sparsity = ArrayInterfaceCore.has_sparsestruct(J) ? J : nothing)
Cache-less.
FiniteDiff.finite_difference_jacobian!(
J::AbstractMatrix{<:Number},
f,
x::AbstractArray{<:Number},
cache::JacobianCache;
relstep=default_relstep(fdtype, eltype(x)),
absstep=relstep,
colorvec = cache.colorvec,
sparsity = cache.sparsity)
Cached.
FiniteDiff.JacobianCache — Type
FiniteDiff.JacobianCache(
x,
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(x),
colorvec = 1:length(x),
sparsity = nothing)
Allocating Cache Constructor.
This assumes the Jacobian is square.
FiniteDiff.JacobianCache(
x1,
fx,
fx1,
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(fx),
colorvec = 1:length(x1),
sparsity = nothing)
Non-Allocating Cache Constructor.
Jacobians are for functions f!(fx,x) when using the in-place finite_difference_jacobian!, and fx = f(x) when using the out-of-place finite_difference_jacobian. The out-of-place Jacobian will return a type similar to jac_prototype if it is not nothing. For non-square Jacobians, a cache which specifies the vector fx is required.
For sparse differentiation, pass a colorvec of matrix colors. sparsity should be a sparse or structured matrix (Tridiagonal, Banded, etc., according to the ArrayInterfaceCore.jl specs) to allow for decompression; otherwise the result will be the colorvec-compressed Jacobian.
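For example, a non-square Jacobian of an in-place map f!(fx, x) with a preallocated cache (a minimal sketch; values are approximate):

using FiniteDiff

function f!(fx, x)          # 2 inputs -> 3 outputs
    fx[1] = x[1] + x[2]
    fx[2] = x[1] * x[2]
    fx[3] = sin(x[1])
    return nothing
end

x  = [1.0, 2.0]
fx = zeros(3)
J  = zeros(3, 2)

# Non-square Jacobian: the cache must carry the output vector fx
cache = FiniteDiff.JacobianCache(copy(x), copy(fx), copy(fx))
FiniteDiff.finite_difference_jacobian!(J, f!, x, cache)
# J ≈ [1.0 1.0; 2.0 1.0; cos(1.0) 0.0]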
Hessians
FiniteDiff.finite_difference_hessian — Function
finite_difference_hessian(
f,
x::AbstractArray{<:Number},
fdtype :: Type{T1}=Val{:hcentral},
inplace :: Type{Val{T2}} = x isa StaticArray ? Val{true} : Val{false};
relstep=default_relstep(fdtype, eltype(x)),
absstep=relstep)
Cache-less.
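For example, for a scalar-valued function of a vector (a minimal sketch; values are approximate):

using FiniteDiff

f(x) = x[1]^2 + 3x[1]*x[2] + x[2]^2
x = [1.0, 2.0]
FiniteDiff.finite_difference_hessian(f, x)
# ≈ [2.0 3.0; 3.0 2.0]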
finite_difference_hessian(
f,
x,
cache::HessianCache{T,fdtype,inplace};
relstep=default_relstep(fdtype, eltype(x)),
absstep=relstep)
Cached.
FiniteDiff.finite_difference_hessian! — Function
finite_difference_hessian!(
H::AbstractMatrix,
f,
x::AbstractArray{<:Number},
fdtype :: Type{T1}=Val{:hcentral},
inplace :: Type{Val{T2}} = x isa StaticArray ? Val{true} : Val{false};
relstep=default_relstep(fdtype, eltype(x)),
absstep=relstep)
Cache-less.
finite_difference_hessian!(
H,
f,
x,
cache::HessianCache{T,fdtype,inplace};
relstep = default_relstep(fdtype, eltype(x)),
absstep = relstep)
Cached.
FiniteDiff.HessianCache — Type
HessianCache(
xpp,
xpm,
xmp,
xmm,
fdtype::Type{T1}=Val{:hcentral},
inplace::Type{Val{T2}} = x isa StaticArray ? Val{true} : Val{false})
Non-allocating cache constructor.
HessianCache(
x,
fdtype::Type{T1}=Val{:hcentral},
inplace::Type{Val{T2}} = x isa StaticArray ? Val{true} : Val{false})
Allocating cache constructor.
Hessians are for functions f(x) which return a scalar.
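For example, with a preallocated Hessian and cache (a minimal sketch; values are approximate):

using FiniteDiff

f(x) = x[1]^2 + 3x[1]*x[2] + x[2]^2
x = [1.0, 2.0]
H = zeros(2, 2)

cache = FiniteDiff.HessianCache(x)                 # allocating constructor
FiniteDiff.finite_difference_hessian!(H, f, x, cache)
# H ≈ [2.0 3.0; 3.0 2.0]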