Gradients
Functions for computing gradients of scalar-valued functions with respect to vector inputs.
Function Types
Gradients support two types of function mappings:
- Vector→scalar: f(x), where x is a vector and f returns a scalar
- Scalar→vector: f(fx, x) for in-place evaluation, or fx = f(x) for out-of-place
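As a sketch of the two mappings (assuming FiniteDiff is loaded; positional arguments follow the signature documented below):

```julia
using FiniteDiff

# Vector→scalar: f takes a vector and returns a number
f(x) = x[1]^2 + x[2]^2
g1 = FiniteDiff.finite_difference_gradient(f, [1.0, 2.0])   # ≈ [2.0, 4.0]

# Scalar→vector (out-of-place): g takes a number and returns a vector
g(t) = [t^2, t^3]
g2 = FiniteDiff.finite_difference_gradient(g, 2.0, Val(:central), Float64, Val(false))   # ≈ [4.0, 12.0]
```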
Performance Notes
- Forward differences: O(n) function evaluations, O(h) accuracy
- Central differences: O(2n) function evaluations, O(h²) accuracy
- Complex step: O(n) function evaluations, machine-precision accuracy
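To see these accuracy orders concretely, here is a standalone check using plain difference quotients (not FiniteDiff's internals) on d/dx sin(x) at x = 1, whose exact value is cos(1):

```julia
h = 1e-5
x = 1.0
exact = cos(x)

forward = (sin(x + h) - sin(x)) / h                 # truncation error on the order of h
central = (sin(x + h) - sin(x - h)) / (2h)          # truncation error on the order of h^2
complexstep = imag(sin(x + im * 1e-100)) / 1e-100   # no subtractive cancellation at all

abs(forward - exact)       # roughly 1e-6
abs(central - exact)       # roughly 1e-11
abs(complexstep - exact)   # at machine precision
```

The complex step avoids subtracting two nearby function values, so the step can be made tiny without roundoff loss, which is why it reaches machine precision.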
Cache Management
When using GradientCache with pre-computed function values:
- If you provide fx, then fx will be used in forward differencing to skip a function call
- You must update cache.fx before each call to finite_difference_gradient!
- For immutable types (scalars, StaticArray), use @set from Setfield.jl
- Consider aliasing existing arrays into the cache for memory efficiency
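A sketch of that workflow, assuming the non-allocating constructor ordering documented below (fx, c1, c2, c3) and that Val instances are accepted for fdtype:

```julia
using FiniteDiff, Setfield

f(x) = x[1]^2 + x[2]^2
x  = [1.0, 2.0]
df = similar(x)

# Providing fx lets forward differencing skip one call to f:
cache = FiniteDiff.GradientCache(f(x), copy(x), copy(x), copy(x), Val(:forward))
FiniteDiff.finite_difference_gradient!(df, f, x, cache)   # df ≈ [2.0, 4.0]

# GradientCache is an immutable struct, so rebind a refreshed fx
# with @set before differentiating at a new point:
x2 = [3.0, 4.0]
cache = @set cache.fx = f(x2)
FiniteDiff.finite_difference_gradient!(df, f, x2, cache)  # df ≈ [6.0, 8.0]
```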
Functions
FiniteDiff.finite_difference_gradient — Function
FiniteDiff.finite_difference_gradient(
f,
x,
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(x),
inplace :: Type{Val{T3}} = Val{true};
relstep = default_relstep(fdtype, eltype(x)),
absstep = relstep,
dir = true)

Compute the gradient of function f at point x using finite differences.
This is the cache-less version that allocates temporary arrays internally. Supports both vector→scalar maps f(x) → scalar and scalar→vector maps depending on the inplace parameter and function signature.
Arguments
- f: Function to differentiate
  - If typeof(x) <: AbstractArray: f(x) should return a scalar (vector→scalar gradient)
  - If typeof(x) <: Number and inplace=Val(true): f(fx, x) modifies fx in-place (scalar→vector gradient)
  - If typeof(x) <: Number and inplace=Val(false): f(x) returns a vector (scalar→vector gradient)
- x: Point at which to evaluate the gradient (vector or scalar)
- fdtype::Type{T1}=Val{:central}: Finite difference method (:forward, :central, :complex)
- returntype::Type{T2}=eltype(x): Element type of gradient components
- inplace::Type{Val{T3}}=Val{true}: Whether to use in-place function evaluation
Keyword Arguments
- relstep: Relative step size (default: method-dependent optimal value)
- absstep=relstep: Absolute step size fallback
- dir=true: Direction for step size (typically ±1)
Returns
- Gradient vector ∇f, where ∇f[i] = ∂f/∂x[i]
Examples
# Vector→scalar gradient
f(x) = x[1]^2 + x[2]^2
x = [1.0, 2.0]
grad = finite_difference_gradient(f, x) # [2.0, 4.0]
# Scalar→vector gradient (out-of-place)
g(t) = [t^2, t^3]
t = 2.0
grad = finite_difference_gradient(g, t, Val(:central), eltype(t), Val(false))

Notes
- Forward differences: O(n) function evaluations, O(h) accuracy
- Central differences: O(2n) function evaluations, O(h²) accuracy
- Complex step: O(n) function evaluations, machine-precision accuracy
FiniteDiff.finite_difference_gradient!(
df :: AbstractArray{<:Number},
f,
x :: AbstractArray{<:Number},
cache :: GradientCache;
relstep = default_relstep(fdtype, eltype(x)),
absstep = relstep,
dir = true)

Gradients are either a vector→scalar map f(x), or a scalar→vector map: f(fx, x) if inplace=Val{true}, or fx = f(x) if inplace=Val{false}.
Cached.
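A minimal end-to-end use of the cached method, as a sketch assuming the allocating GradientCache constructor documented below:

```julia
using FiniteDiff

f(x) = sum(abs2, x)    # scalar-valued: f(x) = x₁² + x₂² + x₃²
x  = [1.0, 2.0, 3.0]
df = similar(x)

cache = FiniteDiff.GradientCache(df, x)                   # central differences by default
FiniteDiff.finite_difference_gradient!(df, f, x, cache)   # df ≈ [2.0, 4.0, 6.0]

# Subsequent calls reuse the cache's internal buffers with no new allocation:
x .= 0.5
FiniteDiff.finite_difference_gradient!(df, f, x, cache)   # df ≈ [1.0, 1.0, 1.0]
```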
FiniteDiff.finite_difference_gradient! — Function
FiniteDiff.finite_difference_gradient!(
df,
f,
x,
fdtype::Type{T1}=Val{:central},
returntype::Type{T2}=eltype(df),
inplace::Type{Val{T3}}=Val{true};
relstep=default_relstep(fdtype, eltype(x)),
absstep=relstep)

Gradients are either a vector→scalar map f(x), or a scalar→vector map: f(fx, x) if inplace=Val{true}, or fx = f(x) if inplace=Val{false}.
Cache-less.
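A sketch of the cache-less in-place form, assuming Val instances are accepted for the fdtype positional argument:

```julia
using FiniteDiff

f(x) = x[1] * x[2]
x  = [3.0, 5.0]
df = similar(x)

# Central differences (the default): O(2n) evaluations, O(h²) accurate
FiniteDiff.finite_difference_gradient!(df, f, x)                 # df ≈ [5.0, 3.0]

# Forward differences: only O(n) evaluations, but O(h) accurate
FiniteDiff.finite_difference_gradient!(df, f, x, Val(:forward))  # df ≈ [5.0, 3.0]
```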
Cache
FiniteDiff.GradientCache — Type
FiniteDiff.GradientCache(
df :: Union{<:Number,AbstractArray{<:Number}},
x :: Union{<:Number, AbstractArray{<:Number}},
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(df),
inplace :: Type{Val{T3}} = Val{true})

Allocating Cache Constructor
FiniteDiff.GradientCache(
fx :: Union{Nothing,<:Number,AbstractArray{<:Number}},
c1 :: Union{Nothing,AbstractArray{<:Number}},
c2 :: Union{Nothing,AbstractArray{<:Number}},
c3 :: Union{Nothing,AbstractArray{<:Number}},
fdtype :: Type{T1} = Val{:central},
returntype :: Type{T2} = eltype(fx),
inplace :: Type{Val{T3}} = Val{true})

Non-Allocating Cache Constructor
Arguments
- fx: Cached function call.
- c1, c2, c3: (Non-aliased) caches for the input vector.
- fdtype = Val(:central): Method for computing the finite difference.
- returntype = eltype(fx): Element type for the returned function value.
- inplace = Val(false): Whether the function is computed in-place or not.
Output
The output is a GradientCache struct.
julia> x = [1.0, 3.0]
2-element Vector{Float64}:
1.0
3.0
julia> _f = x -> x[1] + x[2]
#13 (generic function with 1 method)
julia> fx = _f(x)
4.0
julia> gradcache = GradientCache(fx, copy(x), copy(x), copy(x))
GradientCache{Float64, Vector{Float64}, Vector{Float64}, Vector{Float64}, Val{:central}(), Float64, Val{false}()}(4.0, [1.0, 3.0], [1.0, 3.0], [1.0, 3.0])
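The cache built this way can then be handed to the in-place gradient routine. A self-contained sketch of that continuation (df names a hypothetical output buffer):

```julia
using FiniteDiff

_f = x -> x[1] + x[2]
x  = [1.0, 3.0]

# Non-allocating constructor: pre-computed fx first, then the input caches
gradcache = FiniteDiff.GradientCache(_f(x), copy(x), copy(x), copy(x))

df = similar(x)
FiniteDiff.finite_difference_gradient!(df, _f, x, gradcache)   # df ≈ [1.0, 1.0]
```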