Globalization Subroutines
The following globalization subroutines are available.
NonlinearSolveFirstOrder.RadiusUpdateSchemes
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Bastin
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Fan
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Hei
NonlinearSolveFirstOrder.RadiusUpdateSchemes.NLsolve
NonlinearSolveFirstOrder.RadiusUpdateSchemes.NocedalWright
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Simple
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Yuan
Line Search Algorithms
Line searches have been moved to an external package. See the LineSearch.jl package and its documentation.
Radius Update Schemes for Trust Region
NonlinearSolveFirstOrder.RadiusUpdateSchemes — Module

RadiusUpdateSchemes provides the different radius update schemes implemented for the Trust Region method. These schemes specify how the radius of the so-called trust region is updated after each iteration of the algorithm. The specific role and caveats associated with each scheme are described below.
Using RadiusUpdateSchemes
Simply pass the desired scheme to the solver:

```julia
sol = solve(prob, TrustRegion(radius_update_scheme = RadiusUpdateSchemes.Hei))
```
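A fuller sketch of this usage, assuming NonlinearSolve.jl is installed and using a toy root-finding problem invented for illustration:

```julia
using NonlinearSolve  # assumes NonlinearSolve.jl is available

# A toy problem for illustration: f(u, p) = u^2 - p, whose root is sqrt(p).
f(u, p) = u .^ 2 .- p
prob = NonlinearProblem(f, [1.0], 2.0)

# Solve with the Trust Region method, selecting the Hei radius update scheme.
sol = solve(prob, TrustRegion(; radius_update_scheme = RadiusUpdateSchemes.Hei))
```

Any of the schemes listed below can be substituted for `RadiusUpdateSchemes.Hei` in the same way.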
Available Radius Update Schemes
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Simple — Constant
The simple or conventional radius update scheme. This is the default scheme and follows the conventional approach: if the trial step is accepted, the radius is increased by a fixed factor (bounded by a maximum radius); if the trial step is rejected, the radius is shrunk by a fixed factor.
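As an illustration of this rule (a sketch, not the package's internal code; the factors, the acceptance threshold, and the maximum radius below are hypothetical placeholders, not NonlinearSolve's defaults):

```julia
# Sketch of the conventional (Simple) radius update. Δ is the current
# radius and ρ is the ratio of actual to predicted reduction. All
# parameter values here are illustrative placeholders.
function simple_radius_update(Δ, ρ; shrink = 0.25, expand = 2.0,
                              accept_threshold = 0.1, Δ_max = 100.0)
    if ρ ≥ accept_threshold
        # Trial step accepted: grow the radius, capped at Δ_max.
        return min(expand * Δ, Δ_max)
    else
        # Trial step rejected: shrink the radius by a fixed factor.
        return shrink * Δ
    end
end

simple_radius_update(1.0, 0.5)   # accepted step: radius grows to 2.0
simple_radius_update(1.0, 0.01)  # rejected step: radius shrinks to 0.25
```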
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Hei — Constant
This scheme is proposed in Hei [9]. The trust region radius depends on the size (norm) of the current step. The hypothesis is to let the radius converge to zero as the iterations progress, which is more reliable and robust for ill-conditioned as well as degenerate problems.
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Yuan — Constant
This scheme is proposed by Yuan [6]. Similar to Hei's scheme, the trust region is updated so that it converges to zero; however, here the radius depends on the size (norm) of the current gradient of the objective (merit) function. The hypothesis is that the step size is bounded by the gradient size, so it makes sense to let the radius depend on the gradient.
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Bastin — Constant
This scheme is proposed by Bastin et al. [10]. The scheme is called a retrospective update scheme, as it uses the model function at the current iteration to compute the ratio of the actual reduction to the predicted reduction in the previous trial step, and uses this ratio to update the trust region radius. The hypothesis is to exploit the information made available during the optimization process in order to vary the accuracy of the objective function computation.
NonlinearSolveFirstOrder.RadiusUpdateSchemes.Fan — Constant
This scheme is proposed by Fan [11]. It is similar to Hei's and Yuan's schemes, as it lets the trust region radius depend on the current size (norm) of the objective (merit) function itself. These update schemes are known to improve local convergence.
NonlinearSolveFirstOrder.RadiusUpdateSchemes.NLsolve — Constant
The same updating scheme as in the trust region dogleg implementation of NLsolve.jl (https://github.com/JuliaNLSolvers/NLsolve.jl).
NonlinearSolveFirstOrder.RadiusUpdateSchemes.NocedalWright — Constant
Trust region updating scheme as in Nocedal and Wright [see Alg 11.5, page 291].
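The classical Nocedal–Wright update has roughly the following shape (a sketch based on the well-known trust region rule in their book, not the exact code used by this package; the maximum radius is a placeholder):

```julia
# Sketch of a Nocedal–Wright style update. Δ is the current radius,
# ρ the ratio of actual to predicted reduction, and step_norm the norm
# of the accepted trial step. Δ_max is an illustrative placeholder.
function nocedal_wright_update(Δ, ρ, step_norm; Δ_max = 100.0)
    if ρ < 1 / 4
        # Poor agreement between model and function: shrink the region.
        return Δ / 4
    elseif ρ > 3 / 4 && step_norm ≈ Δ
        # Excellent agreement and the step hit the boundary: expand.
        return min(2Δ, Δ_max)
    else
        # Otherwise, keep the radius unchanged.
        return Δ
    end
end
```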