Globalization Subroutines

The following globalization subroutines are available.

Line searches have been moved to an external package. See the LineSearch.jl package and its documentation.

Radius Update Schemes for Trust Region

NonlinearSolveFirstOrder.RadiusUpdateSchemes (Module)
RadiusUpdateSchemes

RadiusUpdateSchemes provides the different radius update schemes implemented for the trust region method. These schemes specify how the radius of the so-called trust region is updated after each iteration of the algorithm. The specific role and caveats associated with each scheme are described below.

Using RadiusUpdateSchemes

Simply pass the desired scheme as follows: sol = solve(prob, TrustRegion(radius_update_scheme = RadiusUpdateSchemes.Hei)).
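As a concrete illustration, the sketch below solves a small scalar root-finding problem with the Hei scheme. The problem function, initial guess, and parameter are illustrative, not from this page:

```julia
using NonlinearSolve

# Hypothetical test problem: find u such that u^2 = p (here p = 2).
f(u, p) = u .* u .- p
u0 = [1.0]
prob = NonlinearProblem(f, u0, 2.0)

# Select the Hei radius update scheme for the trust region method.
sol = solve(prob, TrustRegion(; radius_update_scheme = RadiusUpdateSchemes.Hei))
```

Any of the schemes listed below can be substituted for RadiusUpdateSchemes.Hei in the same way.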


Available Radius Update Schemes

NonlinearSolveFirstOrder.RadiusUpdateSchemes.Simple (Constant)
RadiusUpdateSchemes.Simple

The simple or conventional radius update scheme. This scheme is chosen by default and follows the conventional approach: if the trial step is accepted, the radius is increased by a fixed factor (bounded by a maximum radius); if the trial step is rejected, the radius is shrunk by a fixed factor.
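In code, the conventional rule looks roughly like the following. The function name, factors, and threshold are made up for illustration; the actual defaults live in the TrustRegion keyword arguments:

```julia
# Sketch of the conventional (Simple) radius update, with illustrative
# constants. ρ is the ratio of actual to predicted reduction for the
# trial step; the step is accepted when ρ clears the threshold.
function simple_radius_update(radius, ρ; shrink = 0.25, expand = 2.0,
                              accept_threshold = 0.1, max_radius = 1e3)
    if ρ ≥ accept_threshold
        # Step accepted: grow the radius, capped at max_radius.
        return min(expand * radius, max_radius)
    else
        # Step rejected: shrink the radius by a fixed factor.
        return shrink * radius
    end
end

simple_radius_update(1.0, 0.9)   # accepted step: radius grows to 2.0
simple_radius_update(1.0, 0.01)  # rejected step: radius shrinks to 0.25
```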

NonlinearSolveFirstOrder.RadiusUpdateSchemes.Hei (Constant)
RadiusUpdateSchemes.Hei

This scheme is proposed in Hei [9]. The trust region radius depends on the norm of the current step. The hypothesis is to let the radius converge to zero as the iterations progress, which is more reliable and robust for ill-conditioned as well as degenerate problems.

NonlinearSolveFirstOrder.RadiusUpdateSchemes.Yuan (Constant)
RadiusUpdateSchemes.Yuan

This scheme is proposed by Yuan [6]. Similar to Hei's scheme, the trust region is updated so that it converges to zero; here, however, the radius depends on the norm of the current gradient of the objective (merit) function. The hypothesis is that the step size is bounded by the gradient size, so it makes sense to let the radius depend on the gradient.

NonlinearSolveFirstOrder.RadiusUpdateSchemes.Bastin (Constant)
RadiusUpdateSchemes.Bastin

This scheme is proposed by Bastin et al. [10]. The scheme is called a retrospective update scheme as it uses the model function at the current iteration to compute the ratio of the actual reduction and the predicted reduction in the previous trial step, and uses this ratio to update the trust region radius. The hypothesis is to exploit the information made available during the optimization process in order to vary the accuracy of the objective function computation.
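The retrospective ratio can be written down directly. The sketch below computes it for a one-dimensional quadratic model of a hypothetical merit function; all names and values are illustrative, and only the ratio itself reflects the idea in Bastin et al.:

```julia
# Retrospective ratio (sketch): the actual reduction achieved by the
# PREVIOUS step is compared against the reduction predicted by the
# quadratic model built at the CURRENT iterate.
merit(x) = x^4                # hypothetical scalar merit function
x_prev = 1.0                  # previous iterate
s = -0.4                      # step that was taken from x_prev
x = x_prev + s                # current iterate

actual = merit(x_prev) - merit(x)

# Quadratic model at the current iterate: m(d) = merit(x) + g*d + H*d^2/2,
# evaluated at d = x_prev - x = -s to "look back" at the previous point.
g = 4x^3                      # merit gradient at x
H = 12x^2                     # merit Hessian at x
predicted = g * (-s) + 0.5 * H * s^2   # m(x_prev) - m(x)

ratio = actual / predicted    # > 1 here: the model underestimated the gain
```

A large ratio suggests the model is trustworthy and the radius can grow; a small or negative ratio suggests shrinking it.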

NonlinearSolveFirstOrder.RadiusUpdateSchemes.Fan (Constant)
RadiusUpdateSchemes.Fan

This scheme is proposed by Fan [11]. It is similar to Hei's and Yuan's schemes in that it lets the trust region radius depend on the current norm of the objective (merit) function itself. These update schemes are known to improve local convergence.
