Linear Solve with Caching Interface

Often, one may want to cache information that is reused between different linear solves. For example, if one is going to perform:

A \ b1
A \ b2

then it would be more efficient to LU-factorize one time and reuse the factorization:

luA = LA.lu!(A)   # factorize once (lu! overwrites A with its LU factors)
luA \ b1
luA \ b2

LinearSolve.jl's caching interface automates this process, using the most efficient means of solving and re-solving linear systems. To do this with LinearSolve.jl, you simply init a cache, solve, replace b, and solve again. This looks like:

import LinearSolve as LS
import LinearAlgebra as LA

n = 4
A = rand(n, n)
b1 = rand(n);
b2 = rand(n);
prob = LS.LinearProblem(A, b1)

linsolve = LS.init(prob)
sol1 = LS.solve!(linsolve)
retcode: Success
u: 4-element Vector{Float64}:
  25.452991538002742
 -29.902656929486426
 -28.824714514063626
  14.361804710123433

linsolve.b = b2
sol2 = LS.solve!(linsolve)

sol2.u
4-element Vector{Float64}:
  66.33980989062407
 -78.23820800591025
 -75.69261283304054
  37.54069570345237

Then refactorization will occur when a new A is given:

A2 = rand(n, n)
linsolve.A = A2
sol3 = LS.solve!(linsolve)

sol3.u
4-element Vector{Float64}:
  0.2244221775258118
 -5.831857247952444
 -5.336479011993387
  3.3586537539579053

The factorization occurs on the first solve and is stored in the cache. You can retrieve this cache via sol.cache, which is the same object created by init, but updated so that subsequent solves know not to recompute the factorization.
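As a minimal sketch of reusing that cache directly (assuming, as above, that the cache's b field can be reassigned and that sol3.cache is the same object as linsolve; the names cache and sol4 are illustrative):

cache = sol3.cache        # same cache object that init created
cache.b = rand(n)         # update the right-hand side in place
sol4 = LS.solve!(cache)   # reuses the stored factorization of A2, no refactorization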

The advantage of using LinearSolve.jl in this form is, of course, that it is efficient while being agnostic to the linear solver. One can easily swap in iterative solvers, sparse solvers, etc., and it will apply all of the same tricks, such as caching the symbolic factorization when the sparsity pattern is unchanged.
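For instance, the sketch below swaps in a Krylov method by passing an algorithm as the second argument to init; it assumes LinearSolve.jl's KrylovJL_GMRES algorithm and reuses A2, b1, and b2 from above (the names prob2, linsolve2, solg1, and solg2 are illustrative):

prob2 = LS.LinearProblem(A2, b1)
linsolve2 = LS.init(prob2, LS.KrylovJL_GMRES())   # GMRES instead of a factorization
solg1 = LS.solve!(linsolve2)
linsolve2.b = b2                                  # same caching workflow as before
solg2 = LS.solve!(linsolve2)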