# Linear Solve with Caching Interface
Often, one may want to cache information that is reused between different linear solves. For example, if one is going to perform:
```julia
A \ b1
A \ b2
```
then it would be more efficient to LU-factorize one time and reuse the factorization:
```julia
using LinearAlgebra

luA = lu(A)   # factorize once
luA \ b1      # reuse the factorization for each right-hand side
luA \ b2
```
LinearSolve.jl's caching interface automates this process to use the most efficient means of solving and resolving linear systems. To do this with LinearSolve.jl, you simply `init` a cache, `solve!`, replace `b`, and `solve!` again. This looks like:
```julia
using LinearSolve

n = 4
A = rand(n, n)
b1 = rand(n);
b2 = rand(n);
prob = LinearProblem(A, b1)

linsolve = init(prob)
sol1 = solve!(linsolve)
```
```
retcode: Success
u: 4-element Vector{Float64}:
  1.24081424370935
 -1.1336729033237058
  0.5236722985649337
  0.0838851452434799
```
```julia
linsolve.b = b2
sol2 = solve!(linsolve)
sol2.u
```
```
4-element Vector{Float64}:
  0.656431410103897
 -1.37245254426586
  0.7950018268171156
  1.2341848294044881
```
Then refactorization will occur when a new `A` is given:
```julia
A2 = rand(n, n)
linsolve.A = A2
sol3 = solve!(linsolve)
sol3.u
```
```
4-element Vector{Float64}:
  0.4784205392064094
 -0.5205021775688009
 -0.21455653927126464
  1.7411828181805251
```
The factorization occurs on the first solve and is stored in the cache. You can retrieve this cache via `sol.cache`, which is the same object returned by `init`, but updated so that it knows not to recompute the factorization.
The advantage of using LinearSolve.jl in this form is, of course, that it is efficient while remaining agnostic to the linear solver. One can easily swap in iterative solvers, sparse solvers, etc., and it will do all the tricks like caching the symbolic factorization if the sparsity pattern is unchanged.
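As a minimal sketch of that flexibility (assuming `KrylovJL_GMRES` is available in your version of LinearSolve.jl), the same caching workflow applies unchanged when an explicit algorithm is passed to `init`:

```julia
using LinearSolve

n = 4
A = rand(n, n)
b1 = rand(n)
b2 = rand(n)

prob = LinearProblem(A, b1)

# Pass the algorithm explicitly; here an iterative Krylov method is used,
# but a sparse or dense factorization type follows the same pattern.
linsolve = init(prob, KrylovJL_GMRES())
sol1 = solve!(linsolve)

# Replacing b reuses the cached solver state instead of rebuilding it.
linsolve.b = b2
sol2 = solve!(linsolve)
```

The solver-specific state (a dense factorization, a Krylov workspace, or a symbolic analysis for sparse matrices) lives inside the cache, so the calling code does not change when the algorithm does.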