Question

In modeling, data processing, and optimization, the computationally intensive portions of the code often reduce to solving many systems of linear equations.

e.g.

solve for x1 in A x1 = b1

solve for x2 in A x2 = b2

... and so on

Sometimes the matrix A doesn't change between computations, and my question is: how (and when) can I take advantage of this with some pre-computation? This assumes you're already using linear algebra libraries (e.g. BLAS, LAPACK) optimized for your system, and can't reduce the problem to a single batch operation for which those libraries (LAPACK, MATLAB, etc.) already have specialized functions.

For example, one strategy is to compute and store an appropriate matrix decomposition (LUP or QR) once, rather than having it (re)computed internally by the linear algebra library on every solve. However, I can find little guidance on which decompositions work well with the associated solver routines (in particular when working with LAPACK), or on the relative merits of the decompositions in terms of speed or bad edge cases.
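To illustrate the factor-once, solve-many strategy, here is a minimal sketch using SciPy's `lu_factor`/`lu_solve` (which wrap LAPACK's `dgetrf`/`dgetrs`); the matrix and right-hand sides are made-up examples:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor A once (LU with partial pivoting): O(n^3) work, done a single time.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
lu, piv = lu_factor(A)

# Each subsequent solve reuses the stored factors: only O(n^2) per right-hand side.
b1 = np.array([9.0, 8.0])
b2 = np.array([4.0, 3.0])
x1 = lu_solve((lu, piv), b1)
x2 = lu_solve((lu, piv), b2)
```

The pivoted factors `(lu, piv)` can be stored for as long as A is unchanged, so the cubic factorization cost is amortized over all later solves.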

Note: A "bad" strategy is to compute the inverse matrix inv(A), due to both speed and accuracy issues. (MATLAB's documentation for inv() discusses this in greater detail.)

Solution

Matrix decomposition is definitely the way to go here. Which decomposition you use is determined by the structure of the systems you are trying to solve: solving via Cholesky decomposition requires a square, symmetric, positive-definite matrix. A general square matrix can be solved using LU or QR. LU is typically used because it takes fewer operations than QR. QR is used for under- and over-determined systems, such as least squares, and for finding eigenvalues.
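For the symmetric positive-definite case, the same reuse pattern works with Cholesky factors; a sketch using SciPy's `cho_factor`/`cho_solve` (the matrix below is an illustrative SPD example):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# A is symmetric positive definite, so Cholesky applies
# (roughly half the cost of LU, and no pivoting needed).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
c = cho_factor(A)        # factor once

b = np.array([2.0, 5.0])
x = cho_solve(c, b)      # cheap triangular solves per right-hand side
```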

If you are using LAPACK, you can call dgetrf to LU-factorize a general square matrix, and dgetrs to find the solution using the input vector and the factor matrices you created with dgetrf.
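As a concrete sketch of that dgetrf/dgetrs pairing, assuming SciPy is available (it exposes these LAPACK routines directly, so you can see the same calling pattern you would use from C or Fortran):

```python
import numpy as np
from scipy.linalg.lapack import dgetrf, dgetrs

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# dgetrf: LU factorization with partial pivoting; info == 0 means success.
lu, piv, info = dgetrf(A)
assert info == 0

# dgetrs: solve A x = b using the stored factors and pivot indices.
b = np.array([9.0, 8.0])
x, info = dgetrs(lu, piv, b)
assert info == 0
```

Keep `lu` and `piv` around and call dgetrs once per new right-hand side; only the solve step is repeated.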

Licensed under: CC-BY-SA with attribution