Question

I would like to solve the system of linear equations:

 Ax = b

A is an n x m matrix (not square), b is an n x 1 vector and x is an m x 1 vector. A and b are known, n is of the order 50-100 and m is about 2 (in other words, A is at most [100x2]).

I know the solution for x: $x = (A^T A)^{-1} A^T b$

I found a few ways to solve it: uBLAS (Boost), LAPACK, Eigen, etc., but I don't know how fast the CPU computation of x is with those packages. I also don't know if this is a numerically fast way to solve for x.

What is important for me is that the CPU computation time is as short as possible, and that there is good documentation, since I am a newbie.

After solving the least-squares problem Ax = b I would like to improve my approximation recursively, and maybe later apply a Kalman filter.

My question is: which C++ library is the most robust and fastest for the needs I describe above?


Solution

This is a least squares solution, because you have more equations than unknowns. If m is indeed equal to 2, that tells me that a simple linear least squares will be sufficient for you. The formulas can be written out in closed form. You don't need a library.

If m is in single digits, I'd still say that you can easily solve this using the normal equations $A^T A x = A^T b$. A simple LU decomposition to solve for the coefficients would be sufficient. It should be a much more straightforward problem than you're making it out to be.

OTHER TIPS

uBLAS is not optimized unless you use it with optimized BLAS bindings.

The following are optimized for multi-threading and SIMD:

  1. Intel MKL. Fortran library with a C interface. Not free, but very good.
  2. Eigen. A true C++ library. Free and open source. Easy to use and good.
  3. ATLAS. Fortran and C. Free and open source. Not Windows-friendly, but otherwise good.

Btw, I don't know exactly what you are doing, but as a rule the normal equations are not a proper way to do linear regression. Unless your matrix is well conditioned, QR or SVD should be preferred.
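The reason is that forming $A^T A$ squares the condition number of the problem: in terms of the singular values of $A$,

$$\kappa_2(A^T A) = \frac{\sigma_{\max}^2(A)}{\sigma_{\min}^2(A)} = \kappa_2(A)^2,$$

so roughly half of the working precision can be lost before the solve even starts. QR (Householder) and SVD work on $A$ directly and avoid the squaring.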

If licensing is not a problem, you might try the GNU Scientific Library (GSL):

http://www.gnu.org/software/gsl/

It comes with a BLAS library that you can swap for an optimized library later if you need to (for example Intel MKL, ATLAS, or ACML for AMD chips).

If you have access to MATLAB, I would recommend using its C libraries.

If you really need to specialize, you can approximate the matrix inverse (to arbitrary precision) using the Skilling method. It uses only O(N^2) operations, rather than the O(N^3) of the usual matrix inversion via LU decomposition and the like.

It's described in Gibbs's thesis, linked here (around page 27):

http://www.inference.phy.cam.ac.uk/mng10/GP/thesis.ps.gz

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow