Question

I would like to solve the linear system of equations:

 Ax = b

A is an n × m matrix (not square), b is an n × 1 vector, and x is an m × 1 vector. A and b are known; n is on the order of 50-100 and m is about 2 (in other words, A is at most about 100 × 2).

I know the solution for x: $x = (A^T A)^{-1} A^T b$

I have found several packages that could solve it: uBLAS (Boost), LAPACK, Eigen, etc., but I don't know how fast the CPU computation of x would be using those packages, nor whether they are numerically fast ways of solving for x.

What matters to me is that the CPU computation time be as short as possible, along with good documentation, since I am a beginner.

After solving the normal equation Ax = b, I would like to improve my approximation using regression and perhaps later apply a Kalman filter.

My question is: which C++ library is the most robust and fastest for the needs I describe above?


Solution

This is a least-squares problem, because you have more equations than unknowns. If m is indeed equal to 2, that tells me a simple linear least-squares fit will be sufficient for you. The formulas can be written out in closed form. You don't need a library.

If m is in the single digits, I'd still say you can easily solve this using the normal equations A(transpose)*A*x = A(transpose)*b. A simple LU decomposition to solve for the coefficients would be sufficient. It should be a much more straightforward problem than you're making it out to be.

Other tips

uBLAS is not optimized unless you use it with optimized BLAS bindings.

The following are optimized for multi-threading and SIMD:

  1. Intel MKL. FORTRAN library with C interface. Not free but very good.
  2. Eigen. True C++ library. Free and open source. Easy to use and good.
  3. Atlas. FORTRAN and C. Free and open source. Not Windows friendly, but otherwise good.

By the way, I don't know exactly what you are doing, but as a rule the normal equations are not the proper way to do linear regression. Unless your matrix is well conditioned, QR or SVD should be preferred.

If licensing is not a problem, you might try the GNU Scientific Library:

http://www.gnu.org/software/gsl/

It comes with a BLAS library that you can swap for an optimized library later if you need to (for example the Intel, ATLAS, or ACML (AMD chip) libraries).

If you have access to MATLAB, I would recommend using its C libraries.

If you really need to specialize, you can approximate matrix inversion (to arbitrary precision) using the Skilling method. It uses only O(N^2) operations (rather than the O(N^3) of usual matrix inversion via LU decomposition, etc.).

It's described in Gibbs's thesis, linked here (around page 27):

http://www.inference.phy.cam.ac.uk/mng10/GP/thesis.ps.gz
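The thesis has the exact procedure. To give a flavor of the general idea only (iterative solves that use nothing but matrix-vector products and never form an inverse), here is a standard conjugate-gradient sketch on the normal equations; this is plain CG, not Skilling's exact method, and all names are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;  // row-major: Mat[i] is row i of A (n x m)

// y = A^T (A x), applied as two matrix-vector products: O(n*m) work
// per application, without ever forming A^T A or its inverse.
Vec normal_matvec(const Mat& A, const Vec& x) {
    const std::size_t n = A.size(), m = x.size();
    Vec Ax(n, 0.0), y(m, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < m; ++j)
            Ax[i] += A[i][j] * x[j];
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < m; ++j)
            y[j] += A[i][j] * Ax[i];
    return y;
}

double dot(const Vec& a, const Vec& b) {
    double s = 0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Conjugate gradient on A^T A x = A^T b. Each iteration costs O(n*m);
// in exact arithmetic it converges in at most m iterations.
Vec cg_least_squares(const Mat& A, const Vec& b,
                     int max_iter = 50, double tol = 1e-12) {
    const std::size_t n = A.size(), m = A[0].size();
    Vec rhs(m, 0.0);                       // rhs = A^T b
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < m; ++j)
            rhs[j] += A[i][j] * b[i];
    Vec x(m, 0.0), r = rhs, p = r;
    double rr = dot(r, r);
    for (int k = 0; k < max_iter && rr > tol; ++k) {
        Vec Mp = normal_matvec(A, p);
        const double alpha = rr / dot(p, Mp);
        for (std::size_t j = 0; j < m; ++j) {
            x[j] += alpha * p[j];
            r[j] -= alpha * Mp[j];
        }
        const double rr_new = dot(r, r);
        const double beta = rr_new / rr;
        for (std::size_t j = 0; j < m; ++j)
            p[j] = r[j] + beta * p[j];
        rr = rr_new;
    }
    return x;
}
```

For m = 2 this terminates in a couple of iterations, though at that size the closed-form solution above is simpler still; the iterative approach only pays off when m is large.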

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow