Question

I have a set of linear algebraic equations, Ax = By, where A is a 36x20 matrix, x is a 20x1 vector, B is 36x13, and y is 13x1, with rank(A) = 20. Because the system is overdetermined, a least-squares solution is possible, i.e., x = (A^T A)^-1 A^T B y. I want the solution that minimizes the residual error e = Ax - By. I was using Maple to take the transposes and the inverse of the matrices, but inverting such a big matrix takes a lot of time and RAM. I even spent a whole day on the inversion, but it was interrupted because the machine ran out of memory. This is very slow and I suspect not achievable in Maple.

Could anybody suggest a way to do this in C++, or some other way of solving the equations that avoids taking inverses and transposes?

Formation of the matrices:

        [ 1 0 0 ... 0 ]
        [ 0 1 0 ... 0 ]
        [ 0 0 1 ... 0 ]        [ LinearVelocity_x        ]
        [ 0 0 0 ... 1 ]        [ LinearVelocity_y        ]
        [ . . . ... . ]        [ LinearVelocity_z        ]
    A = [ . . . ... . ],  x =  [ RotationalVelocity_ROLL ]
        [ . . . ... . ]        [ RotationalVelocity_PITCH]
        [ 1 0 0 ... 0 ]        [ RotationalVelocity_YAW  ]
        [ 0 1 0 ... 0 ]
        [ 0 0 1 ... 0 ]
        [ 0 0 0 ... 1 ]

x is basically the position (x, y, z) and orientation (roll, pitch, yaw) vector. B, however, is not a fixed matrix of ones and zeros: its elements are sines and cosines of angles that come from real-time sensor data, not fixed values. In Maple, B is almost entirely a matrix of variables plus some fixed elements. Meanwhile, y is a vector of all the sensor/encoder readings.


Solution

If your data is floating-point then Maple should handle this very quickly. If A, B, and y all have only numeric entries, then try:

ans := LinearAlgebra:-LeastSquares( evalf(A), evalf(B.y) );

or, if you want the solution which itself has minimal 2-norm,

ans := LinearAlgebra:-LeastSquares( evalf(A), evalf(B.y), 'optimize'=true );

My guess is that your data is purely rational or integer, and that you may not realize that this will cause Maple to try to find an exact rational answer. Or you might have some unknown symbolic quantity in the data (...though that could make the goal of computing a minimal residual problematic). Such purely exact data, whether rational or symbolic, is a potential memory-hogging nightmare and likely not at all what you really want if you are considering C++ as an alternative. That is why I wrapped the arguments in calls to evalf, to cast the data to floats.

With purely float data, a 36x20 least-squares problem is tiny and Maple should solve it in a fraction of a second.

You should let the LinearAlgebra:-LeastSquares routine do the heavy lifting, and not try to form the normal equations or matrix inverses yourself. Use the method=SVD option if you want a robust approach, and let the routine deal with the numerical difficulties.
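If you do end up moving outside Maple, the same advice applies: call a library least-squares solver rather than forming (A^T A)^-1 yourself. As a sketch of that approach (in Python/NumPy rather than C++, and with random stand-in data of the stated shapes in place of your real kinematic matrices), numpy.linalg.lstsq solves min ||Ax - b||_2 via an SVD-based LAPACK routine:

```python
import numpy as np

# Stand-in data: your real A (36x20) and B (36x13) come from the robot
# kinematics and sensors; random floats of the same shapes suffice here.
rng = np.random.default_rng(0)
A = rng.standard_normal((36, 20))   # full column rank with probability 1
B = rng.standard_normal((36, 13))
y = rng.standard_normal(13)

b = B @ y                           # right-hand side B.y (a 36-vector)

# SVD-based least squares: minimizes ||A x - b||_2 without ever
# forming or inverting A^T A explicitly.
x, residuals, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)

print("rank of A:", rank)
print("residual norm ||Ax - b||:", np.linalg.norm(A @ x - b))
```

At the solution, the residual A x - b is orthogonal to the column space of A (i.e. A^T (A x - b) = 0), which is a quick sanity check that the solver did its job. The equivalent in C++ would be a call into LAPACK (e.g. the SVD-based driver dgelsd) or a linear-algebra library that wraps it.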

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow