Question

I need some help with this problem.

I want to solve Ax = b,

where A is n x n (a square matrix) and b is an n x 1 vector.

But the matrix A has these properties:

  * Ill-conditioned (condition number K >> 1, maybe larger than 10^8)
  * Symmetric positive definite (because it is a covariance matrix)

I already tried the Jacobi method, but unfortunately it converges very slowly. I want to avoid using Cholesky decomposition.

I also tried Conjugate Gradient, but unfortunately when the condition number of A is too large it fails to converge.

Update: I need a method that can run in a parallel framework (like MPI), so I can't use Gauss-Seidel, which needs the updated x[i] values within the current iteration.

What kind of method can I use for this kind of problem? Thanks :)


Solution

I'm going to take a guess that your troubles arise from an inaccurate computation of a matrix-vector product. (I've never seen conjugate gradient completely fail to reduce the residual unless the matrix-vector product was poor. The first iteration after a restart just does steepest descent.)

You might try running conjugate gradient again, but using extended precision or compensated (Kahan) summation when computing the matrix-vector product.
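To make that concrete, here is a minimal sketch of a matrix-vector product that uses Kahan summation for each row's dot product. The function and variable names are my own, and it assumes a dense row-major A:

    #include <stddef.h>

    /* Compute y = A*x for a dense n x n matrix stored row-major, using
     * Kahan (compensated) summation in each row's dot product to reduce
     * roundoff. Illustrative sketch only. */
    void matvec_kahan(const double *A, const double *x, double *y, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            double sum = 0.0;  /* running sum for row i */
            double c   = 0.0;  /* compensation for lost low-order bits */
            for (size_t j = 0; j < n; j++) {
                double term = A[i * n + j] * x[j] - c;
                double t    = sum + term;  /* low-order bits of term are lost here */
                c   = (t - sum) - term;    /* recover the lost bits */
                sum = t;
            }
            y[i] = sum;
        }
    }

Beware that aggressive floating-point flags (e.g. -ffast-math) can legally optimise the compensation away, so compile such code with strict IEEE semantics.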

Alternatively, if your matrix has some known structure, you might try to find a different way of writing the matrix-vector product that reduces roundoff in the computed result. If I could see your matrix, I might be able to give more concrete suggestions here.

Other suggestions

Looking at the matrix you uploaded, a number of things seem a bit strange:

  1. Your matrix K is a relatively small (400 x 400) dense matrix.
  2. Your matrix K contains a significant number of near-zero entries, with abs(K(i,j)) < 1.E-16 * max(abs(K)).

For matrices of this size, computing the Cholesky factorisation directly should be the most efficient approach. I'm not sure why you say you can't do this.
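For scale, a dense Cholesky solve at n = 400 is a couple of library calls. A hedged sketch, assuming you can link against a LAPACKE implementation and that A is stored row-major:

    #include <lapacke.h>

    /* Solve A*x = b for a dense SPD A (n x n, row-major) via Cholesky.
     * A is overwritten with its factor; b is overwritten with x.
     * Returns 0 on success, or i > 0 if the leading minor of order i
     * is not positive definite. */
    int solve_spd_dense(double *A, double *b, lapack_int n)
    {
        /* Factor A = L*L' using the lower triangle. */
        lapack_int info = LAPACKE_dpotrf(LAPACK_ROW_MAJOR, 'L', n, A, n);
        if (info != 0)
            return (int)info;
        /* Back-substitute for the single right-hand side. */
        return (int)LAPACKE_dpotrs(LAPACK_ROW_MAJOR, 'L', n, 1, A, n, b, 1);
    }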

Iterative techniques, such as preconditioned conjugate gradient methods, are typically used only for very large and sparse systems of equations, so don't seem applicable here.


When solving sparse systems of linear equations such as this, it's not the number of rows/cols in the matrix that's important, it's the sparsity pattern of the matrix itself.

If, for instance, your matrix A is very sparse it may be possible to compute the sparse Cholesky factorisation A = L*L' directly. Beware though, the ordering of equations determines the sparsity pattern of the resulting factors, and choosing a poor ordering strategy for A can result in catastrophic fill-in for L*L' and poor performance.

There are a number of strategies, such as Approximate Minimum Degree and Multi-level Nested Dissection, that should be used to reorder A to obtain pseudo-optimal sparsity for L*L'.

A number of good packages exist that implement high performance sparse factorisation, including implementations of the re-ordering schemes described above. I would recommend looking into the CHOLMOD package by Davis.
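As a rough illustration, a CHOLMOD solve typically looks like the sketch below. It assumes A and b have already been assembled as CHOLMOD objects (with A stored in symmetric form) and that cholmod_start has been called on the common workspace; error handling is omitted:

    #include <cholmod.h>

    /* Solve A*x = b with CHOLMOD's sparse Cholesky. A fill-reducing
     * ordering (e.g. AMD, or METIS nested dissection if available) is
     * selected automatically during the analyse step. */
    cholmod_dense *solve_spd_sparse(cholmod_sparse *A, cholmod_dense *b,
                                    cholmod_common *c)
    {
        cholmod_factor *L = cholmod_analyze(A, c);   /* ordering + symbolic */
        cholmod_factorize(A, L, c);                  /* numeric A = L*L'    */
        cholmod_dense *x = cholmod_solve(CHOLMOD_A, L, b, c);  /* x = A\b   */
        cholmod_free_factor(&L, c);
        return x;  /* caller frees with cholmod_free_dense */
    }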

If you still find that your system of equations is too big to handle efficiently using direct factorisation, you should look into preconditioning your iterative CG solver. Good preconditioning can reduce the effective condition number of the linear system, greatly enhancing convergence in most cases.

You should always use at least the simple diagonal Jacobi preconditioner, although usually much better performance can be realised using more sophisticated approaches such as incomplete Cholesky factorisation or possibly algebraic multi-grid or multi-level methods. You may find the PETSc library helpful in this regard, as it contains high-performance implementations of a number of iterative solvers and preconditioning schemes.
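To make the idea concrete, below is a self-contained sketch of Jacobi-preconditioned CG for a dense SPD matrix (my own illustrative code, not from the question). Every operation in the loop is a matvec, a dot product, or a vector update, which is exactly why PCG parallelises well over MPI while Gauss-Seidel does not:

    #include <math.h>
    #include <stdlib.h>

    /* Dot product helper. */
    static double dot(const double *u, const double *v, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += u[i] * v[i];
        return s;
    }

    /* Jacobi-preconditioned CG for dense SPD A (n x n, row-major).
     * x holds the initial guess on entry and the solution on exit.
     * Returns the iteration count, or -1 if tol was not reached. */
    int pcg_jacobi(const double *A, const double *b, double *x,
                   int n, int max_iter, double tol)
    {
        double *r = malloc(n * sizeof *r);  /* residual b - A*x        */
        double *z = malloc(n * sizeof *z);  /* preconditioned residual */
        double *p = malloc(n * sizeof *p);  /* search direction        */
        double *q = malloc(n * sizeof *q);  /* A*p                     */
        int iters = -1;

        for (int i = 0; i < n; i++) {       /* r = b - A*x */
            double s = 0.0;
            for (int j = 0; j < n; j++) s += A[i * n + j] * x[j];
            r[i] = b[i] - s;
        }
        for (int i = 0; i < n; i++) p[i] = z[i] = r[i] / A[i * n + i];
        double rz = dot(r, z, n);
        double bnorm = sqrt(dot(b, b, n));
        if (bnorm == 0.0) bnorm = 1.0;

        for (int k = 0; k < max_iter; k++) {
            for (int i = 0; i < n; i++) {   /* q = A*p */
                double s = 0.0;
                for (int j = 0; j < n; j++) s += A[i * n + j] * p[j];
                q[i] = s;
            }
            double alpha = rz / dot(p, q, n);
            for (int i = 0; i < n; i++) x[i] += alpha * p[i];
            for (int i = 0; i < n; i++) r[i] -= alpha * q[i];
            if (sqrt(dot(r, r, n)) / bnorm < tol) { iters = k + 1; break; }
            for (int i = 0; i < n; i++) z[i] = r[i] / A[i * n + i];  /* z = D^-1 r */
            double rz_new = dot(r, z, n);
            double beta = rz_new / rz;      /* standard PCG update */
            rz = rz_new;
            for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
        }
        free(r); free(z); free(p); free(q);
        return iters;
    }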

Hope this helps.

I have seen (but not really taken in) recent work on this, such as http://www.cs.yale.edu/homes/spielman/precon/precon.html. Tying what you said together with Wikipedia, you might want to look at http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method, which is a special case of http://en.wikipedia.org/wiki/Successive_Over-relaxation.

If push comes to shove, you can always go down a level (find a faster implementation or throw more hardware at the problem) or up a level (try to find another way to achieve your goal that doesn't involve solving such big linear systems, or solving them so often).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow