Question

Given a matrix QT:

% ipython
Python 2.7.3
In [3]: QT.dtype
Out[3]: dtype('float64')

In [4]: QT.__class__
Out[4]: numpy.ndarray

In [5]: QT.flags
Out[5]:
      C_CONTIGUOUS : True
      F_CONTIGUOUS : False
      OWNDATA : True
      WRITEABLE : True
      ALIGNED : True
      UPDATEIFCOPY : False

I need the results of:

QT.T * QT

Problem: Whenever I try to compute this matrix multiplication, memory overflows and the code stops running. This appears to happen because of the copies NumPy makes behind the scenes.
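To see why layout matters here: transposing a NumPy array does not copy anything, it just returns a view with the contiguity flags flipped, so `QT.T` of a C-contiguous array is F-contiguous. A minimal sketch (using a small stand-in array in place of the real, much larger `QT`):

```python
import numpy as np

# Small stand-in for the real QT, which is assumed to be a large
# C-contiguous float64 2-D array.
QT = np.ones((1000, 50), dtype=np.float64, order='C')

# .T is a view over the same buffer, not a copy; the strides are
# swapped, so C-contiguity becomes F-contiguity.
T = QT.T
assert T.base is QT                      # same underlying data
assert T.flags['F_CONTIGUOUS']
assert not T.flags['C_CONTIGUOUS']
```

Any routine that insists on a particular layout may then materialize a contiguous copy of the transpose, which is one place the extra memory can go.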

Tried solutions:

First:

Q = numpy.array(QT.T, order='C')
numpy.dot(Q, QT)

Second:

QT = numpy.array(QT, order='F')
Q = numpy.array(QT.T, order='F')
numpy.dot(Q, QT)

Third:

QT = numpy.matrix(QT)
QT = QT.copy('F')
Q = numpy.matrix(QT.T)
Q = Q.copy('F')
Q.dot(QT)

However, none of these solves the problem.

Question

How can I compute QT.T * QT without exhausting memory?

References

http://numpy-discussion.10968.n7.nabble.com/inplace-matrix-multiplication-td21817.html

Is there an "enhanced" numpy/scipy dot method?

Numpy dot product very slow using ints

http://www.scipy.org/PerformanceTips


Solution 2

If the result won't all fit into core memory, you can put it in a memory-mapped array so that the overflow will be written to your hard disk:

# QT.T @ QT is (n, n), where n is the number of columns of QT
shape = (QT.shape[1],) * 2
result = np.memmap('result.dat', dtype=QT.dtype, mode='w+', shape=shape)
np.dot(QT.T, QT, out=result)

You may also want to take a look at this algorithm for performing out-of-core SVD on very large arrays.
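If even the intermediate work is too large, the product can also be accumulated block by block, since `QT.T @ QT` equals the sum of `B.T @ B` over row blocks `B` of `QT`. A hedged sketch (the function name and block size are illustrative, and `out` can be the memmap from above):

```python
import numpy as np

def gram_blocked(QT, out, block=1024):
    """Accumulate QT.T @ QT into ``out`` one row-block at a time.

    Only one (block, n) slice of QT plus the (n, n) output needs to
    be resident in memory at any moment.
    """
    out[:] = 0
    for start in range(0, QT.shape[0], block):
        B = QT[start:start + block]   # a view, no copy
        out += B.T @ B                # partial Gram matrix of this block
    return out
```

For example, `gram_blocked(QT, result, block=4096)` with `result` a `np.memmap` writes the product to disk while touching only one block at a time.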

OTHER TIPS

Have you tried:

shape = (QT.shape[1], QT.shape[1])
result = np.zeros(shape, dtype=QT.dtype)
np.dot(QT.T,  QT, out=result)

Try running the above and see which line, if any, breaks.
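Before running it, it can help to estimate how large the result alone will be; the output of `QT.T @ QT` is `n * n * itemsize` bytes for `n = QT.shape[1]`. A small helper (the name is illustrative):

```python
import numpy as np

def result_gib(QT):
    """Size in GiB of the (n, n) product QT.T @ QT, before computing it."""
    n = QT.shape[1]
    return n * n * QT.dtype.itemsize / 2**30
```

If this number already exceeds available RAM, the `np.zeros` line will fail regardless of how the dot product is performed, and the memmap approach above is the way to go.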

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow