Question

A seek is the task of positioning the read/write head of the disk to the required data block, and irrespective of the size of the data block, all contiguous data can be accessed using a single seek. Seek complexity of an algorithm is defined as the number of times the algorithm leaves one data locality for another, (or, in other words, chooses to read or write a block that is not physically contiguous with the last block read or written) thereby necessitating a seek.

Now consider the simple I/O-efficient matrix transposition algorithm. It takes $E_{P \times Q}$ as input and outputs its transpose $F_{Q \times P}$.

Algorithm

Let $s=\sqrt{M/2}$

for $i = 1$ to $\lceil P/s \rceil$

$\hspace{20px}$for $j = 1$ to $\lceil Q/s \rceil$

$\hspace{40px}$ $F_{ji} = E^T_{ij}$

$\hspace{20px}$endfor

endfor

This is in fact a tiling method: it keeps two tiles (submatrices) of size $\sqrt{M/2} \times \sqrt{M/2}$ in main memory, one for the input matrix and one for the output matrix, and after transposing each tile it writes it back to disk.
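The tiling loop can be sketched in memory as follows. This is only an illustrative sketch using NumPy, not the thesis's actual I/O code: in the real algorithm each tile assignment would be a block read of $E$ and a block write of $F$ on disk.

```python
import math
import numpy as np

def transpose_tiled(E, M):
    """Tile-based transpose of E (P x Q) into F (Q x P).

    M is the assumed main-memory capacity in matrix elements; the two
    tiles held in memory are each at most s x s with s = sqrt(M/2).
    """
    P, Q = E.shape
    s = int(math.sqrt(M / 2))
    F = np.empty((Q, P), dtype=E.dtype)
    for i in range(0, P, s):           # tile rows of E
        for j in range(0, Q, s):       # tile columns of E
            # "Read" one s x s tile of E, transpose it in memory,
            # and "write" it to the corresponding tile of F.
            F[j:j+s, i:i+s] = E[i:i+s, j:j+s].T
    return F
```

Slicing handles the ragged boundary tiles when $s$ does not divide $P$ or $Q$ exactly.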

When $P, Q \leq \sqrt{M/2}$ and the matrix is stored in row-major order, the two matrices obviously fit in main memory together: simply read $E$ in, compute $F$, and write $F$ out. It has been claimed:

If a seek is needed between two rows, then $S = P + Q$ (where $S$ is the number of seeks). If rows follow one another contiguously, then $S = 2$.
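One way to make the claim concrete is to simulate the block addresses touched by the read-then-write pass and count, per the definition above, every access to a block that is not physically contiguous with the previous one. The layout below (a per-row `gap` of unused blocks, and a hypothetical far-away base address for $F$) is my own assumption for illustration:

```python
def count_seeks(blocks):
    # A seek: the initial positioning, plus every access to a block
    # that is not physically contiguous with the previously accessed one.
    seeks = 1
    for prev, cur in zip(blocks, blocks[1:]):
        if cur != prev + 1:
            seeks += 1
    return seeks

def access_pattern(P, Q, gap):
    # Hypothetical row-major layout: row r of E starts at r*(Q+gap);
    # F is laid out the same way starting at a distant base address.
    base_F = 10**6
    reads  = [r * (Q + gap) + c for r in range(P) for c in range(Q)]
    writes = [base_F + r * (P + gap) + c for r in range(Q) for c in range(P)]
    return reads + writes
```

With `gap = 0` the whole of $E$ is one contiguous run and the whole of $F$ another, giving $S = 2$; with any `gap > 0` every row boundary costs a seek, giving $S = P + Q$.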

I can't understand the above claim. If both matrices fit in memory, why are any seeks needed at all?

Reference

This statement is in I/O Efficient Algorithms for Matrix Computations, a Ph.D. thesis. The definition is on page 2 and the algorithm on page 12.

Thanks

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange