Question

I have an upper triangular matrix A and the right-hand-side vector b. My program needs to solve the linear system:

Ax = b

using the pipeline method. One of the constraints is that the number of processes is smaller than the number of equations (it can be anywhere from 2 to numberOfEquations-1).

I don't have the code yet; I'm still working out the pseudocode.

My idea was that one of the processes creates the random upper triangular matrix A and the vector b. Let's say this is the random matrix:

1  2  3   4   5   6
0  1  7   8   9   10
0  0  1   12  13  14
0  0  0   1   16  17
0  0  0   0   1   18
0  0  0   0   0   1

and the vector b is [10 5 8 9 10 5], and I have fewer processes than equations (let's say 2 processes).

What I thought is that one process will send each process a line of the matrix and the corresponding entry of vector b.

So the last line of the matrix and the last entry of vector b will be sent to process[numProcs-1] (by which I mean the last process, i.e. process 1), which computes its x and sends the result to process 0.

Now process 0 needs to compute line 5 of the matrix, and here I'm stuck. I have the x that was computed by process 1, but how can a process send itself the next line of the matrix and the corresponding entry of vector b that needs to be computed?

Is it possible? I don't think it's right to send to "myself"
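(For reference, the per-row computation being pipelined here is ordinary back substitution. The following is a plain serial sketch for the example system above, not the MPI program itself, just to pin down what "compute the x" means for each row.)

/* Serial back substitution for the 6x6 example system. */
#include <stdio.h>

#define N 6

int main(void)
{
    double A[N][N] = {
        {1, 2, 3,  4,  5,  6},
        {0, 1, 7,  8,  9, 10},
        {0, 0, 1, 12, 13, 14},
        {0, 0, 0,  1, 16, 17},
        {0, 0, 0,  0,  1, 18},
        {0, 0, 0,  0,  0,  1},
    };
    double b[N] = {10, 5, 8, 9, 10, 5};
    double x[N];

    /* Start from the last equation and substitute upwards. */
    for (int i = N - 1; i >= 0; i--) {
        double s = b[i];
        for (int j = i + 1; j < N; j++)
            s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }

    for (int i = 0; i < N; i++)
        printf("x[%d] = %g\n", i, x[i]);
    return 0;
}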


Solution

Yes, MPI allows a process to send data to itself, but one has to be extra careful about possible deadlocks when blocking operations are used. In that case one usually pairs a non-blocking send with a blocking receive (or vice versa), or uses a combined call like MPI_Sendrecv. Sending a message to self usually ends up with the message simply being memory-copied from the source buffer to the destination one, with no networking or other heavy machinery involved.
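For instance, here is a minimal sketch (assuming a row of the 6x6 example system is being passed) of a rank handing a buffer to itself: the non-blocking MPI_Isend is posted first, so the matching blocking MPI_Recv from the same rank cannot deadlock.

/* Safe send-to-self: non-blocking send paired with a blocking receive. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double row_out[6] = {0, 0, 0, 0, 1, 18};  /* 5th row of the example matrix */
    double row_in[6];

    /* Post the non-blocking send to self first... */
    MPI_Request req;
    MPI_Isend(row_out, 6, MPI_DOUBLE, rank, 0, MPI_COMM_WORLD, &req);

    /* ...then the blocking receive from self; no deadlock, since the send
     * is already in flight. */
    MPI_Recv(row_in, 6, MPI_DOUBLE, rank, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("received row, last element = %g\n", row_in[5]);

    MPI_Finalize();
    return 0;
}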

And no, communicating with self is not necessarily a bad thing. The most obvious benefit is that it makes the code more symmetric, as it removes or reduces the special logic needed to handle self-interaction. Sending to/receiving from self also happens in most collective communication calls. For example, MPI_Scatter also sends part of the data to the root process. To prevent some send-to-self cases that unnecessarily replicate data and decrease performance, MPI allows an in-place mode (MPI_IN_PLACE) for most communication-related collectives.
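As a small illustration of the in-place mode (a sketch only, assuming the number of entries is divisible by the number of processes), the root can scatter the vector b while keeping its own chunk where it already is, avoiding a send-to-self copy:

/* MPI_IN_PLACE at the root of MPI_Scatter. */
#include <mpi.h>
#include <stdio.h>

#define N 6

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int chunk = N / nprocs;        /* entries of b per process */
    double b[N] = {10, 5, 8, 9, 10, 5};  /* only the root's copy is scattered */
    double b_part[N];

    if (rank == 0)
        /* MPI_IN_PLACE: the root's own chunk stays in b[0..chunk-1]. */
        MPI_Scatter(b, chunk, MPI_DOUBLE, MPI_IN_PLACE, chunk, MPI_DOUBLE,
                    0, MPI_COMM_WORLD);
    else
        MPI_Scatter(NULL, chunk, MPI_DOUBLE, b_part, chunk, MPI_DOUBLE,
                    0, MPI_COMM_WORLD);

    if (rank == 1)
        printf("rank 1 received b_part[0] = %g\n", b_part[0]);

    MPI_Finalize();
    return 0;
}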

OTHER TIPS

Is it possible? I don't think it's right to send to "myself"

Sure, it is possible to communicate with oneself. There is even a communicator for it: MPI_COMM_SELF. Talking to yourself is not too uncommon. Your setup sounds like you would rather use MPI collectives. Have a look at MPI_Scatter and MPI_Gather and see if they provide the functionality you are looking for.
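For example, a broadcast-based back substitution (a simplified sketch, not the pipelined scheme from the question) could distribute the rows of A and the entries of b with MPI_Scatter and publish each newly computed x[i] with MPI_Bcast. It assumes the number of rows divides evenly among the processes (e.g. 6 rows on 2 or 3 processes):

/* Block-row distribution via MPI_Scatter, then back substitution where the
 * owner of row i computes x[i] and broadcasts it to everyone. */
#include <mpi.h>
#include <stdio.h>

#define N 6

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int chunk = N / nprocs;            /* consecutive rows per process */

    /* The example system from the question; only the root's copy is used
     * by MPI_Scatter. */
    double A[N][N] = {
        {1, 2, 3,  4,  5,  6},
        {0, 1, 7,  8,  9, 10},
        {0, 0, 1, 12, 13, 14},
        {0, 0, 0,  1, 16, 17},
        {0, 0, 0,  0,  1, 18},
        {0, 0, 0,  0,  0,  1},
    };
    double b[N] = {10, 5, 8, 9, 10, 5};

    double A_part[N * N], b_part[N], x[N];

    /* Each process receives `chunk` consecutive rows of A and entries of b. */
    MPI_Scatter(A, chunk * N, MPI_DOUBLE, A_part, chunk * N, MPI_DOUBLE,
                0, MPI_COMM_WORLD);
    MPI_Scatter(b, chunk, MPI_DOUBLE, b_part, chunk, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    /* Back substitution: the owner of row i computes x[i], then broadcasts it
     * so the processes owning the rows above can use it. */
    for (int i = N - 1; i >= 0; i--) {
        int owner = i / chunk;
        if (rank == owner) {
            int li = i - rank * chunk;       /* local row index */
            double s = b_part[li];
            for (int j = i + 1; j < N; j++)
                s -= A_part[li * N + j] * x[j];
            x[i] = s / A_part[li * N + i];
        }
        MPI_Bcast(&x[i], 1, MPI_DOUBLE, owner, MPI_COMM_WORLD);
    }

    if (rank == 0) {
        printf("x =");
        for (int i = 0; i < N; i++) printf(" %g", x[i]);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}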

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow