Question

I'm trying to calculate pi with the MPI C library on a hypercube topology, but execution never gets past the MPI_Send and MPI_Recv part.

I'm using 4 processors!

It seems like none of the processors are receiving any data.

Here's the code, the output, and the error I'm getting.

Any help would be appreciated! Thanks!

Code: after initialization and after each processor has computed its local mypi.

    mypi = h * sum;
    printf("Processor %d has local pi = %f\n", myid, mypi);
    //Logic for send and receive!
    int k;
    MPI_Status status;
    for(k = 0; k < log10(numprocs) / log10(2.0); k++){
      dimension = k;
      printf("entering dimension %d\n", dimension);
      if(decimalRank[k] == 1 && k < e){
        //if it is a processor that needs to send,
        //find the destination processor and send
        int destination = myid ^ (int)pow(2, dimension);
        printf("Processor %d sending to %d in dimension %d the value %f\n", myid, destination, dimension, mypi);

        MPI_Send(&mypi, 1, MPI_DOUBLE, destination, MPI_ANY_TAG, MPI_COMM_WORLD);
        printf("Processor %d done sending to %d in dimension %d the value %f\n", myid, destination, dimension, mypi);
      }
      else{
        //else this processor is supposed to be receiving
        pi += mypi;
        printf("Processor %d ready to receive in dimension %d\n", myid, dimension);
        MPI_Recv(&mypi, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        printf("Processor %d received value %f in dimension %d\n", myid, mypi, dimension);
        pi += mypi;
      }
    }

    done = 1;
  }

Error:

mpiexec: Warning: tasks 0-3 died with signal 11 (Segmentation fault).

Output:

bcast complete
Processor 0 has local pi = 0.785473
Processor 0 ready to receive in dimension 0
Processor 1 has local pi = 0.785423
Processor 1 sending to 0 in dimension 0 the value 0.785423
Processor 3 has local pi = 0.785323
Processor 3 sending to 2 in dimension 0 the value 0.785323
Processor 2 has local pi = 0.785373
Processor 2 ready to receive in dimension 0

Solution

MPI_ANY_TAG is not a valid tag value in send operations. It can only be used as a wildcard in receive operations, to accept messages regardless of their tag. The sender must specify a valid tag value; 0 suffices in most cases.
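For example, a minimal sketch of the corrected pair (assuming an MPI_Status variable is declared with the other locals; the tag value 0 is an arbitrary but valid choice):

    MPI_Status status;

    /* sender: any fixed, valid tag works; 0 is the conventional choice */
    MPI_Send(&mypi, 1, MPI_DOUBLE, destination, 0, MPI_COMM_WORLD);

    /* receiver: MPI_ANY_TAG is legal here, as a wildcard */
    MPI_Recv(&mypi, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);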

OTHER TIPS

This:

for(k = 0; k < log10(numprocs) / log10(2.0); k++) ...

and this:

... pow(2,dimension);

are bad: you should use integer logic only. Rest assured that sooner or later one of these floating-point expressions will evaluate to something like "2.999999" and be truncated to "2", breaking your algorithm.

I'd try something like:

for(k = 0, k2 = 1; k2 < numprocs; k++, k2 <<= 1) ...
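Putting both tips together, the whole reduction loop might look roughly like the sketch below. This assumes the setup from the question (numprocs is a power of two, myid is the rank, mypi holds the local partial sum); k2 plays the role of pow(2, k), and a rank that has sent its partial sum drops out of the loop with break:

    int k, k2;
    double received;
    MPI_Status status;
    for(k = 0, k2 = 1; k2 < numprocs; k++, k2 <<= 1){
      if(myid & k2){
        /* bit k of myid is set: send the partial sum and drop out */
        MPI_Send(&mypi, 1, MPI_DOUBLE, myid ^ k2, 0, MPI_COMM_WORLD);
        break;
      }
      else{
        /* bit k of myid is clear: receive from the partner and accumulate */
        MPI_Recv(&received, 1, MPI_DOUBLE, myid ^ k2, 0,
                 MPI_COMM_WORLD, &status);
        mypi += received;
      }
    }
    /* when the loop finishes, rank 0 holds the global sum in mypi */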