Question

In a related question I learned that performing request = Isend(...); Recv(...); request.Wait(); is not guaranteed to work, since Isend may not make any progress until request.Wait(), so the code can deadlock at Recv(...) (see the original question for details).

But what if Isend() / Wait() is performed on a different thread than Recv? I'm not directly interested in the safety guarantees of the standard here. This is because the standard only asserts thread safety if the appropriate Init_thread method is called and returns the required level. With my configuration of Open MPI this isn't the case. However, I can't see why an implementation would actually restrict calls to only the thread that called Init_thread (an explicit comparison of thread ids would be necessary). My reasoning is: if I serialize all sends and all recvs, MPI should never be able to notice that I'm using more than one thread.

So my simplified code is this:

#include <cassert>
#include <thread>
#include "mpi.h"

void send(int rank, int& item)
{
   MPI::Request request = MPI::COMM_WORLD.Isend(&item, sizeof(int), MPI::BYTE, rank, 0);
   request.Wait();
}

void recv(int rank, int& item)
{
   MPI::COMM_WORLD.Recv(&item, sizeof(int), MPI::BYTE, rank, 0);
}

int main()
{
   MPI::Init();
   int ns[] = {-1, -1};
   int rank = MPI::COMM_WORLD.Get_rank();
   ns[rank] = rank;
   auto t_0 = std::thread(send, 1 - rank, std::ref(ns[rank])); // send rank to partner (i.e. 1 - rank)
   auto t_1 = std::thread(recv, 1 - rank, std::ref(ns[1 - rank])); // receive partner rank from partner
   t_0.join();
   t_1.join();
   assert( ns[0] == 0 );
   assert( ns[1] == 1 );
   MPI::Finalize();
}

Explanation of the code: two threads run in each MPI process. One tries to Isend some data to the partner and waits until this is done; the other receives some data from the partner.

Question: Can I safely assume that most implementations of MPI don't choke up on this piece of code?

(Disclaimer: This piece of code is not designed to be exception-safe or particularly beautiful. It's for demo purposes only)

Solution

Question: Can I safely assume that most implementations of MPI don't choke up on this piece of code?

In practice - yes, if you add synchronisation (which your code is lacking); in theory - no. While it is possible that some implementations allow serialised calls from different threads at the MPI_THREAD_SINGLE level (Open MPI being one such - see here), the MPI standard requires that the library be initialised at (at least) the MPI_THREAD_SERIALIZED level for this usage pattern. If you intend your software to be portable and to compile and run correctly with other MPI implementations, you should not rely on particular Open MPI behaviour.

That said, Open MPI can be configured to support multithreading (MPI_THREAD_MULTIPLE) when the library is built. The default is that MT support is not enabled for performance reasons. You can check the state of your particular installation using ompi_info:

$ ompi_info | grep MPI_THREAD_MULTIPLE
     Thread support: posix (MPI_THREAD_MULTIPLE: no, progress: no)
                            ^^^^^^^^^^^^^^^^^^^^^^^

That particular build does not support multithreading and will always return MPI_THREAD_SINGLE in the provided output argument of MPI_Init_thread.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow