Question

Hi, I am trying to use a Fortran structure like this:

type some
   u                ! actual code will have 17 such scalars
end type some
TYPE(some),ALLOCATABLE,DIMENSION(:) :: metvars,newmetvars

Now, the aim of my test program is to send 10 numbers from one processor to another, but the starting point of those 10 numbers is my choice (for example, if I have a vector of, say, 20 numbers, I will not necessarily send the first 10 to the next processor; my choice might be elements 5 to 15). So first I use MPI_TYPE_CONTIGUOUS like this:

 CALL MPI_TYPE_CONTIGUOUS(10,MPI_REAL,MPI_METVARS,ierr) ! derived datatype covering 10 contiguous reals
 CALL MPI_TYPE_COMMIT(MPI_METVARS,ierr)

I do the send/receive and was able to get the first 10 numbers to the other processor (I am testing with 2 processors).

 if(rank.EQ.0)then
    do k= 2,nz-1
       metvars(k)%u = k
       un(k)=k
    enddo
 endif

This is what I am sending.

Now, for the second part, I used MPI_TYPE_CREATE_SUBARRAY:

   array_size = (/20/)
   array_subsize =(/10/)
   array_start = (/5/)

   CALL MPI_TYPE_CREATE_SUBARRAY(1,array_size,array_subsize,array_start,MPI_ORDER_FORTRAN,MPI_METVARS,newtype,ierr)
   CALL MPI_TYPE_COMMIT(newtype,ierr)

   array_size = (/20/)
   array_subsize =(/10/)
   array_start = (/0/)

   CALL MPI_TYPE_CREATE_SUBARRAY(1,array_size,array_subsize,array_start,MPI_ORDER_FORTRAN,MPI_METVARS,newtype2,ierr)
   CALL MPI_TYPE_COMMIT(newtype2,ierr)

  if(rank .EQ. 0)then
     CALL MPI_SEND(metvars,1,newtype,1,19,MPI_COMM_WORLD,ierr)
  endif

  if(rank .eq. 1)then
     CALL MPI_RECV(newmetvars,1,newtype2,0,19,MPI_COMM_WORLD,MPI_STATUS_IGNORE,ierr)
  endif

I don't understand how to get this to work.

I get an error saying:

[flatm1001:14066] *** An error occurred in MPI_Recv
[flatm1001:14066] *** on communicator MPI_COMM_WORLD
[flatm1001:14066] *** MPI_ERR_TRUNCATE: message truncated
[flatm1001:14066] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

I use OpenMPI on my local machine. I was able to use the subarray command without the MPI_TYPE_CONTIGUOUS part, but here I need to combine both, since in the real code I have a Fortran structure. I don't know if there is a better way to do it either. Any sort of help and suggestions are appreciated. Thanks in advance.

Solution

I assume your custom type contains a single real, as it's not specified. You first construct a contiguous type of 10 of these variables, i.e. MPI_METVARS represents 10 contiguous reals. Now, I don't know if this is really the problem, as the code you posted might be incomplete, but the way it looks now, you construct a subarray of 10 MPI_METVARS types, meaning newtype and newtype2 in effect describe 100 contiguous reals each, ten times the amount of data you intend to move.

The 'correct' way to handle the structure is to create a type for it with MPI_TYPE_CREATE_STRUCT, which should be your MPI_METVARS type.
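Here is a minimal sketch of that approach, assuming the type really holds plain REAL scalars (the names mpi_some_type and tmp_type, and the resize step, are mine, not from your code); the subarray is then built over single elements of the struct type instead of over 10-element blocks:

 program struct_subarray_sketch
    use mpi
    implicit none

    type some
       real :: u                      ! the real code would have 17 such scalars
    end type some

    type(some) :: sample(2)
    integer :: ierr, tmp_type, mpi_some_type, newtype
    integer :: blocklens(1), types(1)
    integer(kind=MPI_ADDRESS_KIND) :: displs(1), base, lb, extent

    CALL MPI_INIT(ierr)

    ! one block per component; 17 consecutively declared reals could also
    ! be described as a single block of length 17
    blocklens(1) = 1
    types(1)     = MPI_REAL
    CALL MPI_GET_ADDRESS(sample(1),   base,      ierr)
    CALL MPI_GET_ADDRESS(sample(1)%u, displs(1), ierr)
    displs(1) = displs(1) - base

    CALL MPI_TYPE_CREATE_STRUCT(1, blocklens, displs, types, tmp_type, ierr)

    ! resize so the extent matches the spacing between consecutive array
    ! elements, in case the compiler pads the derived type
    CALL MPI_GET_ADDRESS(sample(2), extent, ierr)
    extent = extent - base
    lb = 0
    CALL MPI_TYPE_CREATE_RESIZED(tmp_type, lb, extent, mpi_some_type, ierr)
    CALL MPI_TYPE_COMMIT(mpi_some_type, ierr)

    ! the subarray then counts elements of the struct type:
    ! 10 out of 20, starting at (0-based) index 5 on the sender
    CALL MPI_TYPE_CREATE_SUBARRAY(1, (/20/), (/10/), (/5/), &
         MPI_ORDER_FORTRAN, mpi_some_type, newtype, ierr)
    CALL MPI_TYPE_COMMIT(newtype, ierr)

    CALL MPI_FINALIZE(ierr)
 end program struct_subarray_sketch

With the type set up this way, the counts in MPI_SEND/MPI_RECV and the subarray sizes all refer to whole structure elements, so both ranks describe the same amount of data.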

So, please provide the correct code for your custom type and check the size of the newtype type.
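For that size check, MPI_TYPE_SIZE can be called right after the MPI_TYPE_COMMIT in your snippet; with 4-byte reals the combination you posted should report 400 bytes (100 reals) rather than the 40 bytes (10 reals) you intend to send, e.g.:

    integer :: type_size
    ! reports how many bytes of data the committed type actually moves
    CALL MPI_TYPE_SIZE(newtype, type_size, ierr)
    if(rank .EQ. 0) print *, 'newtype moves', type_size, ' bytes'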

Licensed under: CC-BY-SA with attribution