Question

I am writing a parallel VTK file (pvti) from my fortran CFD solver. The file is really just a list of all the individual files for each piece of the data. Running MPI, if I have each process write the name of its individual file to standard output

print *, name

then I get a nice list of each file, i.e.

block0.vti
block1.vti
block2.vti

This is exactly the sort of list I want. But if I write to a file

write(9,*) name

then I only get one output in the file. Is there a simple way to replicate the standard output version of this without transferring data?


Solution 2

Apart from the writes from different ranks getting mixed together, your problem is that the Fortran OPEN statement probably truncates the file to zero length, thus obliterating the previous content instead of appending to it. I'm with Vladimir F on this and would write this file only on rank 0. There are several possible cases, some of which are listed here:

  • each rank writes a separate VTK file and the order follows the ranks, or the actual order is not significant. In that case you could simply use a DO loop on rank 0 from 0 to #ranks-1 to generate the whole list.

  • each rank writes a separate VTK file, but the order does not follow the ranks, e.g. rank 0 writes block3.vti, rank 1 writes block12.vti, etc. In that case you can use MPI_GATHER to collect the block number from each process into an array at rank 0 and then loop over the elements of the array (see the sketch after this list).

  • some ranks write a VTK file, some don't, and the block order does not follow the ranks. It's similar to the previous case - just have the ranks that do not write a block send a negative block number and then rank 0 would skip the negative array elements.

  • block numbering follows ranks order but not all ranks write a block. In that case you can use MPI_GATHER to collect one LOGICAL value from each rank that indicates if it has written a block or not.
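
For the MPI_GATHER cases above, here is a minimal sketch; the variable my_block, the output file name blocklist.txt, and the use of a negative block number as a "wrote nothing" marker are assumptions for illustration:

Program gather_block_list

  Use mpi

  Implicit None

  Integer :: me, nproc, error
  Integer :: my_block
  Integer, Allocatable :: blocks( : )
  Integer :: i

  Call mpi_init( error )
  Call mpi_comm_rank( mpi_comm_world, me   , error )
  Call mpi_comm_size( mpi_comm_world, nproc, error )

  ! Placeholder: in the real solver this would be the block
  ! number this rank wrote (or a negative value for no block)
  my_block = me

  Allocate( blocks( nproc ) )

  ! Collect one block number from every rank, in rank order,
  ! into blocks(1:nproc) at rank 0
  Call mpi_gather( my_block, 1, mpi_integer, &
       blocks, 1, mpi_integer, 0, mpi_comm_world, error )

  ! Only rank 0 touches the list file, so no OPEN on any
  ! other rank can truncate it
  If ( me == 0 ) Then
     Open( 9, file = 'blocklist.txt', status = 'replace' )
     Do i = 1, nproc
        If ( blocks( i ) < 0 ) Cycle   ! this rank wrote no block
        Write( 9, '( "block", i0, ".vti" )' ) blocks( i )
     End Do
     Close( 9 )
  End If

  Call mpi_finalize( error )

End Program gather_block_list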

Other tips

You could try adapting the following, which uses MPI-IO; that is really the only way to guarantee ordered output when multiple processes write to one file. It does assume an end-of-line character and that all the lines are the same length (padded with blanks if required), but I think that's about it.

Program ascii_mpiio

  ! Simple example to show MPI-IO "emulating" Fortran
  ! formatted direct access files. Note we can not use the
  ! latter in parallel: with multiple processes writing to one
  ! file the behaviour is not defined (and DOES go wrong on
  ! certain machines)

  Use mpi

  Implicit None

  ! All the "lines" in the file will be this length
  Integer, Parameter :: max_line_length = 30

  ! We also need to explicitly write an end-of-line character.
  ! Here I am assuming ASCII, where Achar( 10 ) is a line feed
  Character, Parameter :: lf = Achar( 10 )

  ! Buffer to hold a line
  Character( Len = max_line_length + 1 ) :: line

  Integer :: me, nproc
  Integer :: fh
  Integer :: record
  Integer :: error
  Integer :: i

  ! Initialise MPI
  Call mpi_init( error )
  Call mpi_comm_rank( mpi_comm_world, me   , error )
  Call mpi_comm_size( mpi_comm_world, nproc, error )

  ! Create an MPI derived type that will contain a line of the final
  ! output just before we write it using MPI-IO. Note this also
  ! includes the line feed at the end of the line.
  Call mpi_type_contiguous( max_line_length + 1, mpi_character, record, error )
  Call mpi_type_commit( record, error )

  ! Open the file. You probably want to change the path and name
  Call mpi_file_open( mpi_comm_world, '/home/ian/test/mpiio/stuff.dat', &
       mpi_mode_wronly + mpi_mode_create, &
       mpi_info_null, fh, error )

  ! Set the view for the file. Note the etype and ftype are both RECORD,
  ! the derived type used to represent a whole line, and the displacement
  ! is zero. Thus
  ! a) Each process can "see" all of the file
  ! b) The unit of displacement in subsequent calls is a line. 
  !    Thus if we have a displacement of zero we write to the first line,
  !    1 means we write to the second line, and in general i means
  !    we write to the (i+1)th line
  Call mpi_file_set_view( fh, 0_mpi_offset_kind, record, record, &
       'native', mpi_info_null, error )

  ! Make each process write to a different part of the file
  Do i = me, 50, nproc
     ! Use an internal write to transfer the data into the
     ! character buffer
     Write( line, '( "This is line ", i0, " from ", i0 )' ) i, me
     ! Remember the line feed at the end of the line
     line( Len( line ):Len( line ) ) = lf
     ! Write with a displacement of i, and thus to line i+1
     ! in the file
     Call mpi_file_write_at( fh, Int( i, mpi_offset_kind ), &
          line, 1, record, mpi_status_ignore, error )
  End Do

  ! Close the file
  Call mpi_file_close( fh, error )

  ! Tidy up
  Call mpi_type_free( record, error )

  Call mpi_finalize( error )

End Program ascii_mpiio

Also please note that you're just getting lucky with your standard output "solution": you're not guaranteed to get it all nicely sorted.

If you are not in a hurry, you can force the output from different tasks to be in order:

! Loop over processes in order (here nproc holds this process's
! rank and numProcesses the total number of processes)
DO n = 0, numProcesses-1

  ! Write to the file if it is my turn
  IF (nproc == n) THEN
    ! Write output here, then flush (or close) the unit so the
    ! data reaches the file before the next rank takes its turn
  END IF

  ! This call ensures that all processes wait for each other
#ifdef MPI
  CALL MPI_Barrier(mpi_comm_world, ierr)
#endif

END DO

This solution is simple but not efficient for very large outputs; that does not seem to be your case. Make sure you flush the output buffer after each write. If you use this method, test it before relying on it, as success is not guaranteed on all architectures. It works for me for writing out large NetCDF files without the need to pass the data around.
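
For completeness, here is a minimal self-contained sketch of this take-turns pattern; the file name list.txt is an assumption for illustration. Each rank re-opens the shared file in append mode on its turn, so no OPEN truncates the earlier lines, and closing the unit flushes the buffer before the next rank writes:

Program ordered_output

  Use mpi

  Implicit None

  Integer :: me, numProcesses, n, ierr

  Call mpi_init( ierr )
  Call mpi_comm_rank( mpi_comm_world, me          , ierr )
  Call mpi_comm_size( mpi_comm_world, numProcesses, ierr )

  ! Rank 0 creates (or truncates) the file once, before
  ! anybody appends to it
  If ( me == 0 ) Then
     Open( 9, file = 'list.txt', status = 'replace' )
     Close( 9 )
  End If
  Call mpi_barrier( mpi_comm_world, ierr )

  ! Each rank takes its turn in rank order
  Do n = 0, numProcesses - 1
     If ( me == n ) Then
        ! Append so the earlier ranks' lines survive; closing
        ! the unit flushes the buffer before the next turn
        Open( 9, file = 'list.txt', position = 'append', &
             status = 'old' )
        Write( 9, '( "block", i0, ".vti" )' ) me
        Close( 9 )
     End If
     Call mpi_barrier( mpi_comm_world, ierr )
  End Do

  Call mpi_finalize( ierr )

End Program ordered_output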
