Question

I want to run a parallel, scheduled (e.g. static/dynamic/guided) for-loop, where each thread has its own set of variables based on its thread ID. I know that any variable declared within the parallel pragma is private, but I don't want to re-declare the variables in every iteration of the for loop.

In my specific situation, I'm counting whether a set of generated coordinates lies inside or outside of a circle to approximate pi. I'm using erand48(seed), where seed is a three-element array, to generate these coordinates in each of the threads, and by giving each thread a different set of values for seed, I get a greater variety of numbers to use in the approximation (this is also a requirement for this simulation).

  long long int global_result = 0;
  int tid = omp_get_thread_num();
  unsigned short seed[3];              /* erand48() expects unsigned short[3] */
  seed[0] = (((tid*tid + 15) * 3)/7);
  seed[1] = ((((tid + tid) * 44)/3) + 2);
  seed[2] = tid;
  int this_result = 0;
#   pragma omp parallel for num_threads(thread_count) schedule(runtime)
      for(int i = 0; i < chunksize; i++){
        double x = erand48(seed);
        double y = erand48(seed);
        if ((x*x + y*y) >= 1)
            this_result++;
      }
#   pragma omp critical
    {
      global_result += this_result;
    }

This is the best representation I can give of what I'm trying to do. I want this_result, tid and seed to have private scope.


Solution

I know that any variable declared within the parallel pragma is private, but I don't want to re-declare the variables in every iteration of the for loop.

Split the #pragma omp parallel for into its two components, #pragma omp parallel and #pragma omp for. You can then declare the local variables inside the parallel region but outside the loop. Something like this:

long long int global_result = 0;
#pragma omp parallel reduction(+:global_result)
{
    int tid = omp_get_thread_num();
    /* erand48() expects an unsigned short[3] state; keep one per thread */
    unsigned short seed[3];
    seed[0] = (((tid*tid + 15) * 3)/7);
    seed[1] = ((((tid + tid) * 44)/3) + 2);
    seed[2] = tid;
    long long int this_result = 0;

    /* Note: "#pragma omp parallel for" here would be a typo (it would start a
       nested parallel region); a plain "#pragma omp for" is what is intended. */
#   pragma omp for schedule(runtime)
    for (int i = 0; i < chunksize; i++) {
        double x = erand48(seed);   /* erand48() returns a double in [0, 1) */
        double y = erand48(seed);
        if ((x*x + y*y) >= 1)
            this_result++;
    }
    global_result += this_result;
}

There are better ways to calculate pi, though :-)
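For reference, here is a minimal, self-contained sketch of the whole approach as a complete program. The sample count, thread count, and the compile command in the top comment are illustrative assumptions, not from the original post; since the loop counts points on or outside the unit quarter circle, the estimate is pi ~= 4 * (1 - outside/total).

/* A minimal, self-contained sketch (compile with, e.g., gcc -O2 -fopenmp pi.c -o pi).
   The constants below are placeholders. */
#define _XOPEN_SOURCE 500   /* for erand48() */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long long n_samples    = 100000000LL;  /* total points (placeholder) */
    const int       thread_count = 4;            /* placeholder */

    long long outside = 0;   /* points with x*x + y*y >= 1 */

#pragma omp parallel num_threads(thread_count) reduction(+:outside)
    {
        int tid = omp_get_thread_num();

        /* Per-thread seed derived from the thread id, as in the question. */
        unsigned short seed[3];
        seed[0] = (unsigned short)(((tid*tid + 15) * 3) / 7);
        seed[1] = (unsigned short)((((tid + tid) * 44) / 3) + 2);
        seed[2] = (unsigned short)tid;

#pragma omp for schedule(runtime)
        for (long long i = 0; i < n_samples; i++) {
            double x = erand48(seed);
            double y = erand48(seed);
            if (x*x + y*y >= 1.0)
                outside++;
        }
    }

    /* Points inside the quarter circle make up ~pi/4 of the total. */
    printf("pi ~= %f\n", 4.0 * (1.0 - (double)outside / (double)n_samples));
    return 0;
}

Because schedule(runtime) is kept, the actual schedule can be chosen at run time through the OMP_SCHEDULE environment variable, e.g. OMP_SCHEDULE=guided.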

Other suggestions

You can use the private clause in your #pragma directive, like this:

#pragma omp parallel for private(this_result, tid, seed) num_threads(thread_count) schedule(runtime)

If I understood your question correctly, that should do it.
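For illustration, here is a small sketch (the variable name is made up, not from the original post) of how private behaves compared to firstprivate: each thread gets its own copy of the listed variables, but a private copy starts uninitialized, whereas a firstprivate copy starts from the value the variable had before the region.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    int counter = 100;   /* illustrative variable */

    /* private: each thread gets its own copy of 'counter'; the copy is
       uninitialized on entry and discarded on exit, so it must be assigned
       inside the region before it is read. */
#pragma omp parallel private(counter) num_threads(4)
    {
        counter = omp_get_thread_num();   /* initialize the private copy */
        printf("private copy in thread %d: %d\n", omp_get_thread_num(), counter);
    }

    /* firstprivate: each thread's copy starts from the outer value (100). */
#pragma omp parallel firstprivate(counter) num_threads(4)
    {
        counter += omp_get_thread_num();
        printf("firstprivate copy in thread %d: %d\n",
               omp_get_thread_num(), counter);
    }

    /* The original 'counter' is untouched by either region. */
    printf("after both regions: %d\n", counter);
    return 0;
}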

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow