Question

I have a dual-socket Xeon E5522 2.26 GHz machine (with hyperthreading disabled) running Ubuntu Server on Linux kernel 3.0 with NUMA support. The architecture layout is 4 physical cores per socket. An OpenMP application runs on this machine and I have the following questions:

  1. Does an OpenMP program automatically take advantage of NUMA (i.e. are a thread and its private data kept on the same NUMA node throughout execution) when running on a NUMA machine with a NUMA-aware kernel? If not, what can be done?

  2. What about NUMA and per-thread private C++ STL data structures?

Solution

The current OpenMP standard defines a boolean environment variable OMP_PROC_BIND that controls the binding of OpenMP threads. If it is set to true, e.g.

shell$ OMP_PROC_BIND=true OMP_NUM_THREADS=12 ./app.x

then the OpenMP execution environment should not move threads between processors. Unfortunately, nothing more is said about how those threads should be bound, and that is exactly what a special working group in the OpenMP language committee is addressing right now. OpenMP 4.0 will come with new environment variables and clauses that allow one to specify how to distribute the threads; see the sketch below. Of course, many OpenMP implementations also offer their own non-standard methods to control binding.
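As a rough illustration of what that control looks like (the names below, OMP_PLACES and the proc_bind clause, come from the OpenMP 4.0 proposals and should be read as a sketch rather than a finalised interface), the binding policy can be set globally from the environment:

shell$ OMP_PLACES=cores OMP_PROC_BIND=spread OMP_NUM_THREADS=8 ./app.x

or requested per parallel region in the source:

#include <cstdio>
#include <omp.h>

int main() {
    // proc_bind(spread) asks the runtime to spread the threads of this team
    // over the available places, e.g. one thread per core when OMP_PLACES=cores.
    #pragma omp parallel proc_bind(spread)
    {
        std::printf("hello from thread %d of %d\n",
                    omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}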

Still, most OpenMP runtimes are not NUMA-aware. They will happily dispatch threads to any available CPU, and you have to make sure that each thread only accesses data that belongs to it. Some general hints in this direction:

  • Do not use dynamic scheduling for parallel for (C/C++) / DO (Fortran) loops.
  • Try to initialise the data in the same thread that will later use it. If you run two separate parallel for loops with the same team size and the same number of iteration chunks, then with static scheduling chunk 0 of both loops will be executed by thread 0, chunk 1 by thread 1, and so on (see the sketch after this list).
  • If using OpenMP tasks, try to initialise the data in the task body, because most OpenMP runtimes implement task stealing - idle threads can steal tasks from other threads' task queues.
  • Use a NUMA-aware memory allocator.
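To make the first two hints concrete, here is a minimal sketch (assuming Linux's default first-touch page placement policy, a compiler with OpenMP support, and a made-up array size): a memory page is physically allocated on the NUMA node of the thread that first writes to it, so initialising and processing the data with the same static schedule keeps each chunk on the node of the thread that works on it.

#include <cstddef>
#include <cstdio>
#include <omp.h>

int main() {
    const long n = 100000000;        // illustrative size (~800 MB of doubles)
    double *data = new double[n];    // pages are allocated but not yet placed

    // Initialisation loop: with schedule(static) thread i touches chunk i
    // first, so those pages end up on thread i's NUMA node.
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < n; ++i)
        data[i] = 0.0;

    // Compute loop: same team size, same static schedule, same chunking,
    // so thread i works on the chunk whose pages are local to it.
    double sum = 0.0;
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (long i = 0; i < n; ++i) {
        data[i] = 2.0 * i;
        sum += data[i];
    }

    std::printf("sum = %g\n", sum);
    delete[] data;
    return 0;
}

The same reasoning applies to STL containers: a std::vector<double> constructed with a size argument writes all of its elements from the constructing thread and therefore first-touches every page on that thread's node, while a container that is private to a thread and filled inside the parallel region is touched, and hence placed, by that thread itself.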

Some colleagues of mine have thoroughly evaluated the NUMA behaviour of different OpenMP runtimes and have specifically looked into the NUMA awareness of Intel's implementation, but the articles are not published yet, so I cannot provide you with a link.

There is one research project, called ForestGOMP, which aims at providing a NUMA-aware drop-in replacement for libgomp. Maybe you should give it a look.

Other tips

You can also check that your memory placement and accesses are done the right way with NUMAPROF, a profiler for NUMA applications that is now open source for Linux: https://memtt.github.io/numaprof/.
