Question

I have a server with 2 CPUs and 64GB of RAM, 32GB per CPU.

I know that each CPU has its own part of the RAM; let's call them RAM1 and RAM2. I would like my program to know on which RAM (RAM1 or RAM2) it allocates its data.

I tried to check pointers values:

    // pin the thread to the i-th CPU using pthread_setaffinity_np
    TData *a = new TData[N];
    ...
    cout << "CPU = " << i << " address = " << a << endl;

but the output looks random. I suppose that is because addresses are virtual. Is there any correspondence between virtual memory addresses and part of the RAM?

How can I check in which RAM my array "a" is allocated?


Solution

Your question is answered here. I would only like to add some comments.

Note that calling new [] does not actually allocate physical memory. On modern operating systems it only results in an anonymous memory mapping being made. Anonymous mappings do not correspond to files in the file system but rather are backed by swap (if any). Initially the whole area points to a read-only page in the kernel that contains all zeros. Only when you actually write to the newly allocated memory is a new memory page installed, replacing the zero page for the page range where the accessed address falls. This is why we say that the zero page is copy-on-write (CoW) mapped into the virtual address space of the process.

The default policy is to try to allocate the new page on the NUMA node where the thread that touched the memory runs. This is called the "first-touch" NUMA policy. If there is not enough memory on that NUMA node, the page is allocated on some other node with enough free memory. It is also possible for small allocations to end up inside a bigger area (called an arena) managed by the C library memory allocator malloc() (the C++ operator new [] calls malloc() in order to do the actual memory allocation). In that case the pages might already be present in physical memory even before you write to the newly allocated memory.
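To make the first-touch behaviour concrete, here is a minimal sketch, assuming a Linux box; the CPU number (0) and the array size are arbitrary choices for illustration. It pins the calling thread with pthread_setaffinity_np() and then writes to a freshly allocated array, so the default policy places the pages on the NUMA node of that CPU. Compile with g++ -pthread (g++ defines _GNU_SOURCE, which the affinity macros need).

    // Minimal sketch of the first-touch policy on Linux.
    // Compile with: g++ -O2 -pthread first_touch.cpp
    #include <pthread.h>
    #include <sched.h>
    #include <cstring>
    #include <iostream>

    int main() {
        // Pin the calling thread to CPU 0 (which belongs to some NUMA node).
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0) {
            std::cerr << "failed to set affinity\n";
            return 1;
        }

        const std::size_t N = 1 << 20;
        double *a = new double[N];          // only a virtual mapping so far

        // First touch: the writes fault the pages in, and the default policy
        // places them on the NUMA node of the CPU that does the writing.
        std::memset(a, 0, N * sizeof(double));

        std::cout << "touched " << N * sizeof(double) << " bytes from CPU 0\n";
        delete[] a;
    }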

Linux has the nasty habit of not preserving the NUMA association of memory areas when it swaps. That is, if a page was allocated on NUMA node 0, then swapped out and later swapped back in, there is no guarantee that it won't end up on NUMA node 1. This makes the question "where is my memory allocated?" a bit tricky, since a swap-out followed by a swap-in could easily invalidate a result obtained from move_pages() just a fraction of a second earlier. Therefore the question only makes sense in the following two special cases (a sketch of such a query follows the list):

  • you explicitly lock the memory region: you can use the mlock(2) system call to tell the OS not to swap out a particular range of the process's virtual address space;
  • your system has no active swap areas: this prevents the OS from moving pages out of and back into main memory altogether.
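Here is a minimal sketch of such a query, assuming Linux with the libnuma development headers installed (link with -lnuma). It first-touches an array, locks it with mlock(2) so the answer cannot be invalidated by swapping, and then calls move_pages(2) with a NULL nodes argument, which moves nothing and merely reports the NUMA node of each page.

    // Sketch: ask the kernel which NUMA node each page of an array is on.
    // Assumes Linux with libnuma's headers; link with -lnuma.
    #include <numaif.h>      // move_pages()
    #include <sys/mman.h>    // mlock(), munlock()
    #include <unistd.h>      // sysconf()
    #include <cstring>
    #include <iostream>
    #include <vector>

    int main() {
        const std::size_t N = 1 << 20;
        double *a = new double[N];
        std::memset(a, 0, N * sizeof(double));   // first touch commits the pages

        const std::size_t bytes = N * sizeof(double);
        if (mlock(a, bytes) != 0)                // keep the pages from being swapped
            std::cerr << "mlock failed (check RLIMIT_MEMLOCK)\n";

        const long page = sysconf(_SC_PAGESIZE);
        const std::size_t npages = (bytes + page - 1) / page;

        std::vector<void *> pages(npages);
        std::vector<int> status(npages);
        for (std::size_t i = 0; i < npages; ++i)
            pages[i] = reinterpret_cast<char *>(a) + i * page;

        // With a NULL "nodes" argument move_pages() moves nothing; it only
        // reports the NUMA node of each page in the status array.
        if (move_pages(0, npages, pages.data(), nullptr, status.data(), 0) != 0) {
            std::cerr << "move_pages failed\n";
            return 1;
        }
        std::cout << "first page of a[] is on NUMA node " << status[0] << '\n';

        munlock(a, bytes);
        delete[] a;
    }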

OTHER TIPS

Memory is virtualized through the MMU, so each process sees a flat virtual address space (nominally 2^64 bytes on a 64-bit system). Within the process, addresses are virtual, so by themselves they tell you nothing about physical placement. There is no correspondence between the virtual addresses (seen by the application) and the physical addresses (in RAM) that the process can work out on its own.

Your application has to query the operating system to find out which physical addresses it is currently using.
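For example, on Linux one way to do that query is to read /proc/self/pagemap, which contains one 64-bit entry per virtual page, with the physical frame number in bits 0-54. Note that on recent kernels the frame number is visible only to processes with CAP_SYS_ADMIN; unprivileged readers see zeros there. A minimal sketch:

    // Sketch: translate a virtual address into a physical one by reading
    // /proc/self/pagemap (one 64-bit entry per virtual page).
    #include <cstdint>
    #include <cstdio>
    #include <iostream>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        int x = 42;                                   // object whose page we inspect
        const long page = sysconf(_SC_PAGESIZE);
        const std::uintptr_t vaddr = reinterpret_cast<std::uintptr_t>(&x);

        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) { std::perror("open pagemap"); return 1; }

        std::uint64_t entry = 0;
        const off_t offset = (vaddr / page) * sizeof(entry);   // index by page number
        if (pread(fd, &entry, sizeof(entry), offset)
                != static_cast<ssize_t>(sizeof(entry))) {
            std::perror("pread");
            return 1;
        }
        close(fd);

        const bool present = entry & (1ULL << 63);             // bit 63: page present
        const std::uint64_t pfn = entry & ((1ULL << 55) - 1);  // bits 0-54: frame number

        if (present && pfn != 0)
            std::cout << "physical address = 0x" << std::hex
                      << pfn * page + vaddr % page << '\n';
        else
            std::cout << "page not present or PFN hidden (need CAP_SYS_ADMIN)\n";
    }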
