Question

This interesting code always manages to allocate about 3 GB of memory on Linux systems, even when the physical RAM is less than 3 GB.

How? (I have 2.7 GB of RAM in my system, and this code reported 3,054 MB allocated!)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>  // for pause()
    int main(int argc, char *argv[])
    {
        void *ptr;
        int n = 0;
        while (1) {
            // Allocate in 1 MB chunks
            ptr = malloc(0x100000);
            // Stop when we can't allocate any more
            if (ptr == NULL)
                break;
            n++;
        }
        // How much did we get?
        printf("malloced %d MB\n", n);
        pause();  // keep the process alive so its memory usage can be inspected
        return 0;
    }

Solution 2

By default on Linux, you don't actually get the RAM until you try to modify it. You might try modifying your program as follows and see if it dies sooner:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>  // for pause()
int main(int argc, char *argv[])
{
    char *ptr;
    int n = 0;
    while (1) {
        // Allocate in 4 KB chunks (one page)
        ptr = malloc(0x1000);
        // Stop when we can't allocate any more
        if (ptr == NULL)
            break;
        *ptr = 1;  // modify one byte on the page
        n++;
    }
    // How much did we get?
    printf("malloced %d MB\n", n / 256);
    pause();  // keep the process alive so its memory usage can be inspected
    return 0;
}

If you have sufficient swap but insufficient RAM, this code will start thrashing the swap file heavily. If you have insufficient swap, the kernel's OOM killer will likely terminate the process before it reaches the end.

As someone else pointed out, Linux is a virtual memory operating system and will use the disk as a backing store when the machine has less RAM than the application requests. The total space you can use is limited by three things:

  • The combined amount of RAM and disk space allocated to swap
  • The size of the virtual address space
  • Resource limits imposed by ulimit

In 32-bit Linux, the OS gives each task 3 GB of virtual address space to play with. In 64-bit Linux, I believe the number is in the hundreds of terabytes. I'm not sure what the default ulimit is, though. So, find a 64-bit system and try the modified program on that. I think you'll be in for a long night. ;-)

Edit: Here's the default ulimit values on my 64-bit Ubuntu 11.04 system:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

So, it appears that there isn't a default memory size limit for a task.

Other tips

When a machine runs out of physical RAM to address, it can use hard-drive space to "act as RAM" (swap). This is SIGNIFICANTLY SLOWER but can still be done.

In general there are several levels that the machine can use to access information:

  1. Cache (fastest)
  2. RAM
  3. Hard disk (slowest)

It'll use the fastest level when available but, as you pointed out, sometimes it will need to fall back to the slower ones.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow