Question

I've googled a lot trying to find out what the "page share factor per proc" is responsible for, and found nothing. I have no current problem with it; I'm just curious and want to know more. In sysctl it is:

vm.pmap.shpgperproc

Thanks in advance


Solution

The first thing to note is that shpgperproc is a loader tunable, so it can only be set at boot time with an appropriate directive in loader.conf, and it's read-only after that.
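For example, a minimal /boot/loader.conf entry might look like the following (the value 400 is purely illustrative, not a recommendation):

    # /boot/loader.conf
    # Double the default page-share factor of 200 (illustrative value only)
    vm.pmap.shpgperproc=400

After rebooting you can read the value back with sysctl vm.pmap.shpgperproc, but trying to set it at runtime will fail, since the sysctl is read-only once the kernel is up.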

The second thing to note is that it's defined in <arch>/<arch>/pmap.c, which handles the architecture-dependent portions of the vm subsystem. In particular, it's actually not present in the amd64 pmap.c - it was removed fairly recently, and I'll discuss this a bit below. However, it's present for the other architectures (i386, arm, ...), and it's used identically on each architecture; namely, it appears as follows:

    void
    pmap_init(void)
    {
            ...
            TUNABLE_INT_FETCH("vm.pmap.shpgperproc", &shpgperproc);
            pv_entry_max = shpgperproc * maxproc + cnt.v_page_count;
            ...
    }

and it's not used anywhere else. pmap_init() is called only once, at boot time, as part of the vm subsystem initialization. maxproc is just the maximum number of processes that can exist (i.e. kern.maxproc), and cnt.v_page_count is just the number of physical pages of memory available (i.e. vm.stats.vm.v_page_count).
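To make the arithmetic concrete, here is a small userland sketch that recomputes the boot-time formula from live sysctl values. It's only an illustration under the assumptions above: it will work only on architectures where vm.pmap.shpgperproc still exists (so not on amd64), and it reads the page count via vm.stats.vm.v_page_count:

    /* pvmax.c - recompute pv_entry_max = shpgperproc * maxproc + v_page_count
     * from live sysctl values. FreeBSD only; build with: cc -o pvmax pvmax.c */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <stdlib.h>

    static long
    fetch(const char *name)
    {
            int val;
            size_t len = sizeof(val);

            if (sysctlbyname(name, &val, &len, NULL, 0) == -1) {
                    perror(name);           /* e.g. missing on amd64 */
                    exit(1);
            }
            return ((long)val);
    }

    int
    main(void)
    {
            long shpgperproc = fetch("vm.pmap.shpgperproc");
            long maxproc = fetch("kern.maxproc");
            long pages = fetch("vm.stats.vm.v_page_count");

            printf("pv_entry_max = %ld * %ld + %ld = %ld\n",
                shpgperproc, maxproc, pages,
                shpgperproc * maxproc + pages);
            return (0);
    }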

A pv_entry is basically just a virtual mapping of a physical page (or, more precisely, of a struct vm_page), so if two processes share a page and both have it mapped, there will be a separate pv_entry structure for each mapping. Thus, given a page (struct vm_page) that needs to be dirtied, paged out, or anything else requiring a hardware page table update, the list of corresponding mapped virtual pages can easily be found by walking the corresponding list of pv_entrys (as an example, take a look at i386/i386/pmap.c:pmap_remove_all()).
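To illustrate the idea, here is a deliberately simplified model (the real struct pv_entry in i386/i386/pmap.c carries different fields and is chained with queue(3) macros; this sketch only shows the one-pv_entry-per-mapping principle):

    /* Toy reverse-mapping model: each physical page keeps a list of
     * pv_entrys, one per (pmap, virtual address) pair that maps it. */
    #include <stdio.h>
    #include <stddef.h>

    struct pv_entry {
            int              pv_pmap;   /* stand-in for the owning pmap_t */
            unsigned long    pv_va;     /* virtual address of this mapping */
            struct pv_entry *pv_next;   /* next mapping of the same page */
    };

    struct vm_page {
            unsigned long    phys_addr; /* physical address of the page */
            struct pv_entry *pv_list;   /* head of the mapping list */
    };

    /* Analogous in spirit to pmap_remove_all(): walk every virtual
     * mapping of one physical page, e.g. to invalidate its PTEs. */
    static void
    remove_all(struct vm_page *m)
    {
            struct pv_entry *pv;

            for (pv = m->pv_list; pv != NULL; pv = pv->pv_next)
                    printf("unmap va 0x%lx in pmap %d\n", pv->pv_va, pv->pv_pmap);
            m->pv_list = NULL;
    }

    int
    main(void)
    {
            /* Two processes sharing one physical page -> two pv_entrys. */
            struct pv_entry b = { 2, 0x30000000UL, NULL };
            struct pv_entry a = { 1, 0x20000000UL, &b };
            struct vm_page page = { 0x1000UL, &a };

            remove_all(&page);
            return (0);
    }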

The use of pv_entrys makes certain VM operations more efficient, but the current implementation (for i386, at least) allocates a static amount of space for the pv_chunks used to manage pv_entrys (see pv_maxchunks, which is set based on pv_entry_max). If the kernel can't allocate a pv_entry even after deallocating inactive ones, it panics.
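As a toy illustration of why a fixed ceiling turns into a hard failure (purely illustrative; the real pv_chunk allocator in pmap.c reclaims inactive entries first and is far more involved):

    #include <stdio.h>
    #include <stdlib.h>

    #define PV_ENTRY_MAX 4   /* toy stand-in for the boot-time pv_entry_max */

    static int pv_entry_count;

    /* Once the statically sized space is exhausted and nothing can be
     * reclaimed, there is no fallback: the kernel's equivalent of this
     * abort() is a panic. */
    static int
    pv_entry_alloc(void)
    {
            if (pv_entry_count >= PV_ENTRY_MAX) {
                    fprintf(stderr, "panic: out of pv_entrys\n");
                    abort();
            }
            return (pv_entry_count++);
    }

    int
    main(void)
    {
            for (int i = 0; i < PV_ENTRY_MAX + 1; i++)
                    printf("allocated pv_entry %d\n", pv_entry_alloc());
            return (0);
    }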

Thus we want to set pv_entry_max according to how many pv_entrys we want space for. Clearly we'll want at least as many as there are pages of RAM (which is where cnt.v_page_count comes in). Then we'll want to allow for the fact that many pages will be multiply virtually mapped by different processes, since a pv_entry must be allocated for each such mapping. Thus shpgperproc - which has a default value of 200 on all arches - is just a way to scale this. On a system where many pages are shared among processes (say, a heavily loaded web server running Apache), it's apparently possible to run out of pv_entrys, so you'll want to bump it up.
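Before bumping it, it's worth checking how close the system actually gets to the ceiling. On i386 the pmap code exposes read-only counters for this; the sysctl names below are my reading of the pmap.c sources and, like the tunable itself, are not present on amd64:

    $ sysctl vm.pmap.pv_entry_max vm.pmap.pv_entry_count

If pv_entry_count regularly approaches pv_entry_max under load, that's the signal to raise vm.pmap.shpgperproc in loader.conf.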

Other tips

I don't have a FreeBSD machine close to me right now, but it seems this parameter is defined and used in pmap.c: http://fxr.watson.org/fxr/ident?v=FREEBSD7&im=bigexcerpts&i=shpgperproc
