Question

Is it possible to explicitly create static objects in the CPU cache, so as to make sure those objects always stay in the cache and no performance hit is ever taken from reaching all the way into RAM or, god forbid, HDD-backed virtual memory?

I am particularly interested in targeting the large, shared L3 cache. I am not intending to target L1, L2, the instruction cache, or any other cache, just the largest on-die chunk of memory there is.

And just to differentiate this from other threads I searched before posting: I am not interested in reserving the entire cache, just a small region, a few classes' worth.


Solution

No. Cache is not addressable, so you can't allocate objects in it.

What it seems like you meant to ask is: Having allocated space in virtual memory, can I ensure that I always get cache hits?

This is a more complicated question, and the answer is: partly.

You can definitely avoid being swapped out to disk by using your OS's memory-management API (e.g. mlock() on POSIX systems, or VirtualLock() on Windows) to mark the region as non-pageable, or by allocating from the "non-paged pool" to begin with.

I don't believe there's a similar API to pin memory in the CPU cache. Even if you could reserve cache for that block, you can't avoid cache misses altogether: if another core writes to the memory, ownership WILL be transferred, and you WILL suffer a cache miss and the associated bus transfer (possibly from main memory, possibly from the cache of the other core).

As Mathew mentions in his comment, you can also force the cache miss to occur in parallel with other useful work in the pipeline, so that the data is in cache when you need it.

OTHER TIPS

You could run another thread that loops over the data and brings it into the L3 cache.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow