Question

My assumptions are simple ones at this point. This is what I'd assume for dumping L3 (a rough sketch of this loop follows the list):

  • Stop normal execution / operations which might affect cache state.
  • Where A is the starting memory location of what's currently in L3, read all locations from A to A + L3width - 1, displaying each.
  • Do some ordinary program processing which affects the cache state, keeping track of the latest A.
  • Repeat from the top.
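
For concreteness, here is a minimal C sketch of the dump loop described above. Everything in it is assumed for illustration: the buffer, the 8 MiB size, and the `dump_range` helper are hypothetical, and there is no portable way to obtain the base address A of "what's currently in L3" — which is part of what the answer below addresses.

```c
#include <stdint.h>
#include <stdio.h>

#define L3_SIZE (8u * 1024u * 1024u)   /* assumed 8 MiB L3, for illustration only */

static void dump_range(const volatile uint8_t *a, size_t len)
{
    /* Reads the bytes at addresses a .. a+len-1 and prints them.
     * Note: this dumps the contents of those MEMORY locations; the
     * cache itself is not software-addressable, so nothing here
     * inspects cache tags or cache state. */
    for (size_t i = 0; i < len; i++)
        printf("%p: %02x\n", (void *)(a + i), (unsigned)a[i]);
}

int main(void)
{
    /* For illustration, dump a small buffer we own, since the real
     * base address A of "what's currently in L3" is not knowable
     * from ordinary software. */
    static uint8_t buf[64];
    dump_range(buf, sizeof buf);
    return 0;
}
```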

Q1: What incorrect assumptions have I made above? What have I left out? More detail please.

Q2: Is there any way to avoid changing the cache state when I write out the dump?

Q3: Would this process change for cache levels 2 and 1, other than using a different width and waiting a shorter time for the data to arrive?


Solution

A.Q1. You can never be exactly sure what is in the cache at any given moment, so this algorithm doesn't really make sense. The cache is not directly addressable by software: reading the locations from A to A + L3width - 1 returns the contents of those memory addresses, not a snapshot of the cache. You might expect most of that data to be in the cache if you read sequentially from A to A + L3width - 1 and avoid doing ANYTHING else, but that amounts to bringing the data into the cache and expecting it to stay there for some (short) time, rather than observing what was cached beforehand.
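
To make that point concrete, here is a small C sketch. The parameters are assumptions, not values from the post: x86-64 with GCC/Clang (`<x86intrin.h>` for `__rdtsc`), a 64-byte cache line, and an 8 MiB L3. The sequential sweep described in the question ends up loading the buffer into the cache, i.e. it alters the state it was meant to observe; the only thing software can see afterwards is indirect, such as a just-touched line being fast to re-read.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>

#define L3_SIZE   (8u * 1024u * 1024u)  /* assumed L3 capacity */
#define LINE_SIZE 64u                   /* assumed cache line size */

int main(void)
{
    volatile uint8_t *buf = malloc(L3_SIZE);
    if (!buf) return 1;

    /* Sequential sweep, one byte per line: after this loop, roughly the
     * whole buffer is resident in the cache hierarchy -- we have CHANGED
     * the cache state, not observed its previous contents. */
    unsigned sink = 0;
    for (size_t i = 0; i < L3_SIZE; i += LINE_SIZE)
        sink += buf[i];

    /* The only software-visible evidence is indirect: a re-read of a
     * line we just touched is likely to complete quickly (a cache hit). */
    uint64_t t0 = __rdtsc();
    sink += buf[0];
    uint64_t t1 = __rdtsc();
    printf("re-read took ~%llu cycles (sink=%u)\n",
           (unsigned long long)(t1 - t0), sink);

    free((void *)buf);
    return 0;
}
```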

A.Q2. No, there is no way to avoid it: the reads and writes you issue to produce and output the dump themselves go through the cache hierarchy.

A.Q3. Yes, it would, even more so than for L3: L1 and L2 are smaller and their contents change even faster, so the same problem is worse there.

Licensed under: CC-BY-SA with attribution