Question

I'm creating a simple row buffer simulator to go along with a simple cache simulator, in order to count hits and misses in the row buffer. Whenever a cache block isn't in the cache, I want to look for it in the row buffers of main memory and record whether it is present or not.

How accurate would it be to have just one long "row buffer" struct containing all the data found in the individual row buffers of the corresponding bank in each DRAM chip? If each chip has 8 banks, I would then create 8 of these extra-long row buffers to simulate the chips. The idea is based on my understanding that all the chips work in unison: if I want to load the cache block at address 0, the row buffers of bank 0 in each chip fill with data starting at address 0 in chip 0 and ending at address 0 + (length of row buffer * number of DRAM chips) in the last chip. It assumes a row-interleaved address mapping (with consecutive rows in consecutive banks) for simplicity's sake.
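
To make this concrete, here is a rough sketch of the data structure I have in mind. The sizes (8 chips, 8 banks, a 2 KiB per-chip row buffer) are just placeholder assumptions, and since I only want to count hits and misses, I'd track only which row is open rather than the actual data:

```c
/* Minimal sketch of the proposed model (sizes are assumptions).
 * One "combined" row buffer per bank spans all chips, so its capacity is
 * ROW_BUFFER_BYTES * NUM_CHIPS. For hit/miss counting, only the identity
 * of the open row needs to be stored, not the data itself. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_CHIPS          8
#define NUM_BANKS          8
#define ROW_BUFFER_BYTES   2048   /* per chip, assumed */
#define COMBINED_ROW_BYTES (ROW_BUFFER_BYTES * NUM_CHIPS)

typedef struct {
    bool     valid;    /* has any row been opened in this bank yet? */
    uint64_t open_row; /* which row is currently held in the buffer */
} combined_row_buffer;

static combined_row_buffer banks[NUM_BANKS];
```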

Are there any major misunderstandings about how DRAM works that cause this to be a very bad way to model row buffer behavior, or is this a reasonable simplification? I'd also like to underscore that simplicity is the main goal here.

EDIT (clarification from the comments below): I assume that the row buffers of the banks with the same ID, say bank 0 in every chip, work in unison on the same read or write, with all of them holding data for that operation. Couldn't I then model these 8 row buffers (if I have 8 DRAM chips) as just one very big row buffer? The total size of this combined row buffer would be the length of a normal row buffer * the number of DRAM chips. So if I were modelling chips with 8 banks, for example, I'd have 8 of these "combined" row buffers, one for each bank.
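
Continuing the sketch above, the lookup I picture on a cache miss would be something like this (again just an assumption, using the row-interleaved mapping where consecutive combined rows go to consecutive banks):

```c
/* Hypothetical lookup, continuing the sketch above. With the assumed
 * row-interleaved mapping:
 *   row  = addr / COMBINED_ROW_BYTES   (global row index)
 *   bank = row % NUM_BANKS             (consecutive rows -> consecutive banks)
 */
static uint64_t row_hits, row_misses;

void access_dram(uint64_t addr)
{
    uint64_t row  = addr / COMBINED_ROW_BYTES;
    unsigned bank = (unsigned)(row % NUM_BANKS);

    if (banks[bank].valid && banks[bank].open_row == row) {
        row_hits++;                      /* row already open: row-buffer hit */
    } else {
        row_misses++;                    /* close the old row, open the new one */
        banks[bank].valid    = true;
        banks[bank].open_row = row;
    }
}
```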

