Ok, so I'll answer this one in case anyone finds this question in the future...
I did some thorough research, experimenting with different Expires headers and different approaches, and came up with the following, which I tested on Chrome 31 and Firefox 26.
Without ANY doubt, I must say that the only reasonable thing to do is to create an array and store the image OBJECTS (not just the path references) globally. Like this:
<script type="text/javascript">
// Keep a global reference to each Image object so the browser
// holds the decoded images in its in-memory cache.
var images = [];
var imagePaths = [ <PATHS> ];
for (var i = 0; i < imagePaths.length; i++) {
    var img = new Image();
    img.src = imagePaths[i];  // starts the download immediately
    images[i] = img;          // store the object, not just the path
}
</script>
as opposed to...
<script type="text/javascript">
var imagePaths = [ <PATHS> ];
for (var i = 0; i < imagePaths.length; i++) {
    // The only reference to the Image lives in this loop-local
    // variable, so the object becomes eligible for garbage collection.
    var img = new Image();
    img.src = imagePaths[i];
}
</script>
...which I have very often seen suggested. The latter does get the images into the cache, but they won't necessarily stay in the browser's in-memory cache. The variable in the loop goes out of scope, so sooner or later the Image objects get garbage collected, and the images then have to be reloaded from the disk cache.
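If you want the pattern as something reusable, here's a minimal sketch of a hypothetical helper; the function name, the example paths, and the optional onload callback are my own additions, not part of my tests. The important part is that the caller keeps the returned array alive:

<script type="text/javascript">
// Hypothetical helper: preloads the given paths and returns the
// Image objects. The caller MUST hold on to the returned array
// (e.g. in a global), or the objects can be garbage collected.
function preloadImages(paths, onEachLoad) {
    var result = [];
    for (var i = 0; i < paths.length; i++) {
        var img = new Image();
        if (onEachLoad) {
            img.onload = onEachLoad;  // optional per-image callback
        }
        img.src = paths[i];
        result[i] = img;
    }
    return result;
}

// Kept in a global variable so the references stay alive
// ("a.png" / "b.png" are placeholder paths for illustration).
var preloaded = preloadImages([ "a.png", "b.png" ]);
</script>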
I observed this behaviour many times, and the first loop never performed worse than the second one, in which images would have to be reloaded from time to time. I must have tried this more than a hundred times, and Chrome didn't reload a single image (from cache), whereas Firefox would in some rare cases behave unpredictably and reload, but only when the Cache-Control header was set to 0, no-cache, or was missing altogether.
To sum up: use the first loop and set Cache-Control to a future expiry, and you'll get rid of both conditional GET requests and repeated reads from the disk cache. With this, the browser network consoles stay quiet, even if you leave the computer and come back an hour later to loop through the images :) (Remember, tested on Chrome and Firefox only.)
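For the server side, here's a minimal sketch of how such a far-future Cache-Control header could be set from a plain Node.js http server; the one-year max-age, the hardcoded content type, and the server itself are assumptions for illustration, not part of my tests, and the same header can of course be set in Apache, nginx, or whatever you already run:

// Assumed Node.js setup: serve files with a far-future Cache-Control
// header so the browser never sends conditional GET requests for them.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    // Hypothetical path handling; adjust to your own routing.
    fs.readFile('.' + req.url, function (err, data) {
        if (err) {
            res.writeHead(404);
            res.end();
            return;
        }
        res.writeHead(200, {
            'Content-Type': 'image/png',                 // assumed image type
            'Cache-Control': 'public, max-age=31536000'  // one year
        });
        res.end(data);
    });
}).listen(8080);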