JavaScript is single-threaded, which means all the work has to happen on the same thread: queuing and dispatching events (from setTimeout, requestAnimationFrame, key handlers and so on), rendering to canvas, and the rest.
If the loop is very tight (time-budget-wise) there is simply no room for the browser to do other tasks such as garbage collection. Chrome appears to treat GC as secondary, while Firefox gives it higher priority (likely to get more performance out of its engine). In short, the running code blocks the browser from doing anything other than executing that code.

A good indicator of this is that lowering the FPS leaves more room for the event queue, clean-up and so on. When the profiler is running it gets more priority in order to catch everything, so GC seems to "sneak in" earlier (for lack of a better term). But this is very browser-specific and I don't know every underlying detail here.
If the browser cannot drain the event queue, events will eventually pile up and, in the worst case, block, freeze or crash the browser.
In any case, this is hard to debug and pin-point, as you have no programmatic access to memory or CPU usage.
The closest thing is to take a high-resolution timestamp at the beginning and end of the code inside the loop and compare the difference against the time budget per frame.
For example:
```javascript
function loop() {
    var startTime = performance.now();
    // ... other code ...
    var innerLoopTime = performance.now() - startTime;
    requestAnimationFrame(loop);
}
```
If your frame rate is 60 FPS then the time per frame would be 1000/60, or about 16.667ms.
If your innerLoopTime comes very close to this limit, you know you need to optimize the code executed inside the loop, or lower the frame rate.
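To make that check actionable, the loop above can be extended to warn when a frame eats most of its budget. This is just a sketch; the 90% threshold, the helper name isFrameOverBudget and the console.warn are illustrative choices, not anything the browser mandates:

```javascript
// Frame budget for 60 FPS, in milliseconds (~16.667 ms).
var FRAME_BUDGET_MS = 1000 / 60;

// True when the measured inner-loop time uses more than 90% of the
// budget, i.e. there is little headroom left for the browser's own work.
function isFrameOverBudget(innerLoopTime, budget) {
    return innerLoopTime > budget * 0.9;
}

function loop() {
    var startTime = performance.now();
    // ... other code ...
    var innerLoopTime = performance.now() - startTime;
    if (isFrameOverBudget(innerLoopTime, FRAME_BUDGET_MS)) {
        console.warn('Frame took ' + innerLoopTime.toFixed(2) + ' ms');
    }
    requestAnimationFrame(loop);
}
```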
You could use the debugger to get a time cost per step inside the function, but the debugger itself adds overhead to the total. Measuring the time as shown also has a cost, just a lower one; it will be a matter of compromise no matter how you twist and turn this one.
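Lowering the frame rate, the other option mentioned above, can be sketched by keeping requestAnimationFrame running but skipping frames until enough time has elapsed. TARGET_FPS and the helper shouldRunFrame are made-up names for illustration, and 30 FPS is just an example target:

```javascript
var TARGET_FPS = 30;                     // illustrative target frame rate
var FRAME_INTERVAL = 1000 / TARGET_FPS;  // ~33.3 ms between rendered frames
var lastFrameTime = 0;

// True when enough time has passed since the last rendered frame.
function shouldRunFrame(timestamp, last, interval) {
    return timestamp - last >= interval;
}

function throttledLoop(timestamp) {
    requestAnimationFrame(throttledLoop); // always re-schedule first
    if (!shouldRunFrame(timestamp, lastFrameTime, FRAME_INTERVAL)) return;
    lastFrameTime = timestamp;
    // ... per-frame work now runs at roughly TARGET_FPS ...
}
```

Skipping frames this way keeps the callback cheap on the skipped ticks, which leaves the browser the idle time it needs for GC and event handling.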