You are a bit mistaken (when saying that linear rescaling misses pixels). Assuming you are rescaling the image by at most a factor of 2, bilinear interpolation takes every pixel of the source image into account. If you smooth the image slightly first and then use bilinear interpolation, you get high-quality results. For most practical cases even bi-cubic interpolation is not needed.
Since bilinear interpolation is extremely fast (it can easily be executed in fixed-point arithmetic), it is by far the best image-rescaling algorithm when dealing with real-time processing.
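To make the fixed-point point concrete, here is a minimal sketch of resampling a single scanline with linear-interpolation weights using only integer arithmetic (the inner loop of a bilinear resize). The 16.16 format and all names are my own illustration, not from any particular library:

```python
# Resample one grayscale scanline with linear weights in 16.16 fixed point.
# No floating point anywhere in the inner loop.

FP = 16                 # number of fractional bits
ONE = 1 << FP

def resample_row(row, dst_w):
    src_w = len(row)
    # Source step per destination pixel, in fixed point.
    step = ((src_w - 1) << FP) // max(dst_w - 1, 1)
    out = []
    pos = 0
    for _ in range(dst_w):
        x0 = pos >> FP              # integer part: left neighbour
        frac = pos & (ONE - 1)      # fractional part: blend weight
        x1 = min(x0 + 1, src_w - 1)
        # Integer blend of the two neighbours, then shift back down.
        out.append((row[x0] * (ONE - frac) + row[x1] * frac) >> FP)
        pos += step
    return out
```

A full 2-D bilinear resize applies the same blend twice, horizontally and vertically; keeping it in integers is what makes it cheap enough for real-time use.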
If you intend to shrink the image by more than a factor of 2, then bilinear interpolation is mathematically wrong, and with larger factors even bi-cubic starts to produce errors. That is why image-processing software (like Photoshop) uses better, though much more CPU-demanding, algorithms.
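One of those "better" approaches for large shrink factors is area averaging (a box filter): every source pixel contributes to the output, so nothing is skipped. A minimal sketch for integer shrink factors, with a grayscale image represented as a list of rows (all names are illustrative):

```python
# Area-averaging (box filter) downscale for an integer shrink factor.
# Unlike bilinear at large factors, every source pixel is used.

def downscale_box(src, factor):
    src_h, src_w = len(src), len(src[0])
    dst_h, dst_w = src_h // factor, src_w // factor
    out = []
    for j in range(dst_h):
        row = []
        for i in range(dst_w):
            # Average the factor x factor block of source pixels.
            total = 0
            for dj in range(factor):
                for di in range(factor):
                    total += src[j * factor + dj][i * factor + di]
            row.append(total // (factor * factor))
        out.append(row)
    return out
```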
The answer to your question is speed consideration.
Given the speed of your CPU/GPU, the image size, and the desired frame rate, you can easily compute how many operations you can afford per pixel. For example, with a 2 GHz CPU and a 1 Gpix image, you can only perform a few operations per pixel per second.
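The back-of-the-envelope budget is just one division; the numbers below (2 GHz treated as roughly 2e9 simple operations per second) are illustrative assumptions:

```python
# How many operations can we spend on each pixel, per frame?

def ops_per_pixel(cpu_ops_per_sec, pixels, fps):
    return cpu_ops_per_sec / (pixels * fps)

# A ~2 GHz CPU, a 1 Gpix image, 1 frame per second:
budget = ops_per_pixel(2e9, 1e9, 1)
print(budget)   # only about 2 operations per pixel
```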
Given the allowed number of calculations, you select the best algorithm you can afford. So the decision is usually driven not by image quality but by speed considerations.
Another note about supersampling: sometimes, if you do it in the frequency domain, it works much better. This is called frequency interpolation. But you would not want to compute an FFT just to rescale an image.
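For the curious, here is a 1-D sketch of frequency interpolation: transform the signal, zero-pad the spectrum in the middle, and inverse-transform at the new length. I use a naive O(n²) DFT to stay dependency-free (real code would use an FFT library), and I ignore the even-length Nyquist-bin subtlety; all names are my own:

```python
# Frequency-domain interpolation of a 1-D signal by spectral zero-padding.
import cmath

def dft(x, inverse=False):
    # Naive DFT: forward is unnormalized, inverse divides by n.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def freq_interpolate(x, new_len):
    n = len(x)
    spec = dft(x)
    # Zero-pad the middle of the spectrum (low frequencies stay at both ends).
    pad = [0j] * new_len
    half = n // 2
    pad[:half] = spec[:half]
    pad[new_len - (n - half):] = spec[half:]
    # Rescale so amplitudes survive the longer inverse transform.
    scaled = [v * new_len / n for v in pad]
    return [v.real for v in dft(scaled, inverse=True)]
```

A pure sinusoid below the Nyquist rate is reconstructed exactly, which is the sense in which this beats spatial-domain filters.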
Moreover, I don't know if you are familiar with back projection. It interpolates from the destination to the source instead of from the source to the destination. Using back projection you can enlarge the image by a factor of 10, use bilinear interpolation, and still be mathematically correct.
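A minimal sketch of that destination-to-source idea: walk the destination grid, map each pixel back to (fractional) source coordinates, and bilinearly sample there. Because every destination pixel gets a value, no holes appear no matter how large the enlargement; names and conventions here are my own illustration:

```python
# Enlarge a grayscale image (list of rows) by inverse mapping:
# for each destination pixel, sample the source bilinearly.

def backproject_resize(src, dst_w, dst_h):
    src_h, src_w = len(src), len(src[0])
    out = []
    for j in range(dst_h):
        # Map the destination row back to a fractional source row.
        sy = j * (src_h - 1) / max(dst_h - 1, 1)
        y0 = int(sy)
        y1 = min(y0 + 1, src_h - 1)
        fy = sy - y0
        row = []
        for i in range(dst_w):
            sx = i * (src_w - 1) / max(dst_w - 1, 1)
            x0 = int(sx)
            x1 = min(x0 + 1, src_w - 1)
            fx = sx - x0
            # Blend the four surrounding source pixels.
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```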