Question

CDNs seem like a tremendously brute-force approach to improving website performance across the world, since they rely on thousands of machines placed close to the end user to ensure good throughput.

Are there any ways of improving performance over long, high-latency, or slow-ish links (e.g., UK to Australia) beyond the "usual" methods of reducing the size and number of requests, or is the only other way to have servers closer to the user?

Solution

You can't circumvent latency by reducing size. Just make sure your server supports keep-alive, that everything which should be cached actually comes with appropriate Expires: headers, and that your HTML is reasonably sized (and gzip-compressed) - then see how far you get with that and whether multi-homing is still necessary.
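
As a rough illustration of those three points, here is a minimal sketch using only the Python standard library (the port, the 30-day cache lifetime, and the response body are arbitrary assumptions; in practice you would configure keep-alive, Expires/Cache-Control and gzip in Apache or nginx rather than hand-roll a handler):

    import gzip
    import time
    from email.utils import formatdate
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    CACHE_SECONDS = 30 * 24 * 3600  # illustrative 30-day lifetime for cacheable assets

    class Handler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"  # HTTP/1.1 gives you persistent (keep-alive) connections

        def do_GET(self):
            body = gzip.compress(b"<html><body>Hello</body></html>")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Encoding", "gzip")        # smaller payload over the slow link
            self.send_header("Content-Length", str(len(body)))  # needed so keep-alive can reuse the connection
            self.send_header("Cache-Control", "max-age=%d" % CACHE_SECONDS)
            self.send_header("Expires", formatdate(time.time() + CACHE_SECONDS, usegmt=True))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        ThreadingHTTPServer(("", 8000), Handler).serve_forever()

Each of these is a one-line setting in most servers; none of them removes the round-trip time, they just make sure you pay it as few times as possible.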

OTHER TIPS

It sounds mental, but put references to re-usable scripts at the bottom of your HTML, just before the closing </body> tag (the markup is perfectly valid).

In tests I've found the subjective impression of speed greatly improved, as the HTML and images are shown while the scripts are downloaded and parsed.

Hat tip to the ACM for that one.

Ye cannae beat the laws of physics. The speed of light might be fast, but it's still finite, and so distance and the number of things you are actually downloading will matter, as well as their size.
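
To put a rough number on that: assuming a ~17,000 km UK-to-Australia path and light travelling at roughly two-thirds of c in optical fibre (both figures are back-of-the-envelope assumptions; real cable routes are longer), the floor on round-trip time alone is substantial:

    # Back-of-the-envelope latency floor, UK to Australia (assumed figures)
    SPEED_OF_LIGHT_KM_S = 300_000     # km/s in a vacuum
    FIBRE_FACTOR = 2 / 3              # propagation speed in fibre, roughly
    DISTANCE_KM = 17_000              # approximate great-circle distance

    one_way_s = DISTANCE_KM / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    print("Minimum round trip: %.0f ms" % (2 * one_way_s * 1000))  # ~170 ms

    # Every sequential request pays at least that round trip, which is why
    # cutting the number of requests (and reusing connections) matters so much.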

Having a server close to a backbone IXP (Internet exchange point) would help.

Excluding the 'usual' ways, there isn't any other way to influence the performance of the traffic; you are at the mercy of the network elements between you and your destination.

Some groups are starting to use P2P networks for distributing large files (Sky, the BBC, and others use them for their download services), but to be honest, getting the file as close to the last mile as possible is without doubt the best solution.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow