Question

On my site, I have an average latency (when not backing up, etc.) of ~150ms for a particular AJAX file (the crux of the UI). I've reduced that from ~250ms with a few server-side/database tricks, and I think there's one last trick that might shave another 10ms or so off the current ~30ms total for the actual PHP/MySQL portion of the page.

I'm using keep-alive, so I think the SSL handshake is more or less totally out (but I hope to move to SPDY soon, so I don't really know how that helps after the initial handshake).

When I ping, it averages ~55ms.

I make a connection to MySQL at the beginning of the file and close it at the end. I'm pretty sure that costs around ~10ms.

So where does the remaining ~55ms come from?

This may seem totally obsessive, but this is for rapid dynamic pagination, and the effect is seriously degraded by each ms of latency.

Many thanks in advance!


Solution

If you have an HTTP connection already established, you should be able to run a single short, simple HTTP request in about the same time as a ping.

To test this, time how long it takes to GET a static file.

Next, try to GET a small chunk of data from a PHP page that doesn't use any libraries.
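The two measurements above can be sketched in a self-contained way. The original stack is PHP, but as a demo of the technique, here's a Python script that times repeated GETs over one keep-alive connection to a throwaway local server; against your real server you'd point the client at the static file and then at the PHP endpoint and compare averages:

```python
# Times repeated GETs over a single keep-alive connection to a local
# throwaway server, isolating network/server overhead from app time.
import http.server
import threading
import time
from http.client import HTTPConnection

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_address[1])
samples = []
for _ in range(20):
    t0 = time.perf_counter()
    conn.request("GET", "/static.txt")  # path is arbitrary in this demo
    conn.getresponse().read()
    samples.append(time.perf_counter() - t0)
conn.close()
server.shutdown()

print(f"avg GET latency: {sum(samples) / len(samples) * 1000:.2f} ms")
```

Over localhost the numbers will be tiny; the point is the method: same connection, many samples, averaged.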

Next, try adding your require statements and libraries without changing the output. This can be significant: e.g. loading Zend and a few of its packages easily takes 40ms even with xcache on a generally fast system. You may also want to change the way PHP is run: e.g. Apache prefork mod_php has to start a new process, and PHP has to load its libraries, for every request. If you switch to FastCGI, you can preload the required libraries and open database connections in advance, removing the corresponding time from the perceived latency.
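The mod_php-versus-preloading point can be illustrated by analogy in Python: spawning a fresh interpreter and importing libraries per "request" (like prefork mod_php) versus a long-lived worker that loaded everything once (like a preloading FastCGI setup). The module choices and iteration counts are arbitrary:

```python
# Cold path: new interpreter + library imports on every "request",
# analogous to prefork mod_php loading PHP libraries per request.
import subprocess
import sys
import timeit

def cold_request():
    subprocess.run([sys.executable, "-c", "import json, email, decimal"],
                   check=True)

t_cold = timeit.timeit(cold_request, number=5) / 5

# Warm path: the libraries are already resident in the worker process,
# analogous to a FastCGI worker that preloaded them at startup.
import json

t_warm = timeit.timeit(lambda: json.dumps({"a": 1}), number=5) / 5

print(f"cold start per request: {t_cold * 1000:.1f} ms")
print(f"warm worker per request: {t_warm * 1000:.3f} ms")
```

The cold path costs milliseconds per request; the warm path costs microseconds. That gap is the latency a preloading setup removes.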

Next, add some database queries.

Next upgrade to AJAX.

Now, AJAX usually makes a POST request, which in HTTP 1.1 means an Expect: 100-continue header and one more round trip. Try disabling that header.

Finally, record your query and response and strip out everything you don't need. Ideally you want the request and the response each to be under 1K, though if your connection is kept alive, the TCP window grows and after a while it may be fine to push, say, 16K messages. The request is relatively easy: it's usually smallish, so remove unneeded cookies, etc. The response is harder, since it's your data: try compression, or send only the data that is actually used, without formatting, styles, or anything else that could be done on the client side.
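To get a feel for how much a payload diet can buy, here's a small comparison of a verbose JSON response against a trimmed and gzipped one. The field names and sizes are invented for illustration:

```python
# Compares a verbose JSON response against a trimmed + gzipped one,
# the kind of payload diet suggested above. Field names are made up.
import gzip
import json

# Verbose: descriptive keys plus server-rendered markup per row.
verbose = json.dumps([
    {"article_id": i,
     "article_title": f"Item {i}",
     "rendered_html": "<div class='row'>...</div>" * 5}
    for i in range(50)
]).encode()

# Trimmed: bare values only; the client supplies keys and markup.
trimmed = json.dumps([[i, f"Item {i}"] for i in range(50)]).encode()

print(f"verbose:      {len(verbose):>6} bytes")
print(f"trimmed:      {len(trimmed):>6} bytes")
print(f"trimmed+gzip: {len(gzip.compress(trimmed)):>6} bytes")
```

Dropping redundant keys and client-renderable markup, then compressing, routinely shrinks a pagination payload by an order of magnitude, which keeps each response inside the small-window regime the answer describes.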

OTHER TIPS

For any question about performance issues, the only real answer is to use a profiler. Which tool to choose depends on your preferences, each profiler's features, and other considerations.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow