Question

I am currently working on an n-tier system and battling some database performance issues. One area we have been investigating is the latency between the database server and the application server. In our test environment the average ping time between the two boxes is in the region of 0.2 ms; however, on the client's site it's more in the region of 8.2 ms. Is that something we should be worried about?

For your average system, what do you guys consider a reasonable latency, and how would you go about testing/measuring it?

Karl

Solution

In short: no!

What you should monitor is the overall performance of your queries (i.e. transport to the DB + execution + transport back to your server).

What you could do is use a performance counter to monitor how long your queries typically take to execute. You'll probably find the results are well into the millisecond range.
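A minimal sketch of that kind of measurement in Python (my choice of language; the question doesn't name one). sqlite3 is used here only so the snippet runs anywhere; in practice you would swap in your real driver (psycopg2, pyodbc, etc.) so the timed call includes the network round trip to the DB server:

import sqlite3
import statistics
import time

def time_query(conn, sql, runs=100):
    # Time the full round trip of one query, in milliseconds.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(sql).fetchall()
        samples.append((time.perf_counter() - start) * 1000.0)
    return samples

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

samples = time_query(conn, "SELECT count(*) FROM t")
print(f"median {statistics.median(samples):.3f} ms, "
      f"max {max(samples):.3f} ms over {len(samples)} runs")

Looking at the median and the worst case, rather than just the average, keeps a single outlier from skewing the picture.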

There's no such thing as "reasonable latency" in the abstract. What you should consider is the reasonable latency for your project, which varies a lot depending on what you're working on. People don't have the same expectations of a real-time trading platform as of a read-only hobbyist website.

OTHER TIPS

Sorry for the very untimely response, but I stumbled on this question while looking for figures on the network latencies others were achieving between their app server and DB server. I noticed that the other answers play down the latency itself, and I disagree.

In short: yes, network latency (measured by ping) can make a huge difference.

If your database responds in 0.001 ms, then you will see a huge impact going from a 0.2 ms to an 8 ms ping. I've heard that database wire protocols are chatty; if that's true, it means they are hurt more by network latency than HTTP is.

More than likely, if you are running one query, then an extra 8 ms to get the reply from the DB is not going to matter. But if you are issuing 10,000 queries, which generally happens with bad code or non-optimized use of an ORM, then you will wait an extra 80 seconds at an 8 ms ping, whereas at a 0.2 ms ping you would only wait 2 seconds.

As a matter of policy for myself, I never let client applications contact the database directly. I require that client applications always go through an application server (e.g. a REST web service). That way, if I accidentally have an "N+1" ORM issue, it is not nearly as impactful. I would still try to fix the underlying problem, though...
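To illustrate, here is a hypothetical Python sketch of the N+1 pattern next to its single-round-trip equivalent; the tables and data are made up, and sqlite3 merely stands in for a networked database. Over a network, each execute() costs at least one round trip, so the loop pays roughly N times the ping on top of actual execution time:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY);
CREATE TABLE order_items (order_id INTEGER, sku TEXT);
INSERT INTO orders VALUES (1), (2), (3);
INSERT INTO order_items VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# N+1 pattern: one query for the parent rows, then one query per row.
orders = cur.execute("SELECT id FROM orders").fetchall()
items = [
    cur.execute("SELECT sku FROM order_items WHERE order_id = ?",
                (oid,)).fetchall()
    for (oid,) in orders
]

# Same data in a single round trip: let the database do the join.
rows = cur.execute(
    "SELECT o.id, i.sku FROM orders o"
    " JOIN order_items i ON i.order_id = o.id"
).fetchall()

At an 8 ms ping, the loop version costs about 8 ms per order; the join version pays the ping once regardless of how many orders there are.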

On a Linux-based server you can test the effect of latency yourself using the tc command.

For example, this command will add a 10 ms delay to all packets going out via eth0:

tc qdisc add dev eth0 root netem delay 10ms

Use this command to remove the delay:

tc qdisc del dev eth0 root

More details available here: http://devresources.linux-foundation.org/shemminger/netem/example.html
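To verify the effect of those tc rules from the application side (without relying on ICMP ping), a rough sketch in Python is to time TCP connects to the database port; the host and port below are placeholders for your own:

import socket
import time

HOST, PORT = "db.example.com", 5432  # placeholders; use your DB server

samples = []
for _ in range(10):
    start = time.perf_counter()
    # connect() returns after one TCP handshake round trip
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    samples.append((time.perf_counter() - start) * 1000.0)

print(f"min {min(samples):.2f} ms, avg {sum(samples) / len(samples):.2f} ms")

The minimum over several samples is a decent approximation of the latency your database traffic actually sees on that path.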

All applications differ, but I have definitely seen situations where 10 ms of latency has had a significant impact on the performance of a system.

One of the head honchos at answers.com said that, according to their studies, around 400 ms of wait time for a web page load is the point where they first start seeing people cancel the load and go elsewhere. My advice is to look at the whole process, from the original client request to fulfillment; if you're doing well there, there's no need to optimize further. 8.2 ms is roughly 40 times 0.2 ms in a mathematical sense, but in human terms no one can perceive a single 8 ms difference. It's why they have photo finishes in races ;)

Licensed under: CC-BY-SA with attribution