Question

I am writing a very basic TCP server. The server keeps track of the state it receives from clients. I documented the message format and published the source. On a 2009 MacBook Pro (2.26 GHz Core 2 Duo, 4 GB RAM), the throughput is very low: about 1 MB/s even when the server and client run on the same machine. I am looking for ways to dramatically increase the throughput.

Both the server's main loop and the client are fairly straightforward. After establishing the connection with the server, the client creates instances of UpdateOneMessage and sends their byte[] representation to the server. From Client.run():

for (int i = 0; i < maxMessageCount; i++) {
  send(new UpdateOneMessage(1 + i, id, "updatedState"));
  // .. read response
}

Client.send() serializes the message and writes it to the DataOutputStream:

private int send(final Message message) throws Exception {
  final byte[] bytes = message.serialize();
  out.write(bytes);
  out.flush();
  return bytes.length;
}

Profiling the client and server with JVM Monitor showed that CPU time was dominated by reading from the InputStreamReader and writing to the DataOutputStream. But at 1 MB/s, this application is not even close to being I/O-bound.

  • What throughput can I expect from my app, considering that each message is fairly small (55 bytes on average)?
  • What else can I do to find the bottlenecks in this simple application?

Solution

The following code

send(new UpdateOneMessage(1 + i, id, "updatedState"));
// .. read response

suggests that you switch the direction of the traffic with every message. That is, the client waits for a response to each request before sending the next one. This architecture puts a hard limit on how fast you can go: the round-trip latency of each message caps the overall throughput of your server.

If you move the client and server to two different locations with some distance between them, you will see an even lower transfer rate. With, say, 1500 km of network between them, each round trip covers 3000 km; even at the speed of light that takes about 10 ms, so you get at most roughly 100 round trips per second. At 55 bytes per message, that is only about 5.5 KB per second.

If you need faster transfer, you can do several things.

  • The most obvious fix is to increase the message size, for example by batching several updates into one message. This helps most over longer distances.
  • Don't wait for the response before sending the next message. Pipelining requests like this can increase throughput tremendously (see the sketch after this list).
  • Use a new connection plus a thread for each request. This way you can have several requests in flight at the same time.
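To illustrate the second point, here is a minimal sketch of a pipelined client. It reuses the UpdateOneMessage class from the question; the response size, host, port, client id and message count are assumptions made for the example.

// Minimal sketch of a pipelined client: send requests without waiting for each response.
// UpdateOneMessage.serialize() comes from the question; RESPONSE_SIZE, host and port are assumptions.
import java.io.*;
import java.net.Socket;

public class PipelinedClient {
  private static final int RESPONSE_SIZE = 8; // assumed fixed size of one server response

  public static void main(String[] args) throws Exception {
    final int messageCount = 100_000;
    final int id = 42; // placeholder client id
    try (Socket socket = new Socket("localhost", 9000)) {
      DataOutputStream out = new DataOutputStream(
          new BufferedOutputStream(socket.getOutputStream()));
      DataInputStream in = new DataInputStream(
          new BufferedInputStream(socket.getInputStream()));

      // Drain responses on a separate thread so the sender never blocks on them.
      Thread reader = new Thread(() -> {
        byte[] response = new byte[RESPONSE_SIZE];
        try {
          for (int i = 0; i < messageCount; i++) {
            in.readFully(response); // consume one response per request
          }
        } catch (IOException e) {
          e.printStackTrace();
        }
      });
      reader.start();

      for (int i = 0; i < messageCount; i++) {
        out.write(new UpdateOneMessage(1 + i, id, "updatedState").serialize());
        // no flush per message: the BufferedOutputStream batches many small writes
      }
      out.flush();
      reader.join();
    }
  }
}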

OTHER TIPS

For speed, you are better off using another protocol, one that saves you both on the number of bytes sent and on processing time.

For example, Google Protocol Buffers are fast and bandwidth-efficient.

http://code.google.com/p/protobuf/
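
As a rough illustration, the wire format could be described in a .proto schema and the generated class used for serialization. Everything below (the schema, the demo.UpdateOne class name, the field names) is an assumption for the sketch; only the builder pattern and the writeDelimitedTo/parseDelimitedFrom calls are standard protobuf-java API.

// Sketch only: assumes a schema roughly like the comment below has been compiled with
// protoc into a Java class demo.UpdateOne (package and class name are assumptions).
//
//   message UpdateOne {
//     optional int32  seq   = 1;
//     optional int64  id    = 2;
//     optional string state = 3;
//   }
//
import java.io.InputStream;
import java.io.OutputStream;

public class ProtobufCodec {

  // Serialize one update and write it with a varint length prefix.
  static void write(OutputStream out, int seq, long id, String state) throws Exception {
    demo.UpdateOne message = demo.UpdateOne.newBuilder()
        .setSeq(seq)
        .setId(id)
        .setState(state)
        .build();
    message.writeDelimitedTo(out);
  }

  // Read the next length-prefixed update from the stream (returns null at end of stream).
  static demo.UpdateOne read(InputStream in) throws Exception {
    return demo.UpdateOne.parseDelimitedFrom(in);
  }
}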

Or, if the objects really are as small as you say, just hand-encode them with a custom protocol.

The aim is to get both the processing needed and the number of bytes sent over the network down to the minimum possible.
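If you go the hand-encoded route, a minimal codec could look roughly like the sketch below. The field set (sequence number, id, state string) mirrors the arguments passed to UpdateOneMessage in the question, but the exact layout is an assumption.

// Minimal sketch of a hand-rolled binary layout (the layout itself is an assumption):
// [int seq][long id][UTF-8 state, length-prefixed by writeUTF]
import java.io.*;

public final class WireCodec {

  static byte[] encode(int seq, long id, String state) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream(64);
    DataOutputStream out = new DataOutputStream(buffer);
    out.writeInt(seq);   // 4 bytes
    out.writeLong(id);   // 8 bytes
    out.writeUTF(state); // 2-byte length prefix + modified UTF-8 bytes
    out.flush();
    return buffer.toByteArray();
  }

  static void decode(byte[] bytes) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
    int seq = in.readInt();
    long id = in.readLong();
    String state = in.readUTF();
    System.out.printf("seq=%d id=%d state=%s%n", seq, id, state);
  }
}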

Licensed under: CC-BY-SA with attribution