The following code
send(new UpdateOneMessage(1 + i, id, "updatedState"));
// .. read response
suggests that you switch the direction of traffic on every message. That is, you wait for the response to each request before sending the next one. This architecture puts a hard ceiling on how fast you can run: the latency of each round trip caps the overall throughput of your server.
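The cost of this stop-and-wait pattern is easy to demonstrate. Here is a minimal, self-contained sketch; the class name, the `roundTrip` helper, and the 10 ms simulated latency are all illustrative, not part of the original code:

```java
// Stop-and-wait: each send blocks on the previous reply, so total time
// grows linearly with the number of messages times the round-trip latency.
public class StopAndWait {
    // Hypothetical stand-in for send(...) + "read response" above.
    static String roundTrip(String request) throws InterruptedException {
        Thread.sleep(10); // simulated 10 ms network round trip
        return "ack:" + request;
    }

    public static void main(String[] args) throws InterruptedException {
        int messages = 50;
        long start = System.nanoTime();
        for (int i = 0; i < messages; i++) {
            roundTrip("updatedState-" + i); // next send waits for this reply
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // 50 messages * 10 ms round trip => at least ~500 ms total.
        System.out.println(elapsedMs >= messages * 10
                ? "latency-bound: " + messages + " messages took " + elapsedMs + " ms"
                : "unexpected: " + elapsedMs + " ms");
    }
}
```

No matter how fast the server processes each message, the wall-clock time here is dominated by the accumulated round trips.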
If you move the client and server to two locations with some distance between them, the transfer rate gets even worse. With e.g. 1500 km of network between them, the speed of light alone limits you to at most 100 round trips per second: the 3000 km round trip at roughly 300,000 km/s takes 10 ms, and that is before any switching or processing delay. At 55 bytes per message, that is only about 5.5 KB per second.
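The arithmetic behind those numbers can be checked directly (the values below are the ones from the text; real fiber is slower than vacuum light, so actual throughput would be lower still):

```java
// Back-of-envelope check of the 1500 km example above.
public class RoundTripMath {
    public static void main(String[] args) {
        double distanceKm = 1500;             // one-way distance
        double lightSpeedKmPerSec = 300_000;  // speed of light in vacuum
        double roundTripSec = 2 * distanceKm / lightSpeedKmPerSec; // 0.01 s
        double roundTripsPerSec = 1 / roundTripSec;                // 100
        double bytesPerMessage = 55;
        double bytesPerSec = roundTripsPerSec * bytesPerMessage;   // 5500
        System.out.println(roundTripsPerSec + " round trips/s, "
                + bytesPerSec + " bytes/s");
    }
}
```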
If you need a faster transfer rate, you can do several things:
- The most obvious fix is to increase the message size. This gains the most over longer distances.
- Don't wait for a response before sending the next message. This can increase throughput tremendously.
- Use a new connection plus a thread for each request. This way several requests can be in flight at the same time.
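The second and third options can be sketched together: fire off every request without waiting, then collect the replies afterwards. The names, the `sendAsync` helper, and the simulated 10 ms round trip are all assumptions for illustration, not a real API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Pipelined requests: all 50 round trips overlap instead of queuing up,
// so total time is close to one round trip rather than fifty.
public class Pipelined {
    static CompletableFuture<String> sendAsync(String request, ExecutorService pool) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(10); // simulated 10 ms network round trip
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "ack:" + request;
        }, pool);
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();
        long start = System.nanoTime();
        List<CompletableFuture<String>> inFlight = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            inFlight.add(sendAsync("updatedState-" + i, pool)); // no waiting here
        }
        inFlight.forEach(CompletableFuture::join); // collect replies at the end
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        pool.shutdown();
        // Stop-and-wait would need at least 500 ms for 50 x 10 ms round trips.
        System.out.println(elapsedMs < 500 ? "pipelined" : "serialized");
    }
}
```

The trade-off is that you lose the implicit ordering and backpressure of the stop-and-wait scheme: you now need some way to match responses to requests and to bound how many are in flight.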