Question

There are a few transports available for OpenEJB 4.0.0:

  1. ejbd
  2. ejbds
  3. httpejbd

Which one is lighter on the network?

Which one is faster?

Are there any pros and cons of choosing any one of them?

Our application has around 450 clients talking to remote EJBs on an OpenEJB 4.0.0 container, all on a local LAN (but using cascading switches with some redundancy).

Update:

This question is not related to another one on timeouts. We haven't identified any timeouts or application problems. The application works very well when we have a limited number of clients, but when we try it with hundreds we face what seem to be network errors: the server logs show a recurring "IOException: unknown byte received". Since CORBA ORBs have been reported to have broadcast problems, we suspected it might be an RMI-over-IIOP kind of problem. We are going to try other protocol options to compare against our current setup.

Update (2012-oct-08):

We have run hundreds of tests now, with 450+ clients on a LAN. There is no one-size-fits-all answer. With very few clients, ejbd is faster. With hundreds of clients, ejbd stops working (it seems to cause switch saturation), while httpejbd keeps working (apparently because each remote call is a short-lived HTTP request).
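For reference, switching a client between these transports is just a change of JNDI provider URL. A minimal sketch, assuming a standalone OpenEJB server; the ports (4201 for ejbd, 4204 for httpejbd) and the `/ejb` path are assumptions from a default install, so verify them against your configuration:

```java
import java.util.Properties;
import javax.naming.Context;

public class ClientEnv {
    // JNDI environment for the ejbd transport.
    // Port 4201 is assumed to be the standalone-OpenEJB default; adjust as needed.
    public static Properties ejbd(String host) {
        Properties p = new Properties();
        p.put(Context.INITIAL_CONTEXT_FACTORY,
              "org.apache.openejb.client.RemoteInitialContextFactory");
        p.put(Context.PROVIDER_URL, "ejbd://" + host + ":4201");
        return p;
    }

    // Same factory, but over httpejbd: each invocation becomes a short-lived
    // HTTP request (port 4204 and the /ejb path are assumptions).
    public static Properties httpejbd(String host) {
        Properties p = new Properties();
        p.put(Context.INITIAL_CONTEXT_FACTORY,
              "org.apache.openejb.client.RemoteInitialContextFactory");
        p.put(Context.PROVIDER_URL, "http://" + host + ":4204/ejb");
        return p;
    }
}
```

A client would then pass one of these `Properties` objects to `new InitialContext(...)`, so comparing transports requires no other code change.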


Solution

httpejbd with Jetty can serve many more clients (thousands), but ejbd is significantly faster in the tens-to-low-hundreds range.

This email has some good info on both from a purely performance perspective:

I'll state once again that the timeouts you're seeing are not related to the client/server performance. A faster client/server layer will actually increase the contention in the server and make server-side locking issues more apparent.

What I'd recommend is dispelling the idea that it is the protocol layer causing your timeout issues. It is more likely the number of clients, not the fact that they are remote. It is possible to execute @Remote beans in the same VM as the server via looking them up from the LocalInitialContextFactory. When this is done you get a client reference that adheres to the remote EJB semantics but does not involve any network plumbing.
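A minimal sketch of that in-VM lookup: the factory class name is OpenEJB's `LocalInitialContextFactory`, but the bean interface and JNDI name below are hypothetical placeholders for whatever your deployment actually exposes:

```java
import java.util.Properties;
import javax.naming.Context;
// import javax.naming.InitialContext;  // needs the OpenEJB container on the classpath

public class LocalLookupSketch {
    // Builds the JNDI environment for an in-VM lookup of a @Remote bean.
    // No provider URL is set, so no network plumbing is involved.
    public static Properties localEnv() {
        Properties p = new Properties();
        p.put(Context.INITIAL_CONTEXT_FACTORY,
              "org.apache.openejb.client.LocalInitialContextFactory");
        return p;
    }

    // Usage inside the server VM ("MyBeanRemote" is a hypothetical JNDI name;
    // yours depends on the bean name and deployment):
    //   Context ctx = new InitialContext(localEnv());
    //   MyBeanRemote bean = (MyBeanRemote) ctx.lookup("MyBeanRemote");
}
```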

Have this client spawn 450 threads, each hitting the server with continuous requests in a loop and doing the kind of work regular clients do. What you'll find is that you can reach the timeouts with likely far fewer than 450 threads (i.e. 450 clients).
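That loop can be sketched roughly as below. The `Runnable` stands in for the business call on the `@Remote` proxy, and the class name, thread count, and iteration count are arbitrary choices for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class LoadSketch {
    // Spawn N in-VM "clients", each invoking the given task in a loop.
    // In the real test, task would be a call on a @Remote proxy obtained
    // via LocalInitialContextFactory, so no network is involved.
    public static long run(int clients, int callsPerClient, Runnable task)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        AtomicLong completed = new AtomicLong();
        for (int i = 0; i < clients; i++) {
            pool.submit(() -> {
                for (int c = 0; c < callsPerClient; c++) {
                    task.run();                 // the "remote" business call
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return completed.get();
    }
}
```

If the timeouts appear with this harness too, the network layer is off the hook.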

Here's a performance analysis of all the ways you can invoke the same object on the same machine:

  1. POJO
  2. @Local
  3. @Remote via LocalInitialContextFactory, server-side
  4. @Remote via RemoteInitialContextFactory, client-side (ejbd)

So if your gut is telling you that it is the network layer slowing things down and resulting in access timeouts, validate that assumption by creating a small performance test and running it with both the LocalInitialContextFactory and the RemoteInitialContextFactory. The LocalInitialContextFactory will show you the theoretical maximum performance you could expect from any remoting layer.

If the problem goes away, you were right and you can proceed with efforts to tune the network layer. If the problem persists or becomes worse, then you know the issue is not the network layer and you'll need to change focus to make progress.

OTHER TIPS

I haven't used any of these protocols, but a generic view can get you started, as I don't see a performance comparison of these three on the internet. A basic, native, low-level implementation is usually faster than higher-level protocols. In this scenario, ejbds is less performant than ejbd because of the SSL handshake overhead. I also believe ejbds is less performant than httpejbd.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow