Can newer web containers supporting Servlet 3 extend BlazeDS's max number of simultaneous users?

StackOverflow https://stackoverflow.com/questions/9121316

  •  22-04-2021

Question

BlazeDS is implemented as a servlet and thus limited to roughly hundreds of simultaneous users.

I wonder if the more recent web containers (Tomcat 7, GlassFish/Grizzly, Jetty, etc.) supporting Servlet 3 could be used to create NIO endpoints to increase the number of simultaneous users to the thousands?

Is this a valid and practical solution? Anyone do this in production?

Something like a mature version of this: http://flex.sys-con.com/node/720304. If this was of great importance back then, why has there been no effort to implement NIO endpoints now that Servlet 3 is widely available? (Note: I'm a newbie here, so feel free to state the obvious if I'm missing something.)

Benefit of NIO: http://www.javalobby.org/java/forums/t92965.html

If not, is a load balancer in front of multiple application servers, each running an instance of BlazeDS, the recommended solution (outside of moving to LCDS, etc.)?


Solution

GraniteDS & Asynchronous Servlets

GraniteDS is, as far as I know, the only solution that implements asynchronous servlets for real-time messaging, i.e. data push. This feature is available not only for Servlet 3 containers (Tomcat 7, JBoss 7, Jetty 8, GlassFish 3, etc.) but also for older or other containers with container-specific asynchronous support (e.g. Tomcat 6's CometProcessor, WebLogic 9+'s AbstractAsyncServlet, etc.).

Other solutions either don't have this feature (BlazeDS) or use RTMP (LCDS, WebORB and the latest version of Clear Toolkit). I can't say much about the RTMP implementations, but BlazeDS is clearly missing a scalable real-time messaging implementation, as it uses only a synchronous servlet model.

If you need to handle many thousands of concurrent users, you can even create a cluster of GraniteDS servers in order to further improve scalability and robustness (see this video for example).

Asynchronous Servlets Performance

The scalability of asynchronous servlets vs. classical servlets has been benchmarked several times, with impressive results. See, for example, this post on the Jetty blog:

With a non NIO or non Continuation based server, this would require around 11,000 threads to handle 10,000 simultaneous users. Jetty handles this number of connections with only 250 threads.

Classical synchronous model:

  • 10,000 concurrent users -> 11,000 server threads.
  • Ratio: 1.1 threads per user.

Comet asynchronous model:

  • 10,000 concurrent users -> 250 server threads.
  • Ratio: 0.025 threads per user.
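The two ratios above are simply threads divided by concurrent users, as this quick check confirms:

```java
// Threads-per-connection ratios from the Jetty benchmark quoted above.
public class RatioCheck {
    public static void main(String[] args) {
        double synchronous = 11000.0 / 10000; // classical model
        double asynchronous = 250.0 / 10000;  // Comet model
        System.out.println(synchronous);      // 1.1
        System.out.println(asynchronous);     // 0.025
    }
}
```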

A similar ratio can reasonably be expected from other asynchronous implementations (not just Jetty), and using Flex/AMF3 instead of plain-text HTTP requests shouldn't change the result much.

Why Asynchronous Servlets?

The classical (synchronous) servlet model is acceptable when each request is processed immediately:

    request -> immediate processing -> response

The problem with data push is that there is no true "data push" in the HTTP protocol: the server cannot initiate a call to the client to send data; it can only answer a request. That's why Comet implementations rely on a different model:

    request -> wait for available data -> response
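This "request -> wait -> response" model can be sketched in plain Java (a conceptual illustration with hypothetical names, not the GraniteDS or Servlet API): each pending request is represented by a `CompletableFuture` that the server completes only when data arrives, so no thread is blocked while the client waits.

```java
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

// Conceptual Comet model: a pending request is a future held in a queue;
// "pushing" data means completing every queued future.
class PushChannel {
    private final Queue<CompletableFuture<String>> waiting = new ConcurrentLinkedQueue<>();

    // Client side: issue a request and wait (without blocking a thread).
    CompletableFuture<String> poll() {
        CompletableFuture<String> response = new CompletableFuture<>();
        waiting.add(response);
        return response;
    }

    // Server side: data became available -- answer all waiting requests.
    void publish(String data) {
        CompletableFuture<String> response;
        while ((response = waiting.poll()) != null) {
            response.complete(data);
        }
    }
}

public class CometSketch {
    public static void main(String[] args) {
        PushChannel channel = new PushChannel();
        CompletableFuture<String> a = channel.poll();
        CompletableFuture<String> b = channel.poll();
        channel.publish("stock update");   // the "push"
        System.out.println(a.join());      // stock update
        System.out.println(b.join());      // stock update
    }
}
```

A real asynchronous servlet achieves the same effect with `ServletRequest.startAsync()`: the container detaches the request from its worker thread and the response is written later, when the message broker has something to deliver.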

With synchronous servlet processing, each request is handled by one dedicated server thread. However, in the context of data push processing, this thread spends most of its time just waiting for available data, doing nothing while still consuming significant server resources.

The whole purpose of asynchronous processing is to let the servlet container reuse these (mostly idle) threads to process other incoming requests, which is why you can expect dramatic improvements in scalability when your application requires real-time messaging features.
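The cost of the synchronous model can be demonstrated with a rough, self-contained sketch (plain Java threads with assumed names, not real container code): every client waiting for a push pins one thread that sits idle until the data "arrives".

```java
import java.util.concurrent.CountDownLatch;

// Synchronous-model illustration: one dedicated, idle thread per waiting client.
public class ThreadPerClientDemo {
    public static void main(String[] args) throws InterruptedException {
        int clients = 50;
        CountDownLatch dataAvailable = new CountDownLatch(1); // the "push"
        CountDownLatch allWaiting = new CountDownLatch(clients);

        int before = Thread.activeCount();
        for (int i = 0; i < clients; i++) {
            new Thread(() -> {
                allWaiting.countDown();
                try {
                    dataAvailable.await(); // blocked: thread pinned but doing nothing
                } catch (InterruptedException ignored) { }
            }).start();
        }
        allWaiting.await(); // every simulated client is now parked
        System.out.println("extra threads while waiting: "
                + (Thread.activeCount() - before)); // roughly one per client
        dataAvailable.countDown(); // data "pushed": all threads respond and exit
    }
}
```

An asynchronous container avoids exactly this: the request is suspended, the thread is returned to the pool, and only data arrival consumes a thread briefly to write the response.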

You can find many other resources on the Web explaining this mechanism; just search for "Comet".

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow