Question

I know this problem has been brought up many times, and I have read pages of forum posts from Google, but I still haven't found a clear solution. The issue is that some PrimeFaces resources fail to be served, which causes a timeout exception:

JSF1064: Unable to find or serve resource, schedule/schedule.js, from library, primefaces. java.io.IOException: java.util.concurrent.TimeoutException

The error isn't always for one specific resource; it can be many (most of the time it is schedule.js), and it only happens about 60% of the time.

I've even tried manually outputting the script and CSS files, to no avail. All of the PrimeFaces JARs are in the appropriate WEB-INF/lib folder.

Web.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
        <context-param>
            <param-name>javax.faces.PROJECT_STAGE</param-name>
            <param-value>Development</param-value>
        </context-param>
        <context-param>
            <param-name>primefaces.THEME</param-name>
            <param-value>home</param-value>
        </context-param>
        <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>*.xhtml</url-pattern>
            <url-pattern>*.jsf</url-pattern>
        </servlet-mapping>
        <session-config>
            <session-timeout>30</session-timeout>
        </session-config>
        <welcome-file-list>
            <welcome-file>index.xhtml</welcome-file>
        </welcome-file-list>
    </web-app>

Stack trace:

    WARNING:   JSF1064: Unable to find or serve resource, schedule/schedule.js, from library, primefaces.
    WARNING:   java.io.IOException: java.util.concurrent.TimeoutException
    at org.glassfish.grizzly.utils.Exceptions.makeIOException(Exceptions.java:81)
    at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:958)
    at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:682)
    at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:355)
    at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:342)
    at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:161)
    at java.nio.channels.Channels$WritableByteChannelImpl.write(Channels.java:458)
    at com.sun.faces.application.resource.ResourceHandlerImpl.handleResourceRequest(ResourceHandlerImpl.java:343)
    at javax.faces.application.ResourceHandlerWrapper.handleResourceRequest(ResourceHandlerWrapper.java:153)
    at org.primefaces.application.PrimeResourceHandler.handleResourceRequest(PrimeResourceHandler.java:132)
    at javax.faces.webapp.FacesServlet.service(FacesServlet.java:643)
    at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:318)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
    at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
    at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:357)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:260)
    at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:188)
    at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:191)
    at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:168)
    at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:189)
    at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
    at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838)
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:258)
    at java.util.concurrent.FutureTask.get(FutureTask.java:119)
    at org.glassfish.grizzly.http.io.OutputBuffer.blockAfterWriteIfNeeded(OutputBuffer.java:951)
    ... 36 more

Solution

This problem is documented as GlassFish bug #20681. It is an issue with the version of Grizzly (the embedded HTTP server) shipped in GlassFish 4.

There are two issues in play that cause the problem. The first is the small socket write buffer size: the default write buffer size on Red Hat and Ubuntu is 16K, and Grizzly, in order to conserve memory, only allows 4x the write buffer size to be queued before it blocks the writer. In the attached test case there are several artifacts that are over 200K, so when these are written, the initial writes made by the default Servlet exceed the maximum queued data size and cause the writer to block. Because all worker threads are blocked, the async write queue is no longer being drained and, as far as the user is concerned, the page hangs while loading.

Here is where the second issue comes in: by default, there are only five worker threads. In the test case there are at least five requests coming in that ask for large resources, so all of the worker threads end up blocked and all I/O comes to a halt.
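As a side note (not part of the bug report), the 16K figure corresponds to the kernel's default TCP send buffer on those distributions. On Linux you can inspect what Grizzly is starting from with sysctl; these are standard kernel parameters, and the exact values will vary by distribution:

    # TCP send-buffer sizing the kernel hands out: min / default / max, in bytes
    sysctl net.ipv4.tcp_wmem

    # Generic socket write-buffer defaults, for comparison
    sysctl net.core.wmem_default net.core.wmem_max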

As was observed, increasing the write buffer size that GlassFish uses helps alleviate the issue, but at the cost of memory per connection. One could also increase the number of worker threads, but there is no guarantee that all of those threads wouldn't eventually be blocked as the number of clients scales up.
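If you want to experiment with either knob, both can be changed with asadmin on a running domain. This is only a sketch: the dotted names assume a stock GlassFish 4 domain (listener http-listener-1, pool http-thread-pool), the send-buffer-size-bytes attribute name and the values 262144 and 32 are illustrative assumptions on my part, so confirm the attribute names with the get command before setting anything:

    # List the current HTTP listener settings to confirm the attribute names
    asadmin get "server-config.network-config.protocols.protocol.http-listener-1.http.*"

    # Raise the per-connection write (send) buffer Grizzly uses
    asadmin set server-config.network-config.protocols.protocol.http-listener-1.http.send-buffer-size-bytes=262144

    # Raise the HTTP worker thread count (the default maximum is 5)
    asadmin set server-config.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=32

    # Restart the domain so the changes take effect
    asadmin restart-domain

Both are workarounds rather than fixes: as the answer notes, they trade memory per connection or thread count for headroom, and a large enough number of clients can still exhaust them.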

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow