Question

My application is queried by a single HTTP request every minute. It intermittently times out and becomes unresponsive when c3p0 runs CullExpired and other background threads. This happens randomly and at infrequent intervals. Every time the application times out, I see c3p0 background threads trying to clean up or evict idle connections, and there are NO other exceptions in the log. After some time the application recovers automatically and resumes processing. Has anyone experienced issues like this?

    c3p0 version is <version>0.9.1.2</version>
    hibernate version is <version>3.3.2.GA</version>

My c3p0 config is:

<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
        destroy-method="close" 
        p:driverClass="#{['app.jdbc.driverClassName']}"
        p:jdbcUrl="#{['app.jdbc.url']}" 
        p:user="#{['app.jdbc.username']}" 
        p:password="#{['app.jdbc.password']}"
        p:acquireIncrement="5" 
        p:idleConnectionTestPeriod="80"
        p:maxAdministrativeTaskTime="600" 
        p:numHelperThreads="10"
        p:maxIdleTime="60" 
        p:maxPoolSize="20" 
        p:maxStatements="30"
        p:minPoolSize="10" />

No correct solution

OTHER TIPS

c3p0's background threads are always around; they run in a thread pool whose size you've set to 10 (numHelperThreads). If you examine stack dumps under your config, you'll see c3p0 tasks like CullExpired running quite frequently; their frequency is on the same order of magnitude as the config settings that expire resources. In your case, maxIdleTime is 60 seconds, so cull tasks are probably running every 20 seconds or so. c3p0's administrative tasks are carefully designed to be lightweight: they avoid holding locks during I/O and contend with other work as little as possible. So something odd is happening if these admin tasks really are causing your hangs. But it's hard to distinguish cause from coincidence here: c3p0's helper threads are always around, and admin tasks run frequently.

maxIdleTime is one possible explanation for your problem. The config you are using is not so great. One client Connection per minute is an exceedingly small load for c3p0, yet you have a minPoolSize of 10 Connections. So c3p0 grabs 10 Connections, holds them for about one minute, then expires and reacquires them all, which is a lot of simultaneous overhead. Your idleConnectionTestPeriod of 80 seconds is unhelpful: idle Connections will never be tested, because they are expired after 60 seconds of idleness, before the test period has elapsed. I'd also drop acquireIncrement back down to its default of 3.

I'd try a better config and see if that fixes the problem. Given the load you describe, I'd leave minPoolSize at its default of 3 and set numHelperThreads to 3. As a first pass, I'd set maxIdleTime to its default of zero, but set testConnectionOnCheckout to true. This is the simplest and most reliable form of Connection testing, but it exacts a client-visible performance cost. To minimize that cost, set a preferredTestQuery rather than relying on the slow default Connection test. Often "SELECT 1" works, but it may depend on your database/JDBC driver. If things look good, you might go bolder and try a slightly more performant, slightly less robust Connection testing strategy: set idleConnectionTestPeriod to a relatively small value (e.g. 30), and set testConnectionOnCheckin to true (and testConnectionOnCheckout back to its default of false). See [ http://www.mchange.com/projects/c3p0/#configuring_connection_testing ]

Also, I'd turn Statement caching off for now (set maxStatements to 0), and turn it back on later, once things are stable, to test whether it improves the performance of your application. [ that is an if -- see http://www.mchange.com/projects/c3p0/#known_shortcomings ]
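Putting the suggestions above together, a revised dataSource bean might look like this sketch. The SpEL property placeholders are carried over verbatim from your original bean definition; verify that "SELECT 1" is a valid test query for your database:

```xml
<!-- Sketch of the suggested configuration, not a drop-in replacement;
     property placeholders are carried over from the original bean. -->
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
        destroy-method="close"
        p:driverClass="#{['app.jdbc.driverClassName']}"
        p:jdbcUrl="#{['app.jdbc.url']}"
        p:user="#{['app.jdbc.username']}"
        p:password="#{['app.jdbc.password']}"
        p:acquireIncrement="3"
        p:numHelperThreads="3"
        p:minPoolSize="3"
        p:maxPoolSize="20"
        p:maxIdleTime="0"
        p:testConnectionOnCheckout="true"
        p:preferredTestQuery="SELECT 1"
        p:maxStatements="0" />
```

With maxIdleTime at 0, Connections are no longer flushed and reacquired every minute, which removes the cycle that coincides with your hangs; correctness of the pool is then guaranteed by the checkout test instead.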

I'd also recommend updating to the latest version of c3p0 [c3p0-0.9.2-pre5]. Connection acquisition is a bit more lightweight in 0.9.2, and part of your problem might have to do with your every-minute flush-and-reacquire cycles. In general, I think the 0.9.2-pre series is pretty stable now and worth using.
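If you use Maven, the dependency change would look roughly like the following. These coordinates are an assumption based on the 0.9.2 series having moved to the com.mchange groupId; verify the exact artifact on Maven Central before relying on them:

```xml
<!-- Assumed coordinates for the 0.9.2 pre-release series;
     confirm groupId/version against Maven Central. -->
<dependency>
    <groupId>com.mchange</groupId>
    <artifactId>c3p0</artifactId>
    <version>0.9.2-pre5</version>
</dependency>
```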

I hope this helps!

Licensed under: CC-BY-SA with attribution