Question

We experienced an out of memory error in our production environment last week. This out of memory error occurs maybe once a week, and the current workaround is to restart the application server. We are using GlassFish 3.0.1. The heap dump generated was around 5 GB.

Please help in analyzing the heap dump. Here is the leak suspects report generated using Eclipse MAT. How do we analyze the report below?

One instance of "com.sun.enterprise.v3.services.impl.monitor.stats.ConnectionQueueStatsProvider" loaded by "org.apache.felix.framework.ModuleImpl$ModuleClassLoader @ 0x602650970" occupies 2,104,143,312 (87.97%) bytes. The instance is referenced by org.glassfish.flashlight.impl.client.ReflectiveClientInvoker @ 0x600a63768, loaded by "org.apache.felix.framework.ModuleImpl$ModuleClassLoader @ 0x60265dd38". The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class loader>".

Keywords
org.apache.felix.framework.ModuleImpl$ModuleClassLoader @ 0x602650970
org.apache.felix.framework.ModuleImpl$ModuleClassLoader @ 0x60265dd38
java.util.concurrent.ConcurrentHashMap$Segment[]
com.sun.enterprise.v3.services.impl.monitor.stats.ConnectionQueueStatsProvider



Solution 4

We think we found the answer. We saw a similar bug reported in the GlassFish JIRA: https://java.net/jira/browse/GLASSFISH-16254. It seems to be a bug in GlassFish 3.0.1.

They had an out of memory error when GlassFish monitoring for the thread pool and HTTP service was turned on, which is the exact setup we had.

We turned off the GlassFish monitoring and we have now been running stable for one week without any out of memory errors.
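For reference, one way to turn those monitoring levels off in GlassFish 3.x is via asadmin (it can also be done in the admin console under Monitoring). The attribute paths below are the standard module-monitoring-levels settings, so treat this as a sketch and verify them against your own domain configuration:

# Disable monitoring for the HTTP service and thread pool modules
asadmin set server.monitoring-service.module-monitoring-levels.http-service=OFF
asadmin set server.monitoring-service.module-monitoring-levels.thread-pool=OFF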

Thanks for everyone's help!

Other suggestions

Please check your flow of function calls: if you are opening database connections and never closing them, those open connections will create a memory leak.

See this conversation for further reference: Database connections and OutOfMemoryError: Java Heap Space.
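As an illustration of the kind of fix this advice points at, here is a minimal sketch of closing JDBC resources with try-with-resources; the DAO class, DataSource and SQL query are hypothetical placeholders, not code from the application in question:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class OrderDao {
    private final DataSource dataSource;

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public long countOrders() throws SQLException {
        // try-with-resources (Java 7+) closes the ResultSet, Statement and
        // Connection even when an exception is thrown, so connections cannot
        // pile up; on Java 6, do the same closing in a finally block.
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT COUNT(*) FROM orders")) {
            resultSet.next();
            return resultSet.getLong(1);
        }
    }
}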

It's hard to analyse the cause with so little information.

According to the report, it may be a DB connection problem.

TRY:

  • Confirm what is being held by ConnectionQueueStatsProvider (probably the java.util.concurrent.ConcurrentHashMap$Segment[]).

  • Open the source code and find out what is stored in ConnectionQueueStatsProvider's ConcurrentHashMap.


If the java.util.concurrent.ConcurrentHashMap$Segment[] takes up most of the space, your app may have DB connection problems.

There is only one usage of java.util.concurrent.ConcurrentHashMap in ConnectionQueueStatsProvider:

private final Map<Integer, Long> openConnectionsCount = new ConcurrentHashMap<Integer, Long>();

Check your code and make sure the DB connections are closed.

Well, MAT is pretty obvious here. You have an instance of ConnectionQueueStatsProvider, which has a huge openConnectionsCount map. It seems you fill this map constantly, but never remove anything from it. Memory leak if I ever saw one :)
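To make that pattern concrete, here is a minimal, self-contained sketch of the kind of growth MAT is pointing at: a map that is written to on every event but never cleaned up. The class and method names are illustrative, not GlassFish's actual code:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LeakyStatsProvider {
    // Entries are added per connection; if they are never removed, the map
    // (and the Segment[] backing the ConcurrentHashMap) retains ever more memory.
    private final Map<Integer, Long> openConnectionsCount =
            new ConcurrentHashMap<Integer, Long>();

    public void connectionOpened(int connectionId) {
        openConnectionsCount.put(connectionId, System.currentTimeMillis());
    }

    // The missing counterpart: without a removal like this on close,
    // every connection ever seen stays in the map for the life of the JVM.
    public void connectionClosed(int connectionId) {
        openConnectionsCount.remove(connectionId);
    }
}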

In the future, you might be interested in Plumbr, which was created to find such problems with much less effort.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow