Question

I am using WebLogic 12c on AIX. When I set ulimit to unlimited at the OS level and ulimit to 8192 in WebLogic's commEnv.sh, I get frequent "Too many open files" errors.

But when I set ulimit to 2048 at the OS level and ulimit to 8192 in commEnv.sh, the server works properly.

Please answer the following questions.

  1. Will WebLogic override the value set at the OS level?
  2. How do I calculate an appropriate ulimit value?
  3. Will errors occur if I increase the ulimit value dramatically, or should it be restricted?
  4. Do the other ulimit parameters, such as stack size and max memory size, need to be set in proportion to the file descriptor value, or can they be set to unlimited at the OS level?

I have also tried deploying the server with ulimit=2000 at the OS level and the setting disabled in commEnv.sh, but I again get "Too many open files" errors.


Solution

I'm a bit rusty with my AIX, but it doesn't sound like you're setting your ulimits correctly. I don't believe you can simply say "ulimit="; you need to tell it which limit you want to set. For example, to specify the maximum number of open file descriptors for a process, you would run: ulimit -n 2000

To answer your specific questions:

  1. Your default ulimits are set in /etc/security/limits. There is a soft limit and a hard limit. The soft limit is the current setting, whereas the hard limit is the maximum it may be set to. That being said, Weblogic can "override" the soft limit so long as it does not exceed the hard limit.
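As an illustration, the soft and hard values for the open-files limit can be inspected and adjusted from any POSIX shell (the -S/-H flags shown here are standard and behave the same way on AIX):

```shell
# Show the soft (current) and hard (ceiling) limits for open files.
ulimit -Sn   # soft limit: what processes started from this shell get
ulimit -Hn   # hard limit: the most the soft limit may be raised to

# A process may raise its own soft limit, but only up to the hard limit:
ulimit -Sn "$(ulimit -Hn)"
```

This is why WebLogic can "override" the OS soft limit at startup: the shell that launches it is free to raise its soft limit, as long as the request stays at or below the hard limit.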

  2. I'm not sure what you're asking here. You can display current ulimit settings by running "ulimit -a"

  3. This really depends on your environment and what you're doing. It's generally unwise to set your ulimits to unlimited for the very reason you're asking this question: you can get bad results if you exceed your system resources. I know companies like Oracle will tell you to set everything to unlimited, but that is poor system administration in my opinion.

  4. The file descriptor value (-n) is completely separate from stack size, etc. Whether you need to tweak the other ulimits really depends on what you're doing. I know that in our WebLogic environment, we did set the maximum file size ulimit (-f) to unlimited, against my better judgment, while making minimal changes, if any, to the other limit settings. I do believe we had to increase the nofiles descriptor limit to 2000, as you mentioned.
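To make that separation concrete, each resource has its own ulimit flag, and changing one leaves the others untouched (a sketch; the exact units are platform-dependent, e.g. -f is counted in 512-byte blocks on AIX):

```shell
# Each limit is queried and set independently of the others:
ulimit -n   # open file descriptors (nofiles)
ulimit -s   # stack size
ulimit -f   # maximum size of a file the process may create
ulimit -m   # maximum resident memory

# Raising the file-descriptor limit does not touch stack size, etc.:
ulimit -n 2000 2>/dev/null || true   # may fail if above the hard limit
ulimit -s                            # unchanged by the line above
```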

So back to your bigger question/problem. It sounds like you're simply not setting your ulimit correctly for nofiles (-n). It sounds like it needs to be increased, perhaps to 2000 as you said. Try adding "ulimit -n 2000" to your commEnv.sh, but make sure that does not exceed your hard limit or it won't work.
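A minimal sketch of what that addition to commEnv.sh could look like (the 2000 figure and the variable names are illustrative, not part of the stock script; it caps the request at the hard limit so the call cannot fail):

```shell
#!/bin/sh
# Hypothetical snippet for commEnv.sh: raise the soft open-files limit
# to WANTED_FDS, but never beyond the hard limit the OS allows.
WANTED_FDS=2000
HARD_FDS=$(ulimit -Hn)

if [ "$HARD_FDS" = "unlimited" ] || [ "$WANTED_FDS" -le "$HARD_FDS" ]; then
    ulimit -n "$WANTED_FDS"
else
    # The requested value exceeds the hard limit; settle for the ceiling.
    ulimit -n "$HARD_FDS"
fi

echo "open-files soft limit is now: $(ulimit -Sn)"
```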

I hope that helps.

OTHER TIPS

Alternatively, you can follow the link below to set the ulimit value in the OS itself, which is more robust than setting it with ulimit -n each time.

http://www.techpaste.com/2011/07/12/tuning-application-server-file-descriptor-limits/

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow