Java Apache MINA: what is the risk of the "too many open files" error, and how to fix it?

StackOverflow https://stackoverflow.com/questions/22488409


Question

I have a socket application written with Apache MINA, running on Linux.

This time I found a "too many open files" error in the log files. The application is set up with this code:

import java.net.InetSocketAddress;
import java.nio.charset.Charset;
import org.apache.mina.core.service.IoAcceptor;
import org.apache.mina.core.session.IdleStatus;
import org.apache.mina.filter.codec.ProtocolCodecFilter;
import org.apache.mina.filter.codec.textline.TextLineCodecFactory;
import org.apache.mina.filter.logging.LoggingFilter;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

IoAcceptor acceptor = new NioSocketAcceptor();
acceptor.getFilterChain().addLast("logger", new LoggingFilter());
acceptor.getFilterChain().addLast("codec", new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("UTF-8"))));
acceptor.setCloseOnDeactivation(true);
acceptor.setHandler(new ChatHandler());
acceptor.getSessionConfig().setReadBufferSize(2);
acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 10);
acceptor.bind(new InetSocketAddress(15000));

When I tested it with 2-3 clients at the same time, I got this error:

Caused by: java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.makePipe(Native Method) ~[?:?]
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65) ~[?:?]
at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36) ~[?:?]
at java.nio.channels.Selector.open(Selector.java:227) ~[?:?]
at org.apache.mina.transport.socket.nio.NioProcessor.<init>(NioProcessor.java:59) ~[MC.jar:?]

I have googled it, but I don't know what the risk of this exception is. Will this error make my application fail at a transaction or not?

If yes, can someone explain it? And how do I solve it?


Solution

It can be quite a problem. This error (EMFILE; or ENFILE when it is the system-wide table that is exhausted) means that either your process, or the OS as a whole, has too many open file descriptors.

stdin, stdout and stderr are file descriptors; any open file is a file descriptor; and any socket you create is a file descriptor as well. Note where your stack trace fails: in Selector.open(). On Linux, each NIO selector itself consumes descriptors (an epoll instance plus a wakeup pipe), so descriptors are used up even beyond one per client connection.
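To make the leak concrete, here is a minimal, hypothetical sketch (not MINA code; the class name is invented) that keeps opening selectors without ever closing them, until the per-process limit is hit and the same IOException as in your trace appears:

import java.io.IOException;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

public class FdExhaustionDemo {
    public static void main(String[] args) {
        // Each Selector.open() consumes descriptors (on Linux: an epoll
        // instance plus a wakeup pipe) and is never closed here.
        List<Selector> leaked = new ArrayList<>();
        try {
            while (true) {
                leaked.add(Selector.open());
            }
        } catch (IOException e) {
            // Typically "Too many open files" (EMFILE) once ulimit -n is hit.
            System.err.println("Failed after " + leaked.size() + " selectors: " + e);
        }
    }
}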

To check your per-user limit on open files, run:

ulimit -n

To check the system-wide limit, run:

cat /proc/sys/fs/file-max

Generally, it is the user limit which is the problem.

You can try and raise the limit using:

ulimit -n <a greater number here>

but more than likely it won't work (an unprivileged user cannot raise the limit beyond the hard limit). What you need to do is edit /etc/security/limits.conf or, preferably, create a new file in /etc/security/limits.d with a relevant name, and add these two lines:

theuser soft nofile <somelargenumber>
theuser hard nofile <somelargenumber>

Note that in order for these limits to take effect, the user must log out and log in again; if it is a user dedicated to a system service, then restarting this service will do.

Additionally, if you know the PID of the process running your application, you can see the number of currently open file descriptors by issuing the command:

ls /proc/<thepid>/fd | wc -l
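You can also watch the same numbers from inside the JVM. On HotSpot-based JVMs on Unix, com.sun.management.UnixOperatingSystemMXBean exposes the open and maximum descriptor counts; a small sketch (this bean is JVM-specific, so treat its availability as an assumption about your runtime):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdMonitor {
    public static void main(String[] args) {
        Object os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            // Current vs. maximum file descriptors for this process.
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                    + " / max: " + unix.getMaxFileDescriptorCount());
        }
    }
}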

If the kernel limit is the problem (very unlikely, but who knows) then you'll have to edit /etc/sysctl.conf, change the fs.file-max entry, and then run sysctl -p as root.

Other tips

I suspect you are not invoking the dispose() method of the connector (the acceptor, in your case).

This method shuts down the business threads by invoking the ExecutorService's shutdown() method.

It also sets the internal disposed flag to mark the connector as needing to stop; the worker threads stop when they see this flag. Until then, the selectors and sockets it holds stay open and keep consuming file descriptors.
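Assuming the NioSocketAcceptor from the question, an orderly shutdown along these lines should release the selectors and sockets (a sketch against the MINA 2.x IoService API; the wrapper class is invented for illustration):

import org.apache.mina.core.service.IoAcceptor;
import org.apache.mina.core.session.IoSession;

public class AcceptorShutdown {
    public static void shutdown(IoAcceptor acceptor) {
        acceptor.unbind();                  // stop accepting new connections
        for (IoSession session : acceptor.getManagedSessions().values()) {
            session.closeNow();             // MINA >= 2.0.9; older versions use close(true)
        }
        acceptor.dispose(true);             // shut down I/O threads and wait for termination
    }
}

Until dispose() runs, the acceptor's NioProcessor threads keep their selectors (and hence their descriptors) open, even after all clients disconnect.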

Licensed under: CC-BY-SA with attribution