After extensive testing of my own application I might be able to shed some light on this. The phenomenon I was observing was the following:
When closing a Java SSLSocket, which was opened and is handled in Thread A, from a concurrent Thread B, the close() call sometimes blocks until the next read() in Thread A, which then returns indicating EOF. Between the asynchronous call to close() in Thread B and any read() in Thread A, A can successfully perform write() operations on that socket.
I have now figured out that this is only the case if Thread B performs the close() before the startHandshake() call initiated by Thread A has finished. After that, there seems to be no problem with closing the SSLSocket asynchronously.
This leaves us with the question of how to solve the issue. Obviously, a bit of state-based behaviour would help.
If one can live with a delay for the asynchronous close() in Thread B, calling getSession() before close() seems to work very well, because it makes B wait until A has the SSL session ready. However, this may cause a delay per socket, and may also lead to additional effort in case the close() does not get executed in Thread B before A starts to use the socket.
A better, yet less simplistic, solution would be to work with two uni-directional flags. One (handshakeDone) would be used by A to indicate that the SSL handshake has been completed (there is no non-blocking API way for B to find this out). The other (toBeClosed) would be used by B to indicate that the socket is supposed to be closed. A would check toBeClosed after the handshake has been performed. B would call close() if handshakeDone is set, or set toBeClosed otherwise.
Note that for this to succeed, there need to be atomic blocks in both A and B. I'll leave the specific implementation (possibly optimized as compared to the algorithm described above) up to you.
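One way the two-flag scheme could be sketched is below; the class and method names are illustrative only, and the "atomic blocks" are realized here simply as synchronized methods on a shared object. The flag names handshakeDone and toBeClosed match the ones above.

```java
import java.io.IOException;
import javax.net.ssl.SSLSocket;

// Hypothetical coordinator shared between Thread A (socket owner) and
// Thread B (asynchronous closer).
public class HandshakeAwareCloser {

    private final SSLSocket socket;
    private boolean handshakeDone = false; // written only by Thread A
    private boolean toBeClosed = false;    // written only by Thread B

    public HandshakeAwareCloser(SSLSocket socket) {
        this.socket = socket;
    }

    // Thread A: call right after startHandshake() has returned.
    public synchronized void handshakeFinished() throws IOException {
        handshakeDone = true;
        if (toBeClosed) {
            // B requested a close while the handshake was still running;
            // perform the deferred close now that it is safe.
            socket.close();
        }
    }

    // Thread B: request an asynchronous close at any time.
    public synchronized void requestClose() throws IOException {
        if (handshakeDone) {
            socket.close();    // safe: the handshake is over
        } else {
            toBeClosed = true; // defer: A will close after the handshake
        }
    }

    public synchronized boolean isCloseRequested() {
        return toBeClosed;
    }
}
```

Because both methods synchronize on the same object, B can never observe a half-updated state: either it sees handshakeDone and closes directly, or its toBeClosed flag is guaranteed to be visible to A's post-handshake check.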
There may be other situations where asynchronous close() calls on SSL sockets misbehave, though.