Problem

Here is some elementary code trying to use OOB (urgent) data. My problem is that the server part doesn't behave the same depending on whether the client is written in C or in Java. Be careful: you may suspect something tricky on the client side, but if I use a C server instead (to get finer control of OOB), then both clients behave exactly the same, whatever my server-side OOB handling is.
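
For reference, a minimal sketch of such a C server (an illustration, not necessarily my exact test server; it assumes SO_OOBINLINE so the urgent byte is delivered inline, mirroring the Java server's setOOBInline(true)):

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in a;
    int ls = socket(PF_INET, SOCK_STREAM, 0);
    memset(&a, 0, sizeof(a));
    a.sin_family = AF_INET;
    a.sin_port = htons(61234);
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(ls, (struct sockaddr *)&a, sizeof(a));
    listen(ls, 1);
    printf("Accepting connection\n");
    int s = accept(ls, NULL, NULL);
    int on = 1;
    setsockopt(s, SOL_SOCKET, SO_OOBINLINE, &on, sizeof(on)); /* deliver the urgent byte inline */
    char d[3];
    ssize_t l;
    while ((l = read(s, d, sizeof(d))) > 0) {
        fwrite(d, 1, (size_t)l, stdout);
        putchar('\n');
        sleep(2); /* slow reader, like the Java server below */
    }
    return 0;
}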

First, the server (Java) part:

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

ServerSocket ss = new ServerSocket(61234);
System.out.println("Accepting connection");
Socket s = ss.accept();
s.shutdownOutput();     // the server only reads
s.setOOBInline(true);   // deliver the urgent byte inline with normal data
InputStream is = s.getInputStream();
for (;;) {
  byte[] d = new byte[3];
  int l = is.read(d);
  if (l == -1) break;
  for (int i = 0; i < l; i++) System.out.print((char) d[i]);
  System.out.println();
  Thread.sleep(2000);   // slow reader: let data queue up on the receiving side
}

Then, the client (Java) part:

import java.io.OutputStream;
import java.net.Socket;

Socket s = new Socket("localhost", 61234);
s.shutdownInput();      // the client only writes
OutputStream os = s.getOutputStream();
byte[] n = new byte[10];
for (int i = 0; i < n.length; i++) n[i] = (byte) ('A' + i);  // "ABCDEFGHIJ"
byte m = (byte) '0';
os.write(n);
System.out.println("normal sent");
s.sendUrgentData(m);    // send the single byte '0' as TCP urgent data
System.out.println("OOB sent");
os.write('Z');
System.out.println("normal sent");

And then, the alternate client (C) part:

#include <stdio.h>
#include <string.h>
#include <strings.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int s;
struct sockaddr_in a;

s = socket(PF_INET, SOCK_STREAM, 0);
bzero(&a, sizeof(a));
a.sin_family = AF_INET;
a.sin_port = htons(61234);
a.sin_addr.s_addr = inet_addr("127.0.0.1");
connect(s, (struct sockaddr *)&a, sizeof(a));
shutdown(s, SHUT_RD);   /* the client only writes */
const char m = '0';
const char *n = "ABCDEFGHIJ";

printf("normal sent %d\n", (int)write(s, n, strlen(n)));
printf("OOB sent %d\n", (int)send(s, &m, 1, MSG_OOB));  /* '0' as TCP urgent data */
printf("normal sent %d\n", (int)write(s, "Z", 1));

Now this is what I get (first the C client, then the Java client):

Accepting connection
ABC
DEF
GHI
J
Z
Accepting connection
ABC
DEF
GHI
J
0Z

It seems that the Java server isn't able to see the OOB data sent from the C client. Why does the '0' seem to have been lost? It hasn't been lost entirely, because the server has at least detected the OOB boundary in the stream (note the short read that prints "J" alone in both runs).

Solution

Out-of-band data is not supported the same way by all socket implementations. It's as simple as that.

Microsoft's advice is here:

There are, at present, two conflicting interpretations of RFC 793 (where the concept is introduced).

The implementation of OOB data in the Berkeley Software Distribution (BSD) does not conform to the Host Requirements specified in RFC 1122.

Specifically, the TCP urgent pointer in BSD points to the byte after the urgent data byte, and an RFC-compliant TCP urgent pointer points to the urgent data byte. As a result, if an application sends urgent data from a BSD-compatible implementation to an implementation compatible with RFC 1122, the receiver reads the wrong urgent data byte (it reads the byte located after the correct byte in the data stream as the urgent data byte).

To minimize interoperability problems, application writers are advised not to use OOB data unless this is required to interoperate with an existing service. Windows Sockets suppliers are urged to document the OOB semantics (BSD or RFC 1122) that their product implements.
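
To see where the two interpretations bite, here is a minimal receiver-side sketch in C (my assumption: fd is a connected TCP socket with urgent data pending). It discards normal bytes up to the urgent mark via SIOCATMARK, then reads the urgent byte with MSG_OOB; which byte the kernel hands back as urgent is exactly the point on which BSD and RFC 1122 stacks disagree:

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

/* Read up to the urgent mark, then fetch the urgent byte out-of-band. */
void read_urgent(int fd) {
    char c, oob;
    int at_mark = 0;
    for (;;) {
        if (ioctl(fd, SIOCATMARK, &at_mark) < 0) return;
        if (at_mark) break;                  /* positioned at the urgent mark */
        if (recv(fd, &c, 1, 0) <= 0) return; /* consume a normal byte */
    }
    if (recv(fd, &oob, 1, MSG_OOB) > 0)      /* pull the single urgent byte */
        printf("urgent byte: %c\n", oob);
}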

If you are writing a new protocol and need out-of-band data, I would suggest using a separate connection for your urgent data, or multiplexing it at the application layer, as in the sketch below.
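
A minimal sketch of the application-layer multiplexing idea in C, assuming a hypothetical framing of one type byte plus one length byte per message (FRAME_NORMAL and FRAME_URGENT are made-up names, not a standard protocol); both kinds of data travel in-band, so nothing depends on TCP urgent-pointer semantics:

#include <stdint.h>
#include <unistd.h>

enum { FRAME_NORMAL = 0, FRAME_URGENT = 1 };  /* hypothetical message types */

/* Send one framed message: [type][length][payload]. Returns 0 on success. */
int send_framed(int fd, uint8_t type, const void *buf, uint8_t len) {
    uint8_t hdr[2] = { type, len };
    if (write(fd, hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr)) return -1;
    return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
}

/* Usage: send_framed(fd, FRAME_URGENT, "0", 1); */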

Therefore, my advice is to not use OOB data if you have a choice.

Other tips

OK, this seems to be related to the JVM implementation. I've run different tests, on different OSes and JVMs.

Everything works correctly on various Linux systems with JDK 1.6 (Java 7 not tested).

But things go wrong on my Mountain Lion machine: it behaves differently depending on the Java version. It seems to be a JVM bug related to Apple's implementation.
