Question

I am currently learning to program with unix domain sockets and I have a question. What is the standard way to separate messages? E.g. a server writes two messages and the client can do two reads to get them. I guess I could "define" my own protocol by always appending a certain char sequence at the end of each message, but this does not seem right. The null char seems to get thrown away when writing to a socket. I would be really grateful for some clarification, especially if it comes within the next 2 hours :D.

Solution 2

First up, "unix sockets" usually refers to "unix domain sockets", a special form of IPC.

The null char seems to get thrown away when writing to a socket

That's unlikely. More likely you're not writing it correctly.

but this does not seem right

A simpler way would be to precede each "message" with a header containing the length. For example

         +---+---------+---+-------+
         | 3 |         | 5 | ...   |
         +---+---------+---+-------+
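The header idea above can be sketched in C. This is a minimal illustration, not the answer's own code: the names `send_msg`/`recv_msg` and the choice of a 4-byte big-endian length header are assumptions for the example.

```c
#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>
#include <arpa/inet.h>   /* htonl / ntohl */

/* Write exactly len bytes, retrying on short writes. Returns 0 on success. */
static int write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read exactly len bytes, retrying on short reads. Returns 0 on success. */
static int read_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Send one message: a 4-byte big-endian length header, then the payload. */
int send_msg(int fd, const void *msg, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (write_all(fd, &hdr, sizeof hdr) < 0)
        return -1;
    return write_all(fd, msg, len);
}

/* Receive one message into buf (capacity cap); returns payload length or -1. */
ssize_t recv_msg(int fd, void *buf, size_t cap)
{
    uint32_t hdr;
    if (read_all(fd, &hdr, sizeof hdr) < 0)
        return -1;
    uint32_t len = ntohl(hdr);
    if (len > cap)
        return -1;   /* message too big for caller's buffer */
    if (read_all(fd, buf, len) < 0)
        return -1;
    return (ssize_t)len;
}
```

Note the `write_all`/`read_all` loops: on a stream socket a single `read` or `write` may transfer fewer bytes than asked for, so framing code must loop until the full header and payload have moved.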

An even simpler approach would be to use a protocol that has a notion of messages, e.g. UDP or SCTP, where one send equates to at most one recv.

OTHER TIPS

With SOCK_DGRAM socket you'll get one-to-one correspondence between writes from the source and reads on the destination.
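This one-to-one correspondence can be seen with a `socketpair` of `SOCK_DGRAM` sockets; the function name `dgram_boundary_demo` is made up for this sketch.

```c
#include <sys/socket.h>
#include <unistd.h>

/* Returns 0 if datagram boundaries were preserved as expected. */
int dgram_boundary_demo(void)
{
    int sv[2];
    char buf[64];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0)
        return -1;

    /* Two separate sends... */
    send(sv[0], "first", 5, 0);
    send(sv[0], "second message", 14, 0);

    /* ...come back as two separate reads, boundaries intact:
       each recv returns exactly one datagram, never a merge of two. */
    ssize_t a = recv(sv[1], buf, sizeof buf, 0);   /* 5 bytes  */
    ssize_t b = recv(sv[1], buf, sizeof buf, 0);   /* 14 bytes */

    close(sv[0]);
    close(sv[1]);
    return (a == 5 && b == 14) ? 0 : -1;
}
```

With `SOCK_STREAM` the same two sends could come back as one 19-byte read, which is exactly why a stream needs its own framing.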

With SOCK_STREAM you do need your application-level protocol on top of the stream the socket provides. The usual choices are:

  • fixed-length messages, just read until you get enough bytes,
  • small fixed-length header for each message that tells length and maybe type of what follows,
  • delimited messages (drawback here is that the delimiter cannot appear in the messages themselves),
  • self-describing formats (xml, asn.1, etc.)
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow