Question

What was the rationale behind making synchronization of input file streams implementation-specific? Doesn't it seem obvious that the stream will fill its buffer (partially or wholly) with content from the external device? Standard C++ IOStreams and Locales says:

For output files synchronization is defined as emptying the internal buffer by writing the buffer content to the file performed by a call to overflow(). For input files, the meaning of synchronization is not defined by the standard, but depends on the implementation of the IOStreams library.

Wouldn't it suffice to make the implementation symmetric and have the input file streams' buffers call underflow()? What was the reason for this decision?
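For concreteness, this is the operation whose effect the standard leaves open; a minimal sketch (example.txt is a hypothetical file name):

```cpp
#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("example.txt"); // hypothetical file name
    char c;
    in.get(c);          // fills the stream's internal get buffer
    int rc = in.sync(); // implementation-defined for input: the library
                        // may discard the buffered data, refill it, or
                        // do nothing at all
    std::cout << "sync() returned " << rc << '\n';
}
```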


Solution

Writing has the side effect of changing the file, and multiple writers racing to write to the same file are not uncommon. Writing and reading the file simultaneously is also a common use case. The standard basically says that overflow() makes a write() syscall (as opposed to a buffered fwrite()) to delegate synchronization to the operating system.
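As a minimal sketch of that defined output behaviour (log.txt is a hypothetical file name):

```cpp
#include <fstream>

int main() {
    std::ofstream out("log.txt"); // hypothetical file name
    out << "checkpoint";          // characters sit in the internal put buffer
    out.flush();                  // sync() triggers overflow(): the buffer is
                                  // emptied to the file, so another process
                                  // reading the file can observe the data
}
```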

Reading has no side effects for files (pedantically, it may update atime), and multiple readers do not race with each other, so no synchronization is necessary.
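This is why portable code that needs to pick up data appended by another process does not rely on sync() at all. A minimal sketch of the usual pattern, assuming a hypothetical log.txt that some other process appends to:

```cpp
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

int main() {
    std::ifstream in("log.txt"); // hypothetical file appended to elsewhere
    std::string line;
    for (;;) {
        while (std::getline(in, line))
            std::cout << line << '\n';
        in.clear(); // getline set eofbit/failbit at end of data; clear them
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        // On the next iteration, underflow() hits the file again and, on
        // most implementations, picks up data appended in the meantime.
    }
}
```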

Reading a pipe or socket, on the other hand, has the side effect of changing the contents of the underlying buffer. However, readers competing to read from the same pipe, TCP socket, or stream UNIX socket do not make much sense. It may make sense for datagram sockets, but I am not sure that IOStreams, being streams, are designed to work with datagram sockets. I gather the standard's authors could not come up with a good use case for read synchronization and hence left it unspecified.
