Question

I have a simple thread that grabs bytes from a Bluetooth RFCOMM (serial-port-like) socket and dumps them into a Queue.Queue (FIFO), which seems to be the typical way to exchange data between threads. It works fine.
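Roughly what I have now (a simplified sketch; the socket setup is omitted and the names are just placeholders):

```python
import Queue       # "queue" on Python 3
import threading

rx_queue = Queue.Queue()

def reader(sock):
    # Reader thread: pull bytes off the RFCOMM socket and queue them.
    while True:
        data = sock.recv(1)        # one byte at a time, for simplicity
        if not data:
            break                  # remote side closed the connection
        rx_queue.put(data)

def process_one():
    # Consumer side: blocks until a byte is available.
    byte = rx_queue.get()
    # ... handle the byte ...
```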

Is this overkill, though? Could I just use a bytearray and have my reader thread .append(somebyte) while the processing function does .pop(0)? I'm not sure whether the protections in Queue are meant for more complex "multi-producer, multi-consumer" queues and are a waste for a point-to-point byte stream. Things like flushing the queue or grabbing multiple bytes at once also seem more awkward with Queue than with a simpler data type.

I guess the answer might depend on whether .pop() is atomic, but would it even matter then?...


Solution

With Queue, you're guaranteed to be thread-safe in any implementation and version of Python. Relying on this or that method of some other object being "atomic" (in a given implementation and version) leaves you at the mercy of that "atomicity" not being a strong guarantee but just an implementation artifact of the specific point release &c you're using, so any upgrade or port to another Python implementation can introduce subtle, VERY hard-to-debug race conditions.

If your profiling tells you that Queue's strong and general guarantees are a bottleneck for your specific producer-consumer use case, make your own simpler, guaranteed-to-be-threadsafe FIFO queue/stream. For example, if you've found that (net of race conditions) append and pop would be perfect for your use, just make a class that protects each with a lock acquire/release (use a with statement) -- Queue adds only minuscule overhead to support multiple producers and consumers, and you can shave those few nanoseconds off!-)
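A minimal sketch of such a lock-protected FIFO (the class and method names are illustrative, not a standard API):

```python
import threading
from collections import deque

class LockedFIFO(object):
    """Bare-bones thread-safe FIFO: a deque guarded by a single lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = deque()

    def append(self, item):
        with self._lock:
            self._items.append(item)

    def pop(self):
        with self._lock:
            return self._items.popleft()   # raises IndexError if empty

    def flush(self):
        with self._lock:
            self._items.clear()
```

Note that, unlike Queue.get(), this pop() does not block when the FIFO is empty; the consumer has to handle the empty case itself (or you would add a threading.Condition to get blocking behaviour back).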

OTHER TIPS

Yes, pop() is atomic (in CPython, thanks to the GIL), but I'd stick with Queue unless performance turns out to be critical.

If the rate of input is fast enough, you can always buffer bytes up into a string before pushing that onto the Queue. That will probably increase throughput by reducing the amount of locking done, at the expense of a little extra latency on the receiving end.
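A sketch of that idea (the chunk size and names are arbitrary; it assumes the socket's recv() returns whatever is currently available, up to the requested size):

```python
def buffered_reader(sock, out_queue, max_chunk=4096):
    # Read up to max_chunk bytes per recv() and queue whole chunks,
    # so the Queue is locked once per chunk instead of once per byte.
    while True:
        chunk = sock.recv(max_chunk)   # may return fewer bytes than requested
        if not chunk:
            break                      # connection closed
        out_queue.put(chunk)
```

On the consuming side you then get() whole chunks and split them up as needed.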

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow