Problem

When using mmap() for shared memory (on Linux or other UNIX-like systems), is it possible (and portable) to use fcntl() (or the flock() or lockf() functions) to coordinate access to the mapping?

Responses to this SO question seem to suggest that it should work.

The idea I have in mind would be to structure the shared memory with a process/page map to minimize locking contention. Processes could each work with their own pages concurrently, and a lock would only need to be acquired when updating the process/page mappings. (Read access from unowned pages would involve checking a serial number, copying the desired data, then validating that the serial number of that block hadn't changed.)
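
To make that read path concrete, here is a minimal seqlock-style sketch in C of the serial-number check described above; the block layout, the odd-serial-means-write-in-progress convention, and the use of __sync_synchronize() as a barrier are my own assumptions, not details from the question.

```c
#include <stddef.h>
#include <string.h>

struct block {
    volatile unsigned serial;            /* writer bumps to odd before writing, even after */
    char data[4096 - sizeof(unsigned)];  /* payload shared via the mmap'd file */
};

/* Copy a consistent snapshot of blk->data into out, retrying if a writer raced us.
 * len must not exceed sizeof(blk->data). */
static void read_block(const struct block *blk, char *out, size_t len)
{
    unsigned before, after;
    do {
        before = blk->serial;
        __sync_synchronize();            /* keep the copy from floating above the first read */
        memcpy(out, blk->data, len);
        __sync_synchronize();            /* keep the copy from floating below the second read */
        after = blk->serial;
    } while ((before & 1) || before != after);   /* odd = write in progress, mismatch = raced */
}
```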

Conceptually, each process sharing this file mapping would perform the mmap(), find a free block therein, acquire a lock on the process/page area, update that area with its own assignment, release the lock, and then merrily go about its work. Any process could search for stale mappings (using kill() with zero as the signal to test whether the owning PID still exists) and clean up the corresponding process/page table entries.
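
A minimal sketch of that flow in C, using an fcntl() record lock that covers only the table region at the start of the backing file; the page_table layout, TABLE_ENTRIES, and the claim_block() helper are hypothetical names for illustration.

```c
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

#define TABLE_ENTRIES 64

struct page_entry {
    pid_t owner;                     /* 0 = free */
};

struct page_table {
    struct page_entry entries[TABLE_ENTRIES];
};

/* Take (or drop) an exclusive fcntl() lock over just the page-table bytes at the
 * start of the backing file; fd is the descriptor that was mmap'd. */
static int lock_table(int fd, int lock)
{
    struct flock fl = {
        .l_type   = lock ? F_WRLCK : F_UNLCK,
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = sizeof(struct page_table),
    };
    return fcntl(fd, F_SETLKW, &fl);   /* F_SETLKW blocks until the region is ours */
}

/* Claim the first free or stale entry for the calling process; table points at
 * the start of the shared mapping. Returns the slot index or -1. */
static int claim_block(int fd, struct page_table *table)
{
    int slot = -1;
    if (lock_table(fd, 1) == -1)
        return -1;
    for (int i = 0; i < TABLE_ENTRIES; i++) {
        pid_t owner = table->entries[i].owner;
        /* kill(pid, 0) delivers no signal; ESRCH means the owner no longer exists
         * (EPERM would mean it exists but belongs to another user). */
        int stale = owner != 0 && kill(owner, 0) == -1 && errno == ESRCH;
        if (owner == 0 || stale) {
            table->entries[i].owner = getpid();
            slot = i;
            break;
        }
    }
    lock_table(fd, 0);
    return slot;
}
```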

(In rough, generic terms, I'm toying with a producer/consumer processing engine using shared memory from Python on Linux; I'd like the solution to be portable to BSD and to other programming languages, so long as they support mmap() and the necessary interfaces to fcntl(), flock() or lockf(). I'd also be interested in pseudo-code showing how one would measure lock contention and detect any synchronization failures. I am aware that the threading and multiprocessing modules, with their respective Queue() classes, are the most straightforward way to implement a Python producer/consumer processing model.)
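
As a rough answer to the contention question, here is a C sketch that instruments an fcntl() lock: try a non-blocking F_SETLK first, count the misses, and time the blocking F_SETLKW fall-back. The lock_stats structure and the CLOCK_MONOTONIC choice are assumptions for illustration; detecting synchronization failures could reuse the per-block serial numbers (or a checksum) to flag a block that changed while it was supposedly locked.

```c
#include <fcntl.h>
#include <time.h>

struct lock_stats {
    unsigned long attempts;     /* total acquisitions */
    unsigned long contended;    /* acquisitions that had to wait */
    double wait_seconds;        /* total time spent blocked */
};

static int timed_lock(int fd, off_t start, off_t len, struct lock_stats *stats)
{
    struct flock fl = {
        .l_type = F_WRLCK, .l_whence = SEEK_SET, .l_start = start, .l_len = len,
    };
    stats->attempts++;
    if (fcntl(fd, F_SETLK, &fl) == 0)          /* uncontended fast path */
        return 0;
    stats->contended++;                        /* someone else holds the region */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int rc = fcntl(fd, F_SETLKW, &fl);         /* now wait for it */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    stats->wait_seconds += (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return rc;
}
```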

Solution

I'm sure the locks will provide mutual exclusion, but I don't know if they will give you a memory barrier. It seems likely that jumping into the kernel (which fcntl, flock, and lockf will do) does something that forces out-of-order memory reads and writes to commit, but I doubt you'll get a hard guarantee. I think this is one of those things where it probably works, and testing will show that it does work, but you won't know that it always works unless you find a reference saying as much.

I've done something similar to this in C, but I used atomic spinlocks in the shared memory itself. It used to be that you had to write a little inline assembly, but GCC now has intrinsic operations you can use:

http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html
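
For example, a spinlock kept directly inside the shared mapping might look like the following sketch built on those builtins (the spinlock_t typedef and the spin-on-read inner loop are my own choices, not from the original answer):

```c
typedef volatile int spinlock_t;   /* 0 = free, 1 = held; lives in the mmap'd region */

static void spin_acquire(spinlock_t *lock)
{
    /* __sync_lock_test_and_set atomically stores 1 and returns the old value,
     * with acquire semantics; loop until the old value was 0 (i.e. it was free). */
    while (__sync_lock_test_and_set(lock, 1)) {
        while (*lock)
            ;   /* spin on a plain read until the holder releases */
    }
}

static void spin_release(spinlock_t *lock)
{
    /* __sync_lock_release stores 0 with release semantics. */
    __sync_lock_release(lock);
}
```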

If you're willing to write a very simple Python extension, you could wrap __sync_lock_test_and_set(...) and __sync_lock_release(...) to do what you need. Those should be pretty portable.
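
A sketch of what such an extension might look like for CPython 3 follows; the module name spinlockext and the (buffer, offset) calling convention are hypothetical. Python code would pass the mmap object itself plus the byte offset of an int-aligned lock word inside it.

```c
/* spinlockext.c - hypothetical CPython extension wrapping the GCC builtins. */
#include <Python.h>

static PyObject *acquire(PyObject *self, PyObject *args)
{
    Py_buffer buf;
    Py_ssize_t offset;
    if (!PyArg_ParseTuple(args, "w*n", &buf, &offset))   /* writable buffer + offset */
        return NULL;
    volatile int *lock = (volatile int *)((char *)buf.buf + offset);
    Py_BEGIN_ALLOW_THREADS                 /* drop the GIL while spinning */
    while (__sync_lock_test_and_set(lock, 1))
        while (*lock)
            ;                              /* spin until the holder releases */
    Py_END_ALLOW_THREADS
    PyBuffer_Release(&buf);
    Py_RETURN_NONE;
}

static PyObject *release(PyObject *self, PyObject *args)
{
    Py_buffer buf;
    Py_ssize_t offset;
    if (!PyArg_ParseTuple(args, "w*n", &buf, &offset))
        return NULL;
    volatile int *lock = (volatile int *)((char *)buf.buf + offset);
    __sync_lock_release(lock);
    PyBuffer_Release(&buf);
    Py_RETURN_NONE;
}

static PyMethodDef methods[] = {
    {"acquire", acquire, METH_VARARGS, "Spin until the lock at (buffer, offset) is held."},
    {"release", release, METH_VARARGS, "Release the lock at (buffer, offset)."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "spinlockext", NULL, -1, methods
};

PyMODINIT_FUNC PyInit_spinlockext(void)
{
    return PyModule_Create(&moduledef);
}
```

From Python, acquisition would then look roughly like spinlockext.acquire(shm_map, LOCK_OFFSET), where shm_map is the mmap.mmap object and LOCK_OFFSET is the byte position reserved for the lock word.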

I believe there is a way to put pthread mutexes into shared memory too, but I don't have any experience with that. Again, you'd have to write a simple C extension to get access to that from Python.
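
For completeness, here is a sketch of that approach: a pthread mutex initialized with the PTHREAD_PROCESS_SHARED attribute and placed at the start of the mmap'd file. The file path, struct layout, and the assumption that exactly one process performs the initialization are all illustrative.

```c
/* gcc -pthread shared_mutex.c   (error handling trimmed for brevity) */
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

struct shared_area {
    pthread_mutex_t mutex;   /* must be created with PTHREAD_PROCESS_SHARED */
    int counter;             /* example payload the mutex protects */
};

int main(void)
{
    int fd = open("/tmp/shm_demo", O_CREAT | O_RDWR, 0600);   /* path is illustrative */
    ftruncate(fd, sizeof(struct shared_area));

    struct shared_area *area = mmap(NULL, sizeof(*area), PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0);

    /* Exactly one of the cooperating processes should run this initialization. */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&area->mutex, &attr);
    pthread_mutexattr_destroy(&attr);

    /* Any process that maps the same file can now lock across process boundaries. */
    pthread_mutex_lock(&area->mutex);
    area->counter++;
    pthread_mutex_unlock(&area->mutex);

    munmap(area, sizeof(*area));
    close(fd);
    return 0;
}
```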

License: CC-BY-SA with attribution. Not affiliated with Stack Overflow.