Question

I'm considering using a TVar to store some state in a web application (state that can be recreated on restart). However, the contention behaviour of TVar concerns me. It seems that frequent short-running transactions can starve out longer transactions by continually interrupting them. Also, as more long-running transactions keep restarting, CPU load increases, which tends to further increase the length of those transactions. Eventually, I feel, this could make the server completely unresponsive.

Considering this, I have these questions:

(1) Can TVar (or another data type) use locks instead of optimistic attempts and retries?

(2) Can TVar (or another data type) use some different contention-management mechanism, e.g. "let this transaction run for a second before starting another", or at least provide some guarantee that transactions will eventually complete (i.e. a contention algorithm that prevents starvation of longer-running transactions)?


Solution

This is only a concern if you have many cheap transactions that update data and a few expensive ones that read it. Analytics on a live-updated dataset, perhaps.

If you're actually concerned about this, consider using a flag TVar. Initialize it to False, and have every cheap transaction check at its start that it is still False, calling retry otherwise. Set it to True before entering your long-running transaction, and back to False on exit. Alternatively, you can simply guard your state behind a TMVar: your long-running computation takes the TMVar atomically, does whatever it likes, and then puts it back. The other transactions take place entirely within a single actual STM transaction.
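The TMVar variant might be sketched like this (the `AppState` type and function names here are illustrative assumptions, not anything from the original answer):

```haskell
import Control.Concurrent.STM

-- Hypothetical state type; a stand-in for whatever the application stores.
type AppState = Int

-- A cheap update runs in a single STM transaction. takeTMVar blocks it
-- (via retry) while a long-running computation is holding the state.
cheapUpdate :: TMVar AppState -> IO ()
cheapUpdate var = atomically $ do
  s <- takeTMVar var
  putTMVar var (s + 1)

-- The long-running computation takes the state out, works on it *outside*
-- of STM, and puts the result back; it can never be restarted by STM.
longRunning :: TMVar AppState -> (AppState -> IO AppState) -> IO ()
longRunning var work = do
  s  <- atomically (takeTMVar var)
  s' <- work s  -- arbitrarily expensive, not inside a transaction
  atomically (putTMVar var s')
```

Note the trade-off: while the long-running computation holds the TMVar, cheap updates block rather than being interrupted, which is exactly the behaviour the question asks for.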

Remember also that a long-running STM transaction is a tricky beast. Due to laziness, you can put an expensive value into a var cheaply. You can also read a "snapshot" of data from a whole bunch of vars very quickly. To have a genuinely long-running transaction, you need to read from a whole bunch of vars and then, based on what you've read, compute which vars you are going to write new values into (or read values from), and that computation itself must be expensive. So it's likely you're not even in that scenario to begin with!
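To illustrate the laziness point, a minimal sketch (the function names are illustrative): writing an expensive value only stores an unevaluated thunk, so the transaction itself is quick, and reading many vars just copies references.

```haskell
import Control.Concurrent.STM

-- Writing an expensive value is cheap: the sum is stored as an
-- unevaluated thunk, so the transaction finishes almost immediately.
storeExpensive :: TVar Integer -> IO ()
storeExpensive var = atomically $
  writeTVar var (sum [1 .. 10 ^ 7])  -- not computed inside the transaction

-- A "snapshot" read of many vars is likewise quick: each readTVar
-- just grabs the current contents without forcing them.
snapshot :: [TVar Integer] -> IO [Integer]
snapshot vars = atomically (mapM readTVar vars)
```

The expensive computation only happens later, when some consumer forces the value, well outside the transaction.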

OTHER TIPS

I don't think there's a way to guarantee starvation freedom, unless you change the runtime code of the STM system itself. In my opinion, bringing in locks to avoid contention among TVars defeats the purpose of having STM in the first place, since the whole point of using STM is to get rid of the classic error-prone lock-based approach to concurrent programming.

Sure, starvation might cause significant performance loss, but only under the assumption that such large transactions are actually necessary. One design principle I try to keep in mind is to use TVars at a fine level of granularity. For example, instead of putting an entire Data.Map into a TVar, which might cause contention every time an entry is updated, you can use a more STM-friendly data structure, like skip lists [1].
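The granularity idea can be sketched without a special library by giving each entry its own TVar (the type and function names here are illustrative assumptions):

```haskell
import Control.Concurrent.STM
import qualified Data.Map as Map

-- Coarse-grained: one TVar holds the whole map, so any update
-- conflicts with every other update.
type Coarse k v = TVar (Map.Map k v)

-- Finer-grained: the outer map changes rarely; each entry has its own
-- TVar, so updates to different keys never conflict with each other.
type Fine k v = TVar (Map.Map k (TVar v))

updateFine :: Ord k => Fine k v -> k -> (v -> v) -> IO ()
updateFine outer k f = atomically $ do
  m <- readTVar outer
  case Map.lookup k m of
    Just cell -> modifyTVar' cell f
    Nothing   -> pure ()  -- key absent; inserting would touch the outer var
```

A dedicated structure like the tskiplist package takes this further, but the principle is the same: keep each transaction's read/write set small.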

[1] http://hackage.haskell.org/package/tskiplist

This was written as a comment to one of Clinton's comments on Peter's answer, but it became too long.

Consider: you have two bank accounts, A and B, each protected by its own lock. Now you have two transactions: the first transfers money from A to B, the second from B to A. Each first takes the lock of the source account, then the lock of the destination account, transfers the money, and releases the locks. If you're unlucky, the two transactions run into a deadlock, and nothing will ever be done with those two accounts. If you do the same in STM, they run one after the other. If you have infinitely many transactions of the first kind, they might starve the second transaction, but you still get a lot done, while with locking nothing ever happens.
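The bank-account example above might look like this in STM (a sketch; `Account` and `transfer` are names chosen for illustration). Both directions of transfer are single atomic transactions over the same two TVars, so no lock ordering is needed and deadlock is impossible by construction:

```haskell
import Control.Concurrent.STM

type Account = TVar Int

-- A single STM transaction reads and writes both accounts atomically.
-- The runtime serializes conflicting transactions instead of deadlocking.
transfer :: Account -> Account -> Int -> IO ()
transfer from to amount = atomically $ do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)
```

Running `transfer a b n` and `transfer b a m` concurrently can never hang; at worst one transaction retries.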

STM guarantees that there are no data races on TVars. None whatsoever. With locking, you can reach that conclusion only after very careful inspection of your code, and every line you add might invalidate it completely.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow