Question

I have a rather basic application hosted on Kubernetes, which connects to a Mongo database.

The app has a wallet feature. A user can put money in their wallet using real-world payments (e.g. via PayPal). Each payment is registered as a transaction for that user. The money in the wallet is then used to pay for orders, which may come from different sources - Shopify, API, placed manually, etc. - at random times. Current user balance is inferred by aggregating the transactions (double-entry basically).

Consider the following scenario: a user with $100 in their wallet receives two orders at the same time, each worth $80. Obviously, only one of these orders should be placed. Unfortunately, a wallet payment is not an atomic procedure - I need to calculate the balance first and then, if it is sufficient, record a payment transaction. Even if I do this inside a database transaction, these two simultaneous orders might still think that there is enough balance, if these transactions are executed in parallel. To ensure that this does not happen I used locking. Each order will thus:

  1. place a lock on the user's wallet so that only a single wallet payment is executed at a time;
  2. "execute" the payment by recording a transaction;
  3. place the order;
  4. unlock the wallet.

This means that all wallet payments for a single user should be processed sequentially. I feel like it would make sense to place users' wallet payments into queues - as soon as one payment is completed (the wallet is unlocked) the next one proceeds. These would have to be per-user queues - separate users' payments can be safely processed in parallel.
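The per-user queue idea can be sketched in memory like this (which, as noted, is trivial but not resilient; a durable broker that partitions by user id, e.g. Kafka with the user id as the message key, gives the same per-user ordering guarantee across restarts). The class and handler names are made up for illustration:

```python
import queue
import threading

class PerUserQueues:
    """One FIFO queue and one worker per user: payments for a single user
    are processed strictly sequentially, while different users' payments
    run in parallel."""

    def __init__(self, handler):
        self.handler = handler          # callback invoked for each payment
        self.queues = {}                # user_id -> queue.Queue
        self.lock = threading.Lock()    # guards queue creation

    def submit(self, user_id, payment):
        with self.lock:
            q = self.queues.get(user_id)
            if q is None:
                # First payment for this user: create its queue and worker.
                q = self.queues[user_id] = queue.Queue()
                threading.Thread(target=self._drain, args=(q,), daemon=True).start()
            q.put(payment)

    def _drain(self, q):
        # Each worker drains exactly one user's queue, in submission order.
        while True:
            self.handler(q.get())
```

Pushing work to pods instead of pinning users to pods is exactly what this in-memory version cannot do; that is where a broker with consumer groups comes in.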

Unfortunately, I don't know how to properly solve this. Implementing such queues in memory would be trivial but also non-resilient. I was thinking about utilising some MQ, but I have little experience and am faced with challenges:

  • it would be nice if it's a distributed queue, which I could easily run on Kubernetes;
  • I actually need many parallel queues - one queue per user; let's assume tens of thousands of users;
  • the load needs to be distributed evenly across the application pods. I reckon the queues ought to somehow push the payments to the application pods rather than have the pods pull messages - I don't want to couple the pods with specific users.

My questions:

  1. Is the basic idea reasonable? Are there any obvious problems here that I don't see?
  2. What mechanism do I need to achieve resilient, evenly distributed processing of many queues in parallel? Do I need a message queue plus load balancing, a Pub/Sub solution, or something else?


Solution

How many simultaneous transactions per user do you receive, and how frequently? If it's a lot, a queue would not help anyway; your response times would take a hit.

Generally, there are not a lot of simultaneous transactions per wallet (unless the wallet is compromised). Serializing the transactions at the database layer and failing fast when the balance is insufficient is an easier and more reliable approach.
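The fail-fast approach can be illustrated with an atomic conditional debit. Here `AtomicBalance` is a stand-in for a database primitive that checks and debits in a single operation; with MongoDB and a cached balance field, `find_one_and_update` with a filter like `{"balance": {"$gte": amount}}` and an `{"$inc": {"balance": -amount}}` update would play this role (this mapping is a suggestion, not something from the original answer):

```python
import threading

class AtomicBalance:
    """Simulates an atomic check-and-debit, the way a conditional database
    update would behave: the balance check and the debit happen as one
    operation, so two concurrent orders cannot both pass the check."""

    def __init__(self, balance):
        self._balance = balance
        self._lock = threading.Lock()   # stands in for the database's atomicity

    def try_debit(self, amount):
        with self._lock:
            if self._balance < amount:
                return False            # fail fast: insufficient funds
            self._balance -= amount
            return True

    @property
    def balance(self):
        with self._lock:
            return self._balance
```

The caller simply rejects the order (or retries) when `try_debit` returns `False`; no queue or application-level lock is needed.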

Adding an automatic retry, or asking the user to retry the payment, is also an option.

A queue seems like overkill for this use case.

Licensed under: CC-BY-SA with attribution