24 events
Dec 26, 2025 at 18:42 comment added Ewan Or: customers A through Z all start purchases, the counter checks all pass, customers A through Z finish their purchases, counter++ runs 26 times, and more coupons are used than allowed.
Dec 26, 2025 at 18:40 comment added Ewan I'd have to see the whole code, I guess, but it sounds like you will have a race condition to me. Customer A starts a purchase, the counter check passes, counter++; Customer B starts a purchase, the counter check fails; Customer A then cancels/fails the purchase. The counter is now incorrect and you have lost Customer B's sale.
Dec 26, 2025 at 18:35 comment added tusharRawat No, the purchase is not something we are doing in a transaction. We update the Redis counters atomically and make an entry in the log table (just a simple insert query); there's no transaction involved anywhere.
Dec 26, 2025 at 16:41 comment added Ewan I'd not heard of that, interesting! Although if you are doing the purchase in the transaction, doesn't that negate the purpose of using Redis for speed?
Dec 25, 2025 at 18:58 comment added tusharRawat We will use an atomic Lua script so the 2 operations do not conflict.
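The check-and-increment that such an atomic Lua script would perform can be sketched as follows (a minimal single-process model, not code from the thread; the key name, limit, and function name are assumptions; in Redis, `EVAL` runs the whole script without interleaving, which is what removes the race discussed below):

```python
# Sketch of the atomic check-and-increment a Redis Lua script would perform.
# The Lua body this models (hypothetical):
#   local used = tonumber(redis.call('GET', KEYS[1]) or '0')
#   if used < tonumber(ARGV[1]) then
#       redis.call('INCR', KEYS[1])
#       return 1
#   end
#   return 0

counters = {}  # stands in for the Redis keyspace

def check_and_incr(key: str, limit: int) -> bool:
    """Increment and return True only while the counter is under the limit."""
    used = counters.get(key, 0)
    if used < limit:
        counters[key] = used + 1
        return True
    return False

# Usage: an offer with a budget of 2 redemptions; the third attempt is rejected.
results = [check_and_incr("offer:42", limit=2) for _ in range(3)]
print(results)  # [True, True, False]
```

Because check and increment happen as one uninterruptible step, two concurrent redemptions cannot both pass the check when only one slot remains.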
Dec 23, 2025 at 21:36 comment added Ewan Using Redis you will have the race condition: you check the count, but you don't lock it, so by the time your operation completes the count will be higher.
Dec 23, 2025 at 20:12 comment added tusharRawat Correct, hence my question: we are planning to update the low-contention counters (like user × offer) as a transaction [log + counter inc/dec], while the high-contention counters are checked in cache and updated in the background, eventually syncing to the DB. I wanted a point of view on whether separating low- vs high-contention counters and building the logic around that split is the right approach, instead of following the same logic (like Redis atomic inc/dec) for both counters.
Dec 23, 2025 at 20:03 comment added Ewan No, a db transaction would prevent that
Dec 23, 2025 at 18:37 comment added tusharRawat Correct. The idea is that we would also not want the same user to place 2 orders and get the offer both times when only one redemption is allowed; if we do only a DB check, a race condition might allow the same user to get the offer twice.
Dec 23, 2025 at 18:30 comment added Ewan But a user making a payment is surely going to take some time; a DB check on the number of coupons used by that user can be done, because other users' purchases won't affect the count.
Dec 23, 2025 at 18:13 comment added tusharRawat The use case for the per-customer counter is that, based on business needs, a business may want to give an offer on a specific payment method, say a card from a specific bank, but have the offer distributed fairly, at most once per user, instead of just capping the total offer budget (where a single user could consume the entire budget). Basically they want to activate many different users on that card.
Dec 23, 2025 at 12:17 comment added Ewan I have updated with a longer example
Dec 23, 2025 at 12:15 history edited Ewan CC BY-SA 4.0
added 1828 characters in body
Dec 23, 2025 at 12:00 comment added Ewan I'm not sure why you have the per-customer counters at all. Presumably the coupon is only used when an order completes, and that requires a DB transaction? So there is no need for a shared-state counter?
Dec 22, 2025 at 19:44 comment added tusharRawat Also, should we take a hybrid approach, like updating the high-contention counters in cache and the rest inside DB txns only, both logging and inc/dec-ing the counter value for them?
Dec 22, 2025 at 19:42 comment added tusharRawat Yeah, it seems better for the business to show an upfront screen on the order page (graceful handling on the client) saying the order cannot be placed because the coupon is exhausted, please retry, and on retry not show the coupon any more, versus the user placing an order with the coupon and us then cancelling it with "sorry, the coupon was not available, but you still somehow ordered the product".
Dec 21, 2025 at 22:30 comment added Ewan Yes, but the alternative is to err in the other direction and not take orders, which is bad for business.
Dec 21, 2025 at 20:00 comment added tusharRawat "Maybe you even cancel orders after placement if they have gone over the budget if it's a big deal" - this can be a big deal from a user-experience perspective: telling users that their orders were cancelled because the coupon was not available is likely to be a bad experience.
Dec 21, 2025 at 19:56 comment added tusharRawat We were also thinking of separating the counters into 2 buckets: low contention (like user_offer_id) and high contention (like offer_budget), updating the low-contention counters in a transaction and the high-contention ones via cache counters with a background sync from logs to the DB. This can help us save cache memory: high-cardinality counters like user_offer_id can be read directly from the DB to perform redemptions/refunds, while low-cardinality counters like offer_budget (since the total number of active offers is limited at any point in time) can stay in cache and be operated on there for redemption/refund.
Dec 21, 2025 at 19:43 comment added Ewan Let's say that, for whatever reason, using the DB naively was a bottleneck. You could have a local in-memory cache which rolls up to the DB every 1000 coupons used. Now each box only reads/writes once every ~3 seconds. Worst case you might go over the limit by (number of nodes × 1000), but on the plus side you don't have to pay for Redis. Maybe you even cancel orders after placement if they have gone over the budget, if it's a big deal.
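The roll-up idea above can be sketched as follows (a minimal single-process illustration; the class name, the `flush_to_db` callback, and the batch size of 1000 are assumptions taken from the comment; a real version would also need thread safety and crash handling for unflushed counts):

```python
# Sketch of a local counter that rolls up to the DB in batches.
# Each node counts coupon uses locally and only writes to the shared
# store once per batch, trading strict accuracy for throughput:
# worst-case overshoot is roughly (number of nodes) * batch_size.

class BatchedCounter:
    def __init__(self, flush_to_db, batch_size=1000):
        self.flush_to_db = flush_to_db  # callback that persists a delta
        self.batch_size = batch_size
        self.pending = 0  # uses not yet flushed to the DB

    def record_use(self):
        self.pending += 1
        if self.pending >= self.batch_size:
            self.flush_to_db(self.pending)  # one DB write per batch
            self.pending = 0

# Usage: a fake "DB" that just accumulates the flushed deltas.
db_deltas = []
counter = BatchedCounter(db_deltas.append, batch_size=1000)
for _ in range(2500):
    counter.record_use()
print(sum(db_deltas), counter.pending)  # 2000 flushed, 500 still local
```

The DB lags the true count by at most `batch_size` per node, which is exactly the overshoot bound described in the comment.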
Dec 21, 2025 at 19:17 comment added tusharRawat I mean to say: can a transactional DB like MySQL/PostgreSQL sustain latency under 200-300 ms with 300 requests per second of contention?
Dec 21, 2025 at 19:10 comment added tusharRawat Hey @Ewan, makes sense: we could maintain separate refund counters alongside redemption counters. In redeem we'd validate used − refunds < limit, and in refund ensure used − refunds ≥ 0, with async rollups from logs to the DB. However, this doubles the Redis counter cardinality (e.g., user_offer_id, offer_budget), which can be expensive. When you say the Redis approach might be overkill, do you mean it's better to handle both the redeem and refund flows purely via DB transactions? Would MySQL/PostgreSQL realistically handle ~200 RPS safely here?
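The two-counter bookkeeping described above can be sketched like this (a hypothetical single-process illustration; the class and method names are invented, and in production each check-then-update pair would itself need to be atomic, e.g. inside one Lua script or one DB transaction):

```python
# Sketch of redeem/refund bookkeeping with separate counters.
# Redeem succeeds only while (used - refunds) < limit; refund succeeds
# only while (used - refunds) > 0, so the net redemption count stays
# within [0, limit] at all times.

class OfferCounters:
    def __init__(self, limit: int):
        self.limit = limit
        self.used = 0      # total redemptions recorded
        self.refunds = 0   # total refunds recorded

    def redeem(self) -> bool:
        if self.used - self.refunds < self.limit:
            self.used += 1
            return True
        return False

    def refund(self) -> bool:
        if self.used - self.refunds > 0:
            self.refunds += 1
            return True
        return False

# Usage: an offer allowing one redemption per user.
offer = OfferCounters(limit=1)
print(offer.redeem())  # True: first redemption fits the limit
print(offer.redeem())  # False: net usage already at the limit
print(offer.refund())  # True: frees one redemption slot
print(offer.redeem())  # True: can redeem again after the refund
```

Keeping refunds as a separate monotonic counter (rather than decrementing `used`) preserves an audit trail, at the cost of the doubled cardinality noted in the comment.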
Dec 21, 2025 at 17:56 history edited Ewan CC BY-SA 4.0
added 553 characters in body
Dec 21, 2025 at 17:47 history answered Ewan CC BY-SA 4.0