I recently had an interview where I was asked to show how to prevent race conditions in C or C++ between two threads operating on shared data. I used a mutex as follows:

#include <pthread.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; // statically initialized mutex
int shared_var = 0;

void thread1()
{
    pthread_mutex_lock(&mutex); // Lock the mutex
    if (shared_var == 3)
    {
        // do something, then decrement shared_var 
        shared_var--;
    }
    pthread_mutex_unlock(&mutex); // Unlock the mutex
}

void thread2()
{
    pthread_mutex_lock(&mutex); // Lock the mutex
    shared_var++;
    pthread_mutex_unlock(&mutex); // Unlock the mutex
}

Then I was asked to do this in a lockless way using atomic variables instead of using a mutex.

I got stuck here a bit. Spinlocks aside, is there a way to do this just by making shared_var an atomic variable?

At the hardware level, I guess atomic operations would use something like a compare and swap operation. If thread1 detects that shared_var == 3, but then thread2 updates shared_var by the time thread1 goes to decrement it, what will happen? Will the decrement fail because thread1 detects that shared_var has since been updated? Then what? I'm generally confused about how to implement lockless operations.

5 Replies

At the hardware level, I guess atomic operations would use something like a compare and swap operation.

Correct.

If thread1 detects that shared_var == 3, but then thread2 updates shared_var by the time thread1 goes to decrement it, what will happen?

That can't happen.

Why is it impossible? If there is no mutex / lock, then thread2 can update shared_var while thread1 is doing something else, right?

If the CAS fails, you need to retry the operation.
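For the conditional case in the question (only decrement when the value is 3), a minimal sketch of such a retry loop with std::atomic could look like the following. The function name, the use of compare_exchange_weak, and the memory orderings are illustrative choices, not something from the question itself; note that, unlike the mutex version, the do something here runs after the successful decrement and outside any critical section (see the caveats further down).

#include <atomic>

std::atomic<int> shared_var {0};

void thread1_lockfree()
{
    int expected = shared_var.load(std::memory_order_relaxed);
    while (expected == 3)
    {
        // Try to change 3 into 2 in one atomic step. On failure (including a
        // spurious failure), compare_exchange_weak reloads the current value
        // into `expected`, and the loop condition re-checks it before retrying.
        if (shared_var.compare_exchange_weak(expected, expected - 1,
                                             std::memory_order_acq_rel,
                                             std::memory_order_relaxed))
        {
            // do something, knowing this thread performed the 3 -> 2 transition
            break;
        }
    }
}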

Most modern hardware has a single instruction for atomic fetch_add, so you don't need a CAS or LL/SC retry loop. (x86 xadd = exchange-and-add, ARMv8.1 single-instruction atomics, and I think RISC-V.) See also Is incrementing an int effectively atomic in specific cases? re: how atomic RMWs work in CPU architecture. Taking a lock costs an atomic RMW, so you might as well just do the atomic RMW on the shared variable directly.

#include <atomic>
std::atomic<int> shared_var {0};  // value-initialize; a default-constructed atomic was uninitialized before C++20

...

shared_var.fetch_add(1, std::memory_order_relaxed);

shared_var--;  // if you don't mind the default seq_cst ordering

For your use-case, where you only want to do something if the value before the decrement was 3: if you're touching other shared variables inside that block, you probably still need a mutex to get an actual critical section.

But if not, you could do something like

if (3 == shared_var.fetch_add(-1, std::memory_order_acq_rel)) {
    // do something
}

shared_var-- is equivalent to shared_var.fetch_add(-1, seq_cst), so you could use that instead.
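Spelled with the operator form (just the earlier snippet rewritten, with the stronger default ordering), that check would be:

if (3 == shared_var--) {  // postfix -- on std::atomic is fetch_sub(1) with seq_cst and returns the old value
    // do something
}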

In the version with shared_var protected by a mutex, it doesn't matter whether you decrement it before or after the do something. If the do something reads shared_var, you could just change it to use the constant 3, since in the lock-free version other threads could have modified shared_var before you reach the end of the block.

Other threads could increment and then decrement (unless you have some other synchronization that prevents this) and start running another instance of do something concurrently with the first, so the if body is no longer a critical section. This isn't a drop-in replacement, and there can't be one without re-inventing locking.
