35

When multiple threads share memory, access to it must be protected by critical sections. However, using critical sections introduces the potential for deadlock. How can deadlocks be avoided?

7
  • 12
    Why the downvote? And more importantly, why is this being closed? Commented Jun 29, 2009 at 8:08
  • 2
    make this community wiki - right now it just looks like reputation farming Commented Jun 29, 2009 at 8:51
  • 9
    Hm. It sounded like a perfectly valid question to me, one I'd really like to see answered. Commented Jun 29, 2009 at 9:04
  • 4
    Closing this as "not a real question" was stupid. While this may be a duplicate, this definitely is a real question. Commented Nov 17, 2009 at 12:12
  • 3
    voting for reopen. this is a very important, very interesting question. Commented Nov 17, 2009 at 12:13

8 Answers

15

One way is to use a hierarchy of critical sections. If you ensure that a parent critical section is never entered within one of its children, deadlocks cannot happen. The difficulty is to enforce this hierarchy.
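A common way to enforce such a hierarchy is to give every lock a fixed rank and always acquire locks in ascending rank order, so two threads can never grab the same pair of locks in opposite orders. A minimal sketch in Rust (the `Account` and `rank` names are illustrative, not from the answer):

```rust
use std::sync::Mutex;

// Each account carries a fixed rank: its position in the lock
// hierarchy. Locks are always taken in ascending rank order.
struct Account {
    rank: u32,
    balance: Mutex<i64>,
}

fn transfer(from: &Account, to: &Account, amount: i64) {
    // Decide lock order by rank, not by argument order.
    let (first, second) = if from.rank < to.rank {
        (from, to)
    } else {
        (to, from)
    };
    let mut a = first.balance.lock().unwrap();
    let mut b = second.balance.lock().unwrap();

    // Work out which guard belongs to `from`.
    if from.rank < to.rank {
        *a -= amount;
        *b += amount;
    } else {
        *b -= amount;
        *a += amount;
    }
}
```

Note that this sketch assumes the two accounts have distinct ranks; calling `transfer` with the same account on both sides would self-deadlock.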


Comments

6

The Related list to the right on this page contains a few links that provide interesting information on the topic.

In addition to that list, there are many other SO questions discussing the topic, such as

...and many more

1 Comment

Thanks, I'll have a look at those questions.
2

You can avoid critical sections by using message passing instead (synchronous or asynchronous calls). With synchronous calls, you still have to make sure not to create a circular call, in which thread A asks thread B a question, and B needs to ask A a question before it can respond.

Another option is to make asynchronous calls instead. However, it is then more difficult to get return values back.

Note: a message-passing system is indeed implemented using a critical section that locks the call queue, but it is abstracted away from the caller.
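As a sketch of the idea (using Rust's standard `std::sync::mpsc` channels; the `Msg` and `spawn_counter` names are made up for illustration), a single worker thread owns the shared state and other threads talk to it only through a channel:

```rust
use std::sync::mpsc;
use std::thread;

// The shared state (a counter) is owned by one worker thread; other
// threads mutate it by sending messages, so no user-visible lock is
// needed and lock-ordering deadlocks cannot occur.
enum Msg {
    Add(i64),
    // Synchronous call: the sender includes a channel for the reply.
    Get(mpsc::Sender<i64>),
}

fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut value = 0i64;
        for msg in rx {
            match msg {
                Msg::Add(n) => value += n,
                Msg::Get(reply) => {
                    let _ = reply.send(value);
                }
            }
        }
    });
    tx
}
```

A synchronous call is then just a `send` followed by a `recv` on the reply channel; because the worker handles messages in arrival order, sending `Add(5)`, `Add(2)` and then a `Get` yields 7.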

Comments

2

Among the various mechanisms for guarding critical sections, semaphores and mutexes are the most popular.

  • A semaphore is a waiting (signaling) mechanism, while a mutex is a locking mechanism. The distinction confuses many people, but in short: only the thread that locks a mutex may unlock it. With this in mind...

  • Don't allow a process to lock only part of the resources it needs; if a process needs 5 resources, make it wait until all of them are available. This removes the hold-and-wait condition.

  • If you use a semaphore here, another thread can release a resource held by a blocked thread. In other words, it enables preemption, which removes the no-preemption condition.

These two are, in my view, the basic conditions to attack; the remaining two of the common four (Coffman) conditions can be handled along similar lines.

If you don't agree, please add comments. I have to go now; I will add a cleaner and clearer explanation later.

Comments

2

When I work in C++, the following works for me:

  1. all public methods (excluding the ctor and dtor) of a thread-safe class take the lock

  2. private methods never call public methods

It's not a general deadlock avoidance method.
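The same discipline can be sketched in Rust rather than C++ (illustrative only, not the answer's actual code): public methods lock once at the boundary, and private helpers receive the already-locked data, so they can never re-enter the lock.

```rust
use std::sync::Mutex;

struct Counter {
    inner: Mutex<i64>,
}

impl Counter {
    fn new() -> Self {
        Counter { inner: Mutex::new(0) }
    }

    // Public methods lock exactly once, at the boundary.
    fn add(&self, n: i64) {
        let mut v = self.inner.lock().unwrap();
        Self::add_locked(&mut v, n);
    }

    fn get(&self) -> i64 {
        *self.inner.lock().unwrap()
    }

    // Private helper: operates on the already-locked value, so it
    // cannot call back into a locking public method by mistake.
    fn add_locked(v: &mut i64, n: i64) {
        *v += n;
    }
}
```

Because the helpers take `&mut i64` instead of `&self`, the type system itself prevents them from acquiring the lock a second time.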

Comments

1

You must code multi-threaded programs very carefully. There's no shortcut: you must understand the flow of your program, or you'll be doomed.

Comments

0

The following algorithm can be used to avoid deadlock:

Banker's Algorithm

  • Imposes less stringent conditions than deadlock prevention, in an attempt to get better resource utilization.

  • Safe state: the operating system can guarantee that all current processes can complete their work within a finite time.

  • Unsafe state: does not imply that the system is deadlocked, but that the OS cannot guarantee that all current processes can complete their work within a finite time.

  • Requires that resources be allocated to processes only when the allocations result in safe states.

  • It has a number of weaknesses (such as requiring a fixed number of processes and resources) that prevent it from being implemented in real systems.
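A minimal sketch of the safety check at the heart of the Banker's algorithm, assuming a single resource type for brevity (`is_safe` and its parameter names are illustrative):

```rust
// `allocation[i]` is what process i currently holds, `need[i]` is what
// it may still request, and `available` is the free pool. The system
// is in a safe state if every process can finish in some order, each
// one returning its allocation to the pool when it completes.
fn is_safe(mut available: i32, allocation: &[i32], need: &[i32]) -> bool {
    let n = allocation.len();
    let mut finished = vec![false; n];
    loop {
        // Find a process whose remaining need fits in the free pool.
        let candidate = (0..n).find(|&i| !finished[i] && need[i] <= available);
        match candidate {
            Some(i) => {
                // Let it run to completion and reclaim its resources.
                available += allocation[i];
                finished[i] = true;
            }
            // No runnable process left: safe iff everyone finished.
            None => return finished.iter().all(|&f| f),
        }
    }
}
```

For example, `is_safe(3, &[1, 4, 5], &[3, 2, 6])` is true, because the processes can finish in the order 0, 1, 2, each returning its allocation to the pool before the next one needs it.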

1 Comment

Hello, and welcome. You could post a better answer by explaining more in detail what you have done. Have you used the algorithm? In which situation? Why are you recommending it in this case?
0

One way is to use a non-blocking locking function. For example, in Rust you could use std::sync::Mutex::try_lock instead of std::sync::Mutex::lock.

So if you have this example code:

fn transfer(tx: &Mutex<i32>, rx: &Mutex<i32>, amount: i32) {
    let mut tx = tx.lock().unwrap();
    let mut rx = rx.lock().unwrap();

    *tx -= amount;
    *rx += amount;
}

You could instead do something like this:

fn transfer(tx: &Mutex<i32>, rx: &Mutex<i32>, amount: i32) {
    loop {
        // Attempt to lock both mutexes
        let mut tx = tx.try_lock();
        let mut rx = rx.try_lock();

        // If both locks were acquired, i.e.
        // if neither mutex is currently
        // held by another thread
        if let Ok(ref mut tx) = tx {
            if let Ok(ref mut rx) = rx {
                // Perform the operations needed on
                // the values inside the mutexes
                **tx -= amount;
                **rx += amount;

                // Exit the loop
                break;
            }
        }
        // If at least one of the locks was not
        // acquired, both guards are dropped here,
        // releasing whichever mutex was held,
        // and the loop tries again.

        // You may also want to sleep the thread
        // here for a short period if you expect
        // the mutexes to stay locked for a while.
    }
}

Comments
