Chapter 8 Flashcards
System Model
A system consists of a finite number of resources to be distributed among a
number of competing threads. The resources may be partitioned into several
types (or classes), each consisting of some number of identical instances. CPU
cycles, files, and I/O devices (such as network interfaces and DVD drives) are
examples of resource types. If a system has four CPUs, then the resource type
CPU has four instances. Similarly, the resource type network may have two
instances. If a thread requests an instance of a resource type, the allocation
of any instance of the type should satisfy the request. If it does not, then the
instances are not identical, and the resource type classes have not been defined
properly.
The various synchronization tools discussed in Chapter 6, such as mutex
locks and semaphores, are also system resources; and on contemporary computer
systems, they are the most common sources of deadlock. However, defining
resource classes is not a problem here. A lock is typically associated with a specific data
structure—that is, one lock may be used to protect access to a queue, another
to protect access to a linked list, and so forth. For that reason, each instance of
a lock is typically assigned its own resource class.
Note that throughout this chapter we discuss kernel resources, but threads
may use resources from other processes (for example, via interprocess communication),
and those resource uses can also result in deadlock. Such deadlocks
are not the concern of the kernel and thus not described here.
A thread must request a resource before using it and must release the
resource after using it. A thread may request as many resources as it requires
to carry out its designated task. Obviously, the number of resources requested
may not exceed the total number of resources available in the system. In other
words, a thread cannot request two network interfaces if the system has only
one.
Under the normal mode of operation, a thread may utilize a resource in
only the following sequence:
1. Request. The thread requests the resource. If the request cannot be
granted immediately (for example, if a mutex lock is currently held by
another thread), then the requesting thread must wait until it can acquire
the resource.
2. Use. The thread can operate on the resource (for example, if the resource
is a mutex lock, the thread can access its critical section).
3. Release. The thread releases the resource.
The request and release of resources may be system calls, as explained in
Chapter 2. Examples are the request() and release() of a device, open()
and close() of a file, and allocate() and free() memory system calls.
Similarly, as we saw in Chapter 6, request and release can be accomplished
through the wait() and signal() operations on semaphores and through
acquire() and release() of a mutex lock. For each use of a kernel-managed
resource by a thread, the operating system checks to make sure that the thread
has requested and has been allocated the resource. A system table records
whether each resource is free or allocated. For each resource that is allocated,
the table also records the thread to which it is allocated. If a thread requests
a resource that is currently allocated to another thread, it can be added to a
queue of threads waiting for this resource.
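The system table described above can be sketched minimally in C. This is an illustrative assumption, not kernel code: it assumes single-instance resources and invented names (`owner`, `request_resource`, `release_resource`).

```c
#include <assert.h>

#define NUM_RESOURCES 4
#define NO_OWNER (-1)

/* One table entry per single-instance resource: either free (NO_OWNER)
   or recorded as allocated to a particular thread id. */
static int owner[NUM_RESOURCES] = { NO_OWNER, NO_OWNER, NO_OWNER, NO_OWNER };

/* Returns 1 if the resource was granted to tid, 0 if tid must wait
   (in a real system, the caller would be added to the resource's queue). */
int request_resource(int rid, int tid) {
    if (owner[rid] == NO_OWNER) {
        owner[rid] = tid;     /* record the allocation in the table */
        return 1;
    }
    return 0;                 /* currently allocated to another thread */
}

void release_resource(int rid, int tid) {
    assert(owner[rid] == tid);   /* only the holding thread may release */
    owner[rid] = NO_OWNER;
}
```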
A set of threads is in a deadlocked state when every thread in the set is waiting
for an event that can be caused only by another thread in the set. The events
with which we are mainly concerned here are resource acquisition and release.
The resources are typically logical (for example, mutex locks, semaphores, and
files); however, other types of events may result in deadlocks, including reading
from a network interface or the IPC (interprocess communication) facilities
discussed in Chapter 3.
To illustrate a deadlocked state, we refer back to the dining-philosophers
problem from Section 7.1.3. In this situation, resources are represented by
chopsticks. If all the philosophers get hungry at the same time, and each
philosopher grabs the chopstick on her left, there are no longer any available
chopsticks. Each philosopher is then blocked waiting for her right chopstick to
become available.
Developers of multithreaded applications must remain aware of the possibility
of deadlocks. The locking tools presented in Chapter 6 are designed
to avoid race conditions. However, in using these tools, developers must pay
careful attention to how locks are acquired and released. Otherwise, deadlock
can occur, as described next.
A deadlock
is a situation in which a set of threads is blocked because each thread is holding a resource and waiting for another resource acquired by some other thread.
As an analogy, consider two trains approaching each other on a single track: once they are in front of each other, neither can move. A similar situation occurs in operating systems when two or more threads hold some resources and wait for resources held by the other(s).
Deadlock in Multithreaded Applications
Prior to examining how deadlock issues can be identified and managed,
we first illustrate how deadlock can occur in a multithreaded
Pthreads program using POSIX mutex locks. The pthread_mutex_init()
function initializes an unlocked mutex. Mutex locks are acquired and
released using pthread_mutex_lock() and pthread_mutex_unlock(),
respectively. If a thread attempts to acquire a locked mutex, the call to
pthread_mutex_lock() blocks the thread until the owner of the mutex lock
invokes pthread_mutex_unlock().
Two mutex locks are created and initialized in the following code example:

pthread_mutex_t first_mutex;
pthread_mutex_t second_mutex;
pthread_mutex_init(&first_mutex, NULL);
pthread_mutex_init(&second_mutex, NULL);
Next, two threads—thread_one and thread_two—are created, and both
these threads have access to both mutex locks. thread_one and thread_two
run in the functions do_work_one() and do_work_two(), respectively, as
shown in Figure 8.1.
In this example, thread_one attempts to acquire the mutex locks in the
order (1) first_mutex, (2) second_mutex. At the same time, thread_two
attempts to acquire the mutex locks in the order (1) second_mutex, (2)
first_mutex. Deadlock is possible if thread_one acquires first_mutex
while thread_two acquires second_mutex.
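Figure 8.1 is not reproduced here, but a minimal sketch consistent with the description above might look as follows (the function bodies are placeholders):

```c
#include <pthread.h>

pthread_mutex_t first_mutex  = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t second_mutex = PTHREAD_MUTEX_INITIALIZER;

/* thread_one: acquires first_mutex, then second_mutex */
void *do_work_one(void *param) {
    (void)param;
    pthread_mutex_lock(&first_mutex);
    pthread_mutex_lock(&second_mutex);
    /* ... do some work ... */
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
    pthread_exit(0);
}

/* thread_two: acquires the same locks in the opposite order */
void *do_work_two(void *param) {
    (void)param;
    pthread_mutex_lock(&second_mutex);
    pthread_mutex_lock(&first_mutex);
    /* ... do some work ... */
    pthread_mutex_unlock(&first_mutex);
    pthread_mutex_unlock(&second_mutex);
    pthread_exit(0);
}
```

Run concurrently, the two threads may interleave so that each holds one lock and blocks on the other; run one after the other, both complete normally.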
Livelock
is another form of liveness failure. It is similar to deadlock; both
prevent two or more threads from proceeding, but the threads are unable to
proceed for different reasons. Whereas deadlock occurs when every thread
in a set is blocked waiting for an event that can be caused only by another
thread in the set, livelock occurs when a thread continuously attempts an action
that fails. Livelock is similar to what sometimes happens when two people
attempt to pass in a hallway: One moves to his right, the other to her left, still
obstructing each other’s progress. Then he moves to his left, and she moves to her right, and so forth. They aren’t blocked, but they aren’t making any
progress.
Livelock can be illustrated with the Pthreads pthread_mutex_trylock()
function, which attempts to acquire a mutex lock without blocking. The code
example in Figure 8.2 rewrites the example from Figure 8.1 so that it now uses
pthread_mutex_trylock(). This situation can lead to livelock if thread_one
acquires first_mutex, followed by thread_two acquiring second_mutex.
Each thread then invokes pthread_mutex_trylock(), which fails; each releases
its lock, and the two threads repeat the same actions indefinitely.
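Figure 8.2 is likewise not shown; a sketch of the trylock-based rewrite, using the same names as above, could look like this (do_work_two would be symmetric, starting from second_mutex, which is what makes indefinite mutual retry possible):

```c
#include <pthread.h>

pthread_mutex_t first_mutex  = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t second_mutex = PTHREAD_MUTEX_INITIALIZER;

/* thread_one: hold first_mutex and *try* second_mutex; on failure,
   release first_mutex and start over.  Two such threads retrying in
   lockstep never block, yet never make progress: livelock. */
void *do_work_one(void *param) {
    int done = 0;
    (void)param;
    while (!done) {
        pthread_mutex_lock(&first_mutex);
        if (pthread_mutex_trylock(&second_mutex) == 0) {
            /* ... do some work ... */
            pthread_mutex_unlock(&second_mutex);
            done = 1;
        }
        pthread_mutex_unlock(&first_mutex);
    }
    pthread_exit(0);
}
```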
Livelock typically occurs when threads retry failing operations at the same
time. It thus can generally be avoided by having each thread retry the failing
operation at random times. This is precisely the approach taken by Ethernet
networks when a network collision occurs. Rather than trying to retransmit a
packet immediately after a collision occurs, a host involved in a collision will
backoff a random period of time before attempting to transmit again.
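The randomized-backoff idea can be sketched as a small helper. The helper name and the delay range are illustrative assumptions, not from the text:

```c
#include <pthread.h>
#include <stdlib.h>
#include <time.h>

pthread_mutex_t first_mutex  = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t second_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Acquire a, then try b; on failure, release a and sleep for a random
   delay before retrying, so two competing threads are unlikely to keep
   retrying in lockstep (the pattern that produces livelock). */
void lock_pair_with_backoff(pthread_mutex_t *a, pthread_mutex_t *b) {
    for (;;) {
        pthread_mutex_lock(a);
        if (pthread_mutex_trylock(b) == 0)
            return;                       /* both locks now held */
        pthread_mutex_unlock(a);
        struct timespec ts = { 0, (rand() % 1000 + 1) * 1000L };
        nanosleep(&ts, NULL);             /* random backoff, up to ~1 ms */
    }
}
```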
Livelock is less common than deadlock but nonetheless is a challenging
issue in designing concurrent applications, and like deadlock, it may only occur
under specific scheduling circumstances.
A deadlock situation can arise if the following four conditions hold simultaneously
in a system:
- Mutual exclusion. At least one resource must be held in a nonsharable
mode; that is, only one thread at a time can use the resource. If another
thread requests that resource, the requesting thread must be delayed until
the resource has been released.
- Hold and wait. A thread must be holding at least one resource and
waiting to acquire additional resources that are currently being held by
other threads.
- No preemption. Resources cannot be preempted; that is, a resource can
be released only voluntarily by the thread holding it, after that thread has
completed its task.
- Circular wait. A set {T0, T1, …, Tn} of waiting threads must exist such that
T0 is waiting for a resource held by T1, T1 is waiting for a resource held
by T2, …, Tn−1 is waiting for a resource held by Tn, and Tn is waiting for a
resource held by T0.
We emphasize that all four conditions must hold for a deadlock to occur.
Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph called
a system resource-allocation graph. This graph consists of a set of vertices V
and a set of edges E. The set of vertices V is partitioned into two different types
of nodes: T = {T1, T2, …, Tn}, the set consisting of all the active threads in the
system, and R = {R1, R2, …, Rm}, the set consisting of all resource types in the
system.
A directed edge from thread Ti to resource type Rj is denoted by Ti → Rj;
it signifies that thread Ti has requested an instance of resource type Rj and is
currently waiting for that resource. A directed edge from resource type Rj to
thread Ti is denoted by Rj → Ti; it signifies that an instance of resource type
Rj has been allocated to thread Ti. A directed edge Ti → Rj is called a request
edge; a directed edge Rj → Ti is called an assignment edge.
Pictorially, we represent each thread Ti as a circle and each resource type
Rj as a rectangle. As a simple example, the resource allocation graph shown
in Figure 8.3 illustrates the deadlock situation from the program in Figure 8.1.
Since resource type Rj may have more than one instance, we represent each
such instance as a dot within the rectangle. Note that a request edge points
only to the rectangle Rj, whereas an assignment edge must also designate one
of the dots in the rectangle.
When thread Ti requests an instance of resource type Rj, a request edge is
inserted in the resource-allocation graph. When this request can be fulfilled,
the request edge is instantaneously transformed to an assignment edge. When
the thread no longer needs access to the resource, it releases the resource. As a
result, the assignment edge is deleted.
The resource-allocation graph shown in Figure 8.4 depicts the following
situation.
* The sets T, R, and E:
◦ T = {T1, T2, T3}
◦ R = {R1, R2, R3, R4}
◦ E = {T1 → R1, T2 → R3, R1 → T2, R2 → T2, R2 → T1, R3 → T3}
* Resource instances:
◦ One instance of resource type R1
◦ Two instances of resource type R2
◦ One instance of resource type R3
◦ Three instances of resource type R4
* Thread states:
◦ Thread T1 is holding an instance of resource type R2 and is waiting for
an instance of resource type R1.
◦ Thread T2 is holding an instance of R1 and an instance of R2 and is
waiting for an instance of R3.
◦ Thread T3 is holding an instance of R3.
Given the definition of a resource-allocation graph, it can be shown that, if
the graph contains no cycles, then no thread in the system is deadlocked. If the
graph does contain a cycle, then a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a
deadlock has occurred. If the cycle involves only a set of resource types, each
of which has only a single instance, then a deadlock has occurred. Each thread
involved in the cycle is deadlocked. In this case, a cycle in the graph is both a
necessary and a sufficient condition for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred. In this case, a cycle in the graph is a
necessary but not a sufficient condition for the existence of deadlock.
To illustrate this concept, we return to the resource-allocation graph
depicted in Figure 8.4. Suppose that thread T3 requests an instance of resource
type R2. Since no resource instance is currently available, we add a request edge T3 → R2 to the graph (Figure 8.5). At this point, two minimal cycles exist
in the system:
T1 → R1 → T2 → R3 → T3 → R2 → T1
T2 → R3 → T3 → R2 → T2
Threads T1, T2, and T3 are deadlocked. Thread T2 is waiting for the resource
R3, which is held by thread T3. Thread T3 is waiting for either thread T1 or
thread T2 to release resource R2. In addition, thread T1 is waiting for thread T2
to release resource R1.
Now consider the resource-allocation graph in Figure 8.6. In this example,
we also have a cycle:
T1 → R1 → T3 → R2 → T1
However, there is no deadlock. Observe that thread T4 may release its instance
of resource type R2. That resource can then be allocated to T3, breaking the
cycle.
In summary, if a resource-allocation graph does not have a cycle, then the
system is not in a deadlocked state. If there is a cycle, then the system may or
may not be in a deadlocked state. This observation is important when we deal
with the deadlock problem.
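The cycle check just described can be implemented with a standard depth-first search. The node numbering below is an illustrative assumption (threads and resource types share one node space; the detection does not care which is which):

```c
#include <assert.h>
#include <string.h>

#define NNODES 7           /* illustrative: T1-T3 -> 0-2, R1-R4 -> 3-6 */

static int edge[NNODES][NNODES];   /* edge[u][v] == 1 means u -> v */
static int state[NNODES];          /* 0 = unvisited, 1 = on stack, 2 = done */

static int dfs(int u) {
    state[u] = 1;
    for (int v = 0; v < NNODES; v++)
        if (edge[u][v]) {
            if (state[v] == 1) return 1;           /* back edge: cycle */
            if (state[v] == 0 && dfs(v)) return 1;
        }
    state[u] = 2;
    return 0;
}

/* Returns 1 if the resource-allocation graph contains a cycle. */
int has_cycle(void) {
    memset(state, 0, sizeof state);
    for (int u = 0; u < NNODES; u++)
        if (state[u] == 0 && dfs(u))
            return 1;
    return 0;
}
```

Loading the edges of Figure 8.4 reports no cycle; adding the request edge T3 → R2 (Figure 8.5) makes the cycle appear.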
Methods for Handling Deadlocks
Generally speaking, we can deal with the deadlock problem in one of three
ways:
* We can ignore the problem altogether and pretend that deadlocks never
occur in the system.
* We can use a protocol to prevent or avoid deadlocks, ensuring that the
system will never enter a deadlocked state.
* We can allow the system to enter a deadlocked state, detect it, and recover.
The first solution is the one used by most operating systems, including Linux
and Windows. It is then up to kernel and application developers to write
programs that handle deadlocks, typically using approaches outlined in the
second solution. Some systems—such as databases—adopt the third solution,
allowing deadlocks to occur and then managing the recovery.
Next, we elaborate briefly on the three methods for handling deadlocks.
Then, in Section 8.5 through Section 8.8, we present detailed algorithms. Before
proceeding, we should mention that some researchers have argued that none of
the basic approaches alone is appropriate for the entire spectrum of
resource-allocation problems in operating systems. The basic approaches can be
combined, however, allowing us to select an optimal approach for each class of
resources in a system.
To ensure that deadlocks never occur, the system can use either a
deadlock-prevention or a deadlock-avoidance scheme. Deadlock prevention provides a
set of methods to ensure that at least one of the necessary conditions (Section
8.3.1) cannot hold. These methods prevent deadlocks by constraining how
requests for resources can be made. We discuss these methods in Section 8.5.
Deadlock avoidance requires that the operating system be given additional
information in advance concerning which resources a thread will request
and use during its lifetime. With this additional knowledge, the operating system
can decide for each request whether or not the thread should wait. To
decide whether the current request can be satisfied or must be delayed, the
system must consider the resources currently available, the resources currently
allocated to each thread, and the future requests and releases of each thread.
We discuss these schemes in Section 8.6.
If a system does not employ either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may arise. In this environment,
the system can provide an algorithm that examines the state of the system to
determine whether a deadlock has occurred and an algorithm to recover from
the deadlock (if a deadlock has indeed occurred). We discuss these issues in
Section 8.7 and Section 8.8.
In the absence of algorithms to detect and recover from deadlocks, we may
arrive at a situation in which the system is in a deadlocked state yet has no
way of recognizing what has happened. In this case, the undetected deadlock
will cause the system’s performance to deteriorate, because resources are being
held by threads that cannot run and because more and more threads, as they
make requests for resources, will enter a deadlocked state. Eventually, the
system will stop functioning and will need to be restarted manually.
Although this method may not seem to be a viable approach to the deadlock
problem, it is nevertheless used in most operating systems, as mentioned
earlier. Expense is one important consideration. Ignoring the possibility of
deadlocks is cheaper than the other approaches. Since in many systems, deadlocks
occur infrequently (say, once per month), the extra expense of the other
methods may not seem worthwhile.
In addition, methods used to recover from other liveness conditions, such
as livelock, may be used to recover from deadlock. In some circumstances, a
system is suffering from a liveness failure but is not in a deadlocked state.
We see this situation, for example, with a real-time thread running at the
highest priority (or any thread running on a nonpreemptive scheduler) and
never returning control to the operating system. The system must have manual
recovery methods for such conditions and may simply use those techniques for
deadlock recovery.
Deadlock Prevention: Mutual Exclusion
The mutual-exclusion condition must hold. That is, at least one resource must
be nonsharable. Sharable resources do not require mutually exclusive access
and thus cannot be involved in a deadlock. Read-only files are a good example
of a sharable resource. If several threads attempt to open a read-only file at
the same time, they can be granted simultaneous access to the file. A thread
never needs to wait for a sharable resource. In general, however, we cannot
prevent deadlocks by denying the mutual-exclusion condition, because some
resources are intrinsically nonsharable. For example, a mutex lock cannot be
simultaneously shared by several threads.
Deadlock Prevention: Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must
guarantee that, whenever a thread requests a resource, it does not hold any
other resources. One protocol that we can use requires each thread to request
and be allocated all its resources before it begins execution. This is, of course,
impractical for most applications due to the dynamic nature of requesting
resources.
An alternative protocol allows a thread to request resources only when it
has none. A thread may request some resources and use them. Before it can
request any additional resources, it must release all the resources that it is
currently allocated.
Both these protocols have two main disadvantages. First, resource utilization
may be low, since resources may be allocated but unused for a long period.
For example, a thread may be allocated a mutex lock for its entire execution,
yet only require it for a short duration. Second, starvation is possible. A thread
that needs several popular resources may have to wait indefinitely, because at
least one of the resources that it needs is always allocated to some other thread.
Deadlock Prevention: No Preemption
The third necessary condition for deadlocks is that there be no preemption
of resources that have already been allocated. To ensure that this condition
does not hold, we can use the following protocol. If a thread is holding some
resources and requests another resource that cannot be immediately allocated
to it (that is, the thread must wait), then all resources the thread is currently
holding are preempted. In other words, these resources are implicitly released.
The preempted resources are added to the list of resources for which the thread
is waiting. The thread will be restarted only when it can regain its old resources,
as well as the new ones that it is requesting.
Alternatively, if a thread requests some resources, we first check whether
they are available. If they are, we allocate them. If they are not, we check
whether they are allocated to some other thread that is waiting for additional
resources. If so, we preempt the desired resources from the waiting thread and
allocate them to the requesting thread. If the resources are neither available nor
held by a waiting thread, the requesting thread must wait. While it is waiting,
some of its resources may be preempted, but only if another thread requests
them. A thread can be restarted only when it is allocated the new resources
it is requesting and recovers any resources that were preempted while it was
waiting.
This protocol is often applied to resources whose state can be easily saved
and restored later, such as CPU registers and database transactions. It cannot
generally be applied to such resources as mutex locks and semaphores,
precisely the type of resources where deadlock occurs most commonly.
Deadlock Prevention: Circular Wait
The three options presented thus far for deadlock prevention are generally
impractical in most situations. However, the fourth and final condition for
deadlocks — the circular-wait condition — presents an opportunity for a
practical solution by invalidating one of the necessary conditions. One way
to ensure that this condition never holds is to impose a total ordering of
all resource types and to require that each thread requests resources in an
increasing order of enumeration.
To illustrate, we let R = {R1, R2, …, Rm} be the set of resource types. We
assign to each resource type a unique integer number, which allows us to
compare two resources and to determine whether one precedes another in our
ordering. Formally, we define a one-to-one function F: R → N, where N is the set
of natural numbers. We can accomplish this scheme in an application program
by developing an ordering among all synchronization objects in the system.
For example, the lock ordering in the Pthread program shown in Figure 8.1
could be
F(first_mutex) = 1
F(second_mutex) = 5
We can now consider the following protocol to prevent deadlocks: Each
thread can request resources only in an increasing order of enumeration. That
is, a thread can initially request an instance of a resource—say, Ri. After that,
the thread can request an instance of resource Rj if and only if F(Rj) > F(Ri).
For example, using the function defined above, a thread that wants to use
both first_mutex and second_mutex at the same time must first request
first_mutex and then second_mutex. Alternatively, we can require that a
thread requesting an instance of resource Rj must have released any resources
Ri such that F(Ri) ≥ F(Rj). Note also that if several instances of the same resource
type are needed, a single request for all of them must be issued.
If these two protocols are used, then the circular-wait condition cannot
hold. We can demonstrate this fact by assuming that a circular wait exists
(proof by contradiction). Let the set of threads involved in the circular wait be
{T0, T1, …, Tn}, where Ti is waiting for a resource Ri, which is held by thread
Ti+1. (Modulo arithmetic is used on the indexes, so that Tn is waiting for a
resource Rn held by T0.) Then, since thread Ti+1 is holding resource Ri while
requesting resource Ri+1, we must have F(Ri) < F(Ri+1) for all i. But this condition
means that F(R0) < F(R1) < … < F(Rn) < F(R0). By transitivity, F(R0) < F(R0),
which is impossible. Therefore, there can be no circular wait.
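The increasing-order protocol can be sketched as a small helper that takes each lock together with its assigned F value. The helper name and signature are assumptions for illustration:

```c
#include <assert.h>
#include <pthread.h>

pthread_mutex_t first_mutex  = PTHREAD_MUTEX_INITIALIZER;   /* F = 1 */
pthread_mutex_t second_mutex = PTHREAD_MUTEX_INITIALIZER;   /* F = 5 */

/* Acquire two locks strictly in increasing order of their F values.
   If every thread acquires locks only through this helper, no circular
   wait over these locks can form, regardless of caller argument order. */
void lock_ordered(pthread_mutex_t *m, int fm, pthread_mutex_t *n, int fn) {
    assert(fm != fn);            /* distinct resource classes required */
    if (fm < fn) {
        pthread_mutex_lock(m);
        pthread_mutex_lock(n);
    } else {
        pthread_mutex_lock(n);
        pthread_mutex_lock(m);
    }
}
```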
Keep in mind that developing an ordering, or hierarchy, does not in itself
prevent deadlock. It is up to application developers to write programs that
follow the ordering. However, establishing a lock ordering can be difficult,
especially on a system with hundreds—or thousands—of locks. To address
this challenge, many Java developers have adopted the strategy of using
the method System.identityHashCode(Object) (which returns the default
hash code value of the Object parameter it has been passed) as the function
for ordering lock acquisition.
It is also important to note that imposing a lock ordering does not guarantee
deadlock prevention if locks can be acquired dynamically. For example,
assume we have a function that transfers funds between two accounts.
To prevent a race condition, each account has an associated mutex lock that
is obtained from a get_lock() function such as that shown in Figure 8.7.
Deadlock is possible if two threads simultaneously invoke the transaction()
function, transposing different accounts. That is, one thread might invoke
transaction(checking_account, savings_account, 25.0)
and another might invoke
transaction(savings_account, checking_account, 50.0)
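Figure 8.7 is not shown here. One common way to repair this transposed-lock deadlock, sketched below as an assumption rather than the book's exact code, is to order the two account locks by address so that every thread acquires them in the same global order:

```c
#include <pthread.h>
#include <stdint.h>

typedef struct {
    double balance;
    pthread_mutex_t lock;
} account;

/* Transfer funds while locking both accounts in a globally consistent
   order (by address).  Two concurrent, transposed calls then acquire the
   same lock first, so they serialize instead of deadlocking. */
void transaction(account *from, account *to, double amount) {
    pthread_mutex_t *lo, *hi;
    if ((uintptr_t)from < (uintptr_t)to) {
        lo = &from->lock; hi = &to->lock;
    } else {
        lo = &to->lock;   hi = &from->lock;
    }
    pthread_mutex_lock(lo);
    pthread_mutex_lock(hi);
    from->balance -= amount;
    to->balance   += amount;
    pthread_mutex_unlock(hi);
    pthread_mutex_unlock(lo);
}
```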
Deadlock Avoidance
Deadlock-prevention algorithms, as discussed in Section 8.5, prevent deadlocks
by limiting how requests can be made. The limits ensure that at least
one of the necessary conditions for deadlock cannot occur. Possible side effects
of preventing deadlocks by this method, however, are low device utilization
and reduced system throughput.
An alternative method for avoiding deadlocks is to require additional
information about how resources are to be requested. For example, in a system
with resources R1 and R2, the system might need to know that thread P
will request first R1 and then R2 before releasing both resources, whereas
thread Q will request R2 and then R1. With this knowledge of the complete
sequence of requests and releases for each thread, the system can decide for
each request whether or not the thread should wait in order to avoid a possible
future deadlock. Each request requires that in making this decision the system
consider the resources currently available, the resources currently allocated to
each thread, and the future requests and releases of each thread.
The various algorithms that use this approach differ in the amount and
type of information required. The simplest and most useful model requires that
each thread declare the maximum number of resources of each type that it may
need. Given this a priori information, it is possible to construct an algorithm
that ensures that the system will never enter a deadlocked state. A
deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that a circular-wait condition can never exist. The resource-allocation
state is defined by the number of available and allocated resources and the
maximum demands of the threads. In the following sections, we explore two
deadlock-avoidance algorithms.
LINUX LOCKDEP TOOL
Although ensuring that resources are acquired in the proper order is the
responsibility of kernel and application developers, certain software can be
used to verify that locks are acquired in the proper order. To detect possible
deadlocks, Linux provides lockdep, a tool with rich functionality that can be
used to verify locking order in the kernel. lockdep is designed to be enabled
on a running kernel as it monitors usage patterns of lock acquisitions and
releases against a set of rules for acquiring and releasing locks. Two examples
follow, but note that lockdep provides significantly more functionality than
what is described here:
* The order in which locks are acquired is dynamically maintained by the
system. If lockdep detects locks being acquired out of order, it reports a
possible deadlock condition.
* In Linux, spinlocks can be used in interrupt handlers. A possible source
of deadlock occurs when the kernel acquires a spinlock that is also used
in an interrupt handler. If the interrupt occurs while the lock is being
held, the interrupt handler preempts the kernel code currently holding
the lock and then spins while attempting to acquire the lock, resulting
in deadlock. The general strategy for avoiding this situation is to disable
interrupts on the current processor before acquiring a spinlock that is
also used in an interrupt handler. If lockdep detects that interrupts are
enabled while kernel code acquires a lock that is also used in an interrupt
handler, it will report a possible deadlock scenario.
lockdep was developed to be used as a tool in developing or modifying
code in the kernel and not to be used on production systems, as it can
significantly slow down a system. Its purpose is to test whether software
such as a new device driver or kernel module provides a possible source
of deadlock. The designers of lockdep have reported that within a few
years of its development in 2006, the number of deadlocks from system
reports had been reduced by an order of magnitude. Although lockdep
was originally designed only for use in the kernel, recent revisions of this
tool can now be used for detecting deadlocks in user applications using
Pthreads mutex locks. Further details on the lockdep tool can be found at
https://www.kernel.org/doc/Documentation/locking/lockdep-design.txt.
Safe State
A state is safe if the system can allocate resources to each thread (up to its
maximum) in some order and still avoid a deadlock. More formally, a system
is in a safe state only if there exists a safe sequence. A sequence of threads
<T1, T2, …, Tn> is a safe sequence for the current allocation state if, for each
Ti, the resource requests that Ti can still make can be satisfied by the currently
available resources plus the resources held by all Tj, with j < i. In this situation,
if the resources that Ti needs are not immediately available, then Ti can wait
until all Tj have finished. When they have finished, Ti can obtain all of its
needed resources, complete its designated task, return its allocated resources,
and terminate. When Ti terminates, Ti+1 can obtain its needed resources, and
so on. If no such sequence exists, then the system state is said to be unsafe.
A safe state is not a deadlocked state. Conversely, a deadlocked state is
an unsafe state. Not all unsafe states are deadlocks, however (Figure 8.8).
An unsafe state may lead to a deadlock. As long as the state is safe, the
operating system can avoid unsafe (and deadlocked) states. In an unsafe state,
the operating system cannot prevent threads from requesting resources in such
a way that a deadlock occurs. The behavior of the threads controls unsafe states.
To illustrate, consider a system with twelve resources and three threads:
T0, T1, and T2. Thread T0 requires ten resources, thread T1 may need as many
as four, and thread T2 may need up to nine resources. Suppose that, at time
t0, thread T0 is holding five resources, thread T1 is holding two resources, and
thread T2 is holding two resources. (Thus, there are three free resources.)
      Maximum Needs    Currently Allocated
T0         10                  5
T1          4                  2
T2          9                  2
At time t0, the system is in a safe state. The sequence <T1, T0, T2> satisfies
the safety condition. Thread T1 can immediately be allocated all its resources
and then return them (the system will then have five available resources); then
thread T0 can get all its resources and return them (the system will then have ten
available resources); and finally thread T2 can get all its resources and return
them (the system will then have all twelve resources available).
A system can go from a safe state to an unsafe state. Suppose that, at time t1,
thread T2 requests and is allocated one more resource. The system is no longer
in a safe state. At this point, only thread T1 can be allocated all its resources.
When it returns them, the system will have only four available resources.
Since thread T0 is allocated five resources but has a maximum of ten, it may
request five more resources. If it does so, it will have to wait, because they are
unavailable. Similarly, thread T2 may request six additional resources and have
to wait, resulting in a deadlock. Our mistake was in granting the request from
thread T2 for one more resource. If we had made T2 wait until either of the other threads had finished and released its resources, then we could have avoided
the deadlock.
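The worked example above can be checked mechanically. The following sketch, for a single resource type, searches greedily for a safe sequence; the function name and array-based interface are assumptions for illustration:

```c
#include <assert.h>

/* Single resource type: returns 1 if some safe sequence exists.
   alloc[i]   = resources thread i currently holds,
   maxneed[i] = its declared maximum,
   avail      = free resources.  Assumes n <= 16 for this sketch. */
int is_safe(const int *alloc, const int *maxneed, int n, int avail) {
    int finished[16] = {0};
    int work = avail, done = 0;
    while (done < n) {
        int progressed = 0;
        for (int i = 0; i < n; i++) {
            /* Thread i can finish if its remaining need fits in work. */
            if (!finished[i] && maxneed[i] - alloc[i] <= work) {
                work += alloc[i];    /* i completes and returns its resources */
                finished[i] = 1;
                done++;
                progressed = 1;
            }
        }
        if (!progressed)
            return 0;                /* no thread can finish: unsafe */
    }
    return 1;
}
```

On the example: with allocations (5, 2, 2) and three free resources the state is safe; after granting T2 one more resource it is not.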
Given the concept of a safe state, we can define avoidance algorithms that
ensure that the system will never deadlock. The idea is simply to ensure that the
system will always remain in a safe state. Initially, the system is in a safe state.
Whenever a thread requests a resource that is currently available, the system
must decide whether the resource can be allocated immediately or the thread
must wait. The request is granted only if the allocation leaves the system in a
safe state.
In this scheme, if a thread requests a resource that is currently available, it
may still have to wait. Thus, resource utilization may be lower than it would
otherwise be.
Resource-Allocation-Graph Algorithm
If we have a resource-allocation system with only one instance of each resource
type, we can use a variant of the resource-allocation graph defined in Section
8.3.2 for deadlock avoidance. In addition to the request and assignment edges
already described, we introduce a new type of edge, called a claim edge. A
claim edge Ti→Rj indicates that thread Ti may request resource Rj at some time
in the future. This edge resembles a request edge in direction but is represented
in the graph by a dashed line. When thread Ti requests resource Rj, the claim
edge Ti → Rj is converted to a request edge. Similarly, when a resource Rj is
released by Ti, the assignment edge Rj → Ti is reconverted to a claim edge
Ti → Rj.
Note that the resources must be claimed a priori in the system. That is,
before thread Ti starts executing, all its claim edges must already appear in the
resource-allocation graph. We can relax this condition by allowing a claim edge
Ti → Rj to be added to the graph only if all the edges associated with thread Ti
are claim edges.
Now suppose that thread Ti requests resource Rj. The request can be
granted only if converting the request edge Ti → Rj to an assignment edge
Rj → Ti does not result in the formation of a cycle in the resource-allocation
graph. We check for safety by using a cycle-detection algorithm. An algorithm
for detecting a cycle in this graph requires on the order of n² operations, where n
is the number of threads in the system.
If no cycle exists, then the allocation of the resource will leave the system
in a safe state. If a cycle is found, then the allocation will put the system in
an unsafe state. In that case, thread Ti will have to wait for its requests to be
satisfied.
To illustrate this algorithm, we consider the resource-allocation graph of
Figure 8.9. Suppose that T2 requests R2. Although R2 is currently free, we
cannot allocate it to T2, since this action will create a cycle in the graph (Figure
8.10). A cycle, as mentioned, indicates that the system is in an unsafe state. If
T1 requests R2, and T2 requests R1, then a deadlock will occur.
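The grant test can be sketched by tentatively converting the request edge and running a cycle check that includes claim edges. The scenario in the test below follows the text's description of Figures 8.9 and 8.10 (R1 assigned to T1, T2 requesting R1, both threads holding claim edges on R2); the node numbering is an illustrative assumption:

```c
#include <assert.h>
#include <string.h>

#define N 4               /* illustrative: T1 = 0, T2 = 1, R1 = 2, R2 = 3 */

static int edge[N][N];    /* request, assignment, and claim edges alike */
static int state[N];

static int dfs(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++)
        if (edge[u][v]) {
            if (state[v] == 1) return 1;
            if (!state[v] && dfs(v)) return 1;
        }
    state[u] = 2;
    return 0;
}

static int has_cycle(void) {
    memset(state, 0, sizeof state);
    for (int u = 0; u < N; u++)
        if (!state[u] && dfs(u)) return 1;
    return 0;
}

/* Tentatively convert thread t's request for resource r into an assignment;
   grant it only if no cycle results, otherwise roll back and make t wait. */
int try_grant(int t, int r) {
    edge[t][r] = 0;       /* drop the request/claim edge t -> r */
    edge[r][t] = 1;       /* tentative assignment edge r -> t */
    if (has_cycle()) {
        edge[r][t] = 0;   /* unsafe: undo and keep t waiting */
        edge[t][r] = 1;
        return 0;
    }
    return 1;
}
```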