Deadlock
Q.1 What is deadlock? Explain it.
Deadlock occurs when someone going up a narrow staircase meets someone coming down, and neither will retreat to a wider landing: each blocks the other, and a deadlock results.
There’s no simple and immediate solution to a deadlock; no one can move forward until someone moves out of the way, but no one can move out of the way until either someone advances or the rear of a line moves back. Resolving a deadlock therefore requires outside intervention; only then can everyone proceed. In computing terms, a deadlock is a situation in which two or more processes are blocked indefinitely, each waiting for a resource held by another, so that none of them can continue. Deadlocks became prevalent with the introduction of interactive systems, which generally improve the use of resources through dynamic resource sharing, but this capability also increases the possibility of deadlocks.
Q.2 Explain the seven cases of deadlock. Or explain two cases of deadlock.
A deadlock usually occurs when nonsharable, nonpreemptable resources, such as files, printers, or scanners, are allocated to jobs that eventually require other nonsharable, nonpreemptable resources, that is, resources that have already been locked by other jobs. However, deadlocks aren’t restricted to files, printers, and scanners. They can also occur on sharable resources that are locked, such as disks and databases.
Case 1: Deadlocks on File Requests
If jobs are allowed to request and hold files for the duration of their execution, a deadlock can occur, as a simplified directed graph of the requests would show. For example, consider the case of a home construction company with two application programs, purchasing (P1) and sales (P2), which are active at the same time. Both need to access two files, inventory (F1) and suppliers (F2), to read and write transactions. One day the system deadlocks when the following sequence of events takes place:
1. Purchasing (P1) accesses the supplier file (F2) to
place an order for more lumber.
2. Sales (P2) accesses the inventory file (F1) to
reserve the parts that will be required to build the home ordered that day.
3. Purchasing (P1) doesn’t release the supplier file
(F2) but requests the inventory file (F1) to verify the quantity of lumber on
hand before placing its order for more, but P1 is blocked because F1 is being
held by P2.
4. Meanwhile, sales (P2) doesn’t release the inventory
file (F1) but requests the supplier file (F2) to check the schedule of a
subcontractor. At this point, P2 is also blocked because F2 is being held by
P1.
Any other programs that require F1 or F2 will be put
on hold as long as this situation continues. This deadlock will remain until
one of the two programs is closed or forcibly removed and its file is released.
Only then can the other program continue and the system return to normal.
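The same four-step sequence can be reproduced with two ordinary locks standing in for the two files. Below is a minimal Python sketch (the thread names and the short delay are illustrative additions, not part of the example above); when run, both threads typically block forever at steps 3 and 4.

    import threading, time

    # F1 = inventory file, F2 = supplier file (locks stand in for exclusive file access)
    f1_inventory = threading.Lock()
    f2_suppliers = threading.Lock()

    def purchasing_p1():
        with f2_suppliers:            # step 1: P1 holds the supplier file
            time.sleep(0.1)           # give P2 time to grab F1 (forces the interleaving)
            with f1_inventory:        # step 3: P1 requests inventory and blocks
                print("P1 verified the quantity of lumber on hand")

    def sales_p2():
        with f1_inventory:            # step 2: P2 holds the inventory file
            time.sleep(0.1)
            with f2_suppliers:        # step 4: P2 requests suppliers and blocks
                print("P2 checked the subcontractor schedule")

    t1 = threading.Thread(target=purchasing_p1, daemon=True)
    t2 = threading.Thread(target=sales_p2, daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=2); t2.join(timeout=2)
    print("deadlocked:", t1.is_alive() and t2.is_alive())   # usually True: neither can proceed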
Case 2: Deadlocks in Databases
A deadlock can also occur if two processes access and lock records in a database. Consider a database management system that locks each record when it is accessed and holds the lock until the process is completed. There are two processes (P1 and P2), each of which needs to update two records (R1 and R2), and the following sequence leads to a deadlock:
1. P1 accesses R1 and locks it.
2. P2 accesses R2 and locks it.
3. P1 requests R2, which is locked by P2.
4. P2 requests R1, which is locked by P1.
An alternative, of course, is to avoid the use of locks, but that leads to other difficulties. If locks are not used to preserve the integrity of the records, the updated records in the database might include only some of the data, and their contents would depend on the order in which each process finishes its execution. This is known as a race between processes.
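To see the kind of race that can arise without locks, here is a small hypothetical sketch in Python: two threads update the same record using an unprotected read-then-write sequence, so the final value depends on how they interleave and one update can be silently lost.

    import threading

    # A record updated by two "processes" with no locking (illustrative data).
    record = {"balance": 100}

    def add_payment(amount):
        current = record["balance"]            # read
        # ...the other thread may read the same old value here...
        record["balance"] = current + amount   # write back, possibly clobbering the other update

    t1 = threading.Thread(target=add_payment, args=(50,))
    t2 = threading.Thread(target=add_payment, args=(25,))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(record["balance"])   # usually 175, but 150 or 125 are possible if the reads interleave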
Case 3: Deadlocks in Dedicated Device Allocation
The use of a group of dedicated devices, such as a
cluster of DVD read/write drives, can also deadlock the system.
Let’s say two users from the local board of education are each running a program (P1 and P2), and both programs will eventually need two DVD drives to copy files from one disc to another. The system is small, however, and when the two programs begin, only two DVD-R drives are available, and they’re allocated on an “as requested” basis. Soon the following sequence transpires:
1. P1 requests drive 1 and gets it.
2. P2 requests drive 2 and gets it.
3. P1 requests drive 2 but is blocked.
4. P2 requests drive 1 but is blocked.
Neither job can continue because each is waiting for
the other to finish and release its drive—an event that will never occur. A
similar series of events could deadlock any group of dedicated devices.
Case 4: Deadlocks in Multiple Device Allocation
Deadlocks aren’t restricted to processes contending
for the same type of device; they can happen when several processes request,
and hold on to, several dedicated devices. Consider the case of an engineering
design firm with three programs (P1, P2, and P3) and three dedicated devices:
scanner, printer, and plotter. The following sequence of events will result in
deadlock:
1. P1 requests and gets the scanner.
2. P2 requests and gets the printer.
3. P3 requests and gets the plotter.
4. P1 requests the printer but is blocked.
5. P2 requests the plotter but is blocked.
6. P3 requests the scanner but is blocked.
As in the earlier examples, none of the jobs can
continue because each is waiting for a resource being held by another.
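The six-step sequence above is a circular wait of length three. A short sketch (the lock names and the delay are illustrative) shows the same cycle with three Python locks standing in for the scanner, printer, and plotter:

    import threading, time

    scanner, printer, plotter = threading.Lock(), threading.Lock(), threading.Lock()

    def job(name, first, second):
        with first:                 # steps 1-3: each job gets its first device
            time.sleep(0.1)         # let the other jobs acquire theirs
            with second:            # steps 4-6: each requests a device held by another
                print(name, "finished")

    jobs = [
        threading.Thread(target=job, args=("P1", scanner, printer), daemon=True),
        threading.Thread(target=job, args=("P2", printer, plotter), daemon=True),
        threading.Thread(target=job, args=("P3", plotter, scanner), daemon=True),
    ]
    for t in jobs: t.start()
    for t in jobs: t.join(timeout=2)
    print("circular wait:", all(t.is_alive() for t in jobs))   # almost always True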
Case 5: Deadlocks in Spooling
Printers are usually made sharable, as virtual devices, by using high-speed storage to transfer data between them and the CPU. The spooler accepts output from several users and acts as a temporary storage area for all output until the printer is ready to accept it; this process is called spooling. If the printer needs all of a job’s output before it will begin printing, but the spooling system fills the available space with only partially completed output, then a deadlock can occur. It happens like this.
Let’s say it’s one hour before the big project is due
for a computer class. Twenty-six frantic programmers key in their final changes
and, with only minutes to spare, all issue print commands. The spooler receives the pages one at a time from each of the students, so the pages arrive interleaved: several page ones, then page twos, and so on. The printer is ready to print the first completed document it gets, but as the spooler canvasses its files it finds the first page for many programs and the last page for none of them. Alas, the spooler is full of partially completed output, so no more pages can be accepted, yet none of the jobs can be printed because the printer accepts only completed output files. It’s an unfortunate state of affairs. This scenario isn’t limited to printers. Any part of the system that relies on spooling, such as one that handles incoming jobs or transfers files over a network, is vulnerable to such a deadlock.
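A tiny simulation makes the spooling deadlock concrete. The capacities and page counts below are invented for illustration: 26 jobs of 10 pages each feed a spool that holds only 50 pages, one page per job per round, and the printer accepts only complete jobs.

    # Minimal simulation of the spooling deadlock (hypothetical numbers).
    SPOOL_CAPACITY = 50
    PAGES_PER_JOB = 10
    spool = {j: 0 for j in range(26)}          # pages buffered per job

    used = 0
    for page_round in range(PAGES_PER_JOB):    # each round delivers one page per job
        for job in spool:
            if used == SPOOL_CAPACITY:
                complete = [j for j, p in spool.items() if p == PAGES_PER_JOB]
                print(f"spool full after {used} pages; complete jobs: {complete}")
                # complete is empty: nothing can be printed, so nothing can be freed
                raise SystemExit
            spool[job] += 1
            used += 1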
Case 6: Deadlocks in a Network
A network that’s congested or has filled a large percentage of its I/O buffer space can become deadlocked if it doesn’t have protocols to control the flow of messages through the network. For example, a medium-sized word-processing center has seven computers on a network, each on a different node. C1 receives messages from nodes C2, C6, and C7 and sends messages to only one node: C2. C2 receives messages from nodes C1, C3, and C4 and sends messages only to C1 and C3. Messages
received by C1 from C6 and C7 and destined for C2 are buffered in an output
queue. Messages received by C2 from C3 and C4 and destined for C1 are buffered
in an output queue. As the traffic increases, the length of each output queue increases
until all of the available buffer space is filled. At this point C1 can’t
accept any more messages (from C2 or any other computer) because there’s no
more buffer space available to store them. For the same reason, C2 can’t accept
any messages from C1 or any other computer, not even a request to send. The
communication path between C1 and C2 becomes deadlocked; and because C1 can’t
send messages to any other computer except C2 and can only receive messages
from C6 and C7, those routes also become deadlocked. C1 can’t send word to C2
about the problem and so the deadlock can’t be resolved without outside
intervention.
Case 7: Deadlocks in Disk Sharing
Disks are designed to be shared, so it’s not uncommon
for two processes to be accessing different areas of the same disk. This
ability to share creates an active type of deadlock, known as livelock, in which processes use a form of busy-waiting that’s different from a natural wait: each process keeps trying to use a shared resource but never actually gains control of it.
For example, at an insurance company the system
performs many daily transactions. One day the following series of events ties
up the system:
1. Customer Service (P1) wishes to show a payment so
it issues a command to read the balance, which is stored on track 20 of a disk.
2. While the control unit is moving the arm to track
20, P1 is put on hold and the I/O channel is free to process the next I/O
request.
3. While the arm is moving into position, Accounts
Payable (P2) gains control of the I/O channel and issues a command to write
someone else’s payment to a record stored on track 310. If the command is not
“locked out,” P2 will be put on hold while the control unit moves the arm to
track 310.
4. Because P2 is “on hold” while the arm is moving,
the channel can be captured again by P1, which reconfirms its command to “read
from track 20.”
5. Because the last command from P2 had forced the arm mechanism to track 310, the disk control unit begins to reposition the arm to track 20 to satisfy P1. The I/O channel is released because P1 is once again put on hold, so it can be captured by P2, which issues a WRITE command only to discover that the arm mechanism needs to be repositioned yet again. And so it continues: the arm swings back and forth between tracks 20 and 310 as P1 and P2 alternately capture the channel, and neither request is ever completed.
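The same back-and-forth can be sketched as a loop. This toy Python model (the track numbers and the completion rule are simplifications) lets a request finish only if the arm is already on its track when the process regains the channel; because each process moves the arm away from the other’s target, neither ever finishes.

    # Minimal sketch of the disk-arm livelock in Case 7 (hypothetical model).
    arm = 0                            # current track under the read/write head
    pending = {"P1": 20, "P2": 310}    # process -> target track

    for cycle in range(4):             # the same pattern would repeat forever
        for proc in ("P1", "P2"):      # each captures the I/O channel in turn
            track = pending[proc]
            if arm == track:
                print(f"{proc} completes its I/O on track {track}")  # never reached here
            else:
                print(f"{proc} repositions the arm from {arm} to {track} and is put on hold")
                arm = track            # the other process will find the arm out of place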
Q.3 Explain the four conditions for deadlock. Or list the four conditions required for the existence of a deadlock and discuss them. (2013)
Each deadlock described above was preceded by the simultaneous occurrence of four conditions that the operating system could have recognized: mutual exclusion, resource holding, no preemption, and circular wait. It’s important to remember that each of these four conditions is needed for the operating system to work smoothly, and none of them can be removed easily without risking harm to the system’s overall functioning.
When two people meet between landings, they can’t pass
because each step can hold only one person at a time. Mutual exclusion, the act of allowing only one person to have access to a step, is the first condition for deadlock.
When two people meet on the stairs and each one holds their ground and waits for the other to move back, that is an example of resource holding, the second condition for deadlock.
Each step is dedicated to its climber; it is allocated to the holder for as long as needed. This is called no preemption, the lack of temporary reallocation of resources, and it is the third condition for deadlock.
These three lead to the fourth condition of circular
wait in which each person involved in the impasse is waiting for another to
voluntarily release the step (or resource) so that at least one will be able to
continue on and eventually arrive at the destination.
All four conditions are required for the deadlock to
occur, and as long as all four conditions are present the deadlock will
continue; but if one condition can be removed, the deadlock will be resolved.
Q.4 Explain the strategies for handling deadlock. Or What is deadlock? Discuss avoidance and detection strategies to deal with it.
In general, operating systems use one of three
strategies to deal with deadlocks:
• Prevent one of the four conditions from occurring (prevention).
• Avoid the deadlock if it becomes probable (avoidance).
• Detect the deadlock when it occurs and recover from
it gracefully (detection).
1) Prevention
To prevent a deadlock, the operating system must
eliminate one of the four necessary conditions, a task complicated by the fact
that the same condition can’t be eliminated from every resource.
Mutual exclusion is necessary in any computer system because some resources, such as memory, the CPU, and dedicated devices, must be exclusively allocated to one user at a time. In the case of I/O devices such as printers, mutual exclusion may be bypassed by spooling: each job writes its output to a temporary spool file, and the printer works on one completed file at a time, so the jobs never compete for the printer itself.
Resource holding, where a job holds on to one resource while waiting for another one that’s
not yet available, could be sidestepped by forcing each job to request, at
creation time, every resource it will need to run to completion. For example,
if every job in a batch system is given, at creation time, as much memory as it needs, then the number of active jobs will be dictated by how many can fit in memory.
No preemption
could be bypassed by allowing the operating system to deallocate resources from
jobs. This can be done if the state of the job can be easily saved and restored,
as when a job is preempted in a round robin environment or a page is swapped to
secondary storage in a virtual memory system.
Circular wait
can be bypassed if the operating system prevents the formation of a circle. One
such solution was proposed by Havender and is based on a numbering system for
the resources such as: printer = 1, disk = 2, tape = 3, plotter = 4, and so
on. The system forces each job to request its resources in ascending order: any “number one” devices required by the job would be requested first; any “number two” devices would be requested next; and so on. So if a job needed a printer and then a plotter, it would request them in that same order: the printer (#1) first, then the plotter (#4). If the job required the plotter first and then the printer, it would still request the printer first (which is a #1) even though it wouldn’t be used right away.
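A sketch of Havender’s ordering in Python (the resource numbers and lock objects are stand-ins): every job funnels its requests through a helper that sorts them into ascending order, so no two jobs can ever hold resources in opposite orders and a circular wait cannot form.

    import threading

    RESOURCES = {1: threading.Lock(),   # printer = 1
                 2: threading.Lock(),   # disk    = 2
                 4: threading.Lock()}   # plotter = 4

    def acquire_in_order(*numbers):
        """Acquire the requested resources lowest number first; return them for release."""
        held = []
        for n in sorted(numbers):       # ascending order prevents a circular wait
            RESOURCES[n].acquire()
            held.append(n)
        return held

    def release(held):
        for n in reversed(held):
            RESOURCES[n].release()

    # A job that needs the plotter first and then the printer still requests
    # the printer (#1) before the plotter (#4):
    held = acquire_in_order(4, 1)
    print("holding resources", held)    # -> [1, 4]
    release(held)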
2) Avoidance
Even if the operating system can’t remove one of the conditions for deadlock, it can avoid a deadlock if the system knows ahead of time the sequence of requests associated with each of the active processes.
One such algorithm was
proposed by Dijkstra in 1965 to regulate resource allocation to avoid deadlocks.
The Banker’s Algorithm is based on a bank with a fixed amount of capital that
operates on the following principles:
• No customer will be
granted a loan exceeding the bank’s total capital.
• All customers will be
given a maximum credit limit when opening an account.
• No customer will be
allowed to borrow over the limit.
• The sum of all loans
won’t exceed the bank’s total capital.
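A minimal safety test in the spirit of the Banker’s Algorithm (single resource class; the capital, credit limits, and loans below are invented figures): a state is safe only if the customers can be ordered so that each can still obtain its declared maximum and then repay in turn.

    # Sketch of the Banker's Algorithm safety test for one resource class.
    def is_safe(available, max_claim, allocated):
        need = {j: max_claim[j] - allocated[j] for j in max_claim}
        finished = set()
        while len(finished) < len(max_claim):
            runnable = [j for j in max_claim
                        if j not in finished and need[j] <= available]
            if not runnable:
                return False                  # no job can finish: unsafe state
            j = runnable[0]
            available += allocated[j]         # j runs to completion and returns its loan
            finished.add(j)
        return True

    # Bank with capital 10; three customers with credit limits 8, 5, and 4.
    max_claim = {"C1": 8, "C2": 5, "C3": 4}
    allocated = {"C1": 4, "C2": 2, "C3": 2}   # loans already granted
    available = 10 - sum(allocated.values())  # 2 units still on hand
    print(is_safe(available, max_claim, allocated))   # True: C3, then C1, then C2 can finish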
Although the Banker’s
Algorithm has been used to avoid deadlocks in systems with a few resources, it
isn’t always practical for most systems for several reasons:
• As they enter the system,
jobs must predict the maximum number of resources needed. As we’ve said before,
this isn’t practical in interactive systems.
• The number of total
resources for each class must remain constant. If a device breaks and becomes
suddenly unavailable, the algorithm won’t work.
• The number of jobs must
remain fixed, something that isn’t possible in interactive systems where the
number of active jobs is constantly changing.
• The overhead cost
incurred by running the avoidance algorithm can be quite high when there are
many active jobs and many devices because it has to be invoked for every
request.
• Scheduling suffers as a
result of the poor utilization and jobs are kept waiting for resource allocation.
A steady stream of jobs asking for a few resources can cause the indefinite
postponement of a more complex job requiring many resources.
3) Detection
Unlike the avoidance algorithm, which must be
performed every time there is a request, the algorithm used to detect
circularity can be executed whenever it is appropriate: every hour, once a day,
only when the operator notices that throughput has deteriorated, or when an
angry user complains.
The detection algorithm can be explained by using
directed resource graphs and “reducing” them. Begin with a system that is in use. The steps to reduce a graph are these:
1. Find a process that is currently using a resource
and not waiting for one. This process can be removed from the graph and
the resource can be returned to the “available list.” This is possible because
the process would eventually finish and return the resource.
2. Find a process that’s waiting only for resource classes that aren’t fully allocated. This process isn’t contributing to the deadlock since it would eventually get the resource it’s waiting for, finish its work, and return the resource to the “available list.”
3. Go back to step 1 and continue with steps 1 and 2 until all lines connecting resources to processes have been removed. If the graph can be completely reduced, there is no deadlock; if it can’t, the processes and resources that remain are the ones involved in the deadlock.
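The reduction can be sketched in a few lines of Python (single-unit resources; the process and resource names are hypothetical): repeatedly remove any process whose waited-for resources aren’t held by another process, and whatever cannot be removed is deadlocked.

    # Deadlock detection by graph reduction (illustrative data).
    def find_deadlock(holding, waiting):
        remaining = set(holding)
        progress = True
        while progress:
            progress = False
            held_now = {r for p in remaining for r in holding[p]}
            for p in list(remaining):
                # steps 1-2: p can finish if nothing it waits for is held by another process
                if not (waiting[p] & (held_now - holding[p])):
                    remaining.remove(p)      # p finishes and releases its resources
                    progress = True
        return remaining                     # empty set means no deadlock

    holding = {"P1": {"F2"}, "P2": {"F1"}, "P3": set()}
    waiting = {"P1": {"F1"}, "P2": {"F2"}, "P3": {"F3"}}
    print(find_deadlock(holding, waiting))   # {'P1', 'P2'} are deadlocked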
4) Recovery
There are several recovery algorithms, but they
all have one feature in common: They all require at least one victim, an
expendable job, which, when removed from the deadlock, will free the system.
Unfortunately for the victim, removal generally requires that the job be
restarted from the beginning or from a convenient midpoint.
The first
and simplest recovery method, and the most drastic, is to terminate every job that’s
active in the system and restart them from the beginning.
The second
method is to terminate only the jobs involved in the deadlock and ask their users
to resubmit them.
The third
method is to identify which jobs are involved in the deadlock and terminate them
one at a time, checking to see if the deadlock is eliminated after each removal,
until the deadlock has been resolved. Once the system is freed, the remaining
jobs are allowed to complete their processing and later the halted jobs are
started again from the beginning.
The fourth
method can be put into effect only if the job keeps a record, a snapshot, of its
progress so it can be interrupted and then continued without starting again
from the beginning of its execution. The snapshot is like the landing in our
staircase example: Instead of forcing the deadlocked stair climbers to return
to the bottom of
the stairs, they need to retreat only to the nearest
landing and wait until the others have passed. Then the climb can be resumed.
In general, this method is favored for long-running jobs to help them make a
speedy recovery.
Several factors must be considered to select the
victim that will have the least-negative effect on the system. The most common
are:
• The priority of the job under
consideration—high-priority jobs are usually untouched
• CPU time used by the job—jobs close to completion
are usually left alone
• The number of other jobs that would be affected if
this job were selected as the victim
Q.5 Explain starvation.
Starvation is the result of conservative allocation of resources, in which a single job is prevented from executing because it’s kept waiting for resources that never become available. The classic illustration is the dining philosophers problem.
Five philosophers are sitting at a round table, each deep in thought, and in the center lies a bowl of spaghetti that is accessible to everyone. There are five forks on the table, one between each pair of philosophers. Local custom dictates that each philosopher must use two forks, the ones on either side of the plate, to eat the spaghetti, but there are only five forks, not the ten it would take for all five thinkers to eat at once, and that’s unfortunate for Philosopher 2. When they sit down to dinner, Philosopher 1 (P1) is the first to take the two forks (F1 and F5) on either side of the plate and begins to eat. Inspired by his colleague, Philosopher 3 (P3) does likewise, using F2 and F3. Now Philosopher 2 (P2) decides to begin the meal but is unable to start because no forks are available: F1 has been allocated to P1, F2 has been allocated to P3, and the only remaining fork, F4, is out of P2’s reach. P2 must wait; and if P1 and P3 always manage to pick up their forks again before P2 can obtain both F1 and F2, P2 may wait indefinitely even though the system is never deadlocked. That is starvation.
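The table state at that moment can be reproduced with five Python locks standing in for the forks (a sketch; forks F1..F5 map to forks[0..4]): P2’s non-blocking attempt to pick up F1 and F2 fails, and as long as the neighbours keep reacquiring those forks first, P2 waits indefinitely even though no deadlock exists.

    import threading

    # Forks F1..F5 modeled as locks at indices 0..4.
    forks = [threading.Lock() for _ in range(5)]

    forks[0].acquire(); forks[4].acquire()   # P1 picks up F1 and F5 and eats
    forks[1].acquire(); forks[2].acquire()   # P3 picks up F2 and F3 and eats

    # P2 needs F1 and F2; the first attempt fails, so the second is never tried.
    p2_can_eat = forks[0].acquire(blocking=False) and forks[1].acquire(blocking=False)
    print("P2 can eat:", p2_can_eat)         # False
    # If P1 and P3 always reacquire their forks before P2 gets both,
    # P2 is postponed indefinitely: starvation rather than deadlock.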