Process Synchronization

1. What is parallel processing?

Simultaneous use of more than one CPU to execute a program. Ideally, parallel processing makes a program run faster because there are more engines (CPUs) running it. In practice, it is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other.

Most computers have just one CPU, but some models have several. There are even computers with thousands of CPUs. With single-CPU computers, it is possible to perform parallel processing by connecting several of them in a network.

Parallel processing is also called parallel computing.

 

Note that parallel processing differs from multitasking, in which a single CPU executes several programs at once.

 

2.   Explain the working of a master/slave configuration in a multiprocessor environment.

Master/slave is a model of communication where one device or process has unidirectional control over one or more other devices. In some systems a master is elected from a group of eligible devices, with the other devices acting in the role of slaves.

The master/slave configuration is an asymmetric multiprocessing system. The master processor is responsible for managing the entire system—all files, devices, memory, and processors. Therefore, it maintains the status of all processes in the system, performs storage management activities, schedules the work for the other processors, and executes all control programs. This configuration is well suited for computing environments in which processing time is divided between front-end and back-end processors.

The primary advantage of this configuration is its simplicity.


Disadvantages:

• Its reliability is no higher than for a single-processor system because if the master processor fails, the entire system fails.

• It can lead to poor use of resources because if a slave processor should become free while the master processor is busy, the slave must wait until the master becomes free and can assign more work to it.

• It increases the number of interrupts because all slave processors must interrupt the master processor every time they need operating system intervention, such as for I/O requests.

 

3.   How does a loosely coupled configuration of a multiprocessing system work?

The loosely coupled configuration features several complete computer systems, each with its own memory, I/O devices, CPU, and operating system. This configuration is called loosely coupled because each processor controls its own resources.

The difference between a loosely coupled multiprocessing system and a collection of independent single-processing systems is that each processor can communicate and cooperate with the others. When a job arrives for the first time, it’s assigned to one processor. Once allocated, the job remains with the same processor until it’s finished. Therefore, each processor must have global tables that indicate to which processor each job has been allocated. To keep the system well balanced and to ensure the best use of resources, job scheduling is based on several requirements and policies. For example, new jobs might be assigned to the processor with the lightest load.

When a single processor fails, the others can continue to work independently. However, it can be difficult to detect when a processor has failed.

 

4.   Explain how lack of process synchronization could lead to a probable situation of starvation.

The success of process synchronization centers on the capability of the operating system to make a resource unavailable to other processes while it is being used by one of them. These “resources” can include printers and other I/O devices, a location in storage, or a data file. In essence, the used resource must be locked away from other processes until it is released. Only when it is released is a waiting process allowed to use the resource. A mistake could leave a job waiting indefinitely (starvation) or, if it’s a key resource, cause a deadlock.

 

5.   What is a critical region?

A part of a program that must complete execution before other processes can have access to the resources being used.

 

6.   How can the test-and-set lock mechanism be used to achieve synchronization? What are its drawbacks?

Test-and-set is a single, indivisible machine instruction known simply as TS and was introduced by IBM for its multiprocessing System 360/370 computers. In a single machine cycle it tests to see if the key is available and, if it is, sets it to unavailable.

The actual key is a single bit in a storage location that can contain a 0 (if it’s free) or a 1 (if busy). We can consider TS to be a function subprogram that has one parameter and returns one value, with the exception that it takes only one machine cycle.

Therefore, a process (Process 1) would test the condition code using the TS instruction before entering a critical region. If no other process was in this critical region, then Process 1 would be allowed to proceed and the condition code would be changed from 0 to 1. Later, when Process 1 exits the critical region, the condition code is reset to 0 so another process can enter. On the other hand, if Process 1 finds a busy condition code, then it’s placed in a waiting loop where it continues to test the condition code and waits until it’s free.

Although it’s a simple procedure to implement, and it works well for a small number of processes, it has two major drawbacks:

1) When many processes are waiting to enter a critical region, starvation could occur because the processes gain access in an arbitrary fashion.

2) Waiting processes remain in unproductive, resource-consuming wait loops, requiring context switching. This is known as busy waiting.

 

7.   Explain the need for mutual exclusion.

The requirement for mutual exclusion arises when several jobs try to access the same shared physical resources. The concept is the same here, but we have several processes trying to access the same shared critical region. We’ve looked at the problem of mutual exclusion presented by interacting parallel processes using the same shared data at different rates of execution. This can apply to several processes on more than one processor, or interacting (codependent) processes on a single processor. In this case, the concept of a critical region becomes necessary because it ensures that parallel processes will modify shared data only while in the critical region.

In sequential computations mutual exclusion is achieved automatically because each operation is handled in order, one at a time. However, in parallel computations the order of execution can change, so mutual exclusion must be explicitly stated and maintained. In fact, the entire premise of parallel processes hinges on the requirement that all operations on common variables consistently exclude one another over time.
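A minimal sketch of why explicit mutual exclusion matters in parallel execution, using Python threads as stand-ins for processes (an assumption; the text does not prescribe a mechanism). The shared update `counter += 1` is a read-modify-write, so every occurrence is wrapped in the same lock to keep the operations mutually exclusive:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(n):
    """Add to the shared counter n times, each time inside the critical region."""
    global counter
    for _ in range(n):
        with lock:        # critical region: only one thread modifies at a time
            counter += 1  # read-modify-write on shared data

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40_000 because every update ran under the lock
```

Without the lock, two threads could read the same old value and one update would be lost; with it, the order of execution no longer matters.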

 

8.   What do you understand by busy waiting? When does a  process go into busy waiting?

BUSY WAITING: a method by which processes, waiting for an event to occur, continuously test to see if the condition has changed and remain in unproductive, resource-consuming wait loops.

 

For example, with the test-and-set lock: when Process 1 exits the critical region, the condition code is reset to 0 so another process can enter. On the other hand, if Process 1 finds a busy condition code, then it’s placed in a waiting loop where it continues to test the condition code and waits until it’s free. That waiting loop is busy waiting.

 

9.   How can the wait and signal lock mechanism be used to achieve synchronization?

WAIT and SIGNAL is a modification of test-and-set that’s designed to remove busy waiting. Two new operations, which are mutually exclusive and become part of the process scheduler’s set of operations, are WAIT and SIGNAL.

WAIT is activated when the process encounters a busy condition code. WAIT sets the process’s process control block (PCB) to the blocked state and links it to the queue of processes waiting to enter this particular critical region. The Process Scheduler then selects another process for execution.

SIGNAL is activated when a process exits the critical region and the condition code is set to “free.” It checks the queue of processes waiting to enter this critical region and selects one, setting it to the READY state. Eventually the Process Scheduler will choose this process for running.

The addition of the operations WAIT and SIGNAL frees the processes from the busy waiting dilemma and returns control to the operating system, which can then run other jobs while the waiting processes are idle.
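The WAIT/SIGNAL mechanism can be sketched as follows. This is an assumption-laden illustration: the class name `WaitSignalLock` is hypothetical, a `threading.Event` stands in for a blocked PCB, and the queue plays the role of the scheduler's list of waiting processes. The point is that a waiter sleeps instead of spinning.

```python
import threading
from collections import deque

class WaitSignalLock:
    """Sketch of WAIT/SIGNAL: blocked processes sleep in a queue
    instead of busy waiting on the condition code."""
    def __init__(self):
        self._busy = False              # the condition code
        self._queue = deque()           # queue of blocked "PCBs"
        self._guard = threading.Lock()  # protects the two fields above

    def wait(self):
        """WAIT: enter the critical region, blocking (not spinning) if busy."""
        self._guard.acquire()
        if not self._busy:
            self._busy = True
            self._guard.release()
            return
        ev = threading.Event()          # stands in for this process's PCB
        self._queue.append(ev)          # link it to the waiting queue
        self._guard.release()
        ev.wait()                       # blocked; the scheduler runs other work

    def signal(self):
        """SIGNAL: leave the critical region and wake one waiter, if any."""
        with self._guard:
            if self._queue:
                self._queue.popleft().set()  # hand the region to one waiter
            else:
                self._busy = False           # condition code set to "free"
```

Note that when a waiter is woken, the region is handed to it directly, so `_busy` stays set; only when the queue is empty does SIGNAL mark the region free.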

 

10.        How can the lock and key mechanism be used to achieve synchronization?

Synchronization is sometimes implemented as a lock-and-key arrangement: Before a process can work on a critical region, it must get the key. And once it has the key, all other processes are locked out until it finishes, unlocks the entry to the critical region, and returns the key so that another process can get the key and begin work. This sequence consists of two actions: (1) the process must first see if the key is available and (2) if it is available, the process must pick it up and put it in the lock to make it unavailable to all other processes. In the wait and signal mechanism, WAIT is activated when the process encounters a busy condition code, and SIGNAL is activated when a process exits the critical region and the condition code is set to “free.”
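The two actions above must be performed as one indivisible operation. A sketch of what goes wrong when they are separate steps (the function name `broken_acquire` is hypothetical, and the one-element list models the key's storage location):

```python
# Lock-and-key written as two SEPARATE actions. Because the test (1)
# and the pickup (2) are not one indivisible operation, two processes
# can both observe "free" between the steps and both enter the
# critical region.
def broken_acquire(lock):       # lock[0]: 0 = key available, 1 = busy
    while lock[0] == 1:         # (1) see if the key is available
        pass
    # another process can run here and also see the key as available
    lock[0] = 1                 # (2) pick up the key -- too late to be safe
```

This is exactly the gap that the indivisible test-and-set instruction, and the WAIT/SIGNAL operations built on it, are designed to close.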

 

11.        What are semaphores? What are the operations that can be performed on a semaphore?



In an operating system, a semaphore performs a function much like a railroad signal: It signals if and when a resource is free and can be used by a process. Dijkstra (1965) introduced two operations to overcome the process synchronization problem we’ve discussed. Dijkstra called them P and V, and that’s how they’re known today. The P stands for the Dutch word proberen (to test) and the V stands for verhogen (to increment). The P and V operations do just that: They test and increment. Here’s how they work. If we let s be a semaphore variable, then the V operation on s is simply to increment s by 1. The action can be stated as:

V(s): s := s + 1

Like the test-and-set operation, the increment operation must be performed as a single indivisible action to avoid deadlocks. And that means that s cannot be accessed by any other process during the operation. The operation P on s is to test the value of s and, if it’s not 0, to decrement it by 1. The action can be stated as:

P(s): if s > 0 then s := s – 1

This involves a test, fetch, decrement, and store sequence. Again, this sequence must be performed as an indivisible action in a single machine cycle or be arranged so that the process cannot take action until the operation (test or increment) is finished. If s = 0, it means that the critical region is busy and the process calling on the test operation must wait until s > 0 and the operation can be executed.
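P and V can be sketched as a small Python class. This is an illustrative assumption: a `threading.Condition` supplies the indivisibility the text requires, and blocking in P replaces the "wait until s > 0" described above.

```python
import threading

class Semaphore:
    """Sketch of Dijkstra's P and V on a semaphore variable s."""
    def __init__(self, s=1):
        self.s = s
        self._cond = threading.Condition()  # makes P and V indivisible

    def P(self):
        """proberen: test s and, when s > 0, decrement it by 1."""
        with self._cond:
            while self.s == 0:   # critical region busy: caller must wait
                self._cond.wait()
            self.s -= 1

    def V(self):
        """verhogen: increment s by 1."""
        with self._cond:
            self.s += 1
            self._cond.notify()  # wake one process waiting in P
```

With s initialized to 1 this behaves as a mutual-exclusion lock: P before the critical region, V on exit.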

 

12.  Using the producers and consumers problem, show how process cooperation is achieved. Or: explain how semaphores can be useful in solving the producer and consumer problem.

There are occasions when several processes work directly together to complete a common task. A famous example is the problem of producers and consumers. This case requires both mutual exclusion and synchronization, and each is implemented by using semaphores.

The classic problem of producers and consumers is one in which one process produces some data that another process consumes later. The synchronization between two such processes (the cook and the bagger) represents a significant problem in operating systems. The cook produces hamburgers that are sent to the bagger (consumed). Both processes have access to one common area, the hamburger bin, which can hold only a finite number of hamburgers (this is called a buffer area). The bin is a necessary storage area because the speed at which hamburgers are produced is independent from the speed at which they are consumed.

Problems arise at two extremes: when the producer attempts to add to an already full bin (as when the cook tries to put one more hamburger into a full bin) and when the consumer attempts to draw from an empty bin (as when the bagger tries to take a hamburger that hasn’t been made yet). In real life, the people watch the bin and if it’s empty or too full the problem is recognized and quickly resolved. However, in a computer system such resolution is not so easy. Consider the case of the prolific CPU. The CPU can generate output data much faster than a printer can print it. Therefore, since this involves a producer and a consumer of two different speeds, we need a buffer where the producer can temporarily store data that can be retrieved by the consumer at a more appropriate speed.

Because the buffer can hold only a finite amount of data, the synchronization process must delay the producer from generating more data when the buffer is full. It must also be prepared to delay the consumer from retrieving data when the buffer is empty. This task can be implemented by two counting semaphores—one to indicate the number of full positions in the buffer and the other to indicate the number of empty positions in the buffer. A third semaphore, mutex, will ensure mutual exclusion between processes.