
DEVICE MANAGEMENT

Q.1 Explain types of devices.

The system’s peripheral devices generally fall into one of three categories: dedicated, shared, and virtual.

 

Dedicated devices are assigned to only one job at a time; they serve that job for the entire time it’s active or until it releases them. Some devices, such as tape drives, printers, and plotters, demand this kind of allocation scheme, because it would be awkward to let several users share them.

Shared devices can be assigned to several processes. For instance, a disk, or any other direct access storage device (often shortened to DASD), can be shared by several processes at the same time by interleaving their requests, but this interleaving must be carefully controlled by the Device Manager.

Virtual devices are a combination of the first two: They’re dedicated devices that have been transformed into shared devices. For example, printers (which are dedicated devices) are converted into sharable devices through a spooling program that reroutes all print requests to a disk.
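
To make the spooling idea concrete, here is a minimal sketch in C; the spool directory, the file naming, and the printer daemon are assumptions made for illustration, not any particular operating system's implementation.

```c
/* Minimal spooling sketch (hypothetical names): instead of writing to the
 * printer directly, each process writes its output to a spool file on disk;
 * a queue of spool-file names is later drained by a single printer daemon. */
#include <stdio.h>
#include <string.h>

#define MAX_JOBS 16

static char spool_queue[MAX_JOBS][64];  /* names of spool files awaiting print */
static int  job_count = 0;

/* "Print" by spooling: write the text to a disk file and enqueue it. */
void spool_print(const char *job_name, const char *text) {
    char path[64];
    snprintf(path, sizeof path, "/tmp/spool_%s.txt", job_name);  /* assumed spool area */
    FILE *f = fopen(path, "w");
    if (f) { fputs(text, f); fclose(f); }
    if (job_count < MAX_JOBS)
        strcpy(spool_queue[job_count++], path);
}

/* The single real printer is dedicated to the daemon, which drains the queue. */
void printer_daemon(void) {
    for (int i = 0; i < job_count; i++)
        printf("printing spool file %s on the physical printer\n", spool_queue[i]);
    job_count = 0;
}

int main(void) {
    spool_print("jobA", "report for job A\n");  /* several jobs "print" concurrently */
    spool_print("jobB", "report for job B\n");
    printer_daemon();                           /* the dedicated printer serves them one at a time */
    return 0;
}
```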

 

Q.2 Explain the components of the I/O subsystem. 

The components of the I/O subsystem are the channel, the control unit, and the device (for example, a disk). I/O channels are programmable units placed between the CPU and the control units. Their job is to synchronize the fast speed of the CPU with the slow speed of the I/O device, and they make it possible to overlap I/O operations with processor operations so the CPU and I/O can process concurrently. Channels use I/O channel programs, which can range in size from one to many instructions. Each channel program specifies the action to be performed by the devices and controls the transmission of data between main memory and the control units.
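
As a rough illustration of what a channel program might look like, the sketch below represents one as an array of command words, each naming an operation, a main-memory buffer, and a byte count; the type names and opcodes are hypothetical, not any particular hardware's format.

```c
/* Hypothetical sketch of a channel program: an array of channel command
 * words, each specifying an operation, a main-memory buffer, and a count. */
#include <stdio.h>

typedef enum { CCW_READ, CCW_WRITE, CCW_REWIND, CCW_SEEK } ccw_op;

typedef struct {
    ccw_op   op;        /* action the device should perform          */
    void    *buffer;    /* main-memory address for the data transfer */
    unsigned count;     /* number of bytes to transfer               */
} channel_command;

int main(void) {
    char buf[512];

    /* A two-instruction channel program: seek, then read one record into buf. */
    channel_command program[] = {
        { CCW_SEEK, NULL, 0   },
        { CCW_READ, buf,  512 },
    };

    for (unsigned i = 0; i < sizeof program / sizeof program[0]; i++)
        printf("CCW %u: op=%d count=%u\n", i, program[i].op, program[i].count);
    return 0;
}
```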

The channel sends one signal for each function, and the I/O control unit interprets the signal, which might say “go to the top of the page” if the device is a printer or “rewind” if the device is a tape drive. Although a control unit is sometimes part of the device, in most systems a single control unit is attached to several similar devices, so we distinguish between the control unit and the device. Some systems also have a disk controller, or disk drive interface, which is a special purpose device used to link the disk drives with the system bus. Disk drive interfaces control the transfer of information between the disk drives and the rest of the computer system. The operating system normally deals with the controller, not the device.

At the start of an I/O command, the information passed from the CPU to the channel is this (see the sketch after this list):

• I/O command (READ, WRITE, REWIND, etc.)

• Channel number

• Address of the physical record to be transferred (from or to secondary storage)

• Starting address of a memory buffer from which or into which the record is to be transferred
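
A hedged sketch of that request, bundled into a single structure, might look like the following; the field and type names are assumptions made for illustration.

```c
/* Hypothetical structure bundling the four items the CPU passes to the
 * channel when it starts an I/O operation (all names are illustrative). */
#include <stdio.h>

typedef enum { IO_READ, IO_WRITE, IO_REWIND } io_command;

typedef struct {
    io_command cmd;             /* I/O command (READ, WRITE, REWIND, ...)       */
    int        channel_number;  /* which channel should carry out the request   */
    long       record_address;  /* physical record address in secondary storage */
    void      *memory_buffer;   /* starting address of the main-memory buffer   */
} channel_request;

int main(void) {
    static char buffer[512];
    channel_request req = { IO_READ, 2, 40961L, buffer };
    printf("channel %d: cmd %d, record %ld into buffer at %p\n",
           req.channel_number, req.cmd, req.record_address, req.memory_buffer);
    return 0;
}
```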

 

Because the channels are as fast as the CPU they work with, each channel can direct several control units by interleaving commands (just as several mechanics can be directed by a single dispatcher). In addition, each control unit can direct several devices (just as a single mechanic can repair several vehicles of the same type). A typical configuration might have one channel and up to eight control units, each of which communicates with up to eight I/O devices. Channels are often shared because they’re the most expensive items in the entire I/O subsystem.
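
The fan-out described above suggests a simple device-addressing scheme. The sketch below assumes three bits select the control unit and three bits select the device; this packing is an illustrative assumption, not a documented format.

```c
/* Illustrative device addressing for a one-channel configuration with up to
 * eight control units per channel and eight devices per control unit. */
#include <stdio.h>

int main(void) {
    unsigned address = 0x2B;                       /* example 6-bit device address */
    unsigned control_unit = (address >> 3) & 0x7;  /* bits 5..3 select the control unit */
    unsigned device       =  address       & 0x7;  /* bits 2..0 select the device       */
    printf("control unit %u, device %u\n", control_unit, device);
    return 0;
}
```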

 

Q.3 Explain the communication among devices. Or

Discuss the Channel Status Word and Direct Memory Access in detail.

The Device Manager relies on several auxiliary features to keep running efficiently under the demanding conditions of a busy computer system, and there are three problems that must be resolved:

• It needs to know which components are busy and which are free.

• It must be able to accommodate the requests that come in during heavy I/O traffic.

• It must accommodate the disparity of speeds between the CPU and the I/O devices.

The first is solved by structuring the interaction between units. The last two problems are handled by buffering records and queuing requests.

Each unit in the I/O subsystem can finish its operation independently from the others. For example, after a device has begun writing a record, and before it has completed the task, the connection between the device and its control unit can be cut off so the control unit can initiate another I/O task with another device. Meanwhile, at the other end of the system, the CPU is free to process data while I/O is being performed, which allows for concurrent processing and I/O.

 

The success of the operation depends on the system’s ability to know when a device has completed an operation. This is done with a hardware flag that must be tested by the CPU. This flag is made up of three bits and resides in the Channel Status Word (CSW), which is in a predefined location in main memory and contains information indicating the status of the channel. Each bit represents one of the components of the I/O subsystem, one each for the channel, control unit, and device. Each bit is changed from 0 to 1 to indicate that the unit has changed from free to busy.
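
A small sketch of how such a flag might be tested with bit masks follows; the bit positions and the way the status word is obtained are assumptions made for illustration.

```c
/* Minimal sketch of testing the three busy bits described for the CSW. */
#include <stdio.h>

#define CHANNEL_BUSY       0x4   /* bit for the channel      */
#define CONTROL_UNIT_BUSY  0x2   /* bit for the control unit */
#define DEVICE_BUSY        0x1   /* bit for the device       */

int path_free(unsigned csw) {
    /* The whole path is free only if none of the three components is busy. */
    return (csw & (CHANNEL_BUSY | CONTROL_UNIT_BUSY | DEVICE_BUSY)) == 0;
}

int main(void) {
    unsigned csw = DEVICE_BUSY;   /* device busy, channel and control unit free */
    printf("path free? %s\n", path_free(csw) ? "yes" : "no");
    return 0;
}
```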

 

The use of interrupts is a more efficient way to test the flag. Instead of having the CPU test the flag, a hardware mechanism does the test as part of every machine instruction executed by the CPU. If the channel is busy, the flag is set so that execution of the current sequence of instructions is automatically interrupted and control is transferred to the interrupt handler, which is part of the operating system and resides in a predefined location in memory.

 

Direct memory access (DMA) is an I/O technique that allows a control unit to directly access main memory. This means that once reading or writing has begun, the remainder of the data can be transferred to and from memory without CPU intervention. However, it is possible that the DMA control unit and the CPU compete for the system bus if they happen to need it at the same time. To activate this process, the CPU sends enough information—such as the type of operation (read or write), the unit number of the I/O device needed, the location in memory where data is to be read from or written to, and the amount of data (bytes or words) to be transferred—to the DMA control unit to initiate the transfer of data; the CPU then can go on to another task while the control unit completes the transfer independently. The DMA controller sends an interrupt to the CPU to indicate that the operation is completed. This mode of data transfer is used for high-speed devices such as disks.
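
The sketch below packages the information listed above into a request handed to a hypothetical DMA controller; dma_start() and dma_interrupt_handler() stand in for hardware register writes and the completion interrupt, and are not a real API.

```c
/* Hypothetical sketch of programming a DMA controller with the information
 * the text lists: operation, device unit, memory location, and byte count. */
#include <stdio.h>

typedef struct {
    int      write;        /* 0 = read from device, 1 = write to device */
    int      unit;         /* unit number of the I/O device             */
    void    *memory;       /* where in main memory to read/write data   */
    unsigned byte_count;   /* amount of data to transfer                */
} dma_request;

void dma_start(const dma_request *req) {
    /* In a real system these fields would be written to controller registers;
     * here we only report them. */
    printf("DMA: %s unit %d, %u bytes at %p\n",
           req->write ? "write to" : "read from", req->unit,
           req->byte_count, req->memory);
}

void dma_interrupt_handler(void) {
    /* Called when the controller signals that the transfer has completed. */
    printf("DMA transfer complete\n");
}

int main(void) {
    static char buffer[4096];
    dma_request req = { 0, 3, buffer, sizeof buffer };
    dma_start(&req);   /* CPU hands off the transfer ...                        */
                       /* ... and is free to do other work until the interrupt. */
    dma_interrupt_handler();
    return 0;
}
```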

 

Q.4 Give the definitions of the following terms.

SCAN: a scheduling strategy for direct access storage devices that’s used to optimize seek time. The most common variations are N-step SCAN and C-SCAN.

Search time: the time it takes to rotate the disk from the moment an I/O command is issued until the requested record is moved under the read/write head. Also known as rotational delay.

Seek time: the time required to position the read/write head on the proper track from the time the I/O request is issued.

C-LOOK: a scheduling strategy for direct access storage devices that’s an optimization of C-SCAN.

C-SCAN: a scheduling strategy for direct access storage devices that’s used to optimize seek time. It’s an abbreviation for circular-SCAN.
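
To see how C-SCAN orders a set of pending requests, the sketch below serves tracks at or beyond the current head position in increasing order, then wraps around to the lowest remaining track and continues in the same direction; the track numbers are invented for the example.

```c
/* Illustrative C-SCAN ordering of pending track requests. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int requests[] = { 95, 11, 60, 35, 80, 4 };
    int n = sizeof requests / sizeof requests[0];
    int head = 50;

    qsort(requests, n, sizeof requests[0], cmp);   /* sort by track number */

    printf("C-SCAN service order from track %d:", head);
    for (int i = 0; i < n; i++)            /* first pass: tracks at or beyond the head */
        if (requests[i] >= head) printf(" %d", requests[i]);
    for (int i = 0; i < n; i++)            /* wrap around: remaining lower tracks      */
        if (requests[i] < head) printf(" %d", requests[i]);
    printf("\n");
    return 0;
}
```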

 

Q.5 What is RAID? Explain it.

RAID is a set of physical disk drives that is viewed as a single logical unit by the operating system. It was introduced to close the widening gap between increasingly fast processors and slower disk drives. RAID assumes that several small-capacity disk drives are preferable to a few large-capacity disk drives because, by distributing the data among several smaller disks, the system can simultaneously access the requested data from the multiple drives, resulting in improved I/O performance and improved data recovery in the event of disk failure.

A typical disk array configuration may have five disk drives connected to a specialized controller, which houses the software that coordinates the transfer of data from the disks in the array to a large-capacity disk connected to the I/O subsystem. This configuration is viewed by the operating system as a single large-capacity disk, so no software changes are needed.

Data is divided into segments called strips, which are distributed across the disks in the array. A set of consecutive strips across the disks is called a stripe, and the whole process is called striping.
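
A short sketch of the striping arithmetic is given below; it assumes strips are distributed round-robin across the drives, as in RAID Level 0, with a five-disk array echoing the configuration described above.

```c
/* Sketch of striping arithmetic: logical strips are distributed round-robin
 * across the disks in the array. */
#include <stdio.h>

#define DISKS 5

int main(void) {
    for (int logical_strip = 0; logical_strip < 10; logical_strip++) {
        int disk   = logical_strip % DISKS;   /* which drive holds the strip  */
        int offset = logical_strip / DISKS;   /* strip position on that drive */
        printf("logical strip %d -> disk %d, strip %d\n",
               logical_strip, disk, offset);
    }
    return 0;
}
```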