C191-Terms-Chapter-11 Flashcards
device controller (device adapter)
an electronic circuit capable of operating a specific I/O device using binary signals. The interface to a device controller is a set of hardware registers and flags, which may be set by and/or examined by device drivers.
device driver
a device-specific program that implements I/O operations, requested by user applications or the OS, by interacting with the device controller.
Since devices vary greatly in speed, latency, the size of the data being transmitted, the ability to perform random vs. sequential access, and other aspects, device drivers are supplied by the device manufacturers.
To be able to incorporate new devices into a system without modifications to the OS, the I/O system supports a small set of generic API instructions. The typical subdivision includes instructions for:
Block-oriented devices (magnetic disks, CD-ROMs, flash disks)
Character-oriented devices (keyboards, pointing devices, cameras, media players, printers)
Network devices (modems, Ethernet adapters)
The driver of any new device must implement the generic I/O instructions of an API for the specific device.
A generic interface to a device controller consists of a set of registers:
Opcode: The register specifies the type of operation requested. Ex: read or write. Storing a new value into the register starts the I/O operation.
Operands: One or more operand registers are used to describe the parameters of the requested operation. The values are written by the CPU prior to starting the operation.
Busy: The register (a 1-bit flag) is set by the controller to indicate whether the device is busy or idle.
Status: The register is set by the controller to indicate the success or failure of the last I/O operation.
Data buffer: The data buffer holds the data to be transferred between the device and main memory. Depending on the device type, the buffer may hold a single character or a block of data.
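The register set above can be modeled as a small toy simulation in Python. All names here (DeviceController, OP_READ, and so on) are invented for this sketch, not taken from any real driver API, and the controller runs synchronously for simplicity:

```python
# Toy model of a device controller's register interface.
# All names (DeviceController, OP_READ, ...) are illustrative only.

OP_IDLE, OP_READ, OP_WRITE = 0, 1, 2

class DeviceController:
    def __init__(self, stored_data=b""):
        self.opcode = OP_IDLE       # writing a new opcode starts the operation
        self.operands = {}          # parameters, written by the CPU beforehand
        self.busy = 0               # 1-bit flag set by the controller
        self.status = "OK"          # outcome of the last operation
        self.data_buffer = b""      # data transferred to/from the device
        self._stored = stored_data  # simulated device contents

    def start(self, opcode, **operands):
        """CPU side: write the operands, then store the opcode to start the I/O."""
        self.operands = operands
        self.opcode = opcode
        self.busy = 1
        self._run()                 # in real hardware this runs asynchronously

    def _run(self):
        """Controller side: perform the simulated operation."""
        if self.opcode == OP_READ:
            n = self.operands.get("count", len(self._stored))
            self.data_buffer = self._stored[:n]
            self.status = "OK"
        elif self.opcode == OP_WRITE:
            self._stored = self.data_buffer
            self.status = "OK"
        else:
            self.status = "BAD_OPCODE"
        self.busy = 0               # operation complete

dev = DeviceController(b"hello")
dev.start(OP_READ, count=5)
```

Note how storing into the opcode register is what triggers the operation, matching the description above.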
Programmed I/O
a style of I/O programming where the CPU, running the device driver, performs the copying of all data between the I/O device controller and main memory.
Polling
a technique to determine whether a device is busy or idle by reading a flag set and reset by the device controller.
To perform programmed input with polling, the CPU first issues an input request to the device controller by writing a new value into the opcode register. Then the CPU repeatedly polls the busy flag until the operation completes. If the operation was successful, the CPU copies the data from the controller buffer to main memory.
Programmed output with polling is analogous. When the device is not busy, the CPU copies the data from main memory to the controller buffer and issues an output request. The CPU then polls the busy flag until the operation completes. If the operation was successful, the CPU may proceed with the next output operation.
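The input sequence above (issue request, poll the busy flag, copy the buffer) can be sketched as a toy Python simulation. The device here becomes ready after a fixed number of polls, which merely stands in for real hardware timing; all names are illustrative:

```python
# Simulated programmed input with polling. All names are illustrative.

class PolledDevice:
    """Device whose busy flag clears after a few polls."""
    def __init__(self, data, delay=3):
        self.buffer = None
        self.busy = 0
        self._data, self._delay = data, delay

    def request_input(self):          # CPU stores into the opcode register
        self.busy = 1
        self._countdown = self._delay

    def poll_busy(self):              # CPU reads the busy flag
        if self.busy:
            self._countdown -= 1
            if self._countdown == 0:  # device finishes the transfer
                self.buffer = self._data
                self.busy = 0
        return self.busy

def programmed_input(dev, memory):
    dev.request_input()               # 1. issue the input request
    while dev.poll_busy():            # 2. busy-wait until the device is idle
        pass
    memory.extend(dev.buffer)         # 3. copy controller buffer -> main memory

memory = bytearray()
programmed_input(PolledDevice(b"abc"), memory)
```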
Programmed I/O with interrupts
When interrupts are used for I/O processing, the interface to the controller remains the same, but the controller is equipped with the ability to issue interrupts to the CPU.
The operand and opcode registers are used to describe and start an I/O operation. The status register indicates the success or failure of the last operation. The controller buffer holds the data transferred to or from the device.
The busy flag is still present but is used only initially to determine whether the device is available to accept a new I/O request. Then, after starting the I/O operation, the current process blocks itself, instead of repeatedly polling the busy flag to detect the termination of the data transfer.
The controller issues an interrupt when the operation has terminated, which reactivates the blocked process. The process examines the status of the operation and, depending on the outcome, proceeds with the next I/O operation or takes appropriate corrective actions.
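One way to mimic this blocking behavior in a toy Python simulation is with a thread for the controller and an event standing in for the interrupt line: the driver blocks on the event instead of polling, and the controller "raises the interrupt" by setting it. All names and the latency value are illustrative:

```python
import threading
import time

# Toy simulation of interrupt-driven I/O. Names are illustrative.

class InterruptingDevice:
    def __init__(self, data):
        self.buffer = None
        self.status = None
        self.interrupt = threading.Event()  # stands in for the interrupt line
        self._data = data

    def start_read(self):
        """Start the operation; the controller then works asynchronously."""
        def controller():
            time.sleep(0.01)                # simulated device latency
            self.buffer = self._data
            self.status = "OK"
            self.interrupt.set()            # issue the interrupt
        threading.Thread(target=controller).start()

dev = InterruptingDevice(b"block-0")
dev.start_read()          # issue the request ...
dev.interrupt.wait()      # ... then block until the "interrupt" arrives
result = bytes(dev.buffer) if dev.status == "OK" else None
```

While the driver is blocked in `wait()`, a real OS would run other processes on the CPU.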
I/O with direct memory access
With programmed I/O, the CPU needs to transfer all data between the controller buffer and the main memory. The resulting overhead is acceptable with slow, character-oriented devices, but to liberate the CPU from the frequent disruptions caused by fast devices, direct memory access may be used.
direct memory access (DMA)
controller is a hardware component that allows devices to access main memory directly, without the involvement of the CPU. Using DMA, the CPU only initiates a data transfer, which can consist of a line of characters, a block of data, or even multiple blocks of data, as part of a single I/O operation.
The process executing the device driver then blocks itself, which frees the CPU to serve other processes in the meantime. The device controller in collaboration with the DMA controller executes the I/O operation by transferring the data directly between the device and main memory. When the operation terminates, the device controller issues an interrupt to reactivate the blocked process.
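The key contrast with programmed I/O can be shown in a toy Python sketch: the CPU only fills in the parameters of one multi-byte transfer, and the simulated DMA "controller" moves the whole block into main memory in a single step, with no per-byte CPU copying. All names are illustrative:

```python
# Toy DMA sketch: the CPU initiates one transfer; the DMA "controller"
# copies the entire block into main memory. Names are illustrative.

class DMAController:
    def transfer(self, device_data, memory, start, count):
        """Copy `count` bytes from the device directly into memory[start:]."""
        memory[start:start + count] = device_data[:count]

main_memory = bytearray(16)
dma = DMAController()

# The CPU initiates a single multi-byte transfer, then could block
# and let other processes run while the copy proceeds.
dma.transfer(b"BLOCKDATA", main_memory, start=4, count=9)
```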
Polling vs interrupts
Polling and interrupts both result in overhead during I/O processing, but the overhead sources are different. Executing a single poll requires only a few machine instructions. On the other hand, blocking and later reactivating a process using an interrupt constitutes significant CPU overhead.
Polling is a good choice in dedicated systems, where only a single process is running. The CPU can busy-wait by executing a polling loop because no other computation is available to use the CPU in the meantime.
Polling is also suitable for devices that complete an I/O operation within a few microseconds. Ex: non-volatile solid-state (flash) memories. With such very fast devices, a context switch using interrupts would be more time-consuming than a short polling loop.
In a general-purpose multi-process environment, interrupts are a better choice with most devices. Blocking a process after issuing an I/O instruction and later processing the interrupt to reactivate the process represents constant, predictable overhead for each I/O instruction.
The device is restarted immediately after completing the data transfer, and CPU time is never wasted on long busy-loops when a device is slow to respond.
buffer
a register or an area of main memory used to hold data generated by a producer process or an input device and removed from the buffer at a later time by a consumer process or an output device. Depending on the intended use, a single buffer can hold one character at a time or a larger block of data.
The main purpose of using a single buffer is to decouple the producer from the consumer in time. The producer can generate a data item without the consumer being active simultaneously. Similarly, the consumer can copy the item from the buffer without the producer being active simultaneously.
A buffer also permits the consumer to accumulate and act upon multiple data items generated by the producer one at a time.
Buffer swapping
a technique that allows the operations of a producer process and a consumer process to overlap by using two buffers. While the producer is filling buffer 1, the consumer is copying buffer 2. When both terminate, the two buffers are swapped. The producer starts filling buffer 2 while the consumer starts copying buffer 1.
Buffer swapping improves performance by overlapping the execution of the producer and the consumer, but only when data is produced and consumed at the same constant rate.
In situations where data is produced in bursts of varying lengths or frequency and the consumer is unable to handle the incoming data at the rate of a burst, multiple buffer slots are useful to hold and process the data at the slower rate of the consumer.
Conversely, if the consumer needs to process incoming data in bursts or at a higher granularity, then multiple buffer slots can be used to accumulate the data prior to each burst.
circular buffer
a fixed array of buffer slots filled by the producer and emptied by the consumer one slot at a time in ascending order, modulo the buffer size.
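A minimal circular buffer, matching the definition above, keeps separate read and write indices that advance modulo the buffer size (a sketch; error handling and slot naming are illustrative):

```python
# Minimal circular (ring) buffer: a fixed array filled and emptied one
# slot at a time in ascending order, modulo the buffer size.

class CircularBuffer:
    def __init__(self, size):
        self.slots = [None] * size
        self.size = size
        self.head = 0       # next slot to read (consumer)
        self.tail = 0       # next slot to write (producer)
        self.count = 0      # number of filled slots

    def put(self, item):
        if self.count == self.size:
            raise BufferError("buffer full")
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % self.size   # advance modulo size
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("buffer empty")
        item = self.slots[self.head]
        self.head = (self.head + 1) % self.size   # advance modulo size
        self.count -= 1
        return item

buf = CircularBuffer(3)
for x in "abc":
    buf.put(x)             # buffer is now full
first = buf.get()          # oldest item comes out first
buf.put("d")               # the freed slot is reused, wrapping around
```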
disk block cache
To speed up access to disk blocks, modern disk controllers include an internal cache to hold the most recently accessed blocks. In addition, the OS may maintain an additional larger cache in main memory. A disk block cache is a set of main memory buffers that contain the most recently accessed disk blocks.
Since a disk block cache can contain a large number of blocks, hashing is used to organize blocks into separate lists for faster access. In addition, all blocks are divided into several categories and are linked together using separate linked lists.
Blocks critical to performance (Ex: blocks used internally by the OS) must remain resident at all times and are kept on a list of Locked blocks.
Blocks expected to be accessed again in the near future (Ex: blocks holding data from regular user files) reside on a list that implements the LRU (least recently used) policy.
Whenever a block is accessed, the block is moved to the rear of the list. Blocks not accessed again gradually migrate to the front of the list and are removed when no free buffers are available.
Blocks expected to be accessed only once or very infrequently (Ex: blocks containing file control blocks, which are accessed when a new file is opened) are added to the front of the same LRU list and thus will be removed quickly.
All free buffers are kept on a separate list, possibly segregated by buffer size, and are added to the Locked list or the LRU list whenever a new non-resident block is accessed.
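The LRU list behavior described above (move to the rear on access, evict from the front when no free buffer remains) can be sketched with an ordered dictionary in Python. This is a toy model of the policy only, not of any real OS cache; the class name and block contents are illustrative:

```python
from collections import OrderedDict

# Toy sketch of the LRU list in a disk block cache: accessed blocks move
# to the rear; the block at the front (least recently used) is evicted
# when the cache is full. Names are illustrative.

class BlockCacheLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()    # front = LRU end, rear = MRU end

    def access(self, block_no, data=None):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)             # move to the rear
        else:
            if len(self.blocks) == self.capacity:
                self.blocks.popitem(last=False)           # evict the front
            self.blocks[block_no] = data
        return self.blocks[block_no]

cache = BlockCacheLRU(capacity=2)
cache.access(10, "dir-block")
cache.access(20, "file-block")
cache.access(10)             # block 10 becomes most recently used
cache.access(30, "new")      # evicts block 20, the least recently used
```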
track
one of many concentric rings on a magnetic disk surface. To access information on a track, a read/write head, mounted on a movable arm, is mechanically positioned over the track. Information is written onto or read from the track as the disk rotates under the r/w head.
sector
a portion of a track; the smallest unit of data that can be read or written with a single r/w operation.