I/O System (12) Flashcards
1
Q
Typical PC bus structure
A

2
Q
Types of I/O devices
A
- Storage (e.g. disks)
- Transmission (e.g. network adapters)
- Human-interface (e.g. keyboard, mouse, screen)
3
Q
I/O Port
A
Connection point between a device and the host
4
Q
I/O: Bus
A
- Could be a daisy chain or shared direct access
- PCI bus common in PCs and servers
- Expansion bus connects relatively slow devices
- Serial-attached SCSI (SAS): common disk interface
5
Q
I/O Controller
A
- AKA host adapter
- Sometimes integrated into the device, sometimes on a separate circuit board
- Contains a processor, microcode, private memory, and a bus controller
6
Q
Polling
A
For each byte of I/O:
- Host reads the busy bit from the status register until it is 0
- This is busy-waiting (polling)
- OK if the device is fast, inefficient if the device is slow
- Host sets the read or write bit and, if writing, copies data into the data-out register
- Host sets the command-ready bit
- Controller sees command-ready, sets the busy bit, and executes the transfer
- Controller clears the busy bit, error bit, and command-ready bit when the transfer is done
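The handshake above can be sketched in software. This is a minimal simulation with made-up register bits and a `Controller` class standing in for the hardware; the bit layout and names are assumptions for illustration only.

```python
# Simulated polling handshake between host and device controller.
# BUSY / COMMAND_READY / ERROR are a hypothetical status-bit layout.
BUSY, COMMAND_READY, ERROR = 0x1, 0x2, 0x4

class Controller:
    def __init__(self):
        self.status = 0        # status register
        self.data_out = None   # data-out register
        self.received = []     # what the "device" has accepted

    def tick(self):
        # Controller side: on command-ready, set busy, do the transfer,
        # then clear busy, error, and command-ready.
        if self.status & COMMAND_READY:
            self.status |= BUSY
            self.received.append(self.data_out)
            self.status &= ~(COMMAND_READY | BUSY | ERROR)

def write_byte(ctrl, byte):
    while ctrl.status & BUSY:      # 1. busy-wait until busy bit is 0
        ctrl.tick()
    ctrl.data_out = byte           # 2. host places data in data-out
    ctrl.status |= COMMAND_READY   # 3. host sets command-ready
    ctrl.tick()                    # controller executes the transfer

ctrl = Controller()
for b in b"io":
    write_byte(ctrl, b)
print(ctrl.received)  # [105, 111]
```

The busy-wait loop in `write_byte` is exactly the cost polling pays: CPU cycles burned while the device is slow.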
7
Q
Interrupts
A
- Interrupts are a better choice if the busy bit is often 1
- CPU interrupt-request line is triggered by the I/O device (the CPU checks it after each instruction)
- Interrupt handler: receives and services the interrupt
- Can be masked
- Interrupt vector:
- dispatches to the right interrupt handler
- some interrupts are non-maskable
- can be prioritized
- Interrupt chaining if more than one device shares the same interrupt number
- Also used for exceptions
- Page fault on a memory access error
- Terminate process / crash system on an unrecoverable hardware error
- Multi-CPU systems can handle interrupts concurrently
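The vector, masking, and chaining ideas can be sketched together. Everything here is a toy model: the IRQ numbers, the dictionary-as-vector, and the handler protocol (return `True` to claim the interrupt) are assumptions for illustration.

```python
# Hypothetical interrupt vector: each entry is a chain of handlers
# that share that interrupt number (interrupt chaining).
interrupt_vector = {14: [], 33: []}  # e.g. 14 = page fault, 33 = a device IRQ

def register_handler(irq, handler):
    interrupt_vector[irq].append(handler)

def raise_interrupt(irq, masked=frozenset()):
    if irq in masked:                      # maskable interrupts can be deferred
        return "deferred"
    for handler in interrupt_vector[irq]:  # walk the chain until one claims it
        if handler(irq):
            return "handled"
    return "spurious"

register_handler(33, lambda irq: False)    # first device on IRQ 33: not mine
register_handler(33, lambda irq: True)     # second device claims it
print(raise_interrupt(33))                 # handled
print(raise_interrupt(33, masked={33}))    # deferred
```

Chaining is why shared IRQ lines work: each driver in the chain checks whether its own device raised the interrupt.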
8
Q
DMA
A
- Direct Memory Access
- Used to avoid programmed I/O (one word at a time) for large data movement
- Requires a DMA controller
- Bypasses the CPU to transfer data directly between I/O device and memory
- OS writes a DMA command block into memory
- source and destination locations
- read/write mode
- byte count
- then writes the location of the command block to the DMA controller
- Steals bus cycles from the CPU, but still more efficient
- Interrupt on completion
- DVMA: DMA that is aware of virtual addresses
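The command block is just a fixed-layout record in memory. A sketch of what the OS might pack, with an invented field layout (real controllers each define their own):

```python
import struct

# Hypothetical DMA command block layout: source address, destination
# address, mode (0 = read, 1 = write), byte count. "<QQIQ" means
# little-endian, two 64-bit addresses, a 32-bit mode, a 64-bit count.
DMA_CMD = struct.Struct("<QQIQ")

def make_command_block(src, dst, write, count):
    return DMA_CMD.pack(src, dst, 1 if write else 0, count)

# OS builds the block, then would hand its address to the controller.
blk = make_command_block(src=0x1000, dst=0x8000, write=True, count=4096)
src, dst, mode, count = DMA_CMD.unpack(blk)
print(hex(src), hex(dst), mode, count)  # 0x1000 0x8000 1 4096
```

After this point the CPU is free: the controller walks the block, moves `count` bytes, and raises an interrupt on completion.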

9
Q
Application I/O interface
A
- Abstract interface
- Device-driver layer hides differences among I/O controllers from the kernel
- I/O system calls encapsulate device behaviours in generic classes
- Behaviours:
- Character stream vs. block
- Sequential vs. random access
- Synchronous vs. asynchronous (or both)
- Sharable vs. dedicated
- Speed of operation
- Read-write vs. read only vs. write only
- Each OS has its own I/O subsystem that handles this

10
Q
I/O devices distinctive traits
A
- Device drivers handle devices differently based on these traits
- The OS usually groups devices into:
- block I/O
- character I/O (stream)
- memory-mapped file access
- network sockets
- Usually there is also a way to get raw access
- UNIX: ioctl() -> sends arbitrary bits to a device control register
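`ioctl()` is the escape hatch for device-specific requests. One request that works on ordinary descriptors is `FIONREAD` ("how many bytes are waiting to be read?"), shown here on a pipe via Python's `fcntl.ioctl` wrapper:

```python
import fcntl
import os
import struct
import termios

# ioctl() sends a device-specific request code plus an argument buffer.
# FIONREAD asks the kernel how many bytes are queued on a descriptor.
r, w = os.pipe()
os.write(w, b"hello")

buf = fcntl.ioctl(r, termios.FIONREAD, struct.pack("i", 0))
pending = struct.unpack("i", buf)[0]
print(pending)  # 5

os.close(r)
os.close(w)
```

The same call with a different request code would poke a serial port's baud rate or a terminal's window size; the interface stays one generic syscall.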

11
Q
Block and character devices
A
- Block devs:
- disk drives
- commands
- read
- write
- seek
- raw i/o, direct i/o, filesystem access
- memory-mapped file possible
- DMA
- Character devs:
- keyboard, mouse, serial ports
- commands
- put
- get
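The defining block-device command beyond read/write is `seek`. A quick sketch of random access with `lseek` on an ordinary file (which ultimately lives on a block device):

```python
import os
import tempfile

# Block-style access is seekable: jump to an arbitrary offset, then read.
fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")

os.lseek(fd, 4, os.SEEK_SET)   # seek: move the file offset to byte 4
chunk = os.read(fd, 3)         # read from the new position
print(chunk)                   # b'456'

os.close(fd)
os.remove(path)
```

A character device like a keyboard has no meaningful offset, which is why its interface is just `get`/`put` and `lseek` on it fails.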
12
Q
Network devices
A
- Can vary:
- block
- character (stream)
- Linux, Unix, Windows include socket interface
- separates the network protocol from the network operations
- many different approaches
- pipes
- fifo
- streams
- queues
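The point of the socket interface is that the same send/receive operations work regardless of the underlying protocol. A minimal demonstration using `socketpair`, which gives a connected local pair without any network setup:

```python
import socket

# The socket API hides the protocol: sendall/recv look the same whether
# the endpoint is TCP, UDP, or (as here) a local connected pair.
a, b = socket.socketpair()
a.sendall(b"ping")
msg = b.recv(4)
print(msg)  # b'ping'

a.close()
b.close()
```

Swapping `socketpair()` for `socket.create_connection(...)` changes the transport, not the operations, which is exactly the separation the card describes.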
13
Q
Clocks and timers
A
- Used to get the current time and elapsed time, and to set timers
- A programmable interval timer generates periodic interrupts
- On UNIX, ioctl() covers odd aspects of I/O such as clocks and timers
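Measuring elapsed time should use a clock that never jumps. Python exposes the kernel's clocks directly via `time.clock_gettime`:

```python
import time

# CLOCK_MONOTONIC only moves forward and is never reset by the
# administrator, so it is the right clock for elapsed-time measurement
# (CLOCK_REALTIME, by contrast, can jump when the wall clock is adjusted).
start = time.clock_gettime(time.CLOCK_MONOTONIC)
time.sleep(0.01)
elapsed = time.clock_gettime(time.CLOCK_MONOTONIC) - start
print(elapsed > 0)  # True
```

Timers that fire periodic callbacks sit on top of the same hardware interval timer the card mentions.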
14
Q
Nonblocking and async I/O
A
- Blocking: process suspended until the I/O completes
- Nonblocking: the I/O call returns as much as is available
- useful for user interfaces and buffered data copies
- can be implemented via multithreading
- returns quickly with a count of bytes read or written
- Async: process runs while the I/O executes
- harder to use
- I/O subsystem signals completion
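The blocking/nonblocking difference is easy to see on a pipe. With `os.set_blocking(fd, False)`, a read that would normally suspend the process instead returns control immediately:

```python
import os

# Nonblocking read: instead of suspending until data arrives, the call
# fails fast (BlockingIOError, i.e. EAGAIN) or returns what's available.
r, w = os.pipe()
os.set_blocking(r, False)

try:
    os.read(r, 100)          # pipe is empty: a blocking read would hang here
except BlockingIOError:
    print("no data yet")     # control returns to the process immediately

os.write(w, b"now")
got = os.read(r, 100)        # returns as much as is available
print(got)                   # b'now'

os.close(r)
os.close(w)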

15
Q
Vectored I/O
A
- Allows one system call to do I/O on multiple buffers at once
- Example: readv()
- accepts a vector of buffers to read into (writev() writes from such a vector)
- Better than individual calls
- decreases context-switching and system-call overhead
- some versions provide atomicity
- prevents other threads from changing data while the reads/writes are in progress
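Python wraps the UNIX call directly as `os.readv`. One syscall below fills two separate buffers in order, e.g. a fixed-size header and a payload:

```python
import os

# Scatter read: one system call distributes incoming bytes across
# several buffers, in order, instead of one read() per buffer.
r, w = os.pipe()
os.write(w, b"headerpayload")

buf1, buf2 = bytearray(6), bytearray(7)
n = os.readv(r, [buf1, buf2])        # fills buf1 first, then buf2
print(n, bytes(buf1), bytes(buf2))   # 13 b'header' b'payload'

os.close(r)
os.close(w)
```

Doing this with two plain `read()` calls would cost an extra kernel crossing and, without extra locking, another thread could slip in between them.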
16
Q
Kernel I/O subsystem Scheduling
A
- Scheduling
- can be done via queue per device
- can be done using fairness as a metric
- can be done using Quality of Service as a metric (IPQOS)

17
Q
Kernel I/O subsystem Buffering
A
- Buffering: store data in memory while transferring between devices
- needed because of:
- speed mismatch between producer and consumer
- transfer-size mismatch
- To maintain copy semantics: the version of the data written to disk is the version at the time of the system call
- simple way: copy application data into a kernel buffer before returning control to the application
- the disk write is performed from the kernel buffer, so subsequent changes to the application's buffer do not matter
- double buffering: two copies of the data
- kernel and user
- handles varying transfer sizes
- one buffer full/being processed while the other is being filled
- copy-on-write can be used for maximum efficiency in some cases
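Copy semantics can be shown in a few lines. This is a toy model: `write`/`flush`, `kernel_buffer`, and `disk` are stand-ins, not real kernel structures.

```python
# Copy-semantics sketch: the "kernel" copies the caller's data at the
# moment of the write() call, so later changes by the application do
# not affect what eventually reaches the "disk".
disk = []
kernel_buffer = []

def write(data):
    kernel_buffer.append(bytes(data))   # copy into the kernel buffer now

def flush():
    disk.extend(kernel_buffer)          # disk write happens later,
    kernel_buffer.clear()               # from the kernel's copy

app_data = bytearray(b"v1")
write(app_data)
app_data[:] = b"v2"     # application reuses its buffer before the flush...
flush()
print(disk)             # [b'v1'] -- the version at write() time wins
```

Copy-on-write gets the same guarantee without the eager copy: the page is shared until someone actually modifies it.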
18
Q
Kernel I/O Subsystem caching
A
- Caching: a faster device holds a copy of the data
- it is always just a copy
- key to performance
- can sometimes be combined with buffering
19
Q
Kernel I/O Subsystem Spooling
A
- Hold output for a device that can serve only one request at a time
- Used in printing, because the printer is very slow and cannot interleave jobs
20
Q
Kernel I/O Subsystem Device reservation
A
- Provides exclusive access to a device
- there are system calls for allocation/deallocation
- beware of deadlock
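An advisory file lock makes a reasonable stand-in for device reservation. In this sketch (the lock file is a hypothetical proxy for the device), `LOCK_NB` makes the second request fail fast instead of blocking, which is one way to sidestep deadlock:

```python
import fcntl
import tempfile

# Reservation sketch: flock() on a file plays the role of exclusive
# device allocation; LOCK_NB turns "wait forever" into "fail fast".
f1 = tempfile.NamedTemporaryFile()
f2 = open(f1.name, "rb")              # a second, independent open

fcntl.flock(f1, fcntl.LOCK_EX)        # allocate: first user reserves it
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    status = "acquired"
except BlockingIOError:
    status = "busy"                   # reservation already held
print(status)  # busy

fcntl.flock(f1, fcntl.LOCK_UN)        # deallocate: release the device
f1.close()
f2.close()
```

Deadlock appears when two processes each hold one device and block waiting for the other's; nonblocking acquisition or a fixed acquisition order avoids it.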
21
Q
Error Handling
A
- OS can recover from:
- disk read failures
- device unavailability
- transient write failures
- How:
- retrying the operation
- more advanced systems stop using a device once its error frequency gets too high
- operations return error codes
- errors are reported in system logs
22
Q
I/O protection
A
- A user process may disrupt normal operation via illegal I/O instructions
- all I/O instructions must be privileged
- I/O must be performed via system calls
- memory-mapped I/O and I/O port memory locations must be protected too

23
Q
Kernel Data Structures
A
- Kernel keeps state info for I/O components:
- open files
- network connections
- character device state
- Kernel also has complex data structures to track buffers, memory allocation, and dirty blocks
- Some OSes use object-oriented methods and message passing to implement I/O
- Windows uses message passing: a message flows from user mode to kernel mode

24
Q
Power management
A
- Related to I/O
- Computers use electricity and generate heat -> require cooling
- OSes can help manage and improve power use:
- Virtual machine orchestration: move VMs among servers to balance power use
- Mobile OSes treat power management as one of their main modules
- Modern systems use ACPI firmware, which provides routines for device discovery, device management, error handling, and power management that can be called by the kernel
25
Q
Kernel I/O subsystem general view
A
- Coordinates extensive collection of services, made available to applications and kernel:
- access control
- operation control (can’t seek on a modem)
- fs allocation
- buffering, caching, spooling
- i/o scheduling
- dev status monitoring
- dev-driver config
- power management
- The upper levels of the I/O subsystem access devices via the standard device-driver interface
26
Q
Life Cycle of I/O req.
A

27
Q
STREAMS
A
- Implemented from UNIX System V onward
- A full-duplex communication channel between a user process and a device
- A STREAM consists of:
- a stream head that interfaces with the user process
- a driver end that interfaces with the device
- zero or more modules between them
- Each module contains a read queue and a write queue
- Message passing is used to communicate between queues
- Asynchronous internally; synchronous communication with the stream head
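The module pipeline can be sketched as queues connected by message passing. This is a toy model of the write side only: `Module`, `stream_write`, and the transforms are invented for illustration.

```python
from collections import deque

# STREAMS sketch (write side): a message travels from the stream head
# through each module's write queue down to the driver end; each module
# may transform the message as it passes through.
class Module:
    def __init__(self, transform):
        self.write_q = deque()
        self.transform = transform

def stream_write(modules, msg):
    for m in modules:                      # message passing between queues
        m.write_q.append(msg)
        msg = m.transform(m.write_q.popleft())
    return msg                             # what reaches the driver end

modules = [Module(str.upper), Module(lambda s: s + "!")]
print(stream_write(modules, "data"))  # DATA!
```

A real STREAM has a mirror-image chain of read queues carrying messages back up to the stream head, where the user process waits synchronously.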

28
Q
Performance and how to improve it
A
- Factors hurting performance:
- context switches caused by interrupts
- data copying
- network traffic
- demands on the CPU to execute device-driver and kernel I/O code
- Improving performance:
- reduce context switches
- reduce data copying
- reduce interrupts by:
- making large transfers using DMA
- using smart controllers
- polling (when busy-waiting can be minimized)
- move user processes/daemons to kernel threads
- balance CPU, memory, bus, and I/O performance for maximum throughput

29
Q
Device I/O code impact on performance
A

30
Q
Advantages of kernel buffering
A
- In a system with paging and/or swapping, a process can be swapped out while it waits on I/O: the kernel buffer receives the data the process is waiting for. Simply put, we decouple the I/O from the user's memory.
- Compared with a single buffer, double buffering allows pipelined behaviour: data can move from the kernel buffer to user space while the next block moves from disk into the kernel at the same time.
- The kernel buffer can act as a cache: a process may find the buffer already filled and avoid waiting entirely.