Computers & The Internet Flashcards
What is an OS?
An operating system (OS) is a program that controls the execution of application programs and acts as an interface between applications and the computer hardware.
What are the objectives of an OS?
It needs to be convenient, efficient, and have the ability to evolve.
What services does the operating system provide?
File management, input/output management, process management, memory management, and multiprocessor management.
What is the kernel?
The most central part of the OS. Its functionality depends on the OS design.
What is a kernel context switch?
The transition between user mode and kernel mode. Context switches are computationally expensive (i.e. they require a lot of CPU time), so an OS tries to limit the number of context switches.
What is a monolithic OS structure?
All services are implemented by a large kernel, any new feature is added to the kernel, “everything is connected to everything”.
What are the pros and cons of a monolithic OS structure?
Pros – Communication with kernel is fast; Cons – Difficult to understand, difficult to modify, lack of security.
What is a layered OS structure?
Services are implemented by a large kernel which is organised into layers. Each layer can only communicate with adjacent layers, and any given layer only needs to know the functionality of adjacent layers.
What are the pros and cons of a layered OS structure?
Pros – It is easy to debug; Cons – Poor performance due to requiring traversal through multiple layers to obtain a service.
What is a microkernel OS structure?
Services are implemented by servers, with a small kernel delivering messages between them.
What are the pros and cons of a microkernel OS structure?
Pros – Secure and reliable; Cons – Poor performance due to increased system-function overhead.
What is a modular OS structure?
Starts with a small kernel and additional services are loaded on demand via modules.
What are the pros and cons of a modular OS structure?
Pros – Fast because we don’t load unnecessary services and any module can directly communicate with any other module; Cons – As more modules are loaded it becomes similar to monolithic structure.
What is a virtual machine?
A host operating system provides virtual hardware to one or more guest OSs. This enables multiple environments with different OSs on one machine.
What is a file?
A file is a named collection of related information that is recorded on secondary storage (non-volatile memory). File extensions help the OS determine how to interpret the file.
What are the attributes of a file?
Typical file attributes include: The name of the file, the identifier of the file, the location of the file on a storage device, the size of the file, the protection mode of the file (permissions), and the times when the file was created, accessed and modified.
How does file management work in Linux?
In Linux everything is represented by a file in the file system. There are six types of files: Regular files, directories, special files, pipes, links, and symbolic links. There is also a tree-like inode pointer structure.
What are hard and soft links?
Hard links – point to a file via its inode, so the link still works if the file is renamed or moved (within the same file system). Soft (symbolic) links – point to a filename, so the link breaks if the file is moved or deleted.
What are blocks?
In Linux, files are allocated in blocks, with each block typically being 4096 bytes.
What is the Inode Pointer Structure?
This is used by the inode of a file to list the addresses of the file's data blocks. It consists of fifteen pointers, twelve of which point directly to blocks of the file's data (direct pointers). Then there is a single indirect pointer (a pointer to a block of pointers that point to blocks of the file's data), a doubly indirect pointer (a pointer to a block of pointers that point to further blocks of pointers that point to blocks of the file's data), and a triply indirect pointer (a pointer to a block of pointers that point to blocks of pointers that point to further blocks of pointers that point to blocks of the file's data).
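The pointer arithmetic above determines the maximum file size. A minimal sketch, assuming 4096-byte blocks (as below) and 4-byte block addresses, so each indirect block holds 1024 pointers:

```python
# Maximum file size under the classic inode pointer structure.
# Assumed parameters: 4096-byte blocks and 4-byte block addresses.
BLOCK = 4096
PTRS = BLOCK // 4                    # pointers per indirect block (1024)

direct = 12 * BLOCK                  # 12 direct pointers
single = PTRS * BLOCK                # 1 single-indirect pointer
double = PTRS * PTRS * BLOCK         # 1 doubly-indirect pointer
triple = PTRS * PTRS * PTRS * BLOCK  # 1 triply-indirect pointer

max_file_size = direct + single + double + triple
print(max_file_size)                 # just over 4 TiB with these assumptions
```

The triply indirect pointer dominates: each level of indirection multiplies the reachable data by 1024.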
What are magnetic disks and how do they work?
These are primarily HDD’s but also floppy disks. A magnetic disk has a number of spinning circular platters, over which hover some heads attached to a movable arm. The magnetic disk is read/written by having the head sense/change the magnetism of a sector. At a bit level magnetism in one direction represents a one, and magnetism in the other direction represents a zero. Each platter is divided into circular tracks, and each track is divided into sectors. A set of tracks across different platters at a given position make up a cylinder. Each sector has a fixed amount of data (usually 512 bytes), which is the smallest unit of data you can transfer to or from a disk. A magnetic disk is read/written by moving the arms in/out to the requires cylinder. All heads/arms move together. The platters rotate; the rotation speed is related to the data transfer rate.
What are SSDs and how do they work?
An SSD has no moving parts and instead stores data using flash memory. It has a controller (an embedded processor), buffer memory (volatile memory), and flash memory. A typical SSD might have 4 kB pages and a 512 kB block size, meaning each block has 128 pages. Erasing flash memory requires a high voltage, is rather slow, and can only be done at block level. Reading and writing are done at page level and are fast. Overwriting requires an erase operation, and is therefore slower than writing to an empty drive. An SSD is read by copying a flash-memory page into the buffer and reading the data from the page in the buffer. An SSD is overwritten by copying a memory block into the buffer, erasing the block in flash memory, modifying the block in the buffer, and writing the block from the buffer back to flash memory.
What are the pros and cons of SSDs over magnetic disks?
SSDs are faster, more reliable, and more power efficient than magnetic disks, but they are also more expensive. Additionally, SSDs deteriorate with every write operation.
What is Wear-Levelling?
A block will fail once it reaches a critical number of writes. Wear-levelling spreads out the writes evenly among the blocks. There are two types: Dynamic and Static. It is applied by the controller.
What is Dynamic Wear-Levelling?
One type of wear-levelling is dynamic wear-levelling. New data is written to the least-recently-used block. Thereby we avoid wearing out certain blocks by writing to the same block again and again. The issue with this is that cold data is not moved.
What is Static Wear-Levelling?
This does the same as dynamic wear-levelling, but also periodically moves existing data to the least-recently-used blocks. This avoids wearing out certain blocks even when some blocks hold cold data that would otherwise never move.
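The block-selection idea behind wear-levelling can be sketched as follows. This toy model uses per-block erase counts as the wear metric (a common variant; the cards above phrase dynamic wear-levelling in terms of the least-recently-used block), and the bookkeeping is hypothetical, not a real flash translation layer:

```python
# Toy wear-levelling: each write goes to the least-worn free block,
# so erases spread evenly instead of hammering one block.
erase_counts = {0: 5, 1: 2, 2: 9, 3: 2}   # block id -> erases so far (made up)

def pick_block(free_blocks):
    # choose the free block with the fewest erases (ties -> lowest id)
    return min(free_blocks, key=lambda b: erase_counts[b])

def write(free_blocks):
    block = pick_block(free_blocks)
    erase_counts[block] += 1              # writing here costs one erase
    return block

first = write([0, 1, 2, 3])               # picks block 1 (count 2, lowest id)
```

Static wear-levelling would additionally migrate cold data out of low-count blocks so they become candidates for new writes.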
What is an I/O device?
I/O devices enable computers to receive input from and send output to humans or other devices. Examples include human interface devices such as monitors and keyboards, storage devices such as SSDs and HDDs, and transmission devices such as network cards. Communication with I/O devices is performed over a bus, and the devices are operated using controllers.
What are controller registers?
The processor communicates with the controller by reading and writing to the controller registers. There is the data-in register, which is data coming from the device; the data-out register, which is data going to the device; the status register, which indicates the status of the device; and the control register, which commands the device.
What is Port-Based I/O?
The CPU uses special instructions for sending and receiving data to and from an I/O port.
What is Memory-Mapped I/O?
The CPU treats the device “as memory”; control registers are mapped to specific memory addresses. This is more commonly used nowadays than port-based I/O.
What is polling?
An approach to control. The CPU repeatedly checks the controller’s status register to see whether the controller is busy. When the controller is ready and the CPU wants to give new instructions, the CPU writes to the data-out register and signals that it has done so to the control register.
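The polling loop above can be sketched against a mock controller. Everything here (the class, the "ready"/"busy" strings, the register names) is an illustrative model of the registers described earlier, not a real device interface:

```python
# Polling sketch: busy-wait on the status register, then use the
# data-out and control registers to issue a command.
class MockController:
    """Hypothetical controller that reports busy for its first 3 status reads."""
    def __init__(self):
        self._reads = 0
        self.data_out = None
        self.control = None

    def status(self):
        self._reads += 1
        return "ready" if self._reads > 3 else "busy"

def poll_and_send(ctrl, byte):
    while ctrl.status() == "busy":   # CPU repeatedly checks the status register
        pass
    ctrl.data_out = byte             # write data to the data-out register
    ctrl.control = "command-ready"   # signal the command via the control register

ctrl = MockController()
poll_and_send(ctrl, 0x41)
```

The busy-wait loop is exactly the cost that interrupts avoid: the CPU does nothing useful while spinning on the status register.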
What are Interrupts?
An approach to control. The CPU regularly senses an interrupt-request line. When the CPU gets an interrupt signal through the interrupt-request line, it stops the current process and initiates a response.
What are the differences between Polling and Interrupts?
Interrupts can monitor the status of several devices at the same time and serve them based on priority. Polling needs to check each device individually in a round-robin fashion. Interrupts are much more commonly used since this frees the CPU from polling devices that do not need service. Polling may be efficient if the controller and the device are fast, the I/O rate is high, some I/O data can be ignored, and the CPU has nothing better to do.
What are large data transfers?
Some devices, such as disk drives, often perform large data transfers. The transfer will be inefficient if the CPU has to feed data, byte by byte, to the controller's registers. A better approach is to offload this work to a special-purpose processor (a DMA controller).
How does Direct Memory Access work?
The CPU writes a command block into memory, specifying the source and destination of the transfer. The DMA controller can then perform multiple transfers via a single command. When the transfer is complete, the CPU receives an interrupt from the DMA controller. This enables the CPU to spend more resources on other tasks.
What is a device driver?
A device driver hides differences between various device controllers by defining an interface between the OS and I/O devices for a specific class of I/O devices.
What is a system call?
A system call is a request of kernel service. For example, you can have system calls for character I/O, block I/O and network I/O.
What happens in a system call for character I/O?
A character device transfers bytes one by one. Characters must be processed in the order that they arrive in the stream. The interface includes the get operation, to return the next character in the stream; the put operation, to add a character to the stream; and libraries for line-by-line access, with integrated editing services (such as backspace to remove the preceding character from the stream).
What happens in a system call for block I/O?
Block devices (typically non-volatile mass storage devices such as HDD’s) are used to transfer blocks of data. The interface includes a read operation, for reading blocks of data; and a write operation, for writing blocks of data. Block devices are high volume devices.
What happens in a system call for memory-mapped I/O?
Layered on top of block device drivers. Rather than read and write operations, a memory-mapped interface provides access to disk storage via a location in main memory; a system call maps a file on the device into memory. The OS deals with transferring data between memory and the device and can perform transfers when needed. Accessing memory-mapped files is generally faster than using read/write operations since it does not require a context switch. Note that memory-mapped files are not the same as memory-mapped I/O.
What happens in a system call for network I/O?
A network socket interface has system calls for:
- Creating and connecting sockets
- Sending and receiving packets over the connection
- The select function, for determining the status of one or more sockets (whether the socket is ready for reading and writing).
- Checking whether a transfer was successful
- Recovering gracefully from unsuccessful transfers.
The main difference between network and other I/O devices is that things regularly go wrong with network I/O devices.
What is a program?
A set of instructions for performing a specific task. It is stored on disk.
What is a process?
A program in execution. It requires CPU resources, primary memory and I/O. There may be multiple processes executing a single program.
Why do we need processes?
It makes the OS easier to build and maintain, as modules can be improved one by one. We can build methods for organising and executing a large number of concurrent processes, and tasks can be executed in parallel, which is faster.
What are the elements of the process lifecycle?
Born: A process has been created but not admitted to the pool of executable processes.
Ready: When the OS is ready, the process is transferred to the ready state.
Running: The process is executed by a processor. It may go back and forth between running and ready depending on priority.
Waiting: The process has made a request it has to wait for, such as an input or access to a file.
Died: The process has been completed or aborted
What is a process control block?
Each process is represented by a PCB, stored in memory. It contains the process number (its identity), the process state (its registers, which store temporary results, the next instruction, etc.), the process address space (its memory), and the process I/O (the I/O devices and files allocated to the process).
What is a process context switch?
When one process (A) is stopped and another (B) is started. The OS changes the process scheduling state of A, saves the context of A, loads the context of B, and changes the process scheduling state of B. It is pure overhead, and the time it takes depends on several factors.
What are the issues with process context switches?
With too few of them, there is no fairness between processes and some processes have to wait for a long time. With too many, the processor has to spend too many resources on overhead. The aim is to optimise the number of switches.
What does the process address space contain?
A stack for temporary data, a heap for dynamically-allocated data, a data section for static data, and a text section for program code. The data and text sections are of fixed size, while the stack and heap can shrink and grow.
What is the stack in the process address space?
Whenever a function is called, temporary variables are added to the top of the stack. When exiting a function they are removed from the top of the stack. Push adds a variable and Pop removes a variable. Last in - First out.
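The push/pop behaviour above can be sketched with a plain list standing in for the call stack (the variable names are illustrative):

```python
# LIFO behaviour of the stack: push adds to the top, pop removes from the top.
stack = []
stack.append("x")   # push: a local variable for the outer function
stack.append("y")   # push: a local variable for a nested call
top = stack.pop()   # the nested call returns: its variable comes off first
assert top == "y"   # last in, first out
```

The outer function's variable ("x") stays on the stack until that function itself returns.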
What is the heap in the process address space?
It is dynamically allocated. Blocks are allocated and removed in an arbitrary order. It’s used to store a large block of memory for a longer period of time, and variables that can change size dynamically.
What is process spawning?
A process is created at the request of a different process. The first process is called the parent process, and the second is the child process. In Linux there are four system calls for process spawning.
What is Fork?
Fork creates a child process with a unique process ID. The child is given a copy of the parent's process address space. Each process knows whether it is the parent or the child: fork returns the child's PID to the parent and 0 to the child.
What is Exec?
Exec runs an executable file that overwrites the process's address space with a new program. It is usually used after fork: fork-then-exec allows the parent and child to communicate but then go their separate ways.
What is Wait and Exit with parent/child processes?
The parent process is put in a waiting state until the child process has finished. When done, the child process issues the exit system call, and the parent process resumes.
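The fork/wait/exit lifecycle above can be sketched directly with Python's POSIX wrappers (this only runs on Unix-like systems such as Linux; the exit status 7 is an arbitrary example):

```python
# Minimal fork/wait sketch: the parent blocks in waitpid until the child exits.
import os

def spawn_and_wait():
    pid = os.fork()                   # returns 0 in the child, the child's PID in the parent
    if pid == 0:
        os._exit(7)                   # child: terminate immediately with status 7
    _, status = os.waitpid(pid, 0)    # parent: wait for the child to finish
    return os.WEXITSTATUS(status)     # extract the child's exit status

print(spawn_and_wait())               # 7
```

In a real fork-then-exec pattern the child would call an exec function instead of exiting straight away.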
What is shared memory?
Shared memory is a memory management technique in which multiple processes can access the same block of memory. A region from within one process's address space is shared between them, and the processes read from or write to the shared region. Both processes must make system calls to make the shared space available.
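A shared region can be sketched with Python's multiprocessing.shared_memory module, which wraps the underlying system calls. For brevity both handles live in one process here; in real use the second handle would be opened (by name) from a different process:

```python
# Two handles to the same named shared-memory block see each other's writes.
from multiprocessing import shared_memory

shm_a = shared_memory.SharedMemory(create=True, size=16)   # create the region
shm_b = shared_memory.SharedMemory(name=shm_a.name)        # attach to it by name
shm_a.buf[0] = 42          # write through one handle...
value = shm_b.buf[0]       # ...and read the same byte through the other
shm_b.close()
shm_a.close()
shm_a.unlink()             # release the shared region
```

Note there is no copying or message delivery involved: both handles map the same physical memory, which is why shared memory avoids per-transfer system calls.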
What is message passing?
Processes communicate by sending messages to each other. The messages are sent to/received from an agreed “mailbox” that is not part of the address space of any of the processes.
What are the advantages of shared memory and message passing?
Shared memory: faster than message passing, because message passing requires system calls. Message passing: good for small amounts of data, as it avoids setting up the shared space.
What is First Come First Served?
The first-come-first-served scheduling algorithm allocates the processor on the basis of creation time (like queueing at a shop with a single queue).
What are the pros and cons of First Come First Served?
Pros: No unnecessary switching between processes. You will eventually always provide processing time to a given process. Cons: Long average waiting time.
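The long-average-wait downside can be shown with a small calculation. The burst times are made up; all processes are assumed to arrive at time 0:

```python
# Average waiting time under first-come-first-served.
bursts = [24, 3, 3]        # service order = arrival order (illustrative times)

waits, clock = [], 0
for b in bursts:
    waits.append(clock)    # each process waits for all earlier ones to finish
    clock += b

avg_wait = sum(waits) / len(waits)
print(avg_wait)            # (0 + 24 + 27) / 3 = 17.0
```

One long process at the front of the queue ("convoy effect") drags up everyone's waiting time.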
What is Round Robin?
Identical to first come first served, except that no process can occupy the processor longer than a predefined time length (the time quantum). After arriving at the processor, a process will either 1) be interrupted and placed at the end of the (circular) queue, or 2) complete before it runs out of time.
What are the Pros and Cons of Round Robin?
Pros: Distributes resources fairly, and quick processes can pass through quickly. Cons: Long waiting time when processes require multiple time quanta; performance depends on the time quantum.
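The circular-queue behaviour can be sketched with a small simulator. The process names, burst times, and quantum of 4 are made up, and all processes are assumed to arrive at time 0:

```python
# Round-robin completion order with a fixed time quantum.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(bursts.items())     # (name, remaining burst time)
    finish_order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finish_order.append(name)                  # completes within its slice
        else:
            queue.append((name, remaining - quantum))  # preempted: back of the queue
    return finish_order

order = round_robin({"A": 10, "B": 4, "C": 6}, quantum=4)
print(order)   # ['B', 'C', 'A'] -- short processes pass through quickly
```

A, needing three quanta, is preempted twice and finishes last even though it arrived first.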
What is Shortest Process Next?
The shortest process next scheduling algorithm shares the processor on the basis of shortest predicted execution time. It can be preemptive or non-preemptive.
What are the pros and cons of Shortest Process Next?
Pros: Gives the minimum average waiting time for a given set of processes. Cons: Execution time has to be estimated, and long processes may have to wait ages.
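The minimum-average-wait claim can be checked numerically against first-come-first-served on the same (made-up) burst times, with all processes arriving at time 0:

```python
# Shortest-process-next vs first-come-first-served: average waiting time.
bursts = [24, 3, 3]                # illustrative burst times, arrival order

def avg_wait(order):
    waits, clock = [], 0
    for b in order:
        waits.append(clock)        # each process waits for all earlier ones
        clock += b
    return sum(waits) / len(waits)

fcfs = avg_wait(bursts)            # arrival order: (0 + 24 + 27) / 3 = 17.0
spn = avg_wait(sorted(bursts))     # shortest first: (0 + 3 + 6) / 3 = 3.0
print(fcfs, spn)
```

Serving short bursts first pushes the long waits onto the fewest possible processes, which is why sorting by burst time minimises the average.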
What is multilevel queuing?
It organizes processes into different priority levels or queues, and processes are scheduled for execution based on their priority level. For example there could be specific queues for interactive processes (lots of I/O), normal processes (system services), and batch processes (processes without I/O). Fixed Priority scheduling gives some processes priority. Time slicing gives different CPU time to different processes. Different algorithms may be used for different processes.
What are the pros and cons of multilevel queuing?
Pros: Can accommodate a range of different performance objectives. Cons: Complex and difficult to calibrate.
What is User-Oriented Criteria?
Focus on performance as perceived by an individual user. This can include turnaround time (the time between submission of a process and its completion) and response time (the time between the submission of a process and the first response from the process).
What are system oriented criteria?
Focus on efficient utilization of the processor. For example: Throughput (the number of completed processes per unit time) and Processor Utilization (The percentage of time that the processor is busy).
What is a multi-core processor?
A processor with several cores (processing units). Each core has its own cache memory.
What are the two approaches to Multi-Processor Scheduling?
A common ready queue: When a processor comes available it chooses a new process from a common queue. Private queues: When a processor becomes available, it chooses a new process from its own private queue.
What is load balancing?
A method of improving the performance of multiprocessor scheduling. It balances processes evenly among processors. Common ready queues automatically have this.
What is Processor Affinity?
A method of improving the performance of multiprocessor scheduling. It keeps a process running on the same processor to keep the cache warm. Private queues automatically have this.
What are the Memory Management Tasks?
The ability to relocate a process, protection from other processes, and sharing of data.
What is Fixed Partitioning?
Memory is divided into partitions of a fixed size; the size may vary between partitions. A process can be loaded into any partition whose size is equal to or greater than the size of the process. One-to-one mapping between partitions and processes. Suffers from Internal Fragmentation.
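The placement rule above can be sketched with a first-fit search; the partition sizes (in MB) and the 3 MB process are made up, and real allocators may instead use best-fit:

```python
# Fixed partitioning sketch: load a process into the first free partition
# large enough for it; leftover space is internal fragmentation.
partitions = [8, 4, 4, 2]            # fixed partition sizes in MB (illustrative)
free = [True] * len(partitions)

def load(process_size):
    for i, size in enumerate(partitions):
        if free[i] and size >= process_size:
            free[i] = False          # one-to-one mapping: partition now occupied
            return size - process_size   # wasted space inside this partition
    return None                      # no free partition is big enough

waste = load(3)                      # first fit: the 8 MB partition, wasting 5 MB
```

A best-fit policy would pick a 4 MB partition here and waste only 1 MB, illustrating how placement policy affects internal fragmentation.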
What is Internal Fragmentation?
When space is wasted inside partitions because the size of the process is smaller than the size of the partition.
What are the pros and cons of Fixed Partitions?
Pros: Easy to understand and implement. Cons: Internal fragmentation, and a pre-specified limit on the number of processes and on the size of the largest process.
What is Dynamic Partitioning?
The partition sizes adjust to the sizes of processes. One-to-one mapping between partitions and processes. Suffers from External Fragmentation.
What are External Fragmentation and Compaction?
External fragmentation is when space is wasted between partitions. Compaction is when processes are shifted so all the free memory is gathered in one continuous block. This takes a lot of time.
What are the pros and cons of Dynamic Partitioning?
Pros: No internal fragmentation. No limit on the number of processes or on the size of the largest process. Cons: External fragmentation. Time is wasted on compaction.
What is Simple Segmentation?
Each program is divided into segments (e.g., text and data segments). Segments are loaded into memory as in dynamic partitioning. All segments need to be loaded into memory, but they do not need to be contiguous.
What are the pros and cons of Simple Segmentation?
Pros: Easier to fit processes in memory than with dynamic partitioning. No internal fragmentation. No limit on the number of processes or on the size of the largest process. Cons: External fragmentation (less than dynamic partitioning). Time is wasted on compaction.
What is Virtual Memory Segmentation?
Use disk storage as if it was main memory. Segments are loaded into memory as in dynamic partitioning, except that not all segments need to be loaded at the same time.
What are the pros and cons of Virtual Memory Segmentation?
Pros: Easier to fit processes in memory than with simple segmentation. No internal fragmentation. No limit on the number of processes or on the size of the largest process. Cons: External fragmentation (less than dynamic partitioning). Time is wasted on compaction.
What is Simple Paging?
Memory is divided into frames of a fixed, equal size. Each process is divided into pages of the same size. Internal fragmentation is small.
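The page/frame mapping implies a simple address translation: split the virtual address into a page number and an offset, then look the page up in the page table. The 4096-byte page size and the tiny table below are illustrative:

```python
# Paging sketch: virtual address -> (page number, offset) -> physical address.
PAGE = 4096
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number (made up)

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE)   # split the virtual address
    frame = page_table[page]             # would fault if the page is unmapped
    return frame * PAGE + offset         # same offset within the frame

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 8196
```

Because the offset is carried over unchanged, pages fit frames exactly and the only internal fragmentation is the unused tail of each process's last page.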
What are the pros and cons of Simple Paging?
Pros: No external fragmentation. All pages fit perfectly into frames. Cons: Small internal fragmentation. Each process requires a page table (which consumes memory).
What is Virtual Memory Paging?
Pages are loaded into frames as in simple paging, except that not all pages need to be loaded at the same time.
What are the pros and cons of Virtual Memory Paging?
Pros: Easier to fit processes in memory than with simple paging. No external fragmentation. All pages fit perfectly into frames. Cons: Small internal fragmentation. Each process requires a page table (which consumes memory). More overhead required for virtual memory.
What is the CPU?
The CPU controls the execution of instructions. A CPU will use a specific instruction set architecture, such as ARM or x86.
What are Registers?
The CPU stores temporary results in internal registers. When executing a given process, that process's registers are loaded into the CPU. Some registers are interchangeable general-purpose registers.
What are Special-Purpose Registers?
CPU registers used for specific tasks like instruction tracking and memory management. Examples include program counter, stack pointer, link register, and base pointer.