Terminology Flashcards
What are the access methods for files?
•Sequential access
–Information is processed record after record
–Possible operations:
- Reading the next record/block
- Adding a record at the end
- Changing the current position pointer
•Direct access
–The location of the record to be processed is given as an argument of the operation
•Indexed access
–Records are identified via special index keys
–Faster access in comparison to sequential access
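Sequential and direct access map naturally onto ordinary file operations. A minimal Python sketch (the file name and record layout are made up for illustration) contrasting the two:

```python
import os
import tempfile

# Write three fixed-size records, then read them back two ways.
RECORD_SIZE = 8
path = os.path.join(tempfile.mkdtemp(), "records.bin")

with open(path, "wb") as f:
    for i in range(3):
        f.write(i.to_bytes(4, "little").ljust(RECORD_SIZE, b"\x00"))

with open(path, "rb") as f:
    # Sequential access: each read advances the position pointer.
    first = f.read(RECORD_SIZE)
    second = f.read(RECORD_SIZE)
    # Direct access: the record location is given as the argument of seek().
    f.seek(2 * RECORD_SIZE)
    third = f.read(RECORD_SIZE)

print(int.from_bytes(third[:4], "little"))  # 2
```

Indexed access would add a lookup structure (key → record offset) in front of the `seek()` call.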
What is CISC?
A computer in which individual instructions may perform many operations and take many cycles to execute, in contrast with reduced instruction set computer (RISC).
What are the requirements of mutual exclusion?
•Enforcement
–Only one process at a time is allowed into its critical section, among all processes that have a critical section for the same resource or shared object
–A process that halts in its non-critical section must not interfere with other processes
•No deadlock or starvation
–A process requiring access to a critical section must not be delayed indefinitely
–Access to a critical section must not be delayed when no other process is in a critical section
–No assumptions are made about relative process speeds or the number of processes
–A process remains inside its critical section for a finite time only
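The requirements above are what a lock primitive provides. A minimal sketch using Python's `threading.Lock` (counter and loop sizes are arbitrary): without the lock, the increments from several threads could interleave and lose updates; with it, only one thread at a time is in the critical section, and each stays there for a finite time only.

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        # Critical section: only one thread at a time holds the lock,
        # and it is held for a finite time (one increment).
        with lock:
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000
```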
What is Cache coherence?
Cache coherence is the consistency of data stored in cache memory. Maintaining consistency between caches and main memory is essential for multiprocessor and distributed shared memory (DSM) systems. Cache management is structured to ensure that data is not overwritten or lost. Different techniques may be used to maintain cache coherency, including directory-based coherence, bus snooping, and snarfing. To maintain consistency, a DSM system applies similar techniques through a coherency protocol, which is essential to system operation. Cache coherence is also known as cache coherency or cache consistency.
What are the types of scheduling?
First-come First-served Scheduling (FCFS)
First-come First-served scheduling follows a first-in, first-out policy. As each process becomes ready, it joins the ready queue. When the currently running process ceases to execute, the oldest process in the ready queue — the one that entered first among the available processes — is selected for running. The average waiting time for FCFS is often quite long. It is non-preemptive.
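The long average waiting time is easy to see with a small calculation (a sketch; the burst times are the common textbook example of one long job arriving ahead of two short ones):

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process under FCFS, assuming all arrive at time 0
    in the given order."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # a process waits for everything queued before it
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits)                      # [0, 24, 27]
print(sum(waits) / len(waits))    # 17.0 -- the two short jobs wait behind the long one
```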
Shortest Job First Scheduling (SJF)
This algorithm associates with each process the length of its next CPU burst. Shortest-job-first scheduling is also called shortest process next (SPN). Among the available processes in the ready queue, the one with the shortest expected processing time is selected for execution; thus, a short process will jump to the head of the queue ahead of long jobs. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. SJF is optimal in the sense that it gives the minimum average waiting time for a given set of processes. It cannot be implemented exactly at the level of short-term CPU scheduling, because there is no way of knowing the length of the next CPU burst in advance. SJF can be preemptive or non-preemptive.
A preemptive SJF algorithm will preempt the currently executing process if the next CPU burst of a newly arrived process is shorter than what is left of the currently executing process.
A non-preemptive SJF algorithm will allow the currently running process to finish. Preemptive SJF scheduling is sometimes called the Shortest Remaining Time First algorithm.
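For the non-preemptive case with all processes ready at time 0, SJF reduces to running jobs shortest-first; a sketch (burst times chosen for illustration):

```python
def sjf_average_wait(burst_times):
    """Non-preemptive SJF with all processes ready at time 0:
    serving the shortest burst first minimizes average waiting time."""
    waits, elapsed = [], 0
    for burst in sorted(burst_times):
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

print(sjf_average_wait([6, 8, 7, 3]))  # 7.0
```

Under FCFS in the given order the same set would average (0 + 6 + 14 + 21) / 4 = 10.25, so the reordering pays off.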
**Priority Scheduling** SJF is a special case of the general priority scheduling algorithm. A priority (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority. Generally the smallest integer is considered the highest priority. Equal-priority processes are scheduled in first-come, first-served order. Priority scheduling can be preemptive or non-preemptive.
Non-preemptive Priority Scheduling
In this type of scheduling the CPU is allocated to the process with the highest priority after completing the present running process.
Preemptive Priority Scheduling
In this type of scheduling the CPU is allocated to the highest-priority process immediately upon its arrival, preempting the currently running process if the newcomer's priority is strictly higher. A newly arrived process of equal priority does not preempt: the present running process is allowed to complete before the CPU is reallocated.
Highest Response Ratio Next (HRRN) scheduling is a non-preemptive discipline, similar to Shortest Job Next (SJN), in which the priority of each job is dependent on its estimated run time, and also the amount of time it has spent waiting. Jobs gain higher priority the longer they wait which prevents indefinite postponement (process starvation). In fact, the jobs that have spent a long time waiting compete against those which are estimated to have short run times.
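The priority in HRRN is usually expressed as the response ratio R = (w + s) / s, where w is the time the job has spent waiting and s is its estimated service time. A brief sketch (the example values are invented) showing how a long waiter overtakes a fresh short job:

```python
def response_ratio(waiting_time, service_time):
    # R = (w + s) / s: starts at 1.0 and grows as the job waits,
    # so long waiters eventually beat newly arrived short jobs.
    return (waiting_time + service_time) / service_time

long_job = response_ratio(waiting_time=30, service_time=10)  # 4.0
short_job = response_ratio(waiting_time=0, service_time=5)   # 1.0
print(long_job > short_job)  # True -- the long waiter is scheduled next
```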
Round-Robin Scheduling
This type of scheduling algorithm is designed primarily for time-sharing systems. It is similar to FCFS with preemption added. Round-robin scheduling is also called time-slicing scheduling; it is a preemptive policy driven by a clock. A clock interrupt is generated at periodic intervals, usually 10-100 ms. When the interrupt occurs, the currently running process is placed in the ready queue and the next ready job is selected on a first-come, first-served basis. This is known as time-slicing, because each process is given a slice of time before being preempted.
One of the following happens:
- The process has a CPU burst shorter than the time quantum and releases the CPU voluntarily, or
- The CPU burst of the currently executing process is longer than the time quantum. In this case a context switch occurs and the process is put at the tail of the ready queue.
In round-robin scheduling, the principal design issue is the length of the time quantum or time slice to be used. If the quantum is very short, short processes will move through quickly, but the relative overhead of clock interrupts and context switches increases.
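The two cases above can be sketched in a small simulation (a simplified model: all processes arrive at time 0 and context-switch cost is ignored):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin; returns the completion time of each process."""
    queue = deque(range(len(bursts)))     # ready queue of process indices
    remaining = list(bursts)
    clock, finish = 0, [0] * len(bursts)
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])  # run one quantum, or less if the burst ends
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            queue.append(pid)               # preempted: back to the tail of the queue
        else:
            finish[pid] = clock             # burst finished within the quantum
    return finish

print(round_robin([24, 3, 3], quantum=4))  # [30, 7, 10]
```

With a quantum of 4, the two short jobs finish early (times 7 and 10) instead of waiting behind the 24-unit job as they would under FCFS.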
What are some example systems with microkernels?
- Amoeba
- MINIX
- QNX
- BeOS
- Haiku
- Hurd
What is livelock?
Livelock is a condition that takes place when two or more processes continuously change their state in response to each other, with none of them making progress. Unlike a blocked process, a livelocked process is busy, but its activity achieves nothing.
Livelock can be explained with the analogy of two people meeting in a passageway: each tries to step around the other, but they end up swaying from side to side, getting in each other's way as they try to get out of the way. Livelock differs from deadlock in that the processes involved are repeatedly changing their states with regard to each other rather than being blocked, yet still make no progress. A common way to escape livelock is to break the symmetry, for example by randomly picking one process and stopping its state change.
The system kernel is responsible for managing computer resources and allowing other processes and programs to run and use these resources. The resources in this case are:
- The CPU - the kernel decides at what time (and for how long) running processes/programs should be allocated to the processor (or processors in multiprocessor systems)
- The memory - the kernel decides which memory each process can use, and determines what to do when not enough memory is available
- I/O devices - the kernel dispatches requests from applications to perform I/O to an appropriate device and provides convenient methods for using the device (it hides details of the device/access implementation; only convenient interfaces are provided)
What is starvation?
Starvation is the name given to the indefinite postponement of a process because it requires some resource before it can run, but the resource, though available for allocation, is never allocated to this process. It is sometimes called livelock, though sometimes that name might be reserved for cases where the waiting process is doing something, but nothing useful, as in a spin lock. However it happens, starvation is self-evidently a bad thing; more formally, it’s bad because we don’t want a non-functional system. If starvation is possible in a system, then a process which enters the system can be held up for an arbitrary length of time.
To avoid starvation, it is often said that we want the system’s resources to be shared “fairly”. This is an appealing suggestion, but in practice it isn’t much use because it doesn’t define what we mean by “fairly”, so it’s really more of a description of the problem than a contribution to its solution. It’s more constructive to investigate the paths by which a system can reach a livelocked state, and try to control them.
What are Resources?
a hardware or software component of the computer system required by a process to operate.
Resources classification:
- Reusable (memory after process completion, CPU, file handles after process completion, swap space)
- Consumable (battery power in mobile computers, available free disk space, available free memory)
- Shared (more than one process may use them at the same time – for example shared memory, pipes)
- Exclusive (assigned to a selected process, unavailable for others – allocated memory block)
Draw a diagram displaying the responsibilities of logical files
What is Scheduling?
The process of deciding how to commit resources between a variety of possible/running tasks (processes). Scheduling is supervised by a special system process called scheduler.
What are Access Control Lists (ACL)?
a table that tells a computer operating system which access rights each user has to a particular system object, such as a file directory or individual file. Each object has a security attribute that identifies its access control list. The list has an entry for each system user with access privileges. The most common privileges include the ability to read a file (or all the files in a directory), to write to the file or files, and to execute the file (if it is an executable file, or program).
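The table structure can be sketched directly (a toy model: the file names, users, and privilege sets are invented for illustration):

```python
# Each object carries an ACL entry mapping users to their privileges.
acl = {
    "report.txt": {"alice": {"read", "write"}, "bob": {"read"}},
    "run.sh":     {"alice": {"read", "execute"}},
}

def allowed(user, obj, privilege):
    """Check the object's ACL entry for this user; absent entries deny."""
    return privilege in acl.get(obj, {}).get(user, set())

print(allowed("alice", "run.sh", "execute"))  # True
print(allowed("bob", "report.txt", "write"))  # False -- bob may only read
```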
What is a kernel?
a kernel is the central component of most computer operating systems. Its responsibilities include managing the system’s resources (the communication between hardware and software components).
What are the different types of directory structures?
- One level structure
- Two level structure
- Tree structure
- Acyclic graph
- General graph
What is a Resource Descriptor?
storing information about the availability of a specified resource. Depending on the resource type (file, memory segment, etc.) the descriptor structure may take various forms. The structure is very often forced by the system architecture specification (for example, depending on logical memory structure it may relate to segments, pages, etc.).
The resource descriptor may for example contain information about:
- Resource ID
- Resource Type
- List and number of available resources
- List (queue) of processes waiting for the resource
- Allocation procedure
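The fields listed above can be sketched as a small data structure (a hypothetical layout; real systems tailor the descriptor to the resource type):

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class ResourceDescriptor:
    """Illustrative descriptor holding the fields named on the card."""
    resource_id: int
    resource_type: str
    available: list                                 # free resource units
    waiting: deque = field(default_factory=deque)   # queue of waiting PIDs

    def allocate(self, pid):
        # Allocation procedure: grant a unit if one is free,
        # otherwise enqueue the requesting process.
        if self.available:
            return self.available.pop()
        self.waiting.append(pid)
        return None

printer = ResourceDescriptor(7, "printer", available=["prn0"])
print(printer.allocate(41))  # 'prn0' -- granted
print(printer.allocate(42))  # None -- PID 42 now waits in the queue
```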
What is RISC?
A processor architecture that shifts the analytical process of a computational task from the execution or runtime to the preparation or compile time. By using less hardware or logic, the system can operate at higher speeds. RISC cuts down on the number and complexity of instructions, on the theory that each one can be accessed and executed faster, and that less semiconductor real estate is required to process them. The result is that for any given semiconductor technology, a more powerful microprocessor can be produced with RISC than with complex instruction set computer (CISC) architectures.
This simplification of computer instruction sets gains processing efficiencies. That theme works because all computers and programs execute mostly simple instructions. RISC has five design principles:
• Single-cycle execution — In most traditional central processing unit (CPU) designs, the peak possible execution rate is one instruction per basic machine cycle, and for a given technology, the cycle time has some fixed lower limit. Even on complex CPUs, most compiler-generated instructions are simple. RISC designs emphasize single-cycle execution, even at the expense of synthesizing multi-instruction sequences for some less-frequent operations.
• Hard-wired control, little or no microcode — Microcode adds a layer of interpretive overhead, raising the number of cycles per instruction, so even the simplest instructions can require several cycles.
• Simple instructions, few addressing modes — Complex instructions and addressing modes, which entail microcode or multicycle instructions, are avoided.
• Load and store, register-register design — Only loads and stores access memory; all others perform register-register operations. This tends to follow from the previous three principles.
• Efficient, deep pipelining — To make convenient use of hardware parallelism without the complexities of horizontal microcode, fast CPUs use pipelining. An n-stage pipeline keeps up to “n” instructions active at once, ideally finishing one (and starting another) every cycle. The instruction set must be carefully tuned to support pipelining.
What is a memory table?
A memory table is a set of data which is stored in the computer’s memory.
The advantage of a memory table is it is very fast - you don’t have to read the table from disk.
The disadvantage is the table only exists while the software is running. If the software stops for any reason, or crashes, the memory table is deleted.
There are various strategies for saving data from memory tables to disk, when the contents of the memory table have to be preserved.
In the old days, creating a memory table used to be a big deal - memory was in such short supply that you had to have a very good reason to tie up tens of kilobytes of memory in a memory table.
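A minimal sketch of the idea (the table contents and file name are invented): a plain dict serves as the memory table, and one preservation strategy is to snapshot it to disk so the contents survive a shutdown, up to the last save.

```python
import json
import os
import tempfile

# In-memory table: a plain dict gives fast lookups with no disk reads.
table = {"alice": 10, "bob": 20}
table["carol"] = 30          # updates cost only memory writes

# Preservation strategy: snapshot the table to disk when it must survive.
path = os.path.join(tempfile.mkdtemp(), "table.json")
with open(path, "w") as f:
    json.dump(table, f)

# After a restart, the snapshot restores the table.
with open(path) as f:
    restored = json.load(f)
print(restored == table)  # True
```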
What types of Interrupts are there?
•Program interrupts
–Run-time interrupts:
- Arithmetic overflow
- Division by zero
- Memory violation (reference outside allowed memory space)
•Hardware interrupts
–Timer
- Fires on counter overflow/zero/equal-to conditions
–I/O
- A piece of hardware generates an interrupt wanting to be handled
What is address space?
a set (range) of all allowed (valid) memory addresses.
What is memory partitioning?
•Partition memory into blocks that are allocated to processes
–provides contiguous address spaces
- efficient mapping of logical to physical addresses
–fixed or static partitions
- equal-size
- unequal-size
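A sketch of unequal-size fixed partitioning (the sizes and placement policy are illustrative): partitions are set up once, and each arriving process is placed in the smallest free partition that fits it.

```python
# Fixed (static) partitions, established at boot; sizes in MB (illustrative).
partitions = [{"size": 2, "pid": None},
              {"size": 4, "pid": None},
              {"size": 8, "pid": None}]

def load(pid, need):
    """Place a process in the smallest free partition that can hold it."""
    best = None
    for p in partitions:
        if p["pid"] is None and p["size"] >= need:
            if best is None or p["size"] < best["size"]:
                best = p
    if best is None:
        return False        # no free partition large enough
    best["pid"] = pid       # unused space inside best is internal fragmentation
    return True

print(load("P1", 3))   # True  -- placed in the 4 MB partition (1 MB wasted)
print(load("P2", 16))  # False -- larger than every partition
```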
How does address validation work in the conversion of a logical address into a physical address?
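One common scheme (a sketch of the base/limit-register approach, not the only possibility): each process has a base and a limit register; a logical address is valid only if it is below the limit, and the physical address is base plus the logical address.

```python
def translate(logical, base, limit):
    """Validate a logical address against the limit register, then
    relocate it by the base register to get the physical address."""
    if logical < 0 or logical >= limit:
        raise MemoryError("address outside the process's address space")
    return base + logical

print(translate(100, base=3000, limit=500))  # 3100
# translate(600, base=3000, limit=500) would raise MemoryError:
# 600 exceeds the 500-byte limit, so the reference is a memory violation.
```

An invalid reference of this kind is exactly the "memory violation" run-time interrupt listed on the interrupts card.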