CSC 330 - Exam One Flashcards
Processes
A process is an instance of an executing program; a dynamic entity in a system because it exhibits behavior (state changes) and is capable of carrying out computational activity
They are created at system startup, by another process, by the user, or by a batch job. They are terminated by a normal exit, an error exit, a fatal error, or by another process.
There are two types of processes - user and system processes.
A data structure called a process descriptor is created for each process in a system. This is where the OS stores all data about a process.
Threads
A thread is a single flow of control within a process; a multithreaded process contains multiple flows of control.
Threads can be considered mini-processes or lightweight processes. They allow computing and I/O to overlap and allow the workload to be balanced.
The Process Manager is in charge of swapping out the threads.
Describe the states a process can be in
Processes exhibit their behavior by changing from one state to the next. They can be in one of the following states:
Created - A job arrived and there are sufficient resources available (like memory)
Waiting for CPU - The process is ready
Executing - The process is receiving service from the CPU
Waiting for I/O service - The process is “blocked”
Receiving I/O service
Waiting for one or more passive resources - The process is “blocked” because it needs a resource that is not available yet
Interrupted by the OS - The OS will interrupt a process when that process requests a resource that is not available
Terminated - The process satisfied all service requests
Describe the states a thread can be in
The relevant states of a thread are: Ready, Running and Blocked. A thread is blocked if it is waiting for an I/O operation to complete.
Describe how to use threads in Java
There are two ways to use threads in Java. The first is to extend the Thread class, which is the preferred method. The second is to implement the Runnable interface; one would implement Runnable if the class already has to extend another class.
An outline of the steps for using threads in Java is as follows (a sketch follows the list):
- Define one or more classes with thread behavior
- Override (redefine) the thread method run for each thread class - the run method is never called directly
- Create one or more thread objects for each of these classes
- Execute the thread objects with the thread method .start() - the .start() method calls the run method of the class you write
- Manipulate the thread objects with the thread methods available in the Thread class ex: threadName.isAlive(), threadName.getPriority(), threadName.join()
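As a minimal sketch of these steps (the class names, loop bodies and printed text are only for illustration), one class extends Thread and another implements Runnable; both override run and are started with start():

    class CounterThread extends Thread {
        public void run() {                        // never called directly
            for (int i = 0; i < 3; i++)
                System.out.println("CounterThread: " + i);
        }
    }

    class CounterTask implements Runnable {        // use when the class must extend another class
        public void run() {
            for (int i = 0; i < 3; i++)
                System.out.println("CounterTask: " + i);
        }
    }

    public class ThreadDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new CounterThread();
            Thread t2 = new Thread(new CounterTask());
            t1.start();                            // start() calls run() in a new thread
            t2.start();
            System.out.println("t1 alive? " + t1.isAlive());
            t1.join();                             // wait for the threads to finish
            t2.join();
        }
    }

Note that calling run() directly would execute it in the current thread; only start() creates a new thread of control.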
Describe the major components of an operating system
There are five major components of an operating system: the Process Manager, the Memory Manager, the Resource Manager, the File Manager and the Device Manager.
The Process Manager creates, suspends, executes, terminates and destroys processes.
The Memory Manager controls allocation & deallocation of memory.
The Resource Manager facilitates the allocation & deallocation of resources to the requesting process; the OS is considered a huge Resource Manager.
The File Manager allows users and processes to create and delete files and/or directories.
The Device Manager provides an appropriate level of abstraction for the system's I/O devices.
Describe the operational details of computer systems
Central Processing Unit (CPU) is the portion of the computer that does the processing; it issues commands to the devices in the system and is small in size. Its components are:
Registers - Temporary storage locations; Highest speed / Highest cost
ALU - Arithmetic Logic Unit
Control Unit - Responsible for controlling what’s happening in the ALU; determines the sequence of events; decodes an instruction
Main Memory is working storage for programs and information. It is volatile, meaning it can only store things while the power is on. Any running program must be in main memory along with any data the computer is working with.
Secondary Storage consists of external devices that store data and software. They range in speed, amount of storage and cost. They are non-volatile, meaning data is still stored when the device is disconnected from power. An example would be a hard drive or a USB drive.
I/O Devices provide input or display output, and they let the computer communicate with other computers. Some examples of I/O devices are a monitor, keyboard, mouse, modem, printer or scanner.
A Bus is the connection between the components. A bus has multiple wires, over which bits are transmitted in parallel. Most systems have multiple buses, but only one thing can use a given bus at a time. The wider the bus, the faster it can transmit data.
The Operating System manages the system resources (CPU, Memory, Disk space, Network access). It can have a user interface - either a command line or a GUI. It can also have an Application Programming Interface (API), which is a library of routines to access operating system functionality.
Describe different system architectures
Multiprocessor Systems have separate CPUs and a shared memory system. They use multicore chips, which abide by Moore's Law (the number of transistors on a chip doubles every two years). In a multiprocessor OS, each CPU shares the operating system code but has its own operating system data structures.
Multicomputer systems are clusters of computers or workstations (stripped-down PCs connected by a high-performance network). The communication between nodes is critical. They are a form of parallel computing. Load balancing uses heuristic-based processor allocation algorithms.
Distributed Systems are a collection of full computers that can be spread over a wide area ex: the World Wide Web
Virtualization systems are multiple virtual machines sharing one computer. They combine multiple servers onto one machine, and a small 'hypervisor' controls the virtual machines. Most system failures are due to software, so isolating servers in separate virtual machines limits the damage a single failure can do.
Algorithms an operating system uses to schedule processes
First Come First Served
Round Robin
Shortest Job First
Shortest Remaining Time
Longest Job Next
Priority Scheduling *
Guaranteed Scheduling
Lottery Scheduling
Fair Share Scheduling
Multiple Queues
First Come First Served
This is a non-preemptive method. The order of process arrival to the ready queue determines the order of selection for CPU service. This method is easy to understand and implement. Many I/O-bound processes can be delayed by one compute-bound process.
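For example, with hypothetical burst times: if a compute-bound process needing 24 ms arrives just ahead of two I/O-bound processes needing 3 ms each, FCFS runs them in arrival order, so their waiting times are 0, 24 and 27 ms and the average wait is (0 + 24 + 27) / 3 = 17 ms.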
Round Robin
This is a preemptive method. The processes are queued in the order they arrived. A process can only execute until its time slice expires. If a process is interrupted, it will return to the end of the queue. A reasonable quantum time is about 20 to 50 times the context-switch overhead.
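For example, with an assumed context-switch overhead of 1 ms, a quantum of 20 to 50 ms keeps switching overhead at roughly 2% to 5% of CPU time (1 ms of overhead for every 20 to 50 ms of useful work).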
Shortest Job First
This is a non-preemptive method. Processes are queued in order of increasing remaining execution time. The process with the shortest CPU burst is the one selected next from the ready queue. This method creates optimal average turnaround times when all processes are available simultaneously.
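Continuing the hypothetical numbers above: with bursts of 24, 3 and 3 ms all available at once, SJF runs the two 3 ms jobs first, so the waiting times are 0, 3 and 6 ms and the average wait is 3 ms, versus 17 ms under First Come First Served.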
Shortest Remaining Time
This is a preemptive method. Processes are queued in order of increasing remaining execution time. If a new process arrives and its CPU service period is less than the remaining service period of the currently executing process, the executing process is interrupted. The new process is then started immediately. This method favors processes with short CPU bursts.
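For example, with hypothetical times: a process with an 8 ms burst starts at time 0, and a 2 ms process arrives at time 1, when 7 ms remain on the first process. Because 2 is less than 7, the first process is interrupted, the new process runs to completion at time 3, and the first process then resumes.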
Longest Job Next
This is a preemptive or non-preemptive method. Processes are queued in order of decreasing remaining execution time. The process with the longest CPU burst is the one selected next from the ready queue. This method favors processes with long CPU bursts.
Priority Scheduling
This is a preemptive method. A priority is assigned to each type of process. Processes are queued based on decreasing priority. The highest-priority process is given CPU time and will be interrupted if a higher-priority process arrives. Interrupted processes return to the end of the queue. A priority can be static or dynamic; static priorities do not change, while dynamic priorities can be adjusted to help reduce problems such as starvation.
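A minimal sketch of a ready queue ordered by decreasing priority, using java.util.PriorityQueue (the Proc record, its names and its priority values are made up for illustration; the record syntax needs Java 16+):

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Illustrative process record: a name and a numeric priority.
    record Proc(String name, int priority) {}

    public class PriorityReadyQueue {
        public static void main(String[] args) {
            // Order the ready queue by decreasing priority (largest value first).
            PriorityQueue<Proc> ready =
                    new PriorityQueue<>(Comparator.comparingInt(Proc::priority).reversed());
            ready.add(new Proc("editor", 3));
            ready.add(new Proc("backup", 1));
            ready.add(new Proc("alarm", 7));
            // Dispatch the highest-priority process each time.
            while (!ready.isEmpty())
                System.out.println("dispatch " + ready.poll().name());   // alarm, editor, backup
        }
    }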
Guaranteed Scheduling
Each process is promised an equal amount of CPU time. Processes are ordered by increasing ratio of CPU time received to CPU time entitled, and the process with the lowest ratio is selected next.
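For example, with hypothetical numbers: if 4 processes are running, each is entitled to 1/4 of the CPU. A process that has been in the system for 40 seconds is therefore entitled to 10 seconds of CPU time; if it has actually received 5 seconds, its ratio is 5 / 10 = 0.5, and the scheduler next runs the process with the lowest ratio.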
Lottery Scheduling
Each process is given some number of lottery tickets. A ticket is drawn to determine which process is given CPU time.
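A minimal sketch of a single lottery draw (the process names and ticket counts are made up for illustration):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Random;

    public class LotteryDraw {
        public static void main(String[] args) {
            // Each process holds some number of lottery tickets.
            Map<String, Integer> tickets = new LinkedHashMap<>();
            tickets.put("video", 50);
            tickets.put("compiler", 30);
            tickets.put("backup", 20);

            int total = tickets.values().stream().mapToInt(Integer::intValue).sum();
            int winner = new Random().nextInt(total);   // draw a winning ticket number

            // Walk the ticket ranges to find which process owns the winning ticket.
            for (Map.Entry<String, Integer> e : tickets.entrySet()) {
                if (winner < e.getValue()) {
                    System.out.println("CPU goes to " + e.getKey());
                    break;
                }
                winner -= e.getValue();
            }
        }
    }

Over many draws, a process holding 20 of 100 tickets should receive about 20% of the CPU time.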
Fair Share Scheduling
Time allocation is tied to the user, not to each individual process. A fraction of CPU time is allocated to the user and that time is shared among the user's processes.
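For example, with hypothetical shares: if user A runs four processes and user B runs one, and each user is allocated 50% of the CPU, each of A's processes gets about 12.5% of the CPU while B's single process gets 50%.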
Multiple Queues
Each queue has a different scheduling mechanism
Quantum based queues have processes enter at queue 1 (highest priority). Processes that use the full quantum move down a queue and processes that don’t use the full quantum move up a queue. Each queue has double the quantum of the one above it.
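For example, with assumed quanta: if the queues have quanta of 1, 2, 4, 8, ... time units, a process that needs 100 units and always uses its full quantum works its way down the queues and is dispatched only 7 times (1 + 2 + 4 + 8 + 16 + 32 + 64 = 127 units), instead of 100 times under a plain round robin with a 1-unit quantum.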
Priority-based scheduling methods
Priority Scheduling - As described above: a preemptive method in which a priority is assigned to each type of process, processes are queued based on decreasing priority, the highest-priority process receives the CPU and is interrupted if a higher-priority process arrives, and interrupted processes return to the end of the queue. Priorities can be static (unchanging) or dynamic (adjustable to reduce problems such as starvation).
Describe performance measures of system performance
Wait Time - How long processes wait
Throughput - Number of jobs or processes that are completed per unit of time
CPU Utilization - The proportion of time that the CPU spends executing processes (a worked example follows this list)
Resource Utilization - The proportion of the observation time interval during which the resource is in use
Response Time - Time the system takes between giving a command and getting the result
Turnaround Time - Time it takes a job or process from submission to completion
Availability - Time the system is readily available
Reliability - Time between failures or the probability of failures
Capacity - Maximum throughput achievable under ideal working conditions
Fairness - Metric that indicates if all processes are treated in a similar manner
- A performance study attempts to reduce waiting periods, improve CPU utilization and maximize throughput
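As a small worked example with assumed numbers: if the CPU is observed for 60 seconds, is busy executing processes for 45 of those seconds, and 9 processes complete during the interval, then CPU utilization is 45 / 60 = 75% and throughput is 9 / 60 = 0.15 processes per second.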
Preemptive Scheduling
A process has use of the CPU for a fixed time interval or until it blocks or gives up the CPU
Non-preemptive Scheduling
A process has use of the CPU until it blocks or gives up the CPU
Compute Bound
A process that does little I/O
I/O Bound
Process that issues a lot of I/O requests
Context Switch
The changing of the CPU from one process to another
Scheduling
The sharing of one or more processors among the processes in the ready queue
Scheduling Policies
Define the order in which processes are selected from the ready queue
Scheduling Mechanisms
Decide when and how to carry out a context switch for the selected process
Long-term Scheduling
The OS decides to create a new process from the jobs waiting in the input queue
Medium-term Scheduling
The OS decides when and which process to swap out of or into memory
Short-term Scheduling
The OS decides which process to execute next
Scheduler
Component of the OS that decides which process to run
Multiclass System
A system with different groups of processes
Degree of multiprogramming
The number of processes a system can hold in memory at one time
Multiprogramming
The ability of the OS to coordinate the presence of several processes in memory with CPU and I/O services that are provided in bursts
Kernel
The core and most critical part of the operating system that always needs to reside in memory
CPU Burst
A CPU request
I/O Burst
An I/O request