Exam 1 (up to Scheduling) Flashcards
What is the mode of communication between the CPU and Main Memory?
Memory Bus (e.g., DDRx)
Instructions come from
The memory hierarchy (starting at the L1 instruction cache)
What holds the address of each new instruction executed by the CPU?
Program Counter (PC)
PC is updated after each instruction with what?
Either the address of the next sequential instruction (the PC advances by the instruction size, e.g. 4/8 bytes) or the target address of a taken branch
Four components of a CPU
Registers (temporary data storage), ALUs or Functional Units, PC for next instruction to fetch, Stack Pointer (SP) for top of stack
What is an ISA?
Instruction Set Architecture, a model of a computer. Involves Memory Operations (load, store), Arithmetic Operations (add, or), and Control Flow Operations (jne, jmp)
What is the purpose of a CPU interrupt?
To handle asynchronous events, possibly external. These are handled at the end of instruction execution (at instruction boundaries).
What is ISR?
Interrupt Service Routine. This is the code the CPU runs to handle an interrupt. Before jumping to the ISR, the CPU pushes the PC (the address of the next instruction), so execution can resume where it left off after the ISR finishes.
What is memory hierarchy?
The concept that smaller SRAM cells are kept closer to the CPU (more frequently used data is kept closer). When data is not found in SRAM, the CPU will go as far as DRAM.
How to determine if something is in the cache?
The cache is broken into blocks, which are grouped into sets. If each set holds one block, the cache is direct mapped; if each set holds n blocks, it is n-way set associative. To look up an address, find the set the block maps to, then compare the tag against every block in that set (tag comparison).
What happens on a cache miss?
Bring in a block from deeper in the hierarchy (maybe even DRAM) and evict a block in the SRAM (Least Recently Used, a.k.a. LRU).
What happens when a modified (dirty) block is evicted?
Its data must be written back to the next level of the hierarchy (write-back cache).
What is Memory Mapped IO?
The CPU can access IO devices like regular memory, using addresses. Events from IO devices are handled as interrupts.
What’s the problem with Memory Mapped IO?
It uses a lot of CPU resources: everything the IO device does must be handled by the CPU. The solution is to add controllers to the IO devices to take the load off the CPU.
Components of a Program
Code, Data, Stack, Heap. Space occupied by each part is called a segment.
Where are heap and stack located before execution in memory?
Heap starts at a lower address and grows toward higher addresses. Stack starts at the highest address and grows toward lower addresses. Both are empty before execution.
How does a computer know how to execute a program?
Load code and data into memory, set Heap and Stack pointers, and set the PC to main.
What’s the difference between a program and a process?
A process is a program in execution. A process has a program and a state at any point in time.
How to create a process in UNIX?
Call either fork() or exec(). fork() creates a duplicate of the already running process, and both then execute the next instruction after the fork. exec() replaces the calling process's program with a new one.
What happens when you type a.out in the shell?
The shell itself is a process that then calls fork() to make a duplicate of itself. The duplicate then calls exec() with a.out as a parameter, which runs the program.
What is it called to perform one activity after another when you have many to run?
Batching
What does it mean to time-multiplex the CPU amongst the processes?
It means the CPU's time is divided among the processes: even before one activity finishes, the CPU may be switched to another.
Is batching or multiprocessing preferred?
It is always better to use multiprocessing in modern computers. Early computers used batching. Do you want to stare at the oven while a cake bakes for 30 minutes?
What does it mean to pre-empt a CPU?
Take the CPU away from the currently executing process.
What is context-switching?
Re-assigning the CPU from one process to another.
When should an OS perform context-switching?
Any time a process can’t proceed (e.g., waiting for I/O), and periodically (driven by a timer).
What does it mean for context-switching to be transparent?
An application should never know that it is being context-switched.
How do we implement context-switching?
We need to save the context of a current process, restore the context of another process. This ensures the application never knows of the context-switching.
What is the context of an application?
Code + data + stack + heap are what is referred to as the address space. There’s also the PC, SP, heap pointer, and registers. Every process has a PCB (Process Control Block) into which its context is saved and from which it is restored.
What is the advantage of a “process” view?
Implement concurrency between users, activities of a user. Also, insulating one activity from another.
Drawbacks of a process view?
They are heavyweight and have higher scheduling costs (direct and indirect, like cache flushes). Sharing state between processes is also difficult.
What are the overheads of a process view?
Switching code, data, stack and heap. Saving and restoring registers. Saving and restoring state maintained by OS.
What are threads?
They are activities within the same address space, they share code, data, and heap. Only stacks are disjoint. All you have to do to switch between threads is to switch stacks. There is NO PROTECTION between threads, but this is okay since they are meant to be cooperative.
How do we create a thread?
pthread_create()
How do we terminate a thread?
pthread_exit()
How do we wait for a thread to exit?
pthread_join()
How do we get the thread ID?
pthread_self()
What are the three process states?
Running, Ready, and Blocked. The OS maintains a queue for each of these states.
What does it mean to Dispatch in the context of a state transition diagram?
Dispatching is moving from the ready state to the running state (assigning the CPU).
Criteria for process selection: utilization
We should try to keep the CPU busy 100% of the time.
Criteria for process selection: throughput
Maximize the number of jobs processed per hour
Criteria for process selection: turnaround time
From the time of submission to the time of completion
Criteria for process selection: waiting time
Sum of times spent in ready queue waiting to be scheduled on the CPU.
Criteria for process selection: response time
Time from submission till the first response is produced (mainly for interactive jobs)
Criteria for process selection: fairness
Make sure each process gets a fair share of the CPU
Pre-emptive versus non-pre-emptive
With no pre-empting, a process has to terminate or be blocked before another can run. There is no transition from Running to Ready.
FCFS Algorithm (Scheduling)
Serves jobs in the order they arrive, with no pre-empting. Very easy to implement, it’s just a FCFS queue. Very fair.
Shortest Job First (SJF) Algorithm (Scheduling)
Pick the job with the shortest duration to run once the current job is done. Can be pre-emptive or non-pre-emptive. Gives low turnaround time, but job durations are hard to know in advance and long jobs may starve (unfair).
How do we estimate duration in a SJF scheduling implementation?
The same job may come back many times, so we can use its history. An exponential averaging technique predicts the next CPU burst as a weighted average of the last measured burst and the previous prediction.
Priority Algorithm (Scheduling)
Every process has a priority value, we always schedule the process with the highest priority. FCFS and SJF are specialized forms of priority scheduling.
Round Robin Algorithm (Scheduling)
This is pre-emptive. Periodically switch from 1 process to another. When a process arrives, add it to the end of the queue. When the time quantum expires, pre-empt the running process and put it at the end of the queue. Give the CPU to the process at the head of the ready queue. If the process blocks, put it in the blocked queue and give CPU to head of ready queue, resetting the time quantum.
Rules of thumb with time quanta
Keep quanta large enough that most CPU bursts complete within one quantum, and large enough that context-switching overhead stays relatively low. Typically, context-switch costs are in the 10s of microseconds, while time quanta are in the 10s/100s of milliseconds.
Round robin queue with Priority
Has a different queue for each priority, always service the non-null queue at the highest priority level. Perform round robin scheduling on each queue.
Multi Level Feedback Queues (MLFQ) (Scheduling)
Pick the process at the head of the highest priority non-null queue. Give it the allotted time quantum. If the CPU burst is not done, move it to the tail of the queue of a lower priority level.
Sun Solaris Example
An MLFQ implementation example with 60 priority levels. The time quantum for the highest level is 20 ms; for the lowest level, 200 ms.