Interprocess Communication, CPU Scheduling Flashcards

1
Q

Interprocess Communication

A
  • Processes within a system may be independent or cooperating
  • A cooperating process can affect or be affected by other processes
  • Reasons for cooperating processes:
    • Information sharing
    • Computation speedup
    • Modularity
  • Cooperating processes need interprocess communication (IPC)
  • Two models of IPC:
    • Shared memory (requires process synchronization)
    • Message passing
2
Q

Message Passing

A

Message Passing needs:
* Some way of addressing the message.
* Some way of transporting the message.
* Some way of notifying the receiver that a message has arrived.
* Some way of accessing or depositing the message in the receiver.
The message size is either fixed or variable

May look like:
send(destination, message)
receive(source, message)
Or:
write(message)
read(message)

In this case we also need some way of making a connection between the processes, like
an open() call.

3
Q

Direct Communication

A

Processes must name each other explicitly:
* send (P, message) – send a message to process P
* receive(Q, message) – receive a message from process Q.

4
Q

Properties of Direct Communication Link

A
  • Links are established automatically
  • One link between each pair of processes
  • The link may be unidirectional, but is usually bi-directional
  • The receiver doesn’t have to know the id of the sender (it can receive it with the message)
  • A server can receive from a group of processes
5
Q

Disadvantages of Direct Communication

A
  • Process names are hard-coded, so they can’t easily be changed.
  • Renaming one process could mean multiple programs need to be changed.
6
Q

Indirect Communication

A

Messages are directed to and received from mailboxes (also referred to as ports)
* Each mailbox has a unique id
* Processes can communicate only if they share a mailbox
* Mailbox ownership – system owned, or process owned

7
Q

Properties of Indirect Communication Link

A
  • Link established only if processes share a common mailbox
  • A link may be associated with many processes
  • Each pair of processes may share several communication links
  • Link may be unidirectional or bi-directional
8
Q

Ordinary Pipes (UNIX)

A
  • Acts as a conduit allowing two processes to communicate
  • In UNIX it starts as a way for a process to talk to itself.
    int my_pipe[2]; pipe(my_pipe);
  • The system call returns two UNIX file descriptors
    Data gets put into the pipe and taken out the other end
  • the write-end of the pipe – my_pipe[1]
  • the read-end of the pipe – my_pipe[0]
  • Use ordinary read() and write() system calls.
    e.g., write(my_pipe[1], data, length);
  • Ordinary pipes are therefore unidirectional
  • Cannot be accessed from outside the process that created it.
  • Require parent-child relationship between communicating processes
  • Windows calls these anonymous pipes
9
Q

Empty and Full Pipes

A
  • Reading process blocks when pipe is empty (until data becomes available)
  • Writing process blocks when pipe is full (65,536 bytes on recent Linux kernels)
10
Q

Broken Pipes

A
  • A process waiting to read from a pipe with no writer gets an EOF (once all existing data has been
    read).
  • A process writing to a pipe with no reader is sent the SIGPIPE signal.
  • Writes are guaranteed to not be interleaved if they are smaller than the PIPE_BUF constant.
    This must be at least 512 bytes and is 4096 bytes by default on Linux.
11
Q

Limitations of Ordinary Pipes

A
  • Can only be used to communicate between related processes. (Named pipes or FIFO files can be
    used for unrelated processes.)
  • The file handles are just low integers which index into the file table for this process.
  • The same numbers only make sense in the same process (or in one forked from it).
12
Q

Named Pipes

A
  • Named Pipes are more powerful than ordinary pipes
  • Communication is bidirectional
  • No parent-child relationship is necessary between the communicating processes
  • Several processes can use the named pipe for communication
  • Provided on both UNIX and Windows systems
13
Q

Communicating via Shared Resources

A

Shared resources
* Separate processes can alter the resource.
* Need to check the state of the resource to either receive data or know that some
event has occurred.
* Usually need to explicitly coordinate access to the resource.

Files
* Easy to use but slow.
* File system may provide synchronization help, e.g., only one writer at a time.

Memory
* Fast
* Synchronization is usually handled explicitly by the processes.

14
Q

Shared Memory

A
  • Different threads in the same process automatically share memory.
  • Separate processes must:
    • define sections of shared memory,
    • attach the shared memory to the process,
    • detach, and indicate who can access it, etc.
  • Both processes need to know the name of the area of shared memory.
  • Must make sure the memory is attached to some unused area of the process’s
    address space.
15
Q

Shared Memory in LINUX Processes

A

Remember that fork() copies data: ordinary variables are duplicated, but memory
mapped with MAP_SHARED stays shared between parent and child.

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Anonymous shared mapping: visible to this process and its children. */
    void *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_ANONYMOUS | MAP_SHARED, -1, 0);
    int *a_number = (int *)shared;  /* lives in the shared mapping */
    int b_number = 0;               /* ordinary variable, copied by fork() */

    printf("Program 1 - shared: %p\n", shared);
    if (fork() != 0) {              /* parent writes both variables */
        *a_number = 12345;
        b_number = 54321;
        wait(NULL);
    } else {
        sleep(1);                   /* crude wait so the parent writes first */
    }
    /* Child sees 12345 in *a_number (shared) but 0 in b_number (copied). */
    printf("a_number: %d, b_number: %d\n", *a_number, b_number);
    return 0;
}

Reading and writing to shared memory is done by using the pointer returned by mmap().

16
Q

POSIX Shared Memory

A
  • Process first creates shared memory segment
    shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
  • Also used to open an existing segment
  • Set the size of the object
    ftruncate(shm_fd, 4096); //set size in bytes
  • Use mmap() to memory-map a file pointer to the shared memory object
  • Reading and writing to shared memory is done by using the pointer returned by
    mmap().
    ptr = (char *) mmap(0, SIZE, PROT_READ | PROT_WRITE,
    MAP_SHARED, shm_fd, 0);
17
Q

Signals

A

Signals are used in UNIX systems to notify a process that a particular event has
occurred.
* A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
- default
- user-defined

18
Q

Signal Handlers

A

Every signal has a default handler that the kernel runs when handling the signal
* A user-defined signal handler can override the default
* In a single-threaded program, the signal is delivered to the process
* In a multi-threaded program, delivery depends on the type of signal generated

19
Q

Signals - Software Interrupts

A

kill(pid, signalNumber)
* originally intended for sending events that told the process to stop, hence the name.

  • signalNumbers for:
  • illegal instructions
  • memory violations
  • floating point exceptions
  • children finishing
  • job control
  • But processes can catch and handle signals with signal handlers.
    signal(signalNumber, handler)
  • A process can also ignore a signal or do nothing.
    If you don’t ignore a signal or set a handler, the default action is taken
    (for many signals, this terminates the process).
  • One signal can never be caught or ignored – 9 SIGKILL (the same is true of
    SIGSTOP).
20
Q

Mach Ports

A
  • Underneath macOS and iOS
  • Everything is done via ports, even system calls and exception handling
  • Only one receiver from each port but can have multiple senders
    (unidirectional too)
  • Can pass the right to receive
  • When a process is started, it is given send rights to a port on the
    bootstrap process (the service naming daemon) (and it normally gives the
    bootstrap process send rights via a message)
  • Programmers don’t usually work at that level (they can use the standard
    UNIX communication mechanisms)
21
Q

Mach Port Processes

A
  • Each task (or process) gets two ports at creation - Kernel and Notify
  • Messages are sent and received using the mach_msg() function
  • Ports needed for communication, created via
    mach_port_allocate()
  • Send and receive are flexible; for example four options if mailbox full:
  • Wait indefinitely
  • Wait at most n milliseconds
  • Return immediately
  • Temporarily cache a message
22
Q

Local Procedure Calls (Windows)

A
  • Message-passing centric via advanced local procedure call (ALPC)
    facility
  • Only works between processes on the same system
  • Uses ports (like mailboxes) to establish and maintain communication channels
  • Communication works as follows:
  • The client opens a handle to the subsystem’s connection port object.
  • The client sends a connection request.
  • The server creates two private communication ports and returns the handle to
    one of them to the client.
  • The client and server use the corresponding port handle to send messages or
    callbacks and to listen for replies.
23
Q

CPUs - Basic Concepts

A
  • Maximum CPU utilization obtained with
    multiprogramming
  • CPU–I/O Burst Cycle – Process execution
    consists of a cycle of CPU execution and I/O
    wait
  • CPU burst followed by I/O burst
  • CPU burst distribution is of main concern
24
Q

CPU Burst Times

A

* Large number of short CPU bursts
* Small number of longer CPU bursts
* The distribution of burst lengths is generally exponential or hyperexponential

25
Q

CPU Scheduler

A

Selects from among the processes in ready queue, and allocates a CPU core to one of them
* Queue may be ordered in various ways
* CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
* For situations 1 and 4, there is no choice in terms of scheduling – nonpreemptive or
cooperative scheduling
* For situations 2 and 3, however, there is a choice – preemptive scheduling
* Preemptive scheduling can result in race conditions

26
Q

Preemptive Scheduling

A

*Preemptive scheduling can result in race conditions when data are shared among several processes.

  • Virtually all modern operating systems, including Windows, macOS, Linux, and
    UNIX, use preemptive scheduling algorithms.
  • Consider the case of two processes that share data. While one process is updating
    the data, it is preempted so that the second process can run. The second process
    then tries to read the data, which are in an inconsistent state.
27
Q

Nonpreemptive Scheduling

A
  • Under Nonpreemptive/Cooperative scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by terminating or by switching to the waiting state.
28
Q

Dispatcher

A
  • Dispatcher module gives control of the CPU
    to the process selected by the CPU scheduler;
    this involves:
  • Switching context
  • Switching to user mode
  • Jumping to the proper location in the user
    program to resume that program
29
Q

Dispatch latency

A

The time it takes for the
dispatcher to stop one process and start
another running

30
Q

Scheduling Criteria

A

* CPU utilization – keep the CPU as busy as possible
* Throughput – # of processes that complete their execution per time unit
* Turnaround time – amount of time to execute a particular process
* Waiting time – amount of time a process has been waiting in the ready queue
* Response time – amount of time it takes from when a request was submitted
until the first response is produced.

31
Q

Scheduling Algorithm Optimization Criteria

A
  • Max CPU utilization
  • Max throughput
  • Min turnaround time
  • Min waiting time
  • Min response time
32
Q

First-Come, First-Served (FCFS) Scheduling

A
  • No time is wasted determining which process should run next.
  • Little overhead, as a context switch happens only when required.
  • Non-preemptive

    Process  Burst time
    P1       24
    P2       3
    P3       3

  • Suppose that the processes arrive in the order: P1, P2, P3
  • Average waiting time: (0 + 24 + 27)/3 = 17
  • Suppose that the processes arrive in the order: P2, P3, P1
  • Average waiting time: (6 + 0 + 3)/3 = 3
  • Much better than the previous case
33
Q

Convoy Effect

A

Short process behind long process
* Consider one CPU-bound and many I/O-bound processes
* Results in lower CPU and device utilization

34
Q

Round Robin (RR) Scheduling

A
  • A pre-emptive version of FCFS.
  • Need to determine the size of the time slice or time quantum
  • No concept of priorities
  • Doesn’t deal with different types of process – compute-bound vs I/O-bound.
    One way to tune this is to change the length of the time slice.

    Process  Burst time
    P1       24
    P2       3
    P3       3

  • If the time slice = 4:
  • Average waiting time: (6 + 4 + 7)/3 = 17/3 = 5.66
35
Q

Shortest Job First (SJF)

A
  • SJF is optimal – gives minimum average waiting time for a given set of
    processes
  • Unfortunately, we don’t know which process has the shortest next CPU
    burst.
  • Use the lengths of previous CPU bursts to estimate the next one.
36
Q

Pre-emptive SJF

A

Whenever a new process arrives in the ready queue with a shorter burst time
than the remaining burst time of the running process, the running process is
pre-empted.

Waiting time only starts from when the process arrives!

Process  Arrival time  Burst time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

* Average waiting time = (9 + 1 + 0 + 2)/4 = 3

37
Q

Priority Scheduling

A

Assume low numbers represent high priority.

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

* Average waiting time = 8.2
* Priority scheduling can be either preemptive or nonpreemptive.

38
Q

Explicit Priorities

A
  • Unchanging
  • Set before a process runs.
  • When a new process arrives, it is placed in the position in the ready queue corresponding
    to its priority.
  • It is possible to get Indefinite blocking or Starvation (low priority processes may
    never execute)
39
Q

Variable Priorities

A
  • Priorities can vary over the life of the process.
  • The system makes changes according to the process’ behaviour: CPU usage, IO usage,
    memory requirements etc.
  • Problem – a process may never run because it has a low priority.
  • Solution – aging (as time progresses, increase the priority of the process)
  • Or a process of a worse priority might be scheduled after five processes of a better
    priority.
  • This prevents starvation, but better priority processes will still run more often.
40
Q

Multilevel Queue Scheduling

A

The ready queue consists of multiple queues
* Multilevel queue scheduler defined by the following parameters:
* Number of queues
* Scheduling algorithms for each queue
* Method used to determine which queue a process will enter when that process needs
service
* Scheduling among the queues
* A process stays on its original queue or can be moved from queue to queue

41
Q

Multiple-Processor Scheduling

A
  • Symmetric multiprocessing (SMP) is where each processor is self scheduling.
  • All threads may be in a common ready queue
  • Each processor may have its own private queue of threads
42
Q

Real-Time CPU Scheduling

A
  • Can present obvious challenges
  • Soft real-time systems – critical real-time tasks have the highest priority, but no
    guarantee as to when tasks will be scheduled
  • Hard real-time systems – task must be serviced by its deadline
  • When processes are submitted, they indicate their CPU requirements.
  • The scheduler may reject the process if the requirement cannot be met.
  • But very important processes can force other processes to relinquish their allocations.
43
Q

Real-Time CPU Scheduling - Event Latency

A

The amount of time that elapses from when an event occurs to
when it is serviced.
* Two types of latencies affect performance
1. Interrupt latency – time from arrival of interrupt to start of routine that services interrupt
2. Dispatch latency – time for schedule to take current process off CPU and switch to another

44
Q

Conflict Phase of Dispatch Latency

A
  1. Preemption of any process running in kernel mode
  2. Release by low-priority process of resources needed by high-priority processes
45
Q

Real-Time Scheduler Requirements

A
  • For real-time scheduling, scheduler must support preemptive, priority-based
    scheduling
  • But only guarantees soft real-time functionality
  • For hard real-time must also provide ability to meet deadlines
    (c, p, d)
  • c – computation time (worst case)
  • p – period time
  • d – deadline
  • c <= d <= p
46
Q

Periodic Processes

A
  • require CPU at constant intervals
  • used for polling, monitoring and sampling
  • predetermined amount of work every period
  • Period and Deadline are determined by the system requirements (often the same).
  • Computation time is found through analysis, measurement or simulation.
  • When the computation is complete the process is blocked until the next period starts.
  • Sometimes it doesn’t matter if the deadline extends beyond the period or the period
    can change depending on system load.
47
Q

Sporadic Processes

A
  • event driven – some external signal or change
  • used for fault detection, change of operating modes
  • (c, p, d) still applies
  • c and d have the obvious meaning
  • p is the minimum time between events
  • aperiodic processes
48
Q

Aperiodic Processes

A
  • p = 0
  • events can happen at any time, even simultaneously
  • timing can no longer be deterministic but there are ways of handling this
  • statistical methods, we design to satisfy average response times
  • if it is rare that the system has timing faults then special cases can be included in the handling
    code
49
Q

Cyclic Executives (CEs)

A
  • Handles periodic processes.
  • Sporadic processes can be converted into equivalent periodic processes or they
    can be ignored (if they take only a little time to handle).
  • Pre-scheduled – a feasible execution schedule is calculated before run time.
  • The cyclic executive carries out this schedule.
  • It is periodic.
  • Highly predictable – non-preemptible
  • Inflexible, difficult to maintain.
50
Q

CE Schedule

A

Major schedule – cyclic allocation of processor blocks which satisfies all deadlines
and periods.
* Minor cycle (or frame) – major schedules are divided into equal size frames. Clock ticks only occur on the frame boundaries.

51
Q

Scheduling with Priority

A
  • Scheduling decisions are made:
  • when a process becomes ready
  • when a process completes its execution
  • when there is an interrupt
  • Priorities can cause schedules to not be feasible.
    A = (1, 2, 2) better priority
    B = (2, 5, 5) worse priority
  • This is feasible (without preemption), but if the priorities are reversed it is not.
  • Still, priorities are almost always used:
  • fixed – determined before execution
  • dynamic – change during execution
52
Q

Fixed Priority Allocation

A
  • Rate monotonic (RM) – the shorter the period the higher the priority.
  • Least compute time (LCT) – the shorter the execution time the higher the priority
    (shortest job first)
53
Q

Dynamic Priority Allocation

A
  • Shortest completion time (SCT) – shortest job first with preemption. But this time
    we have good information on the execution time requirement.
  • Earliest deadline first (EDF) – the process with the closest deadline has the highest
    priority.
  • Least slack time (LST) – the process with the least slack time has the highest
    priority.
54
Q

Slack Time

A

Slack time is the amount of time to the process’s deadline minus the amount of time the
process still needs to complete.