Quiz 2 Flashcards

1
Q

Message Passing: Implementation of communication link (Physical vs Logical)

A

Physical: shared memory, hardware bus, network
Logical: direct or indirect, synchronous or asynchronous, automatic or explicit buffering

2
Q

Message Passing: Direct Communication

A

Processes MUST name each other explicitly in their send and receive operations.

Properties of communication link:

  1. Links are established automatically
  2. Link is associated with exactly one pair of communicating processes.
  3. Between each pair there exists exactly one link.
  4. Link may be unidirectional, usually bi-directional.
3
Q

Message Passing: Indirect Communication

A

Messages are sent to and received from mailboxes (ports): each mailbox has a unique id, and processes can communicate only if they share a mailbox.

Properties of communication link:

  1. Link established only if processes share a common mailbox.
  2. A link may be associated with many processes.
  3. Each pair of processes may share several links.
  4. Link may be unidirectional or bi-directional.
4
Q

Indirect Communication operations

A
  1. Create a new mailbox (port)
  2. Send & receive messages through port
  3. Destroy a mailbox
5
Q

Indirect Communication Issue: Who gets the message?

A

Problem: if multiple processes want to receive a sent message, who gets it?
Solutions:
1. Allow a link to be associated with at most two processes.
2. Allow only one process at a time to execute a receive operation.
3. Allow the system to select the receiver arbitrarily; the sender is notified who the receiver is.

6
Q

Synchronization

Blocking vs Non-blocking

A

Blocking is considered synchronous:
1. Blocking send - the sender is blocked until the message is received
2. Blocking receive - the receiver is blocked until a message is available
Non-blocking is considered asynchronous:
1. Non-blocking send - the sender sends the message and continues
2. Non-blocking receive - the receiver receives either a valid message or a null message

If both send and receive are blocking, we have a rendezvous.

7
Q

Buffering

Implemented in one of three ways

A

Queue of messages attached to the link.

  1. Zero capacity – no messages are queued on a link. Sender must wait for receiver (rendezvous)
  2. Bounded capacity – finite length of n messages. Sender must wait if link full
  3. Unbounded capacity – infinite length. Sender never waits
8
Q

IPC Systems - POSIX

A
  1. Process first creates a shared memory segment:
    shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    Also used to open an existing segment to share it
  2. Set the size of the object:
    ftruncate(shm_fd, 4096);
  3. Map the object into the address space; the process can then write to the shared memory:
    ptr = mmap(0, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
    sprintf(ptr, "Writing to shared memory");
9
Q

IPC Systems - Mach

A

Mach communication is message based:
A) Even system calls are messages
B) Each task gets two mailboxes at creation- Kernel and Notify
C) Only three system calls needed for message transfer
msg_send(), msg_receive(), msg_rpc()
D) Mailboxes needed for communication, created via
port_allocate()

E) Send and receive are flexible, for example four options if mailbox full:
Wait indefinitely
Wait at most n milliseconds
Return immediately
Temporarily cache a message
10
Q

IPC Systems - Windows

A

Message-passing centric via advanced local procedure call (ALPC) facility:
A) Only works between processes on the same system
B) Uses ports (like mailboxes) to establish and maintain communication channels
C) Communication works as follows:
1. The client opens a handle to the subsystem’s connection port object.
2. The client sends a connection request.
3. The server creates two private communication ports and returns the handle to one of them to the client.
4. The client and server use the corresponding port handle to send messages or callbacks and to listen for replies.

11
Q

Sockets

A

A socket is defined as an endpoint for communication

Concatenation of IP address and port – a number included at start of message packet to differentiate network services on a host

The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8

Communication takes place between a pair of sockets

All ports below 1024 are well known, used for standard services
ftp, http, …

Special IP address 127.0.0.1 (loopback) to refer to system on which process is running

12
Q

3 Types of sockets

A
  1. Connection-oriented (TCP)
  2. Connectionless (UDP)
  3. MulticastSocket (a Java class): data can be sent to multiple recipients
13
Q

Remote Procedure Calls

A

A) Remote procedure call (RPC) abstracts procedure calls between processes on networked systems
Again uses ports for service differentiation
B) Stubs – client-side proxy for the actual procedure on the server
C) The client-side stub locates the server and marshalls the parameters
D) The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server
E) On Windows, stub code is compiled from a specification written in Microsoft Interface Definition Language (MIDL)
F) Data representation handled via External Data Representation (XDR) format to account for different architectures
(Big-endian and little-endian)
G) Remote communication has more failure scenarios than local communication
Messages may need to be delivered exactly once rather than at most once
H) OS typically provides a rendezvous (or matchmaker) service to connect client and server

14
Q

Pipes

A

A) Acts as a conduit allowing two processes to communicate
B) Implementation issues:
1. Is communication unidirectional or bidirectional?
2. In the case of two-way communication, is it half or full duplex?
Half: data travels in only one direction at any given time
Full: data travels in both directions at the same time
3. Must there exist a relationship (i.e., parent-child) between the communicating processes?
4. Can the pipes be used over a network?

Ordinary pipes – cannot be accessed from outside the process that created them. Typically, a parent process creates a pipe and uses it to communicate with a child process that it created.
Named pipes – can be accessed without a parent-child relationship.

15
Q

Ordinary Pipes

UNIX treats them as files, so the standard read() and write() system calls can be used

A

Ordinary Pipes allow communication in standard producer-consumer style

Producer writes to one end (the write-end of the pipe)

Consumer reads from the other end (the read-end of the pipe)

Ordinary pipes are therefore unidirectional
Require parent-child relationship between communicating processes

Windows calls these anonymous pipes

16
Q

Named Pipes

A

Named Pipes are more powerful than ordinary pipes

Communication is bidirectional

No parent-child relationship is necessary between the communicating processes

Several processes can use the named pipe for communication

Continue to exist after communicating processes have finished

Provided on both UNIX and Windows systems

17
Q

Named Pipes (UNIX vs Windows)

A

UNIX:
Refers to them as FIFOs
Created using mkfifo() system call
Use open(), read(), write() and close() system calls
Communicating processes reside on the same machine
Half Duplex
Must be explicitly deleted from the file system

Windows:
Full Duplex is allowed
Processes reside on the same or different machines

18
Q

Multithreaded Programming: Motivation

A

A thread is a basic unit of CPU utilization

Most modern applications are multithreaded

Threads run within application

Multiple tasks within the application can be implemented by separate threads
Update display
Fetch data
Spell checking
Answer a network request

Process creation is heavy-weight while thread creation is light-weight

Can simplify code, increase efficiency

Kernels are generally multithreaded

19
Q

Multithreaded Benefits

A

Responsiveness – may allow continued execution if part of process is blocked, especially important for user interfaces

Resource Sharing – threads share resources of process, easier than shared memory or message passing

Economy – cheaper than process creation, thread switching lower overhead than context switching

Scalability – process can take advantage of multiprocessor architectures

20
Q

Multicore Programming Challenges (5)

A
  1. Dividing activities/Identifying tasks – identify areas of the program that can be divided into concurrent tasks
  2. Balance – lightweight tasks can be grouped into the same thread, while heavier tasks should be divided into separate threads
  3. Data splitting – divide the data across several threads (e.g., one thread sums the first million numbers, the next sums the second million, and so on)
  4. Data dependency – data access must be synchronized (e.g., a master thread that adds up the partial sums must wait for those sums to be completed)
  5. Testing & debugging – many different execution paths are possible in a multithreaded program
21
Q

Multicore Programming

Parallelism vs Concurrency

A

Parallelism implies a system can perform more than one task simultaneously

Concurrency supports more than one task making progress; on a single processor/core, the scheduler provides concurrency by interleaving tasks

22
Q

Types of Parallelism (2)

A

Data parallelism – distributes subsets of the same data across multiple cores, same operation on each

Task parallelism – distributing threads across cores, each thread performing unique operation

23
Q

Amdahl’s Law

A

Identifies performance gains from adding additional cores to an application that has both serial and parallel components
S is serial portion
N processing cores

speedup <= 1 / (S + (1 - S)/N)

That is, if application is 75% parallel / 25% serial, moving from 1 to 2 cores results in speedup of 1.6 times
As N approaches infinity, speedup approaches 1 / S
Serial portion of an application has disproportionate effect on performance gained by adding additional cores
Algorithm more important than # cores to achieve higher speedup!
- want to reduce serial!

24
Q

User Threads & Kernel Threads

A

Thread library: provides the programmer with an API for creating and managing threads
Library entirely in user space: invoking a library function results in a local call, not a system call
Kernel-level library supported by the OS: invoking an API function typically results in a system call

User threads - management done by user-level threads library
Kernel threads - Supported by the Kernel

Three primary thread libraries:
  POSIX Pthreads (user or kernel level)
  Windows threads (kernel)
  Java threads (user level, but implemented on top of the host OS's thread library)
Examples – virtually all general-purpose modern operating systems, including:
  Windows
  Linux
  Mac OS X
25
Q

User Thread Advantages (4) & Disadvantages (2)

A

Advantages

  1. Thread switching does not require Kernel mode privileges.
  2. User level thread can run on any operating system.
  3. Scheduling can be application specific in the user level thread.
  4. User level threads are fast to create and manage.

Disadvantages

  1. In a typical operating system most system calls are blocking, so one thread's blocking call can block the entire process.
  2. A multithreaded application cannot take advantage of multiprocessing, since the kernel schedules the process as a single unit.
26
Q

Kernel Thread Advantages (3) & Disadvantages (2)

A

Advantages

  1. Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
  2. If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
  3. Kernel routines themselves can be multithreaded.

Disadvantages

  1. Kernel threads are generally slower to create and manage than the user threads.
  2. Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.
27
Q

Multithreading Models: Many-to-One

A

Many user-level threads mapped to single kernel thread

One thread blocking causes all to block

Multiple threads may not run in parallel on multicore system because only one may be in kernel at a time

Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads

28
Q

Multithreaded Models: One-to-One

A

Each user-level thread maps to kernel thread

More concurrency than many-to-one

Creating a user-level thread creates a kernel thread

Number of threads per process sometimes restricted due to overhead
Examples
Windows
Linux
Solaris 9 and later
29
Q

Multithreaded Models: Many-to-Many

A

Allows many user level threads to be mapped to many kernel threads

Allows the operating system to create a sufficient number of kernel threads

Solves the shortcomings of Many-to-One & One-to-One

Examples:
Solaris prior to version 9
Windows with the ThreadFiber package

30
Q

Multithreaded Models: Two-level

A

Similar to M:M, except that it also allows a user thread to be bound to a kernel thread

Examples
IRIX
HP-UX
Tru64 UNIX
Solaris 8 and earlier
31
Q

Pthreads

A

May be provided either as user-level or kernel-level

A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization

Specification (what), not implementation (how)

API specifies behavior of the thread library; implementation is up to the developers of the library

Common in UNIX operating systems (Solaris, Linux, Mac OS X)

32
Q

Implicit Threading

A

Growing in popularity as numbers of threads increase, program correctness more difficult with explicit threads

Creation and management of threads done by compilers and run-time libraries rather than programmers

Three methods explored
Thread Pools
OpenMP
Grand Central Dispatch

Other methods include Intel Threading Building Blocks (TBB) and the java.util.concurrent package

33
Q

Thread Pools

3 advantages

A

Create a number of threads in a pool where they await work
Advantages:
1. Usually slightly faster to service a request with an existing thread than to create a new thread
2. Allows the number of threads in the application(s) to be bound to the size of the pool
3. Separating the task to be performed from the mechanics of creating the task allows different strategies for running the task
i.e., tasks could be scheduled to run periodically

34
Q

Threading Issues

A

Semantics of fork() and exec() system calls

Signal handling
Synchronous and asynchronous
Sync: the signal is delivered to the same process that performed the operation causing it (e.g., an illegal memory access)
Async: the signal is generated by an event external to the running process (e.g., Ctrl-C)

Thread cancellation of target thread
Asynchronous or deferred

Thread-local storage

Scheduler Activations

35
Q

Signal Handling

A

Signals are used in UNIX systems to notify a process that a particular event has occurred.

A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
A) default
B) user-defined
Every signal has a default handler that the kernel runs when handling the signal
A) A user-defined signal handler can override the default
B) For single-threaded programs, the signal is delivered to the process

(Some signals can be ignored, e.g., a window changing size; others, such as illegal memory access, are handled by terminating the program)

36
Q
Signal Handling
(Where should a signal be delivered in a multi-threaded program?)
A

A) Deliver the signal to the thread to which the signal applies
B) Deliver the signal to every thread in the process
C) Deliver the signal to certain threads in the process
D) Assign a specific thread to receive all signals for the process

37
Q

Thread Cancellation

A

Terminating a thread before it has finished

Thread to be canceled is target thread

Two general approaches:

  1. Asynchronous cancellation terminates the target thread immediately
  2. Deferred cancellation allows the target thread to periodically check if it should be cancelled

Invoking thread cancellation requests cancellation, but actual cancellation depends on thread state
A) If thread has cancellation disabled, cancellation remains pending until thread enables it
B) Default type is deferred

(Useful in searching through a database. When 1 thread returns the result, cancel the other threads)

38
Q

Thread-Local Storage

A

Thread-local storage (TLS) allows each thread to have its own copy of data

Useful when you do not have control over the thread creation process (i.e., when using a thread pool)

Different from local variables
A) Local variables visible only during single function invocation
B) TLS visible across function invocations

Similar to static data
A) TLS is unique to each thread

39
Q

Linux Threads

A

Linux refers to them as tasks rather than threads

Thread creation is done through clone() system call

clone() allows a child task to share the address space of the parent task (process)

struct task_struct points to process data structures (shared or unique)

40
Q

Linux Threads (Clone flags)

A

Cloning a task may be done differently depending on the flags, which determine how much sharing is to take place between the parent and child task.

Cloning with all the flags set is equivalent to creating a new thread, while cloning with no flags is similar to the fork() system call.

Flags control behavior:
A) CLONE_FS: file-system information is shared
B) CLONE_VM: the same memory space is shared
C) CLONE_SIGHAND: signal handlers are shared
D) CLONE_FILES: the set of open files is shared

When fork() is invoked, a new task is created along with a copy of all the associated data structures of the parent process. clone() instead allows the new task to share only selected data structures of the parent task.