Test 2 Flashcards
Multi threaded architecture
Most modern applications are multithreaded
Process creation is heavy-weight while thread creation is light-weight
Benefits
Responsiveness – may allow continued execution if part of process is blocked, especially important for user interfaces
Resource Sharing – threads share resources of process, easier than shared memory or message passing
Economy – cheaper than process creation, thread switching lower overhead than context switching
Scalability – process can take advantage of multiprocessor architectures
Multicore programming
Multicore or multiprocessor systems put pressure on programmers; challenges include:
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
Parallelism implies a system can perform more than one task simultaneously
Concurrency supports more than one task making progress
Single processor / core, scheduler providing concurrency
Parallelism
Multicore programming:
Types of parallelism
Data parallelism – distributes subsets of the same data across multiple cores, same operation on each (see the sketch at the end of this card)
Task parallelism – distributing threads across cores, each thread performing unique operation
As # of threads grows, so does architectural support for threading
CPUs have cores as well as hardware threads
Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads per core
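A minimal sketch of the data-parallelism bullet above (assumed example, not from the lecture): two Pthreads apply the same operation, summing, to different halves of the same array.

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[2];

static void *sum_half(void *arg) {
    long id = (long)arg;                      /* 0 sums the first half, 1 the second */
    long s = 0;
    for (long i = id * N / 2; i < (id + 1) * N / 2; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, sum_half, (void *)id);
    for (long id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("total = %ld\n", partial[0] + partial[1]);   /* 36 */
    return 0;
}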
Items Shared/Not Shared by Threads
Shared by all threads in a process:
Address space
Global variables
Static variables
Open files
Accounting information
Per-thread items:
Program counter
Registers
Stack
State
Amdahl’s Law
LECTURE 9
Identifies performance gains from adding additional cores to an application that has both serial and parallel components
S is the serial portion, N the number of processing cores
Speedup ≤ 1 / (S + (1 - S) / N)
That is, if application is 75% parallel / 25% serial, moving from 1 to 2 cores results in speedup of 1.6 times
As N approaches infinity, speedup approaches 1 / S
Serial portion of an application has disproportionate effect on performance gained by adding additional cores
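A quick sketch (assumed helper, not from the lecture) that evaluates the bound above for the 25% serial example:

#include <stdio.h>

/* Amdahl's Law upper bound: speedup <= 1 / (S + (1 - S) / N) */
static double amdahl_speedup(double serial_fraction, int cores) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main(void) {
    printf("N = 2: %.2f\n", amdahl_speedup(0.25, 2));   /* 1.60, as in the example above */
    printf("N = 4: %.2f\n", amdahl_speedup(0.25, 4));   /* 2.29 */
    printf("As N grows, the speedup approaches 1 / 0.25 = 4.00\n");
    return 0;
}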
User Threads and Kernel Threads
User threads - management done by user-level threads library
Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
Kernel threads - supported by the kernel
Examples – virtually all general-purpose operating systems, including:
Windows
Solaris
Linux
Tru64 UNIX
Mac OS X
Many-to-One multithreading model
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One multithreading model
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted due to overhead
Examples
Windows
Linux
Solaris 9 and later
Many-to-Many multithreading model
Allows many user level threads to be mapped to many kernel threads
Allows the operating system to create a sufficient number of kernel threads
Solaris prior to version 9
Windows with the ThreadFiber package
Two-level multithreading model
Similar to M:M, except that it allows a user thread to be bound to a kernel thread
Examples:
IRIX
HP-UX
Tru64 UNIX
Solaris 8 and earlier
thread library
Thread library provides programmer with API for creating and managing threads
Two primary ways of implementing
Library entirely in user space
Kernel-level library supported by the OS
pthreads
May be provided either as user-level or kernel-level
A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
Specification, not implementation
API specifies behavior of the thread library; implementation is up to the developers of the library
Common in UNIX operating systems (Solaris, Linux, Mac OS X)
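A minimal Pthreads sketch (assumed example, not from the lecture): create one thread, pass it an argument, and wait for it to finish.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int n = *(int *)arg;
    printf("worker received %d\n", n);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int value = 42;
    pthread_create(&tid, NULL, worker, &value);   /* NULL = default thread attributes */
    pthread_join(tid, NULL);                      /* wait for the worker to terminate */
    return 0;
}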
java threads
Java threads are managed by the JVM
Typically implemented using the threads model provided by underlying OS
Java threads may be created by:
public interface Runnable {
    public abstract void run();
}
Extending the Thread class
Implementing the Runnable interface
implicit threading
Growing in popularity as the number of threads increases; program correctness is more difficult with explicit threads
Creation and management of threads done by compilers and run-time libraries rather than programmers
Three methods explored
Thread Pools
OpenMP
Grand Central Dispatch
Other methods include Intel Threading Building Blocks (TBB) and the java.util.concurrent package
thread pools
Create a number of threads in a pool where they await work
Advantages:
Usually slightly faster to service a request with an existing thread than create a new thread
Allows the number of threads in the application(s) to be bound to the size of the pool
Separating task to be performed from mechanics of creating task allows different strategies for running task
e.g., tasks could be scheduled to run periodically
Windows API supports thread pools:
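A hedged sketch using the Windows API call QueueUserWorkItem: the function is queued as a work item and runs on a thread drawn from a system-managed pool (the Sleep is just a crude wait for this sketch).

#include <windows.h>
#include <stdio.h>

DWORD WINAPI PoolFunction(LPVOID param) {
    printf("running on a pool thread\n");    /* the queued work item */
    return 0;
}

int main(void) {
    QueueUserWorkItem(PoolFunction, NULL, WT_EXECUTEDEFAULT);   /* hand work to the pool */
    Sleep(1000);   /* keep the process alive long enough for the work to run */
    return 0;
}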
threading issues
Semantics of fork() and exec() system calls
Signal handling – synchronous and asynchronous
Thread cancellation of target thread – asynchronous or deferred
Thread-local storage
Scheduler activations
semantics of fork() and exec()
Does fork() duplicate only the calling thread or all threads?
Some UNIXes have two versions of fork() (one that duplicates all threads, one that duplicates only the calling thread)
exec() usually works as normal – replace the running process including all threads
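A small sketch of the usual pattern (assumed example): fork() followed immediately by exec(), so it does not matter that only the calling thread was duplicated, because exec() replaces the whole process image anyway.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                            /* duplicate (at least) the calling thread */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);    /* replace the child process, threads and all */
        perror("execlp");                          /* reached only if exec fails */
        return 1;
    }
    waitpid(pid, NULL, 0);                         /* parent waits for the child */
    return 0;
}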
signal handling
Signals are used in UNIX systems to notify a process that a particular event has occurred.
A signal handler is used to process signals
Signal is generated by particular event
Signal is delivered to a process
Signal is handled by one of two signal handlers:
default
user-defined
Every signal has default handler that kernel runs when handling signal
User-defined signal handler can override default
For single-threaded, signal delivered to process
Where should a signal be delivered for multi-threaded?
Deliver the signal to the thread to which the signal applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
The method for delivering a signal depends on the type of signal
Synchronous signals need to be delivered to the thread causing the signal, not other threads
A signal that terminates a process should be sent to all threads within the process
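A minimal sketch of a user-defined handler overriding the default action (assumed example). In a multithreaded process, pthread_kill(tid, sig) can be used to direct a signal at one specific thread instead of the whole process.

#include <signal.h>
#include <unistd.h>

static void handler(int sig) {
    /* only async-signal-safe calls here; write() is safe, printf() is not */
    const char msg[] = "caught SIGUSR1\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = handler;            /* user-defined handler */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, NULL);      /* override the default action for SIGUSR1 */

    raise(SIGUSR1);                     /* generate the signal; it is delivered and handled */
    return 0;
}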
thread cancellation
Terminating a thread before it has finished
Thread to be canceled is target thread
Two general approaches:
Asynchronous cancellation terminates the target thread immediately
Deferred cancellation allows the target thread to periodically check if it should be cancelled
Pthread code to create and cancel a thread:
pthread_t tid;
pthread_create(&tid, NULL, worker, NULL);   /* create the thread */
pthread_cancel(tid);                        /* request cancellation of the target thread */
Invoking thread cancellation requests cancellation, but actual cancellation depends on thread state
If thread has cancellation disabled, cancellation remains pending until thread enables it
Default type is deferred
Cancellation only occurs when thread reaches cancellation point
e.g., pthread_testcancel()
Then cleanup handler is invoked
On Linux systems, thread cancellation is handled through signals
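A sketch of deferred cancellation (assumed example): the worker only honors a pending cancellation request when it reaches a cancellation point such as pthread_testcancel().

#include <pthread.h>
#include <unistd.h>

static void *worker(void *arg) {
    while (1) {
        /* ... do one unit of work ... */
        pthread_testcancel();           /* explicit cancellation point */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);                           /* let the worker run briefly */
    pthread_cancel(tid);                /* request cancellation (deferred by default) */
    pthread_join(tid, NULL);            /* returns once the worker reaches a cancellation point */
    return 0;
}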
thread local storage
Thread-local storage (TLS) allows each thread to have its own copy of data
Useful when you do not have control over the thread creation process (e.g., when using a thread pool)
Different from local variables
Local variables visible only during single function invocation
TLS visible across function invocations
Similar to static data
TLS is unique to each thread
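A sketch of thread-local storage (assumed example, using C11 _Thread_local; gcc also accepts __thread): each thread gets its own copy of counter, and that copy is visible across every function the thread calls.

#include <pthread.h>
#include <stdio.h>

static _Thread_local int counter = 0;   /* one copy per thread */

static void bump(void) { counter++; }   /* touches only the calling thread's copy */

static void *worker(void *arg) {
    for (int i = 0; i < 3; i++)
        bump();
    printf("this thread's counter = %d\n", counter);   /* 3 in each thread */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}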
windows threads
Windows implements the Windows API – primary API for Win 98, Win NT, Win 2000, Win XP, and Win 7
Implements the one-to-one mapping, kernel-level
Each thread contains
A thread id
Register set representing state of processor
Separate user and kernel stacks for when thread runs in user mode or kernel mode
Private data storage area used by run-time libraries and dynamic link libraries (DLLs)
The register set, stacks, and private storage area are known as the context of the thread
linux threads
Linux refers to them as tasks rather than threads
Thread creation is done through clone() system call
clone() allows a child task to share the address space of the parent task (process)
Flags control behavior
struct task_struct points to process data structures (shared or unique)
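A hedged clone() sketch (Linux-specific, glibc wrapper): these flags ask the child task to share the parent's address space, filesystem info, open files, and signal handlers, roughly what a thread library requests when creating a thread.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child_fn(void *arg) {
    printf("child task sees: %s\n", (char *)arg);
    return 0;
}

int main(void) {
    size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);               /* child needs its own stack */

    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t pid = clone(child_fn, stack + stack_size, flags, "hello");   /* pass the stack top */
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);                          /* wait for the child task */
    free(stack);
    return 0;
}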
process synchronization background
Processes can execute concurrently
May be interrupted at any time, partially completing execution
Concurrent access to shared data may result in data inconsistency
Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
Illustration of the problem:
Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers. We can do so by having an integer counter that keeps track of the number of full buffers. Initially, counter is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.
race condition
counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
Consider this execution interleaving with "counter = 5" initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
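A sketch that reproduces the race above (assumed example): a producer thread increments and a consumer thread decrements the shared counter with no synchronization, so interleavings like S0 through S5 can lose updates.

#include <pthread.h>
#include <stdio.h>

static int counter = 0;

static void *producer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* load, add, store: not atomic */
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter--;                      /* races with the producer's increments */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0, often is not)\n", counter);
    return 0;
}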
critical section problem
Consider system of n processes {p0, p1, … pn-1}
Each process has critical section segment of code
Process may be changing common variables, updating table, writing file, etc
When one process in critical section, no other may be in its critical section
Critical section problem is to design protocol to solve this
Each process must ask permission to enter critical section in entry section, may follow critical section with exit section, then remainder section
critical section solution and handling in OS
- Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
- Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the process that will enter the critical section next cannot be postponed indefinitely
- Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes
handling in OS
Two approaches depending on if kernel is preemptive or non-preemptive
Preemptive – allows preemption of process when running in kernel mode
Non-preemptive – runs until exits kernel mode, blocks, or voluntarily yields CPU
A non-preemptive kernel is essentially free of race conditions on kernel data structures
peterson’s solution and algorithm
Good algorithmic description of solving the problem
Two process solution
Assume that the load and store machine-language instructions are atomic; that is, cannot be interrupted
The two processes share two variables:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section
The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready!
Algorithm (for process Pi, with the other process Pj):
do {
   flag[i] = true;
   turn = j;
   while (flag[j] && turn == j);
      /* critical section */
   flag[i] = false;
      /* remainder section */
} while (true);
Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
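A runnable sketch of the algorithm above for two threads (i = 0, j = 1), using C11 atomics so the loads and stores really behave as the algorithm assumes; with plain variables, modern hardware could reorder them and break the proof.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];
static atomic_int turn;
static int shared = 0;                            /* protected by Peterson's algorithm */

static void *run(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        flag[i] = true;                           /* entry section */
        turn = j;
        while (flag[j] && turn == j)
            ;                                     /* busy wait */
        shared++;                                 /* critical section */
        flag[i] = false;                          /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, run, (void *)0L);
    pthread_create(&t1, NULL, run, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    return 0;
}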
synchronization hardware
Many systems provide hardware support for implementing the critical section code.
All solutions below based on idea of locking
Protecting critical regions via locks
Uniprocessors – could disable interrupts
Currently running code would execute without preemption
Generally too inefficient on multiprocessor systems
Operating systems using this approach are not broadly scalable
Modern machines provide special atomic hardware instructions
Atomic = non-interruptible
Either test memory word and set value
Or swap contents of two memory words
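A sketch of the test-and-set idea (assumed example): the hardware instruction atomically returns the old value of the lock word while setting it to true; here the C11 atomic_exchange stands in for that instruction to build a simple spinlock. Threads would then bracket their critical sections with acquire() and release().

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool lock_word = false;

static void acquire(void) {
    while (atomic_exchange(&lock_word, true))   /* spin until the old value was false */
        ;                                       /* busy wait */
}

static void release(void) {
    atomic_store(&lock_word, false);            /* open the lock for the next thread */
}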