slides06 Flashcards

1
Q

lightweight multi-processing

A
  • multiple tasks within an application are implemented by separate threads
  • cheaper/faster to create and faster to switch
2
Q

motivation for lightweight multi-processing

A
  • process creation is heavyweight: application can be slowed down by process creation, termination and management overhead
  • use process pools to reduce overhead of creation on demand
  • thread creation is more lightweight
  • threads simplify code (modularization) and increase efficiency
3
Q

processes vs threads

A

PROCESSES
- create a new address space at process creation
- allocate resources at creation
- need IPC to share data - each process has its own address space
- deeper isolation for security and fault tolerance

THREADS
- same address space
- quicker creation times
- sharing through shared memory
- faults are shared: one misbehaving thread can affect all threads in the process

4
Q

disadvantages of threads

A
  • threads don’t have the same fault tolerance as processes
  • overhead of handing a client request to a thread
5
Q

benefits of multithreading

A
  1. responsiveness: allows for continued execution if part of process is blocked
  2. resource sharing: threads come with shared memory (no need for creation)
  3. economy: cheaper than process creation, thread switching has lower overhead than context switching
  4. scalability: process can take advantage of multiprocessor architectures
6
Q

parallelism

A

system can perform more than one task simultaneously

7
Q

types of parallelism in multi-core programming

A
  1. data parallelism:
    - distributes subsets of the data across multiple cores, same operation on each subset
    - three steps: splitting -> solving sub-problems -> combining
  2. task parallelism: distributing threads across cores, each thread performs a unique operation
8
Q

threading models

A
  1. user-level threads
  2. kernel-level threads
  3. hybrid threads
9
Q

user-level threads

A
  • many-to-one
  • many user-level threads mapped to a single kernel thread
  • one thread blocking causes all to block
  • multiple threads cannot run in parallel on a multi-core system because only one kernel thread backs them
10
Q

kernel-level threads

A
  • one-to-one
  • each user-level thread maps to a kernel thread
  • more concurrency
  • number of threads per process restricted due to overhead
11
Q

hybrid threads

A
  • many-to-many
  • many user-level threads mapped to many kernel threads
  • locality of mapping is a concern
12
Q

asynchronous threading

A
  • parent spawns child thread
  • parent resumes execution
  • parent/child run concurrently and independently of one another
  • share little data
13
Q

synchronous threading

A
  • parent spawns child and waits for child to terminate
  • children exchange data with each other but none with the parent
14
Q

thread pools

A
  • create a fixed set of threads ahead of time
  • select available thread within pool to serve the next request
  • queue request if there are no free threads in the pool
  • creation overhead reduced
15
Q

thread cancellation

A
  • using pthread_cancel()
  • deferred cancellation: target thread periodically checks whether it should terminate in an orderly manner