Resource Management Flashcards

1
Q

What is a resource, what are its types and characteristics, what problem arises with resource usage, what is the solution to this problem, and what are the goals of this solution?

A

An entity needed by a thread to run.

Resources have the following types:
- Real: physically existing, such as main memory, processor, disk drive.
- Virtual: used to offer more instances or a higher capacity of a real resource (e.g., virtual memory). Usage is indirect, as the virtual resource is mapped onto the real resource.
- Logical: extends virtual or real resources to provide a higher level of service, via a more convenient interface or an interface with enhanced functionality (think of a file or a window).

Characteristics:
- Persistence: Reusable or Consumable
- Capacity: Limited or Unlimited
- Acquisition: the thread requests the resource itself, or another instance requests the resource on the thread's behalf

Resources can lead to problems if they are limited or can only be used exclusively: uncoordinated use of a resource may lead to undesired effects.

The solution to this problem is resource management: a central authority that ensures coordinated usage of the resources. This requires a separation between usage and management.

The resource manager decides which threads may use which resources (or which part of a resource, think memory segments) at any point in time, and acts as an intermediate instance between threads and resources.

The goals of a resource manager are:
- Correct execution for involved threads
- No deadlocks
- No starvation
- High level of parallel execution
- High level of resource usage

Mutual exclusion is a form of resource management where there is no intermediate instance (resource manager); instead, the threads agree among themselves that a critical section is accessed by only one thread at a time (via locks, for example), as sketched below.
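A minimal sketch of mutual exclusion in Python, assuming threading.Lock as the agreed-upon locking primitive (the counter and the number of threads are illustrative):

```python
import threading

counter = 0                      # shared state that must not be updated concurrently
lock = threading.Lock()          # agreed-upon lock guarding the critical section

def worker():
    global counter
    for _ in range(100_000):
        with lock:               # only one thread at a time enters the critical section
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 400000: no lost updates thanks to mutual exclusion
```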

2
Q

Central Resource Management: How are one-exemplar resources handled?

A

The usage of a one-exemplar (single-instance) resource can be seen as a critical section, so management here amounts to handling a coordination problem.

The allocate and release operations here have the same structure as lock and unlock: mark the resource as occupied on allocate; on release, clear the mark, check whether threads are waiting for the resource, and deblock the first one. So we need a queue for waiting threads and bits that mark the allocation state of the resources, as in the sketch below.
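A minimal sketch of such an allocator, assuming Python threads; the allocation bit and the FIFO queue of waiters mirror the description above, and threading.Condition stands in for the kernel's block/deblock mechanism (the class name is illustrative):

```python
from collections import deque
import threading

class SingleResource:
    def __init__(self):
        self._allocated = False               # the allocation bit
        self._waiting = deque()               # FIFO queue of waiting threads
        self._monitor = threading.Condition()

    def allocate(self):
        with self._monitor:
            me = threading.current_thread()
            self._waiting.append(me)
            # block until the resource is free AND we are first in the queue
            while self._allocated or self._waiting[0] is not me:
                self._monitor.wait()
            self._waiting.popleft()
            self._allocated = True

    def release(self):
        with self._monitor:
            self._allocated = False
            self._monitor.notify_all()        # deblock waiters; the first in the queue proceeds
```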

3
Q

Central Resource Management: How are multi-exemplar resources handled?

A
  • If the resource has multiple instances: the allocate operation returns the ID of the resource instance that has been allocated, and release takes that instance ID as its parameter. In this case a simple semaphore initialized with the number of exemplars is sufficient.
  • If the resource is divisible (think memory or processor cores): the allocate operation takes the number of requested units as a parameter and returns the start address of the allocated area; the release operation takes both the start address and the number of units as parameters. Usually, divisible resources are allocated contiguously.
    The allocation status of the individual parts of a divisible resource is stored in a data structure such as a bit list; these bits are updated as an effect of the allocate and release operations, together with a FREE operation that checks whether the requested number of units is available (used by allocate to decide whether the request can be satisfied). Threads whose requests cannot be satisfied are blocked, and on release waiting threads are deblocked if enough units have become available; therefore the specific requests of waiting threads are also stored so that they can be re-tested on release. These stored requests can be sorted by request size (number of units). A sketch of such an allocator follows after this list.
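A minimal sketch of a contiguous allocator over a bit list for a divisible resource; the class and method names are illustrative, and the blocking/deblocking of waiting threads is left out for brevity:

```python
class DivisibleResource:
    def __init__(self, total_units):
        self.bits = [False] * total_units        # False = unit free, True = allocated

    def _find_run(self, n):
        """Start index of n contiguous free units, or None."""
        run_start, run_len = 0, 0
        for i, used in enumerate(self.bits):
            if used:
                run_start, run_len = i + 1, 0
            else:
                run_len += 1
                if run_len == n:
                    return run_start
        return None

    def free(self, n):
        """FREE: can a request for n contiguous units currently be satisfied?"""
        return self._find_run(n) is not None

    def allocate(self, n):
        """Return the start address of the allocated area, or None (the caller would block)."""
        start = self._find_run(n)
        if start is not None:
            self.bits[start:start + n] = [True] * n
        return start

    def release(self, start, n):
        self.bits[start:start + n] = [False] * n
        # a full implementation would now re-test the stored requests of waiting threads
```
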
4
Q

Selection strategies: Why do we need them? Give examples; how can they be optimized?

A

We need selection strategies to get high utilization of the resource while treating the requesting processes fairly.

  • FIFO/FCFS: low utilization if the first request is large, since smaller later requests are not served even though there are enough free units in the resource.
  • First-Fit-Request (FFR): linear search through the queue for the first request that can be satisfied by the free units of the resource.
  • Best-Fit-Request (BFR): search the whole queue and pick the request that minimizes the amount of remaining free units, i.e., maximizes the utilization of the resource. In FFR and BFR, large requests may be penalized/starved. (Both searches are sketched below.)
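A minimal sketch of the two searches over a queue of pending request sizes (the function names and the plain-list queue are illustrative):

```python
def first_fit_request(queue, free_units):
    """Index of the first request that fits, or None."""
    for i, req in enumerate(queue):
        if req <= free_units:
            return i
    return None

def best_fit_request(queue, free_units):
    """Index of the fitting request that leaves the fewest units free, or None."""
    fitting = [(free_units - req, i) for i, req in enumerate(queue) if req <= free_units]
    return min(fitting)[1] if fitting else None

queue = [70, 10, 30, 25]               # requested units, in arrival order
print(first_fit_request(queue, 40))    # -> 1 (the request for 10 units)
print(best_fit_request(queue, 40))     # -> 2 (30 units leave only 10 units free)
```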

To optimize these strategies:
- More than one waiting thread can be assigned to the resource if all of their requests can be satisfied; stop only when no waiting request can be satisfied.
- Windowing: reduce the search effort by limiting it to a window of size L, so only the first L positions of the waiting-request queue are examined. This also helps avoid starving requests: the window size is decreased every time the head of the window is not allocated (down to size 1); once the head is allocated, the window goes back to its initial maximum size and the cycle repeats. With window size 1, BFR and FFR converge to FCFS, allowing large requests to get through. A sketch of the window rule follows below.
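A minimal sketch of the windowing rule (names are illustrative; any selector such as a first- or best-fit search can be plugged in):

```python
def select_with_window(queue, free_units, window, max_window, select):
    """select(sub_queue, free_units) -> index or None.
    Returns (index_to_serve_or_None, new_window_size)."""
    visible = queue[:window]                    # only the first 'window' waiting requests
    idx = select(visible, free_units)           # window == 1 degenerates to FCFS
    if idx == 0:
        return idx, max_window                  # head was served: reset the window size
    return idx, max(1, window - 1)              # head passed over: shrink toward FCFS

# example with a one-line first-fit selector
first_fit = lambda q, free: next((i for i, r in enumerate(q) if r <= free), None)
print(select_with_window([70, 10, 30], free_units=40, window=3, max_window=3,
                         select=first_fit))    # -> (1, 2): head not served, window shrinks
```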

5
Q

Deadlock: What is it? What is a wait-for-graph? What are the requirements for a deadlock?

A

In the context of resource management, a deadlock arises when each thread in a group attempts to allocate a resource that is already allocated by another thread of the group; all of them are blocked and none can continue its work to release the resources. The sketch below reproduces this with two threads and two locks.
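A minimal illustration in Python; the timeout is only there so the example terminates instead of hanging, and at least one thread will report the circular wait:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
both_hold = threading.Barrier(2)       # ensures each thread holds its first lock

def worker(name, first, second):
    with first:                        # hold one resource ...
        both_hold.wait()               # ... until the other thread holds its resource too
        # ... then wait for the second resource: a circular wait. Without the timeout
        # both threads would block forever; with it, at least one gives up and reports.
        if second.acquire(timeout=1):
            second.release()
            print(name, "got both locks")
        else:
            print(name, "gave up: circular wait, i.e. a deadlock")

t1 = threading.Thread(target=worker, args=("T1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("T2", lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
```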

Wait-For-Graph: Nodes are threads, and directed edges show which thread is waiting for a resource to be released by another thread.

A deadlock can appear if the following three requirements hold:
- Resources are used exclusively.
- Threads hold allocation of a resource and try to allocate another.
- There is no preemption.

OR if the following two requirements hold:
- Resources are used exclusively
- There is a cycle in the wait-for-graph

6
Q

Deadlock Prevention: What is it? Strategies?

A

Handling resource assignment/allocation in a restrictive way to ensure no deadlocks can occur.

  • Pre-claiming: threads claim all of their needed resources at initialization time. In dynamic systems this is difficult to predict, and it is also uneconomical, as resources are occupied longer than needed.
  • Overall release at request: a thread is only allowed to allocate a new resource if it releases all of its already allocated resources.
  • Allocation in (given) order: the resources are sorted according to a pre-given order, and threads are only allowed to allocate resources in that specific order (see the sketch after this list).
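A minimal sketch of allocation in a given order, assuming Python locks and an illustrative global ordering by index; every thread acquires whatever subset it needs strictly in ascending order, so no circular wait can form:

```python
import threading

# global, pre-given order of the resources: their index in this list
resources = [threading.Lock() for _ in range(4)]

def allocate_in_order(needed_indices):
    """Acquire the needed resources strictly in the global order."""
    acquired = []
    for i in sorted(needed_indices):          # ascending order prevents circular waits
        resources[i].acquire()
        acquired.append(resources[i])
    return acquired

def release_all(acquired):
    for lock in reversed(acquired):
        lock.release()

held = allocate_in_order({2, 0})              # acquired as 0, then 2 -- never 2, then 0
release_all(held)
```
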
7
Q

Resource graph: what is it? what are its properties? What is reduction in a resource graph and its relation to deadlock?

A

A bipartite graph with processes/threads (P) drawn as circles and resources (R) as rectangles. Directed edges (in both directions) exist between Ps and Rs. Each unit provided by a resource is depicted as a dot within its rectangle.
An edge from a process to a resource rectangle means the process is waiting for a unit of that resource to be released; it corresponds to a REQUEST operation that gets the process blocked. The opposite edge, going from a dot in the resource rectangle to a process, means that the process has that resource unit allocated, corresponding to an ALLOCATE operation. Removing an allocation edge corresponds to a RELEASE operation.

A resource graph depicts the current resource status. A cycle in it (just like in a wait-for-graph) is a necessary condition for a deadlock, but the presence of a cycle does not by itself mean that there is a deadlock.

A reduction of the resource graph by a process p means removing all of p's edges, provided p is neither isolated nor blocked (i.e., p can complete and release its resources). After reducing by p, other blocked processes that requested units previously allocated to p can now have them allocated, as they have been released. A resource graph can be reduced completely if there is an order of reductions such that all edges are removed in the end.

A deadlock is present if the resource graph cannot be reduced completely (see the sketch below).
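A minimal sketch of complete reduction, representing the graph by per-process allocation and request maps over unit counts instead of explicit edges (all names are illustrative):

```python
def completely_reducible(total_units, allocation, request):
    """total_units: units per resource; allocation/request: {process: {resource: units}}.
    True if some order of reductions removes all edges (i.e., no deadlock)."""
    available = {r: total_units[r] - sum(a.get(r, 0) for a in allocation.values())
                 for r in total_units}
    remaining = set(allocation)
    progress = True
    while progress:
        progress = False
        for p in list(remaining):
            # p is not blocked: all of its requested units are currently available
            if all(request[p].get(r, 0) <= available[r] for r in total_units):
                for r in total_units:                 # reduce by p: its units are released
                    available[r] += allocation[p].get(r, 0)
                remaining.discard(p)
                progress = True
    return not remaining       # processes left over -> not completely reducible -> deadlock

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> cycle -> deadlock
print(completely_reducible({"R1": 1, "R2": 1},
                           {"P1": {"R1": 1}, "P2": {"R2": 1}},
                           {"P1": {"R2": 1}, "P2": {"R1": 1}}))    # False
```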

8
Q

Deadlock Avoidance: Safe/Unsafe situation, What is deadlock avoidance, steps

A

For deadlock avoidance, the remaining requests of the processes have to be known so that it can be checked whether they can all be satisfied. If they can, the situation is called safe; otherwise it is called unsafe (some subset of the remaining requests cannot be satisfied with the currently available resources in any order without triggering a deadlock).

A set of processes is called safe if there is an order of process termination such that the remaining requests of each process can be satisfied (using the resources released by the processes terminating before it).

Deadlock avoidance implements the resource management in such a way that no unsafe situation can occur: a request is only granted if doing so leads to a safe situation (we move from one safe situation to another).

Steps:
- For every request, check whether granting it would lead to an unsafe situation (using the Banker's algorithm).
- If the request would lead to an unsafe situation, postpone it.
With m processes and n resource types, the algorithm needs O(m·n) to select the next process whose remaining request can still be satisfied; with up to m such selections, the overall complexity is O(m²·n).
This is how we find an order of satisfying the processes' allocation requests that avoids a deadlock (see the sketch below).
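A minimal sketch of the safety check at the core of the Banker's algorithm (names are illustrative; `remaining` holds each process's still-outstanding requests per resource type):

```python
def find_safe_order(available, allocation, remaining):
    """available: free units per resource type; allocation/remaining:
    {process: [units per type]}.  Returns a safe termination order, or None."""
    work = list(available)
    unfinished = set(allocation)
    order = []
    while unfinished:
        # O(m*n) scan for a process whose remaining request fits into 'work'
        candidate = next((p for p in unfinished
                          if all(need <= have for need, have in zip(remaining[p], work))),
                         None)
        if candidate is None:
            return None                      # nobody can finish: the situation is unsafe
        # let it run to completion and release everything it holds
        work = [have + held for have, held in zip(work, allocation[candidate])]
        unfinished.remove(candidate)
        order.append(candidate)
    return order                             # m selections -> overall O(m^2 * n)
```

A request is granted tentatively, the check is run on the resulting state, and the request is postponed if no safe order is found.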

9
Q

Deadlock Detection: when do we need it? what is the algorithm and its steps? how to handle the special case of a one-exemplar resource?

A

If there is no knowledge about the remaining requests of the processes, deadlock avoidance cannot be performed, meaning deadlocks can occur. In this case we need to detect a deadlock once it has happened.

For detection, we search for a permutation of the processes in which the current requests can be satisfied one after another without a deadlock (unlike avoidance, where we search for a permutation in which the remaining requests can be satisfied). We can therefore use the Banker's algorithm here too, replacing the remaining requests by the current requests, and we run it over all current processes at every new request, periodically, as part of the idle process, or in case of suspicion.

In the special case of one-exemplar resources, we can run a depth-first search on the wait-for-graph: either the graph can be reduced until all edges are removed, or a cycle is detected, which signals a deadlock (see the sketch below).
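A minimal sketch of cycle detection by depth-first search on a wait-for-graph given as an adjacency dict (thread -> set of threads it waits for); the names are illustrative:

```python
def has_cycle(wait_for):
    """True iff the wait-for-graph contains a cycle, i.e. a deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2            # unvisited / on current DFS path / finished
    colour = {t: WHITE for t in wait_for}

    def dfs(t):
        colour[t] = GREY
        for nxt in wait_for.get(t, ()):
            if colour.get(nxt, WHITE) == GREY:       # back edge: cycle found
                return True
            if colour.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        colour[t] = BLACK
        return False

    return any(colour[t] == WHITE and dfs(t) for t in wait_for)

print(has_cycle({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}))   # True  -> deadlock
print(has_cycle({"T1": {"T2"}, "T2": set(), "T3": {"T2"}}))    # False -> no deadlock
```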

10
Q

Deadlock Resolution: what is it? how to do it? problems and solutions?

A

Resolution means cutting the cycle in the wait-for-graph, i.e., withdrawing (forcing the release of) one of the resources that is part of the cycle/deadlock. If it is impossible to withdraw the resource on its own, the whole process is aborted.

The problem here is which of the processes participating in the deadlock to abort. Criteria include the size of its requests, the amount of resources it has allocated, its priority/urgency, user vs. system process, the effort of aborting it, the work that would be wasted, and its remaining answer time. A hypothetical victim-selection sketch follows below.
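A hypothetical sketch of victim selection; the cost function and its weights are purely illustrative assumptions, not a prescribed formula, and the idea is simply to abort the cheapest process in the cycle:

```python
def pick_victim(deadlocked):
    """deadlocked: descriptions of the processes in the cycle (illustrative fields)."""
    def abort_cost(p):
        return (p["wasted_work"]                  # work lost by aborting this process
                + (100 if p["is_system"] else 0)  # prefer aborting user processes
                + 5 * p["priority"])              # prefer aborting low-priority processes
    return min(deadlocked, key=abort_cost)

victim = pick_victim([
    {"name": "P1", "wasted_work": 40, "is_system": False, "priority": 1},
    {"name": "P2", "wasted_work": 5,  "is_system": False, "priority": 3},
])
print(victim["name"])     # P2: cheapest to abort under this illustrative weighting
```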
