W4 Flashcards
What is meant by a task being blocked?
That it is waiting on a blocking synchronisation action
When is a set of tasks D considered deadlocked? Three conditions:
- All tasks in D are blocked or terminated
- There is at least one non-terminated task in D
- For each non-terminated task t in D, any task that might block t is also in D
What are four program conditions that may lead to deadlock?
- Mutual exclusion
- Greediness (holding and waiting)
- Absence of preemption mechanisms
- Circular waiting
What two types of resources have their access associated with deadlocks?
- Consumable resources that are taken away upon use.
- Reusable resources that are given back after use.
What is a great model for the analysis of deadlocks? What are two types of these models?
Graphs
Wait-for graphs and dependency graphs
What two things do wait-for graphs model best? How about dependency graphs?
Consumable resources and condition synchronisations
Reusable resources and action synchronisations
What do the nodes in wait-for graphs represent?
How about edges?
What does an edge p1->p3 represent?
What are the edges labelled with?
What does the wait-for graph capture?
Tasks, i.e., activities, threads/processes
Wait-for (blocked-on) relationships.
p3 is blocked on p1
Corresponding blocking conditions
A possible dynamic situation (reachable system state), whose existence must be proven.
When is a deadlock possible in a wait-for graph?
Only if there is a CYCLE in the wait-for graph and NO TASK OUTSIDE any cycle can UNBLOCK a task in the cycle.
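A minimal sketch of the cycle half of this check, assuming the wait-for graph is a dictionary mapping each task to the set of tasks it waits on (names and the graph itself are illustrative); the second condition, that no task outside the cycle can unblock a task in it, still has to be checked separately.

```python
# Hedged sketch: detect a cycle in a wait-for graph with a depth-first search.
# wait_for maps each task to the set of tasks it is waiting on (blocked on).

def has_cycle(wait_for):
    WHITE, GREY, BLACK = 0, 1, 2           # unvisited, on the current DFS path, finished
    colour = {}

    def dfs(t):
        colour[t] = GREY
        for u in wait_for.get(t, set()):
            c = colour.get(u, WHITE)
            if c == GREY:                  # back edge: u is on the current path, so a cycle exists
                return True
            if c == WHITE and dfs(u):
                return True
        colour[t] = BLACK
        return False

    return any(colour.get(t, WHITE) == WHITE and dfs(t) for t in wait_for)

# Example: p1 waits on p2, p2 waits on p3, p3 waits on p1 -> a cycle, so deadlock is possible.
print(has_cycle({"p1": {"p2"}, "p2": {"p3"}, "p3": {"p1"}}))   # True
```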
How can one use a wait-for graph to prove the absence of deadlocks?
Through a proof by contradiction.
1. Assume one task T in the graph is blocked on a certain action, and add T to a new graph.
2. Add each task T’ that can unblock T to the new graph, creating an edge from T’ to T labelled with the blocking condition it may unblock.
3. Check whether each such task T’ can possibly be blocked on an action. If yes, repeat from step 1 with T’ in place of T.
4. Once the whole graph is built, check whether we are in a deadlock state. If not, we have reached a contradiction. If yes, we are in a potential deadlock and should build a trace showing how that state can be reached.
What type of graph are resource dependency graphs? What do the nodes represent? How about the edges?
Bipartite graphs
Two classes of nodes, one for tasks and one for resources
Three types of edges, depending on the types of the source and target nodes
Let p be a task and R a resource. What does the edge p->R represent in a dependency graph? How about R->p? How about a dashed edge p-->R?
p->R represents that task p is requesting resource R and now waiting for it
R->p represents that task p holds resource R
A dashed edge p-->R represents that task p MAY request resource R
What does a resource dependency graph represent? What are three types of events that may change the state?
A particular state of the system
A request by a task, An acquisition of a resource by a task, A release of a resource by a task
When is a task in a resource dependency graph blocked?
When it has an outgoing edge that is not directly removable, i.e., for which the requested resource is not free
When is there a deadlock in a resource dependency graph?
When, after removing non-blocked tasks and all their incoming connections, there remains a non-empty set of tasks.
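A small sketch of this reduction, under the assumption of single-unit resources represented as Python sets (the state encoding and names are illustrative, not from the course):

```python
# Hedged sketch: deadlock detection by reduction of a resource dependency graph.
# holds[t]    : set of resources task t currently holds
# requests[t] : set of resources task t has requested and is waiting for
# free        : set of resources not held by any task

def deadlocked_tasks(holds, requests, free):
    free = set(free)
    tasks = set(holds) | set(requests)
    pending = {t: set(requests.get(t, set())) for t in tasks}
    changed = True
    while changed:
        changed = False
        for t in list(pending):
            if pending[t] <= free:            # t is not blocked: all its requests are grantable
                free |= holds.get(t, set())   # reduce: t runs to completion and releases its holdings
                del pending[t]
                changed = True
    return set(pending)                       # a non-empty result means these tasks are deadlocked

# Example: t1 holds r1 and waits for r2, t2 holds r2 and waits for r1 -> both deadlocked.
print(deadlocked_tasks({"t1": {"r1"}, "t2": {"r2"}},
                       {"t1": {"r2"}, "t2": {"r1"}},
                       free=set()))
```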
What are three active approaches to dealing with deadlocks?
Prevention at design time by programmer
Avoidance on system side by dynamically checking to avoid entering risky state
Detection and Recovery by checking only whether we are in a deadlock state, and trying to recover from it
What is prevention?
How can dependency graphs be used?
Wait-for graphs?
What are some synchronisation tricks that can be used?
How does preemption play a role?
Analysing your system using reduced dependency graphs and wait-for graphs at design time to provide a proof that there can never be any deadlock
Ensure their reductions are empty.
Prevent cycles in the graphs
No circular wait, ensuring termination of critical sections, locking resources in a fixed order, acquiring all resources at once
Preemption of resources must be allowed when needed
How does avoidance work?
Upon executing each potentially blocking action:
check if an open execution path exists (i.e., that will avoid deadlocks)
if one does not, deny or postpone the action
the check can be carried out with a reduction of the max claim graph
What is a maximum claim graph?
Dependency graph extended to capture potential future resource claims
What is Banker’s algorithm?
What does c[j] represent?
How about max[i,j]?
What is its aim?
An avoidance algorithm
Number of resources of type j
Maximum number of resources of type j that may be needed by task i
Synchronising requests such that each task can always acquire resources up to its specified maximum
What does avail[j] represent in the Banker’s algorithm?
How about alloc[i,j]?
How about claim[i,j]?
What are the initial states of all these?
What are the invariants related to these?
Number of resources of type j still available
Number of resources of type j allocated to task i
Maximum number of resources of type j that may still be claimed by task i
avail[j] = c[j], alloc[i,j] = 0, claim[i,j] = max[i,j]
avail[j] = c[j] - sum of alloc[i,j] over all i
claim[i,j] = max[i,j] - alloc[i,j]
When is a state called open for a task i? How would you specify this mathematically?
When all the resources it may claim can be directly given to it
claim[i,j] <= avail[j] for every resource type j
When is a state called safe for a task?
When this task can eventually be given its maximum number of resources
When is a state called safe?
When it is safe for all tasks
Assume a safe state, and a new request on resource j by task i. If this request were fulfilled, is it enough to check whether the resulting state is safe for task i to determine whether the resulting state is safe overall? Why or why not?
How could state safety be checked?
Yes, because if it is safe for task i, then all resources owned by task i will eventually be returned, reaching a state at least as safe as the current one.
Repeat the following until task i is found to be open:
1. Find an open task.
2. Remove it and all its claims (graph reduction).
What does the NextOpen function do?
What happens if no task can proceed in the NextOpen function?
In the NextOpen function, what does Alloc[i] != 0 check?
What does Claim[i] <= Avail ensure in the NextOpen function?
It finds the index of the first open task that can proceed based on the available resources and claimed resources.
The function returns N, indicating no open task is available.
It checks if the task i is active (has allocated resources).
It ensures the remaining resource claim of task i can be satisfied with the current available resources.
What is the precondition for calling the Safe function?
Avail >= req must hold, meaning the available resources can cover the requested resources.
What does the line Avail = Avail - req in the Safe function simulate?
How does the Safe function check if the system remains safe after allocation?
What does Avail = Avail + Alloc[k] in the Safe function do?
Why is the Claim[k] reset to Max[k] when task k completes?
What does the Safe function return?
What happens inside the while loop in the Safe function?
It reduces available resources by the amount requested by task i.
It simulates completing tasks using the NextOpen function and checks if resources can satisfy all remaining claims.
It simulates releasing all resources held by task k back to the available pool.
To indicate that task k has finished and no longer needs resources.
true if the system is in a safe state after allocation, otherwise false.
The function iterates through open tasks, simulating their completion, until the system is safe or no open tasks remain.
What is the purpose of the Safe function in the Banker’s Algorithm?
To ensure that allocating resources to a task will not lead the system into an unsafe state.
What is the Banker’s Algorithm, and how does it work in detail?
The Banker’s Algorithm is a deadlock avoidance method that ensures safe resource allocation by simulating whether a system can remain in a safe state after granting resource requests. It works by checking whether resource allocation to processes is possible without causing a system-wide deadlock. It concerns one particular request and simulates whether granting that request would lead to a safe state or not.
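A compact sketch of the NextOpen and Safe routines described above, using Python lists for the matrices. Variable names follow the cards where possible; the explicit `done` flags stand in for the slides' Alloc[i] != 0 test, and this variant checks safety for all tasks rather than stopping as soon as task i becomes open, so treat it as an illustration rather than the course's exact code.

```python
def next_open(avail, alloc, claim, done):
    """Index of the first unfinished task whose remaining claim fits in avail, else N."""
    N, M = len(alloc), len(avail)
    for i in range(N):
        if not done[i] and all(claim[i][j] <= avail[j] for j in range(M)):
            return i
    return N

def safe(avail, alloc, claim, i, req):
    """Would granting request `req` by task i leave the system in a safe state?"""
    M = len(avail)
    # Precondition (as on the cards): the request must not exceed what is available.
    assert all(req[j] <= avail[j] for j in range(M))
    # Simulate granting the request.
    avail = [avail[j] - req[j] for j in range(M)]
    alloc = [row[:] for row in alloc]
    claim = [row[:] for row in claim]
    alloc[i] = [alloc[i][j] + req[j] for j in range(M)]
    claim[i] = [claim[i][j] - req[j] for j in range(M)]
    # Reduce: repeatedly let an open task run to completion and reclaim its resources.
    done = [False] * len(alloc)
    k = next_open(avail, alloc, claim, done)
    while k < len(alloc):
        avail = [avail[j] + alloc[k][j] for j in range(M)]
        done[k] = True
        k = next_open(avail, alloc, claim, done)
    return all(done)   # safe iff every task could be driven to its maximum and finish

# Illustrative numbers: one resource type with 3 units in total.
print(safe(avail=[1], alloc=[[1], [1]], claim=[[1], [2]], i=0, req=[1]))  # True
```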
What does the detection algorithm do during execution?
What is required for deadlock detection during execution?
What is a major drawback of repeated monitoring and recovery from deadlock?
Where can the recovery algorithm be executed?
What are the two approaches for killing tasks in a deadlock set?
What does rolling back to a safe state require?
What does preempting resources in a deadlock involve?
It is invoked periodically to check if deadlock occurs.
An algorithm to examine the state upon execution of a blocking action, and an algorithm to recover from a deadlock.
It causes large overheads.
Locally, inside the task that last blocked, or globally, through a recovery policy.
Kill all tasks, and kill some tasks based on specific criteria.
An alternative execution path must exist, and the state must be recorded at checkpoints (sufficient history information to roll back).
Deallocating resources from tasks in the deadlock state.
What are the different schemes for the prevention approach?
Requesting all resources at once.
Preemption.
Resource ordering.
What are the major advantages of the prevention approach?
Works well for processes that perform a single burst of activity.
No preemption necessary.
What are the major disadvantages of the prevention approach?
Inefficient.
Delays process initiation.
Future resource requirements must be known by processes.
What are the major advantages of the avoidance approach?
What are the major disadvantages of the avoidance approach?
No preemption necessary.
Future resource requirements must be known by the OS.
Processes can be blocked for long periods.
What are the major advantages of the detection approach?
What are the major disadvantages of the detection approach?
Never delays process initiation.
Facilitates online handling.
Inherent preemption losses.
What are the resource allocation policies for the three approaches (Prevention, Avoidance, and Detection)?
Prevention: Conservative; undercommits resources to ensure that deadlocks cannot occur. It enforces strict rules to avoid unsafe states by limiting resource allocation.
Avoidance: Balances between prevention and detection by cautiously allocating resources to maintain at least one safe path. The system dynamically analyzes the current state and future requests to avoid potential deadlocks.
Detection: Very liberal; resources are granted whenever possible without regard to potential deadlocks. This approach focuses on detecting and resolving deadlocks after they occur.
What is a resource in the context of processes?
Anything needed for a process to run, such as main memory, space on a disk, or the CPU
How does an OS abstract resources? What represents disk space? How about memory use? How about CPU use?
By creating simplified representations of hardware:
Files represent disk space
Processes represent memory use
Threads represent CPU use
How does an OS manage resource sharing? Three ways
Space sharing (allocates different resources to different programs simultaneously)
Time sharing (allows programs to share resources over time by switching between them)
OS Isolation Mechanisms (protect resources and programs, ensuring they do not interfere with each other or compromise system stability)
What are the four main components of the OS kernel in logical OS organization?
Process and thread manager.
File manager.
Memory manager.
Device manager.
What does the process and thread manager interact with?
Memory manager for virtual memory management.
Processor(s) for scheduling and task execution.
What does the file manager interact with?
Memory manager for performance improvement (e.g., caching data).
Device manager for accessing storage devices.
What is the role of the memory manager in logical OS organization?
Coordinates with the process manager for virtual memory (e.g., scheduling and memory allocation).
Interacts with the file manager for performance improvement (e.g., data caching).
Manages access to main memory.
How do modules in the OS kernel coordinate their activities?
All modules (process manager, file manager, memory manager, device manager) interact to ensure efficient resource allocation and system performance.
What are the two requirements for process execution?
A running program must be brought from disk to main memory
Each process must have a distinct address space for protection
What are the six characteristics of a good memory from the user/program perspective?
Non-volatile.
Private for a program (protected).
Infinite capacity.
Zero access time.
Simple to use.
Cheap.
What are the characteristics of CPU registers in the memory hierarchy?
They have the fastest access speed (1 clock cycle)
They are limited in size
They store the most frequently accessed data
What are the characteristics of primary memory (main memory)?
They have direct access (around 100 clock cycles)
They are relatively larger than CPU registers
E.g., RAM
What are the characteristics of secondary memory (storage devices)?
Accessed through I/O operations
Very cheap and very large in size
Data is stored for long periods
Much slower than CPU register memory
E.g., disk, tape, solid state devices
How does the memory hierarchy manage frequently and less frequently used information?
More frequently used information moves to faster memory, such as CPU registers and primary memory
Less frequently used information is stored in slower memory, such as secondary memory
How does the memory hierarchy balance speed and storage size?
Faster memory (e.g., CPU registers) has smaller size and higher cost.
Slower memory (e.g., secondary storage) has larger size and lower cost
Data is moved up or down the hierarchy based on access frequency.
How does the OS exploit the memory hierarchy during updates?
Describe the common characteristics of upward moves/updates.
Do the same for downward moves/updates.
Updates are first applied to upper memory.
Usually copy operations, keeping the image in both higher and lower memory.
Usually destructive: the image in upper memory is destroyed to save space, and the image in lower memory is updated.
What are the characteristics of single-programmed memory management solutions?
How about multi-programmed memory management solutions?
One program runs at a time and uses all available memory.
Multiple programs share memory and CPU resources, improving overall resource use and efficiency.
What is a single-programmed solution in memory management?
- Program loaded entirely into main memory (MM) and allocated as much contiguous memory space as needed.
- Once in MM, program stays there until execution stops
Issues with single-programmed solutions
Advantages?
Issues:
No support for multiprogramming.
Large context-switch overhead.
If a program does not fit in memory, it cannot be executed.
Advantages:
Simple hardware requirements (base address and limit registers).
Easy protection between the operating system and processes, as well as between processes.
What are fixed partitions in memory management?
Memory is divided into fixed partitions, with one process per partition
Partitions may have different sizes, determined at OS initialization.
Each process can only access its assigned partition for protection.
What are the disadvantages of fixed partitions?
Entire program must be stored in main memory.
Internal fragmentation
What is internal fragmentation?
Internal fragmentation occurs in fixed partitions when a program doesn’t fully use the allocated partition space
Small partitions lead to long turnaround times; large partitions result in wasted memory
What are dynamic partitions in memory management?
Partition size is not fixed and is based on process size.
Eliminates internal fragmentation.
However, external fragmentation may occur.
What is external fragmentation?
External fragmentation occurs when free memory is broken into non-contiguous blocks due to repeated allocation and deallocation.
What is the First-Fit allocation scheme for dynamic partitions? Adv, disadv
Allocates the first partition that is big enough for the process.
Advantages: Faster allocation.
Disadvantages: Wastes more memory space due to larger leftover partitions
What is the Best-Fit allocation scheme for dynamic partitions? Adv, disadv
Allocates the smallest partition that fits the process requirements
Advantages: Produces the smallest leftover partition, making better use of memory.
Disadvantages: Takes more time to find the best fit.
How does First-Fit memory allocation track holes?
How about Best-Fit memory allocation?
FF:
Orders the list by memory location.
Finds the first available block large enough for allocation.
BF:
Orders the list by block size.
Finds the smallest block that can accommodate the process.
Provides better memory utilization but takes more time.
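A short sketch contrasting the two searches over a list of free holes (the (start, length) representation is an assumption):

```python
# Hedged sketch: picking a hole for a request of `size` units from a free-hole list.
# Each hole is a (start_address, length) pair; the layout is illustrative.

def first_fit(holes, size):
    """Holes are ordered by address; return the index of the first hole that fits."""
    for idx, (start, length) in enumerate(holes):
        if length >= size:
            return idx
    return None

def best_fit(holes, size):
    """Scan every hole; return the index of the smallest hole that still fits."""
    best = None
    for idx, (start, length) in enumerate(holes):
        if length >= size and (best is None or length < holes[best][1]):
            best = idx
    return best

holes = [(0, 100), (300, 40), (500, 60)]
print(first_fit(holes, 50))   # 0: the 100-unit hole at address 0
print(best_fit(holes, 50))    # 2: the 60-unit hole at address 500, smallest that fits
```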
What is the Next-Fit allocation scheme?
Starts each search at the point where the previous search stopped.
Results in a faster search but slightly worse memory utilization compared to First-Fit.
What is the Worst-Fit allocation scheme?
Allocates the largest available partition for the process.
Results in performance similar to First-Fit and Next-Fit.
Leaves the smallest possible remaining partitions.
What statistics do optimized allocation schemes in memory management use? What do they aim to do?
Utilize statistics like average process size and other metrics.
Aim to improve memory utilization and reduce fragmentation.
When does the release of memory space occur?
How does memory release work for fixed partitions?
How does memory release work for dynamic partitions?
When a process terminates or suspends
The memory manager resets the status of the memory block to “free”.
Tries to combine free areas of memory:
If a block is adjacent to another free block, combine the two.
If a block is between two free blocks, combine all three.
If a block is isolated from other free blocks, create a new table entry.
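A small sketch of the coalescing step, assuming the free areas are kept as (start, length) pairs sorted by address (this representation is illustrative):

```python
# Hedged sketch: release a block and combine it with adjacent free areas.
# free_list holds (start, length) pairs kept sorted by start address.

def release(free_list, start, length):
    free_list.append((start, length))
    free_list.sort()
    merged = []
    for s, l in free_list:
        if merged and merged[-1][0] + merged[-1][1] == s:   # adjacent to the previous free block
            prev_s, prev_l = merged.pop()
            merged.append((prev_s, prev_l + l))             # combine the two (or all three)
        else:
            merged.append((s, l))
    return merged

# Freeing [100, 200) between two free blocks combines all three into one entry.
print(release([(0, 100), (200, 50)], 100, 100))   # [(0, 250)]
```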
How does the memory manager handle relocatable dynamic partitions?
Relocates programs to gather all empty blocks into one large free memory block.
Solves both internal and external fragmentation problems. Relocation and compaction avoid memory waste. Reduces “insufficient memory” issues.
What are the steps of memory compaction?
- Relocate every program in memory so that they are contiguous.
- Adjust every address and reference within each program to account for their new locations in memory.
When should compaction be done?
When a certain percentage of memory is busy (e.g., 75%).
When there are pending processes.
After a prescribed amount of time.
What is a major drawback of memory compaction?
Compaction has a very high overhead, requiring each word to be read and rewritten into memory, which may take several seconds.
What is complete compaction in memory management?
What is partial compaction in memory management?
What is the minimal data movement strategy in memory management?
All processes are relocated to create one large contiguous free memory block. Maximizes free space but involves the most data movement.
Relocates only enough processes to create the required block of free memory. Balances between maximizing space and minimizing data movement.
Moves only the necessary data to create the required block of free memory. Minimizes overhead at the cost of potentially fragmented free memory blocks.
How can the location of each process be determined with respect to its original location?
What is the purpose of relocation registers?
What is the purpose of limit registers?
Through the use of special-purpose registers.
Contain a value to be added to each address referenced in the program. Ensure correct memory access after process relocation.
Store the size of the memory space accessible to each program. Prevent programs from accessing memory outside their allocated space.
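A tiny sketch of how the two registers could be applied on each memory reference (function and register names are illustrative):

```python
# Hedged sketch: translating a logical address with relocation and limit registers.

def translate(logical_addr, relocation_reg, limit_reg):
    if logical_addr >= limit_reg:              # reference outside the program's allocated space
        raise MemoryError("access beyond the limit register")
    return relocation_reg + logical_addr       # physical address after relocation

# A program moved to physical address 4000 with 1000 addressable units:
print(translate(250, relocation_reg=4000, limit_reg=1000))   # 4250
```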
What is swapping in memory management?
Swapping is carried out when a program must be loaded into main memory (MM) and there is not enough room
It involves moving something else to secondary storage to make room.
How is swapping performed?
A process is selected to swap out and replaced with another process from secondary storage.
Only parts of the process not already on disk are swapped out.
Where are swapped processes placed?
In either an arbitrary file in secondary memory or a special partition on disk called the swap space.
What are the three main properties of a file system?
Persistent across shutdowns
Independent of virtual memory size
Facilitates sharing between users and processes
What are the basic requirements of a file system?
Naming: Use symbolic names for files
Transparency and portability: Hide hardware details through abstraction
Robustness: Protect against faults
Access control: Ensure authorization and authentication
Security: Protect files and ensure data integrity.
What are the user requirements of a file system?
Access files using symbolic names.
Capability to create, delete, read, and change files.
Controlled access to system and other users’ files.
Control own access rights.
Restructure files (e.g., moving, copying).
Move data between files.
Backup and recover files.
What is a file in a file system? What can it store
A logical data storage unit that abstracts physical storage properties
It can store numerical data, character data, binary data, or program files.
What are permanent attributes of a file? What are some examples?
Created upon file creation and always exist (values can be changed)
Type.
Ownership.
Size.
Permanent/temporary flag.
Protection and access rights.
Dates (creation, access, modification).
What are temporary attributes of a file? What are some examples?
Created and maintained during file access.
Read pointer, write pointer, buffers, open count, locking
What does the “temporary flag” attribute represent?
0: Normal file.
1: Delete file upon process exit.
What are the three file access methods?
Sequential Access
Direct/Random Access
Access Through Index Files
What is sequential access in file management? What are the allowed operations?
Reads bytes/records from the beginning and can only rewind.
Read next: Reads a record and advances to the next position.
Write next: Writes a record and advances to the next position.
Rewind: Resets to the beginning.
What is direct/random access in file management? What operations are included?
Can jump to any record and read/write
Read/write at a specific record (requires an argument).
Jump to a given record.
Adjust current position (seek).
What is access through index files? What are the advantages of indexed access?
Built on top of the direct-access method. Involves constructing an index for the file, searching the index to locate the file’s pointer, using the pointer to access the file directly.
Efficient retrieval of records based on logical identifiers (e.g., names or IDs).
Provides faster access compared to scanning the entire file sequentially.
What is a directory in file system organization?
A file containing information about other files and directories.
What is a disk in file system organisations?
A disk can be subdivided into partitions, each capable of hosting its own file system.
What is a partition in file system organization? Can it involve multiple disks?
It represents a subdivided section of a disk.
Yes
What is a volume in file system organization?
A logical storage space (partition) formatted with a file system
Referred to by a logical drive letter (e.g., C:, D:).
Can each partition have its own file system?
Yes, each partition can have its own file system, such as NTFS or FAT32, and is associated with a specific drive letter (e.g., C:, D:, E:).
What is the relationship between physical disks, partitions, and volumes?
Physical disks can be divided into partitions
Each partition is formatted as a volume with its own file system and drive letter.
The drive letters can be referred to as volumes.
Example:
Partition 1 (NTFS) → Drive C:.
Partition 2 (FAT32) → Drive D:.
Partition 3 (FAT32) → Drive E:.
What are the semantics of directory operations? Create, Delete, Rename, List, Search
Create: Generates an empty structure in a parent directory.
Delete: Removes an entry (file or subdirectory).
Rename: Renames a file or subdirectory.
List: Lists the contents.
Search: Recursively queries directories for a file or subdirectory.
What are the UNIX modes of access? What are the classes of users?
Read (R), Write (W), Execute (X).
Owner: Defines rights of the owner.
Group: Defines rights of a group of users.
Other: Defines rights of any other user.
How are UNIX file permissions represented?
Permissions are represented as rwx for Read, Write, Execute.
Mapped to numeric values:
rwx = 7
rw- = 6
r-x = 5
And so on…
How are UNIX permissions assigned with chmod?
Permissions are assigned using numbers for owner, group, and others.
Example: chmod 761 game means:
Owner: Full access (7 = rwx).
Group: Read and Write (6 = rw-).
Others: Execute only (1 = --x).
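A short sketch of the rwx-to-digit mapping used by chmod (the helper name is made up for illustration):

```python
# Hedged sketch: turn an rwx triple into the digit used in a chmod mode such as 761.

def rwx_to_digit(rwx):
    value = 0
    for ch, bit in zip(rwx, (4, 2, 1)):   # r = 4, w = 2, x = 1
        if ch != "-":
            value += bit
    return value

# chmod 761 game: owner rwx, group rw-, others --x
print(rwx_to_digit("rwx"), rwx_to_digit("rw-"), rwx_to_digit("--x"))   # 7 6 1
```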
What is the role of the kernel in the conceptual interaction diagram?
Opens subdirectories and checks permissions
Creates file descriptors in the Open File Table (OFT)
Provides a pointer to the descriptor for the application process
What four things are stored in the Open File Table (OFT)?
Owner of the file or directory
Access permissions (e.g., rwx)
Size and blocks or buffers associated with the file
Entries for subdirectories and files
What are the levels of abstraction in the layered view of the file system?
User Level: Symbolic file names and file system interface.
File System Level:
Directory management (high-level file functions).
Basic file system (open/close functions).
Device Level:
Device organization methods (low-level data access)
Logical block to physical address mapping.
How does the hard drive store and access data?
Data is stored in sectors (32 B to 4 KB) along tracks, which are grouped into cylinders
Read/write heads are mounted on arms that move across cylinders
Sectors are numbered sequentially from 0 to the total number of sectors on the drive
What is the purpose of sectors, cylinders, and read/write heads?
Sectors: Memory blocks for data storage.
Cylinders: Vertical alignment of tracks on multiple platters.
Read/Write Heads: Access data on sectors and move between cylinders
What is a bootstrap program? Where does it reside?
Initial code executed when a computer is powered on or reset
Initializes hardware and loads the operating system into memory
Resides in non-volatile memory (e.g., ROM or flash) to ensure startup without relying on external storage
How does booting from the hard drive work?
The bootstrap program in ROM:
Loads the Master Boot Record (MBR) from the first hard drive.
Transfers control to the boot program.
The boot program:
Uses the partition table to locate the OS kernel.
Loads the OS kernel into main memory and transfers control to the OS.
What are the kernel’s key data structures for booting? What does each do, specifically?
Boot-control block contains information about booting the system from a specific partition. Stored in the first sector of the volume.
Volume control block contains the partition table, number of blocks in the file system, and pointers to free blocks.
Directory structure: contains file names and pointers to corresponding file control blocks for each file
File control block: contains details about ownership, size, permissions
What is a volume / logical drive?
A storage area with a single file system, usually denoted by a drive letter such as C, D, or E
What happens when a new file is created?
A new FCB is allocated and filled with the file’s details
The directory structure is updated with the new file name and FCB information.
What happens when a process opens a file?
A copy of the File Control Block (FCB) is saved from the disk into the system-wide Open File Table (OFT).
An entry is added to the per-process Open File Table (OFT) in the Process Control Block (PCB), referencing the system-wide OFT
What is the system-wide Open File Table (OFT)?
Stores FCBs of files currently open by any process
Tracks file usage across all processes
Maintains a counter indicating how many processes have opened each file
What happens when a file is opened by multiple processes?
Only one entry is created in the system-wide OFT for the file.
Each process has its own entry in the per-process OFT referencing the system-wide OFT
The system-wide OFT counter tracks how many processes have the file open.
What happens when a file is closed by a process?
The corresponding entry in the per-process OFT is freed.
The system-wide OFT counter is decremented.
If the counter reaches zero:
No process is using the file.
The FCB is removed from the system-wide OFT.
How does file reading work in this structure?
User space requests data using the read(index) function.
The per-process OFT retrieves the index from the system-wide OFT.
Data is accessed from secondary storage blocks via the FCB.
What are the three types of file allocation methods on a disk?
Contiguous Allocation:
All blocks of a file are stored together.
Fast for sequential access but requires contiguous free space.
Linked Allocation:
Each block points to the next block.
No need for contiguous space, but random access is inefficient
Indexed Allocation:
Uses an index block to store all pointers to the file blocks.
Efficient for random access but can waste space if the file is small
What is the FAT (File Allocation Table) and how does it manage file storage?
The FAT (File Allocation Table) is a data structure used by some operating systems, such as DOS and Windows, to manage file storage on a disk. It tracks the allocation of disk blocks to files and manages the sequence of blocks that make up each file.
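A minimal sketch of following a FAT chain from a file's first block; the table contents below are invented for illustration:

```python
# Hedged sketch: enumerate a file's blocks by following its chain in the FAT.
# fat[b] gives the block that follows b; EOF marks the last block of the file.

EOF = -1

def file_blocks(fat, start_block):
    blocks = []
    b = start_block
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

# Invented table: a file starting at block 2 occupies blocks 2 -> 5 -> 3.
fat = {2: 5, 5: 3, 3: EOF}
print(file_blocks(fat, 2))   # [2, 5, 3]
```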
Advantages and disadvantages of contiguous, linked, and indexed allocation of blocks
Contiguous allocation:
Advantages:
Fast access for sequential reads.
Disadvantages:
Finding contiguous free space is difficult.
Growth of the file may require relocation or over-allocating space.
Linked allocation:
Advantages:
Files can grow dynamically.
No need to know the file size in advance.
Disadvantages:
Random access is slow.
If a pointer is lost or corrupted, the file is partially unrecoverable.
Indexed allocation:
Advantages:
Supports dynamic file growth.
Allows efficient random access.
Disadvantages:
Requires additional space for index blocks.
Can limit file size if the index block is small.
What is the inode structure in the UNIX file system?
Direct blocks: Store data for small files directly in the inode.
Indirect blocks: Pointers to other blocks for larger files.
Single: Points to data blocks.
Double: Points to blocks that point to data blocks.
Triple: Points to blocks that point to other blocks, which then point to data blocks.
How does an inode manage large files?
Uses direct pointers for the first few blocks.
Switches to indirect pointers for larger files.
Uses multi-level indexing (single, double, triple) to handle very large files efficiently.
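A worked sketch of the largest file such a layout can address, under assumed parameters (4 KB blocks, 4-byte block pointers, 12 direct pointers; real UNIX file systems vary):

```python
# Hedged sketch: maximum file size reachable through direct plus single, double
# and triple indirect pointers, under assumed parameters.

block_size = 4096                # bytes per data block (assumed)
ptr_size = 4                     # bytes per block pointer (assumed)
direct = 12                      # direct pointers in the inode (assumed)

ptrs_per_block = block_size // ptr_size              # 1024 pointers fit in one indirect block

max_blocks = (direct
              + ptrs_per_block                       # single indirect
              + ptrs_per_block ** 2                  # double indirect
              + ptrs_per_block ** 3)                 # triple indirect

print(round(max_blocks * block_size / 2**40, 1), "TiB")   # about 4.0 TiB with these numbers
```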
What is the primary method used for free space management in file systems?
What are the primary challenges with this method?
How do “linked” and “indexed” file allocation strategies manage free space?
Free space is managed using a linked list.
Traversing the list and finding a contiguous block of a given size are not easy tasks.
They add and remove single blocks from the beginning of the list.
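A minimal sketch of that behaviour, modelling the linked free list as a simple Python list whose head is the first element (the representation is an assumption):

```python
# Hedged sketch: free-space management where single blocks are added to and removed
# from the head of a linked free list (modelled here as a plain Python list).

free_list = [17, 42, 8]            # block numbers; the head of the list is free_list[0]

def allocate_block():
    return free_list.pop(0)        # take one block from the head of the list

def free_block(block):
    free_list.insert(0, block)     # put a released block back at the head

b = allocate_block()               # removes block 17
free_block(99)                     # free list is now [99, 42, 8]
print(b, free_list)
```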
Is a system in an unsafe state for sure going to reach a deadlock?
No