Questions from tests Flashcards

1
Q

What is the lesson from this question?

A

We must set the interest flag to false before we allow other threads to access the CS.

2
Q
  • What is the lesson from this question?
  • (solving starvation)
A
  • Recall: this case does not involve a simple ordering between threads. It demonstrates the readers-writers problem: once a steady stream of readers keeps reading, the writer finds itself starved to death.
  • How is it solved?

Another mutex! We wrap the mutex used to enter the CS in an additional mutex. That is, all of the readers now wait on the additional mutex, and not on the main mutex.
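The fix can be sketched with Python threading semaphores. This follows the standard no-starve readers-writers pattern; the names (turnstile, room, mutex) and the thread counts are my own illustration, not taken from the test:

```python
import threading

turnstile = threading.Semaphore(1)  # the additional mutex everyone queues on
mutex = threading.Semaphore(1)      # protects the reader count
room = threading.Semaphore(1)       # the CS itself
readers = 0
log = []

def reader(i):
    global readers
    turnstile.acquire()             # a waiting writer holds this -> new readers block
    turnstile.release()
    mutex.acquire()
    readers += 1
    if readers == 1:
        room.acquire()              # first reader locks writers out
    mutex.release()
    log.append(("r", i))            # -- read --
    mutex.acquire()
    readers -= 1
    if readers == 0:
        room.release()              # last reader lets writers back in
    mutex.release()

def writer(i):
    turnstile.acquire()             # announce intent: no new readers may pass
    room.acquire()
    log.append(("w", i))            # -- write --
    room.release()
    turnstile.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(5)]
threads += [threading.Thread(target=writer, args=(i,)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(len(log))                     # 7 entries: every reader and writer got through
```

Because a writer parks itself on the turnstile, newly arriving readers pile up behind it instead of slipping past into the room, so the writer cannot starve.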

3
Q

One-way + barrier of 5 threads.

That is, the barrier maintains the demand that exactly 5 vehicles pass the tunnel, and an additional 5 may enter only after all of them have left the tunnel.

A

It is done by adding a new semaphore, responsible for having the first 4 vehicles wait for the fifth to arrive. Once the fifth arrives, it releases them.

NOTE! DOWN(busy) comes after UP(mutex). That is super important: we must release mutex before we go to sleep on busy.

In other words, if I sleep on busy while holding mutex, I create a deadlock =[
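A minimal runnable sketch of this barrier in Python (the names mutex, busy, and the vehicle loop are assumptions of mine): the fifth vehicle releases the four sleepers, and every vehicle does UP(mutex) before DOWN(busy).

```python
import threading

N = 5
mutex = threading.Semaphore(1)   # protects the counter
busy = threading.Semaphore(0)    # the first N-1 vehicles sleep here
count = 0
in_tunnel = []

def enter(vid):
    global count
    mutex.acquire()
    count += 1
    fifth = (count == N)
    if fifth:
        for _ in range(N - 1):
            busy.release()       # the fifth vehicle releases the four sleepers
    mutex.release()              # UP(mutex) FIRST...
    if not fifth:
        busy.acquire()           # ...only then DOWN(busy)
    in_tunnel.append(vid)

threads = [threading.Thread(target=enter, args=(v,)) for v in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(in_tunnel))         # all 5 vehicles made it through
```

Swapping the last two steps, i.e. sleeping on busy while still holding mutex, would leave the fifth vehicle unable to acquire mutex: exactly the deadlock described above.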

4
Q

How can we implement FIFO waking order?

A

The attached algorithm, with the addition of wrapping a mutex around the "flag = 0; sleep;" sequence.

5
Q
  • How many bytes can a second-level page table point to?
  • How many pages are required for a 12MB process?
  • What are the key points in this question?
A

There are 2^10 page directory entries, each of which refers to a page table with 2^10 page table entries.

Each page table entry refers to the base address of a physical page-frame, which holds 2^12 bytes.

In total, 2^10 * 2^10 * 2^12 = 4*2^30 = 4GB of bytes.

Each PDE refers to 2^10 * 2^12 = 4*2^20 = 4MB.

Key point: when we count the pages that must be allocated for some program's page tables, we should first ask how many page tables we need; according to that, we add the number of directory pages needed to point at them. Combined, that is the answer. For a 12MB process we need 3 page tables (each maps 4MB); to point at these we also need 1 page directory. In total - we need 4 pages.
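The key point can be double-checked with a short calculation (standard x86 two-level parameters: 4KB pages, 4-byte entries; the variable names are mine):

```python
import math

PAGE = 2 ** 12                      # 4 KB pages
ENTRIES = PAGE // 4                 # 1024 four-byte entries per table page
SPAN = ENTRIES * PAGE               # one page table maps 4 MB

process = 12 * 2 ** 20              # a 12 MB process
tables = math.ceil(process / SPAN)  # second-level page tables needed
dirs = math.ceil(tables / ENTRIES)  # directory pages needed to point at them
print(tables, dirs, tables + dirs)  # 3 1 4
```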

6
Q

Translate the given address: 0x00403004 (find its page-directory index, page-table index, and offset).

A
  • 0x00403004 = 4 + 3*16^3 + 4*16^5 = 4,206,596
  • We know that starting from p1[0], we have 2^22 bytes. That is, [0, 4,194,303]

p1[1] = [4,194,304, 8,388,607] (the address is here)

  • 4,206,596 - 4,194,304 = 12,292
  • Each entry in p2 covers 2^12 bytes.
  • p2[0] - [0, 4095]
  • p2[1] - [4096, 8191]
  • p2[2] - [8192, 12287]

p2[3] - [12,288, 16,383] (the address is here)

12,292 - 12,288 = 4.

p1 = 1, p2 = 3, offset = 4
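For a 10/10/12 split, the same indices can be read off directly with bit operations:

```python
va = 0x00403004

p1 = va >> 22             # top 10 bits: page-directory index
p2 = (va >> 12) & 0x3FF   # next 10 bits: page-table index
offset = va & 0xFFF       # low 12 bits: offset within the page

print(p1, p2, offset)     # 1 3 4
```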

7
Q
  • In a two-level page table, how many memory accesses are required to reach a paged memory reference if:
    1. The page is in memory (no TLB)
    2. There is a TLB; the page is in memory but not in the TLB
    3. There is a TLB and the page is found there
A

1. Answer: 3 accesses

  • Get the page directory
  • Get the page table
  • Get the page + offset

2. Answer: 3 accesses (the TLB lookup misses, so the full walk is still needed)
3. Answer: 1 access
8
Q

A computer has an address space of 32 bits and a page size of 2KB, with a single-level page table:

  • What is its virtual memory size?
  • What is the size of the page table?
  • What is the maximal size of a program’s memory? does it depend on the size of the pages?
A
  • 2^32 bytes.
  • We can divide the memory into 2^32/2^11 = 2^21 pages.
  • The page table consists of PTEs. There are 2^21 of them, each of size 4 bytes. Therefore, the overall size of the page table is 2^21 * 2^2 = 2^23 = 8MB.
  • The maximal size is the size of the virtual address space, AKA 4GB. It has nothing to do with the page size.
9
Q

Assume an address space of 32 bits, 2KB size pages and a two-level page table, in which 8 bits are dedicated for the first level table.

What is the size of the 2nd table?

A

Virtual address - 2^32

  • 8 bits for the first level (2^8 PDEs)
  • 11 bits for the offset (2^11 addresses in a page)
  • 13 bits for the second level (2^13 PTEs)

2^13 * 4 = 2^15 bytes = 32KB

10
Q
  • What is the size of a single-level page table on a 32-bit machine with 4KB pages?
  • What is the size of a 64-bit machine's page table with 4KB pages?
  • How many layers do we need if we want each page-table node to occupy only a single page?
A
  • 2^32 addresses, spread across pages of size 4KB.
  • 2^32/2^12 = 2^20 PTEs.

That is, 2^20 * 4 = 2^22 = 4MB.

  • 2^64 addresses, spread across pages of size 4KB.
  • 2^64/2^12 = 2^52 PTEs.

That is, 2^52 * 8 = 2^55 bytes = 32 petabytes.

How many PTEs fit in one page?

2^12/2^3 = 2^9.

  • In order to "spread" 2^52 addresses down to a specific PTE, we need 52/9 layers, i.e. 6 layers.
  • In other words, we need 6 memory accesses to translate each virtual address.
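A sketch of the arithmetic behind these three answers:

```python
import math

# 32-bit machine, 4 KB pages, 4-byte PTEs: single-level table size
size_32 = (2 ** 32 // 2 ** 12) * 4   # 2^20 PTEs * 4 B = 4 MB
# 64-bit machine, 4 KB pages, 8-byte PTEs
size_64 = (2 ** 64 // 2 ** 12) * 8   # 2^52 PTEs * 8 B = 2^55 B = 32 PB

per_page = 2 ** 12 // 8              # 512 PTEs fit in one page -> 9 index bits/level
levels = math.ceil((64 - 12) / 9)    # spread the 52 VPN bits over 9-bit levels
print(size_32, size_64, levels)
```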
11
Q
  • How many times will the disk be accessed when a user executes the following command:
  • more /usr/tmp/a.txt
  • Assume that:
  • 1. The size of 'a.txt' is 1 block.
  • 2. The i-node of the root directory is not in memory.
  • 3. Entries 'usr', 'tmp' and 'a.txt' are all located in the first block of their directories.
  • 4. Ignore the disk accesses required in order to load 'more'.
A
  • So, how does it work?
  • Each struct file holds an inode, which holds a copy of the dinode. Since the root directory is not in memory, we must read the dinode of the root (first access); there we are directed to its first block (second access) to find 'usr'. The same two accesses are repeated to reach 'tmp', and again to reach the entry of 'a.txt'.
  • We reach a.txt's entry after 6 accesses.
  • The more command displays the file content; since the file is of size 1 block, we read its inode and then its block (2 more accesses).
  • Overall: 8 accesses.
12
Q

The Ofer2000 Operating Systems, based on UNIX, provides the following system call:

rename(char *old, char *new)

This call changes a file’s name from ‘old’ to ‘new’. What is the difference between using this call, and just copying ‘old’ to a new file, ‘new’, followed by deleting ‘old’? Answer in terms of disk access and allocation.

A

In terms of allocation:

  • With copy+delete we may hit a situation where there is not enough free space for the new allocation; rename needs no new data blocks.
  • Renaming only involves updating the file's directory entry (and the cached inode table in the kernel).

in terms of disk accesses:

  • Changing a name involves almost no disk access - only the block holding the directory entry is updated.
  • Copying 'old' to 'new' requires reading the source inode, then accessing each one of its blocks and copying them into the blocks of another inode. That is, a lot of accesses.
13
Q

What would be the maximal size of a file in a UNIX system with an address size of 32 bits if :

  • The block size is 1K
  • The block size is 4K
  • The i-node has 10 direct block entries,
  • single, double & triple indirect
A
  • Virtual address: 2^32.
  • Block size is 1K (a 1K block holds 1024/4 = 256 block addresses):
  • 10 direct blocks - 10K
  • single indirect - 256K = 2^8 K
  • double indirect - 256^2 K = 2^16 K
  • triple indirect - 256^3 K = 2^24 K
  • total: a bit over 2^24 K = 16GB, but a 32-bit address caps the file at 4GB.

block size is 4K (a 4K block holds 1024 addresses):

  • 10 direct blocks - 40K
  • single indirect - 1024 * 4K = 4MB
  • double indirect - 1024^2 * 4K = 4GB
  • triple indirect - 1024^3 * 4K = 4TB, again capped at 4GB by the 32-bit address.
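A sketch of the computation, assuming 4-byte block addresses (so a 1K block holds 256 of them) and 10 direct entries:

```python
def max_file_blocks(block_bytes, addr_bytes=4, direct=10):
    """Total data blocks reachable from one inode."""
    ptrs = block_bytes // addr_bytes          # addresses per indirect block
    return direct + ptrs + ptrs ** 2 + ptrs ** 3

kb = 1024
print(max_file_blocks(1 * kb) * 1 * kb // 2 ** 30)   # ~16 (GB) with 1K blocks
print(max_file_blocks(4 * kb) * 4 * kb // 2 ** 40)   # ~4 (TB) with 4K blocks
```

Both results are further capped at 4GB if file offsets are 32 bits.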
14
Q
A
  • Recall two fundamental points about fork and files:
    1. Fork gives the child a reference to the parent's open files (the same struct file).
    2. Because parent and child share that reference:
  • The file-table entry is released only when both have called close.
  • Both see and advance the same offset.
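The shared offset is easy to demonstrate without fork: os.dup also yields two descriptors backed by a single open-file entry, just like parent and child after fork (the temp-file name is a throwaway):

```python
import os
import tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)

fd = os.open(path, os.O_WRONLY)
fd2 = os.dup(fd)            # second descriptor, same open-file entry (same offset)
os.write(fd, b"aaa")
os.write(fd2, b"bbb")       # continues at offset 3 instead of overwriting
os.close(fd)
os.close(fd2)

with open(path, "rb") as f:
    data = f.read()
print(data)                 # b'aaabbb' - the two writes shared one offset
os.unlink(path)
```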
15
Q
A

Very important points regarding threads:

  • When a thread opens a file, all other threads have access to that file, but the ref count of the struct file is 1. That is, in order to close the file, only one thread should call close.
  • All global variables are shared.
16
Q
  • What can a soft link provide?
  • What can a hard link provide?
A
  • A soft link provides merely the name of a path, and thus the name of the file. It has nothing to do with the file's inode.
  • A hard link is the creation of a new directory entry pointing at the same inode. It does not hold the name of the file.
  • The task of saving file names is given to the dirent struct. A hard link of a file has nothing to do with its name.
17
Q

What are the steps of finding /usr/ast/mbox?

A
  • The root directory is in memory, with the dirent structs it holds.
  • We find 'usr' there; through its inode we see its attributes and the list of its blocks. Within those blocks we search for the dirent with the name "ast", and proceed in the same manner down to 'mbox'.
18
Q

Does it matter how many references a page had with FIFO?

A

With FIFO it does not matter how much a page has been referenced. All that matters is when it was first loaded into memory.

19
Q

Why is the optimal page-replacement algorithm not realistic?

A
  • Because it is based on the idea that we know which pages are going to be requested, and according to that we choose which pages to keep in memory.
  • Thing is, we don't know the future.
20
Q
  • A computer has a virtual address of 38 bits.
  • Each page-frame is of size 16KB.
  • The system is managed by a two-level page table, where the first-level table occupies one page and the size of each page table entry is 4 bytes.
  • How many bits of the virtual address should be assigned to the first-level index and to the second-level index?
A

When we're told the page-frame is of size 16KB, we can be sure the "offset" part consists of 14 bits (2^14 = 16KB).

Now, if the first-level table occupies one page of 16KB, and each entry is of size 4B, then the first level has 2^14/2^2 = 2^12 entries. That is, its index consists of 12 bits.

Because the address is 38 bits, the remaining 38 - 14 - 12 = 12 bits form the second-level index.

In total: first level - 12 bits, second level - 12 bits, offset - 14 bits.
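The split can be re-checked mechanically:

```python
import math

va_bits = 38
offset_bits = 14                 # 16 KB frames -> 2^14 byte offset
pte_bytes = 4

# the first-level table occupies exactly one frame:
first_bits = int(math.log2(2 ** offset_bits // pte_bytes))   # 2^12 entries -> 12
second_bits = va_bits - offset_bits - first_bits             # what remains
print(first_bits, second_bits)   # 12 12
```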

21
Q
  • If the block size increases, does the size of the page table increase or decrease?
  • Will the chance for a TLB hit increase?
  • Will page-fault treatment take longer?
  • Will internal fragmentation decrease?
A

Decreases: with bigger blocks there are fewer pages, hence fewer entries.

Yes: each TLB entry now covers more memory. Instead of the working set being spread over x pages, it is spread over y pages, where y < x.

Yes: copying a bigger block from disk to memory takes longer.

No, it will most probably increase: the minimum allocation unit is now bigger, so requests for small chunks occupy much more space than they need.

22
Q
  • What does namex do?
  • What does dirlookup(ip, name) do?
A

Given a path, namex walks it component by component, finds the relevant inode, and returns a pointer to it; if not found, it returns 0.

dirlookup searches the directory inode ip, reading it in jumps of sizeof(struct dirent), for the dirent carrying the given name; if found, it returns the corresponding inode.

23
Q
  • What does iget(dev, inum) do?
  • What does idup(struct inode*) do?

What is the difference between link and ref in the inode struct?

A

iget finds the inode with number inum on device dev and returns the in-memory copy. It does not lock the inode and does not read it from disk.

  • idup increases ref by 1 and returns the ip.
  • ref counts in-memory references to the inode (OS level): it tells the kernel that someone is using it, so even if asked to release the inode - don't.
  • link counts the directory entries on disk that point at the inode (disk level): a hard link adds an additional dirent entry in another directory.
24
Q

What if during the execution context-switch between two threads of different processes, the TLB will not flush?

A
  • Each process has its own unique mappings from virtual to physical addresses.
  • After the context-switch, the virtual addresses of the new process's thread have nothing to do with those of the previous process. That is, if we don't flush, a given virtual address may still translate to a physical address of the previous process instead of the new one.
25
Q

what is a sector?

A

The data on the disk is presented as a numbered sequence of 512-byte blocks - called sectors.

26
Q
  • What does a buffer, in the buffer-cache, represent?
  • What is the advantage of a greater buffer size?
  • In order to find a block in the buffer-cache, what arguments shall we supply?
A
  • Each buffer represents the contents of one sector on a particular disk device.
  • With a bigger buffer size we can obtain greater throughput.
  • We need to supply the device number and the sector number.
27
Q

What are the steps of checking if some sector is found on the buffer cache?

A
28
Q

What does iderw do?

A

It updates a locked buffer as indicated by the flags:

  • If B_DIRTY is set - it writes the buffer to disk
  • Else, if B_VALID is not set - it reads the buffer from disk

It maintains that by keeping the list of pending requests in a queue and using interrupts to find out when each request has finished.

29
Q

iderw maintains a queue of requests, but how many operations at a time can the simple IDE disk controller handle?

A

only one

30
Q
  • What does idestart do?
  • What does ideintr do?
A

idestart issues either a write or a read command for the buffer's device and sector, according to the flags.

ideintr is where we get after the disk triggers an interrupt. ideintr consults the first buffer in the queue to find out which operation was happening. If the buffer was being read and the disk controller has data waiting, ideintr reads the data into the buffer with insl. Now the buffer is ready: ideintr sets B_VALID, clears B_DIRTY, and wakes up any process sleeping on the buffer. Finally, ideintr must pass the next waiting buffer to the disk.

31
Q

dirlookup(struct inode *dp, char *name, uint *poff)

uses:

readi(struct inode *ip, char *dst, uint off, uint n)

Explain the relation

A
  • dirlookup gets an inode and has the job of searching within its blocks, in steps of sizeof(struct dirent), for the entry which holds the given name.
  • It uses readi to read the content of each dirent; then it checks whether the name fits. If so, it saves the entry's offset and returns its inode.
  • readi calls bread, which searches the buffer cache using the device number and the block number on the device.
32
Q

We try to implement a buffer cache in a user library, such that every user has its own buffer cache. What's the problem with searching the cache with the same arguments we supply in bget - that is, the device number and the relevant block number?

A
  • The block number on the device is not accessible from user mode.
  • The full path of the file + the number of the block within the file (computed from the file offset) will do the job.
33
Q

Is it necessary to flush TLB after context-switch in an inverted page table?

A

No, because one of the keys in the hash function is process id.

34
Q
A
35
Q

Buffer cache

    1. How are block buffers identified?
    2. What are the two functional parts of the buffer cache?
    3. Which buffers belong to the free block list?
    4. Can a buffer be both in the free list and in the buffer cache?
    5. How is the buffer cache implemented?
       • How is the hash index generated?
    6. Where does the LRU method take part?
A
    1. Block buffers are uniquely identified by the owning device identifier and the block number of the buffer.
    2. The list of free block buffers and the cache itself.
    3. Block buffers are queued onto the free block list when they are first created or when they have been discarded.
    4. Yes. A free block is in the free list in addition to being on a single hash queue.
    5. The buffer cache is implemented as a hash table: a vector of pointers to chains of buffers that share the same hash index.
       • The hash index is generated from the owning device identifier and the block number of the data block.
    6. Buffers in the buffer cache are also queued onto LRU lists. There is an LRU list for each buffer type (clean, busy, dirty), and these are used by the system to perform work on buffers of a given type.
36
Q
    1. What's the relation between a buffer and a block?
    2. What happens when the kernel searches for a block and it is:
  • found and free
  • found and busy
  • not found and there are free buffers
  • not found, and while searching for a free buffer, "delayed-write" buffers are found
  • not found and the free list is empty
A
    1. Each buffer has a header that includes the pair (device number, block number).
    2. When the kernel searches for a block:
  • The buffer is marked "busy" and removed from the free list.
  • The process sleeps until the buffer is freed, then checks again.
  • A free buffer is allocated from the free list.
  • The delayed-write buffers are written to disk and moved to the MRU position, and a free buffer is then taken.
  • The requesting process blocks; when it is scheduled again, it goes through the hash queue again.
37
Q

How might multiprogramming decrease effective CPU time compared to single-programming?

A
    1. Bad scheduling, resulting in many context-switch operations.
    2. Processes that share resources and wait for one another to finish reading/writing.
    3. The TLB cannot be handled efficiently due to the large amount of processes, resulting in thrashing.
38
Q

Name three advantages of segmentation versus paging.

A
    1. Memory chunks can be adjusted (grown/shrunk).
    2. The memory is divided logically, based on the program.
    3. It is much easier to share resources - data/procedures.
    4. It is much easier to protect resources - because different types of data are separated.
    5. Linking is simpler.
39
Q

How is the translation of an address from virtual to physical done in two-level paging + segmentation?

A
40
Q

What are the pros & cons of an inverted page table?

A
  • Pro: reduced memory space - the maximum number of entries is the number of page frames in physical memory.
  • Con: longer look-up time, though it can be decreased using a hash table.
  • Con: because each index in the table refers to a specific page-frame, we can no longer map the same physical frame to multiple virtual page numbers in different processes.
41
Q

How can copy-on-write be done?

A
    1. On fork we set a new flag - COW - for both parent and child, and in copyuvm we change only this: the page table of the child holds the same values as the page table of the parent. In addition, we turn off the "write" bit in every PTE.
    2. Since the PTEs' write permission is turned off, any attempted write raises a page-fault. When that happens, we check the COW bit; if set, we determine who we are:
  • We check whether the COW flag is also set in the parent process. If so, we are the child: we allocate a page, copy the parent's page frame into it, and then point the child's PTE at that new frame.
  • If the parent's flag is not set, we are the parent: we find the child and allocate a page for it using the same process.
42
Q
    1. Does kalloc return a physical address?
    2. Does every page-frame in use by a user program have at least two mappings?
    3. Does V2P translate any virtual address into a physical one?
    4. Is the first-level page table loaded into the cr3 register as a physical address?
A
    1. No. It returns a virtual address in the kernel.
    2. Yes: one mapping is from the kernel space and the other via the user PTE.
    3. No, it translates only addresses between KERNBASE and KERNBASE+PHYSTOP.
    4. Yes: lcr3(V2P(p->pgdir))
43
Q

Explain the usefulness of the two-level paging.

A

A single-level page table is an array of 2^20 PTEs, each of size 4 bytes. That is, the size of a page table is 4MB. Each frame is of size 2^12.

In total - the virtual address space of each process is 2^32.

In order to hold its page table in RAM we need 4MB per process. When a lot of processes are involved, that takes a lot of space.

An alternative is to use two-level paging.

The page directory has 1K entries, each pointing to a page table, which itself has 1K PTEs. This way we only need to keep 4K in memory for the page directory, and when the desired page table is needed, we allocate another 4K.

In total we have 8K versus 4MB.
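The comparison in numbers (minimum-resident case: a process that touches only one 4MB region; the variable names are mine):

```python
PAGE = 4096
single_level = (2 ** 32 // PAGE) * 4   # 2^20 PTEs * 4 B = 4 MB, always resident
two_level_min = PAGE + PAGE            # one page directory + one page table = 8 KB
print(single_level // 2 ** 20, "MB vs", two_level_min // 2 ** 10, "KB")
```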

44
Q

What is the main thing you should look for when asked to find a possibility of two processes simultaneously accessing the CS?

A
  • The same pattern repeats in almost every test:
  • There is a flag which represents "intention". This flag must be set to false only at the last line of the exit procedure. If you see the flag set to false at the beginning of the exit procedure, that is what lets two processes into the CS simultaneously!
45
Q

When a process is waiting on a monitor, does it release the lock?

A

Of course!

46
Q

Where is bit-map usually located and what if we moved it to cache?

A

It is usually located on disk. The advantage of that is that the data survives a crash. The advantage of cache is of course - speed: we don't need to read/write from disk for every allocation. We can use the in-cache bit-map and write it back to the on-disk bit-map lazily.

47
Q

In the NTFS file system, a file's size can have two meanings:

  • size: the size of the file in bytes
  • size-on-disk: the size which the content of the file holds on disk.

Name three reasons which suggest a difference between the two sizes.

A
  1. The file is compressed on disk, thus size-on-disk is smaller.
  2. The file's size demands several blocks, but it doesn't fill them all completely. Therefore, its size-on-disk is greater than its actual size.
  3. The file is very tiny, thus stored inside the MFT record itself. Its size on disk is counted as the MFT record even though its real size is smaller.
48
Q
  • In a given system the size of each file is exactly 1MB.
  • For each one of the following file systems, suggest changes such that the amount of disk space required by the data structures implementing the file system is minimal:
  • FAT
  • NTFS
  • inode-based
A

First, we set each block on disk to be of size 1MB in every one of the file systems, so that each file occupies exactly one block.

FAT

  • Each file has a header which points to the first block of its linked list. Since we know the size of the file and that it occupies exactly 1 block, the linked list is not needed: we can have the directory entry point to the block itself.

NTFS

  • Instead of saving the runs in the MFT record, we can simply save the block number. The size field is also unnecessary.

inode-based

  • Remove the size field from each inode and leave only one block pointer in each inode.
49
Q

Provide a simple difference between inode-based and NTFS versus FAT.

A
  • FAT only points to the relevant blocks on disk; it does not save attributes.
  • Inodes do save attributes, and so do MFT entries.
50
Q

What is the difference between MFT records and inodes?

A
  • An MFT record weighs much more than an inode (1KB vs ~64 bytes).
  • The MFT file can be located anywhere; the inode table has a fixed location.
  • The MFT index is similar to the inode number.
  • The name of the file is in the MFT record - not in the i-node.
  • Data is sometimes stored in the MFT record itself, but never in the inode.
  • Allocation by runs is good for sequential access. In inodes, allocation by a tree of indexes (indirect blocks) is good for random access and more space efficient.
51
Q

How small/large directories are implemented in NTFS?

A

MFT record for a small directory

  • standard info about the directory
  • list of directory entries, each contains:
    • The MFT index for the file
    • The length of the file name
    • The file name itself
      • Both here and in the MFT record of the file.
    • various fields and flags

Large directories are implemented as B+ trees.

That is, each entry points to other MFT records, which may themselves hold further entries, and so on.

52
Q

After compression, in order to find a specific file we need a function telling us the compression-unit sizes. That by itself reduces the speed of random access.

A
53
Q

What may be the reasons for dynamically changing the priorities of processes?

A
  • Starvation: fixed priorities might result in some process getting no CPU time.
  • We can measure how much of its quantum a process actually uses, and change its priority accordingly.
  • Important operations - like writing to disk, or kernel operations - usually deserve a higher priority.
54
Q

What is the definition of FIFO in mutual exclusion?

A

If process p is waiting and process q has not yet started the doorway, then q will not enter the CS before p

55
Q
A
56
Q

What’s the difference between inverted page table and page table?

A

Inverted: contains one entry per physical frame, as opposed to one per virtual page as in standard page tables. This makes it much smaller. However, the lookup time can be very high, because typically the whole table has to be searched linearly.

57
Q

Why do processes interact with the OS via traps?

A

That's the only way by which services supplied to the user can run with full privileges.

58
Q
  • fork()
  • Is it possible that:
  • the parent writes to a page,
  • and the write is also visible in the child's page?

How come modifying a pointer after fork doesn't result in a change in both parent and child?

A
  • Sure. Shared files.
  • A pointer is some virtual address. Both parent and child hold the same virtual address, but each refers to a different page-frame in memory. Therefore, dereferencing the pointer will not result in a change in both parent and child.
59
Q

NOTE! wait() returns for a child even if the child exited before the parent reached the wait call.

A
60
Q

Amazing explanation of the difference between cache and TLB

A
61
Q
  • Round Robin might have a better turnaround time than Shortest-Job-First, because it may "accidentally" behave like preemptive SJF.
  • p1 - 3 CPU time units, arrival time: 0
  • p2 - 1 CPU time unit, arrival time: 1
  • Assume the quantum in RR is 1.
  • SJF - p1 finishes and then p2: (3+3)/2 = 3
  • RR - p1 runs, p2 ends, p1 ends: (4+1)/2 = 2.5
  • Nothing can be faster than preemptive SJF.
A
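The two averages can be reproduced with a small simulation (jobs map a name to an (arrival, burst) pair; newly arrived jobs queue ahead of the preempted one, a common RR convention; this is my sketch, not code from the course):

```python
from collections import deque

def rr_avg_turnaround(jobs, q=1):
    """Round Robin: jobs is {name: (arrival, burst)}; returns avg turnaround."""
    arrival = {n: a for n, (a, b) in jobs.items()}
    left = {n: b for n, (a, b) in jobs.items()}
    pending = sorted(jobs, key=lambda n: arrival[n])
    ready, t, done = deque(), 0, {}
    while left:
        while pending and arrival[pending[0]] <= t:
            ready.append(pending.pop(0))
        if not ready:
            t = arrival[pending[0]]      # idle until the next arrival
            continue
        n = ready.popleft()
        run = min(q, left[n])
        t += run
        left[n] -= run
        while pending and arrival[pending[0]] <= t:
            ready.append(pending.pop(0)) # arrivals queue before the preempted job
        if left[n] == 0:
            del left[n]
            done[n] = t - arrival[n]
        else:
            ready.append(n)
    return sum(done.values()) / len(done)

def sjf_avg_turnaround(jobs):
    """Non-preemptive shortest-job-first over the same job dict."""
    pending = sorted(jobs, key=lambda n: jobs[n][0])
    ready, t, total = [], 0, 0
    while pending or ready:
        while pending and jobs[pending[0]][0] <= t:
            ready.append(pending.pop(0))
        if not ready:
            t = jobs[pending[0]][0]
            continue
        ready.sort(key=lambda n: jobs[n][1])  # pick the shortest available job
        n = ready.pop(0)
        t += jobs[n][1]
        total += t - jobs[n][0]
    return total / len(jobs)

jobs = {"p1": (0, 3), "p2": (1, 1)}
print(rr_avg_turnaround(jobs), sjf_avg_turnaround(jobs))   # 2.5 3.0
```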
62
Q

A way to prevent semaphore starvation..

A