24. Journaling and FFS Flashcards
What is not atomic when it comes to consistency?
Writing multiple disk blocks.
What is atomic when it comes to consistency?
Writing one disk block.
What is journaling?
Track pending changes to the file system in a special area on disk called the journal.
Following a failure, replay the journal to bring the file system back to a consistent state.
Walk through an example of a journal.
Dear Journal, here’s what I’m going to do today:
- Allocate inode 567 for a new file
- Associate data blocks 5, 87, and 98 with inode 567
- Add inode 567 to the directory with inode 33
- That’s it!
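As a concrete (if toy) illustration, here is a minimal sketch of writing that entry, assuming a hypothetical append-only journal file; the record names (`alloc_inode`, `assoc_blocks`, `add_to_dir`, `commit`) are invented for this example:

```python
import json
import os

JOURNAL = "fs.journal"  # hypothetical append-only journal region

def append_record(record):
    """Append one record to the journal and force it to disk before returning."""
    with open(JOURNAL, "a") as j:
        j.write(json.dumps(record) + "\n")
        j.flush()
        os.fsync(j.fileno())

# "Dear Journal, here's what I'm going to do today:"
append_record({"op": "alloc_inode", "inode": 567})
append_record({"op": "assoc_blocks", "inode": 567, "blocks": [5, 87, 98]})
append_record({"op": "add_to_dir", "inode": 567, "dir": 33})
# "That's it!" -- the commit record marks the entry as complete.
append_record({"op": "commit"})
```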
What happens when we flush cached data to disk?
We have to update the journal to record that the flushed changes are now reflected on disk.
This is called a checkpoint.
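Continuing the toy journal sketch above (reusing `append_record`), a checkpoint might look like this; `write_block` is a hypothetical raw block write:

```python
def checkpoint(dirty_cache):
    """Flush cached changes to their final on-disk locations, then mark the journal."""
    for block_number, data in dirty_cache.items():
        write_block(block_number, data)  # hypothetical raw block write
    # Everything logged before this record is now reflected on disk,
    # so recovery never needs to look at anything earlier.
    append_record({"op": "checkpoint"})
```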
What happens on recovery (as it pertains to journaling)?
Start at the last checkpoint and work forward, updating on-disk structures as needed.
Ex:
Dear Journal, I did everything listed above. Checkpoint!
Dear Journal, here’s what I have to do today:
1. Allocate … [Did this]
2. Associate …
3. Add …
4. That’s it!
[Scan back to the last checkpoint, then work through the remaining entries line by line, checking whether each operation has already been done and applying it if not; finally, add a new checkpoint.]
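In the toy journal format from the earlier sketch (reusing `JOURNAL` and `append_record`), recovery might look like this; `already_applied` and `apply_op` are hypothetical helpers that test whether one operation's effect is already on disk and redo it if not:

```python
import json

def recover():
    with open(JOURNAL) as j:
        records = [json.loads(line) for line in j]

    # Start just after the most recent checkpoint, if there is one.
    checkpoints = [i for i, r in enumerate(records) if r["op"] == "checkpoint"]
    pending = records[checkpoints[-1] + 1:] if checkpoints else records

    entry = []
    for record in pending:
        if record["op"] == "commit":
            for op in entry:                 # entry is complete: replay it
                if not already_applied(op):  # hypothetical idempotence check
                    apply_op(op)             # hypothetical redo of one operation
            entry = []
        else:
            entry.append(record)
    # Anything left in `entry` never committed: ignore it.
    append_record({"op": "checkpoint"})
```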
What do we do with incomplete journal entries when in recovery?
These are ignored, since replaying a partial set of operations could leave the file system in an inconsistent state.
What would happen if we processed the following journal entry?
Dear Journal, here’s what I’m going to do today:
- Allocate inode 567 for a new file
- Associate data blocks 5, 87, and 98 with inode 567
Who tf knows? The entry never committed, so we can't tell how far the original operation got. If we replayed it anyway, inode 567 would end up allocated and pointing at data blocks but never linked into any directory.
Observation: metadata updates (allocate inode, free data block, add to directory, etc.) can be represented compactly, and since each record fits within a single disk block, it can probably be written to the journal atomically.
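To make that concrete, here is a hedged sketch of packing one metadata update into a fixed-size binary record; the field layout is invented, but the point stands: the record is far smaller than a 512 B sector, so journaling it takes a single, atomic block write.

```python
import struct

# Invented record layout: 4-byte op code, 8-byte inode number,
# 8-byte argument (a data block number, a directory inode, etc.).
ALLOC_INODE, FREE_BLOCK, ADD_TO_DIR = 1, 2, 3

def pack_record(op, inode, arg=0):
    return struct.pack("<IQQ", op, inode, arg)  # 20 bytes total

record = pack_record(ADD_TO_DIR, 567, 33)  # "add inode 567 to directory 33"
assert len(record) <= 512                  # fits in one atomically-written sector
```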
What about data blocks themselves changed by write()?
We could include them in the journal, meaning that each data block would potentially be written twice (ugh).
We could exclude them from the journal, meaning that file system structures are kept consistent but file data is not.
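This exact trade-off shows up in practice: Linux's ext4, for example, journals both data and metadata when mounted with `data=journal` (accepting the double write), while its default `data=ordered` mode journals metadata only, writing data blocks out before committing the metadata that references them.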
What is the FFS?
The Berkeley Fast File System
Included in the Berkeley Software Distribution (BSD) Unix release in 1982.
Developed by Kirk McKusick.
FFS is the basis of the Unix File System (UFS), which is still in use and still developed by Kirk today.
What are some disk geometry-related questions that file systems might try to address?
- Where to put inodes?
- Where to put data blocks, particularly relative to the inodes they are linked to?
- Where to put related files?
- What files are likely to be related?
Why did many file systems prioritize writes to disk on the outside of the spinning disk over the inside?
Because, assuming you can keep up with the speed of the disk, you can read data faster from the outside edge than from the inside.
Think about the physics. The outside edge has farther to travel than the inside, but both edges make the same number of revolutions per second. That means the outside edge moves faster.
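A quick back-of-the-envelope check in code; the radii and RPM are assumed values, roughly typical of a 3.5-inch drive:

```python
import math

rpm = 7200                       # both edges spin at the same rate
revs_per_sec = rpm / 60
r_outer, r_inner = 0.046, 0.020  # assumed track radii in metres

for name, r in [("outer", r_outer), ("inner", r_inner)]:
    linear_speed = 2 * math.pi * r * revs_per_sec  # one circumference per revolution
    print(f"{name} edge: {linear_speed:.1f} m/s")
# outer edge: 34.7 m/s, inner edge: 15.1 m/s -- at the same bit density,
# more than twice as much data passes under the head per second outside.
```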
Given multiple heads stacked on top of each other, one per platter, how can we save a file to take advantage of this feature?
Save the file across all of the heads, so that it occupies the same track position on every platter (this set of tracks is called a cylinder). This means we can access a file on disk without having to move the heads.
Moving the heads is extremely slow.
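A hedged sketch of that placement policy, with invented geometry numbers: fill all the heads on the current track first, and only move the heads once the whole cylinder is full.

```python
NUM_HEADS = 8           # assumed: one head per platter surface
SECTORS_PER_TRACK = 63  # assumed geometry

def place_block(i):
    """Map the i-th block of a file to (track, head, sector), cylinder-first."""
    sector = i % SECTORS_PER_TRACK
    head = (i // SECTORS_PER_TRACK) % NUM_HEADS
    track = i // (SECTORS_PER_TRACK * NUM_HEADS)  # heads move only when this changes
    return track, head, sector

# The first 8 * 63 = 504 blocks all land on track 0: no head movement at all.
print(place_block(0), place_block(63), place_block(503), place_block(504))
```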
How did FFS improve disk block sizes?
Early file systems had a small block size of 512 B.
FFS introduced larger 4K blocks.
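The win is easy to quantify with a rough model; the 5 ms per-block positioning cost below is an assumed figure for illustration:

```python
file_size = 1 << 20         # 1 MiB file
overhead_per_block = 0.005  # assumed 5 ms of seek/rotation per block access

for block_size in (512, 4096):
    blocks = file_size // block_size
    print(f"{block_size} B blocks: {blocks} accesses, "
          f"{blocks * overhead_per_block * 1000:.0f} ms of positioning overhead")
# 512 B: 2048 accesses (~10240 ms); 4096 B: 256 accesses (~1280 ms), 8x fewer.
```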
How did FFS improve allocating contiguous blocks on disk?
Early file systems had no way to allocate contiguous blocks on disk.
FFS introduced an ordered free block list, allowing contiguous or near-contiguous block allocation.
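A minimal sketch of the idea (not FFS's actual data structure): keep the free list sorted so the allocator can hand out runs of adjacent block numbers, falling back to near-contiguous blocks when no run exists.

```python
def allocate(free_blocks, n):
    """Allocate n blocks, preferring a contiguous run from a sorted free list."""
    free = sorted(free_blocks)
    # Look for n consecutive block numbers.
    for i in range(len(free) - n + 1):
        if free[i + n - 1] - free[i] == n - 1:
            run = free[i:i + n]
            break
    else:
        run = free[:n]  # no contiguous run: take the lowest-numbered blocks
    for b in run:
        free_blocks.remove(b)
    return run

free = {3, 9, 10, 11, 12, 20}
print(allocate(free, 3))  # -> [9, 10, 11]: one contiguous run, no extra seeks
```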