Lecture 9: DFS Flashcards

1
Q

DFS

A

A distributed file system (DFS) is a file system spread across multiple networked computers to
provide a unified storage space and access to files. It allows multiple machines to work together and
share resources to provide:
* Fault tolerance: the system continues to operate even if an individual node fails.
* Scalability: adding more nodes increases high-performance storage capacity, so larger amounts of files can be handled.
* Consistency and security: people and programs can access and manage files stored on different computers across a network as if they were all in one place, using familiar commands and tools. Concurrency-control mechanisms keep simultaneous access and updates to the same file organized, so everyone sees the most up-to-date version and no data is lost or corrupted.
* Efficiency: storage and workload are shared across nodes.

2
Q

HDFS

A

It is a distributed file system designed to run on commodity hardware, where failure is the norm rather
than the exception. An HDFS instance may consist of thousands of server machines, each storing part of
the file system's data. Since there is a huge number of components and each component has a non-trivial
probability of failure, some component is always non-functional.
Detection of faults and quick, automatic recovery from them is therefore a core architectural goal of HDFS.
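
To make "failure is the norm" concrete, here is a small back-of-the-envelope calculation; the failure rate and cluster size are assumed for illustration, not taken from the HDFS documentation:

```python
# Why failure is "the norm": with thousands of machines, the chance that
# everything is healthy at once is vanishingly small.
p_daily_failure = 0.001           # assumed: 0.1% chance a node fails on a given day
n_machines = 3000                 # assumed cluster size

p_all_healthy = (1 - p_daily_failure) ** n_machines
print(f"P(no failures today) = {p_all_healthy:.3f}")                  # ~0.050
print(f"Expected failures/day = {n_machines * p_daily_failure:.0f}")  # ~3
```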

3
Q

HTTP (Hypertext Transfer Protocol) and FTP

A

HTTP defines how requests from clients (such as web browsers) are made to servers and how servers respond with the requested information.
* FTP (File Transfer Protocol) is a predecessor to modern file transfer methods: a client connects to a remote machine to send or fetch files.
* It operates primarily in a 1-to-1 client-server configuration, and because each session is independent, this model makes it scalable by default.
* FTP does not support replication or transparent migration, though server-supported caching is possible.
* While not location-independent, FTP can be perceived as such.
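
To make the request/response model concrete, here is a minimal HTTP GET using Python's standard library; the host and path are placeholders:

```python
# Minimal HTTP GET: the client sends a request line plus headers,
# and the server answers with a status line, headers, and a body.
import http.client

conn = http.client.HTTPSConnection("example.com")  # placeholder host
conn.request("GET", "/")                           # client -> server request
response = conn.getresponse()                      # server -> client response
print(response.status, response.reason)            # e.g. 200 OK
body = response.read()                             # the requested resource
conn.close()
```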

4
Q

NFS (Network File System)

A

NFS is a distributed file system protocol enabling clients to access remote files and directories as if they were local.
* Any node can act as a server, exporting directories that other nodes can mount into their local file system.
* NFS emphasizes high performance through caching and read-ahead, though it is not highly fault-tolerant or scalable.
* It supports transparent migration and is potentially location-independent.
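
A tiny illustration of NFS's access transparency, assuming a hypothetical export already mounted at /mnt/nfs_share: once mounted, remote files are used exactly like local ones.

```python
# Access transparency: the mount point below is a made-up example; the
# OS kernel's NFS client translates these ordinary file calls into
# remote procedure calls to the server.
from pathlib import Path

shared = Path("/mnt/nfs_share/report.txt")  # hypothetical NFS mount point

shared.write_text("hello from a client\n")  # looks exactly like local I/O
print(shared.read_text())
```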

5
Q

Redundant Array of Independent Disks (RAID)

A

RAID technology combines multiple physical drives into an array for improved redundancy and performance. RAID is typically implemented at the hardware level within a single server or storage system.
* RAID 0 offers increased performance but no redundancy.
* RAID 1 provides data mirroring for redundancy.
* RAID 2-5 offer various forms of data striping with parity (dedicated parity in RAID 2-4, parity distributed across all drives in RAID 5) for a balance of performance and fault tolerance.
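
A minimal sketch of how parity-based RAID survives a single drive loss: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt from the rest.

```python
# RAID-style parity in miniature: parity = XOR of all data blocks.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks on three drives
parity = xor_blocks(data)            # stored on a fourth drive

# Simulate losing drive 1 and rebuilding its block from the survivors:
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```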

6
Q

Google File System (GFS)

A

A highly scalable distributed file system developed by Google to meet the needs of its large-scale data processing applications.

7
Q

GFS – Why 64MB Chunks

A
  • Much larger than typical file system block size: Most file systems use block sizes of 4KB to 16KB, whereas GFS uses 64MB chunks to minimize the overhead of managing many small blocks.
  • Reduces client interaction with the master: By using larger chunks, the client can cache chunk locations, reducing the number of interactions with the master server, even for multi-terabyte datasets.
  • Reduces network overhead: Larger chunks mean fewer chunks overall, leading to less metadata to manage and fewer network operations to locate and access the chunks.
  • Reduces the size of metadata stored in the master: With each 64MB chunk requiring only 64 bytes of metadata, the master can handle a large number of chunks efficiently.
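
A quick sanity check of the metadata claim, using the 64MB-chunk and 64-byte figures from the bullets above and an assumed 1 PB of data:

```python
# Metadata footprint of the GFS master: ~64 bytes per 64 MB chunk.
chunk_size = 64 * 2**20          # 64 MB per chunk
meta_per_chunk = 64              # ~64 bytes of master metadata per chunk

data = 1 * 2**50                 # assumed: a 1 PB file system
chunks = data // chunk_size      # 16,777,216 chunks
metadata = chunks * meta_per_chunk

print(f"{chunks:,} chunks -> {metadata / 2**30:.1f} GiB of metadata")
# ~1 GiB of metadata for a petabyte: small enough to keep in master RAM.
```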
8
Q

GFS – Reading

A
  • Application initiates a read request.
  • GFS client translates the request and sends it to the master.
  • Master responds with the chunk handle and replica locations.
  • Client selects a replica location and sends the request to it.
  • Chunk server sends the requested data to the client.
  • Client forwards the data to the application.
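
A minimal sketch of this read path; the classes and method names are hypothetical stand-ins, not the actual GFS API:

```python
# Illustrative GFS read path (all names are made up for this sketch).
CHUNK_SIZE = 64 * 2**20

class Master:
    """Stub master: maps (file, chunk index) -> (chunk handle, replicas)."""
    def __init__(self, table):
        self.table = table
    def lookup(self, filename, chunk_index):
        return self.table[(filename, chunk_index)]

class ChunkServer:
    """Stub chunkserver holding chunk data by handle."""
    def __init__(self, chunks):
        self.chunks = chunks
    def read(self, handle, offset, length):
        return self.chunks[handle][offset:offset + length]

def gfs_read(master, filename, offset, length):
    chunk_index = offset // CHUNK_SIZE        # which chunk holds the byte range
    handle, replicas = master.lookup(filename, chunk_index)
    server = replicas[0]                      # e.g. pick the closest replica
    # File data flows directly between client and chunkserver, never
    # through the master.
    return server.read(handle, offset % CHUNK_SIZE, length)

server = ChunkServer({"h1": b"hello, gfs"})
master = Master({("/logs/a", 0): ("h1", [server])})
print(gfs_read(master, "/logs/a", 0, 5))      # b'hello'
```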
9
Q

GFS - writes

A
  • Application initiates the write request.
  • GFS client translates the request and sends it to the master.
  • Master responds with the chunk handle and replica locations.
  • Client pushes write data to all replica locations; data is stored in internal buffers of chunk servers.
  • Client sends a write command to the primary chunk server.
  • Primary chunk server determines the serial order for secondaries and instructs them to perform the write.
  • Secondaries perform the write as instructed.
  • Secondaries respond to the primary confirming the write.
  • Primary responds back to the client.
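
A minimal sketch of this write flow, with hypothetical names rather than the real GFS API; note how pushing the data (step 4) is decoupled from committing the write in the primary's serial order:

```python
# Illustrative GFS write flow (names are made up for this sketch).
class ChunkServerStub:
    def __init__(self, name):
        self.name = name
        self.buffer = None          # staged data, not yet applied
        self.log = []               # applied mutations, in serial order
    def push(self, data):
        self.buffer = data          # data lands in an internal buffer first
    def apply(self, serial_no):
        self.log.append((serial_no, self.buffer))  # apply in primary's order
        self.buffer = None

def gfs_write(primary, secondaries, data):
    # Client pushes the data to every replica before any write command.
    for server in [primary] + secondaries:
        server.push(data)
    # Client sends the write command to the primary; the primary picks a
    # serial number and instructs the secondaries to apply in that order.
    serial_no = len(primary.log)
    primary.apply(serial_no)
    for server in secondaries:
        server.apply(serial_no)     # secondaries then ack to the primary
    return "ok"                     # primary responds back to the client

p, s1, s2 = ChunkServerStub("p"), ChunkServerStub("s1"), ChunkServerStub("s2")
print(gfs_write(p, [s1, s2], b"record"), p.log)
```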
10
Q

GFS – Mutation (Updates)

A
  • Logs and checkpoints are replicated across multiple machines: Ensures data durability and fault tolerance.
  • A mutation is committed when its log record has been pushed to disk on all master replicas.
  • Single active master server: Manages metadata and chunk locations.
  • Lease mechanism: Master grants a lease to one replica for coordinating updates, typically with a 60-second timeout.
  • Ordered mutations: Ensures consistency by performing mutations in the same order across all chunk replicas.
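
A small sketch of the master's lease bookkeeping under the 60-second timeout mentioned above; all names are illustrative:

```python
# The master grants one replica a time-limited lease to act as primary
# for a chunk; after expiry it is free to grant the lease elsewhere.
import time

LEASE_TIMEOUT = 60.0  # seconds, as in the GFS design

class LeaseTable:
    def __init__(self):
        self.leases = {}  # chunk handle -> (primary replica, expiry time)

    def grant(self, handle, replica):
        self.leases[handle] = (replica, time.time() + LEASE_TIMEOUT)

    def primary_for(self, handle):
        entry = self.leases.get(handle)
        if entry and entry[1] > time.time():
            return entry[0]          # lease still valid
        return None                  # expired: master may grant a new one

table = LeaseTable()
table.grant("h1", "chunkserver-3")
print(table.primary_for("h1"))       # chunkserver-3 while the lease is live
```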
11
Q

GFS – Record Append Algorithm

A
  1. Client pushes write data to all replica locations.
  2. Primary checks if the record fits within the specified chunk.
  3. If the record does not fit:
    - Primary pads the chunk.
    - Informs secondaries to pad the chunk similarly.
    - Instructs the client to retry the append.
  4. If the record fits:
    - Primary appends the record.
    - Informs secondaries to append the record.
    - Collects responses from secondaries and responds to the client.
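
A sketch of the primary's decision in steps 2-4, with hypothetical names; GFS, not the client, chooses the append offset:

```python
# Primary-side record-append decision (illustrative, not the real API).
CHUNK_SIZE = 64 * 2**20

def record_append(chunk: bytearray, record: bytes):
    if len(chunk) + len(record) > CHUNK_SIZE:
        # Pad to the chunk boundary (secondaries are told to pad the same
        # way), then ask the client to retry on the next chunk.
        chunk.extend(b"\x00" * (CHUNK_SIZE - len(chunk)))
        return "retry"
    offset = len(chunk)              # the primary picks the offset
    chunk.extend(record)             # secondaries append at the same offset
    return offset

chunk = bytearray()
print(record_append(chunk, b"event-1"))  # 0: appended at offset 0
print(record_append(chunk, b"event-2"))  # 7: appended right after
```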
12
Q

GFS – BigTable

A

BigTable: A distributed storage system built on GFS, designed to scale to petabytes of data and thousands of machines.
Usage: Supports more than 60 Google applications (e.g., Google Earth, Orkut).
Data Model: Provides a simple data model, unlike traditional relational databases, making it suitable for large-scale, distributed data storage and access.
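
BigTable's "simple data model" is essentially a sparse, sorted map from (row key, column key, timestamp) to an uninterpreted byte string; a miniature sketch:

```python
# BigTable's data model in miniature: a sparse, multi-dimensional map
# from (row, column, timestamp) to an uninterpreted value.
table = {}

def put(row, column, timestamp, value):
    table[(row, column, timestamp)] = value

def get_latest(row, column):
    versions = [ts for (r, c, ts) in table if r == row and c == column]
    return table[(row, column, max(versions))] if versions else None

put("com.cnn.www", "contents:", 1, b"<html>v1</html>")
put("com.cnn.www", "contents:", 2, b"<html>v2</html>")
print(get_latest("com.cnn.www", "contents:"))  # newest version wins
```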

13
Q

Dynamo (Amazon)

A
  • Dynamo is Amazon’s highly available key-value store designed for always-writeable storage with minimal update rejections.
  • It is decentralized, using consistent hashing and replication for data partitioning and maintaining consistency.
  • Dynamo is optimized for latency-sensitive applications, ensuring 99.9% of operations complete within a few hundred milliseconds.
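
A miniature sketch of the consistent-hashing ring behind Dynamo's partitioning; this is simplified, as real Dynamo uses virtual nodes and skips duplicate physical hosts when building the preference list:

```python
# Consistent hashing in miniature: each node owns a point on a hash ring;
# a key is stored on the first node clockwise from its hash, plus the
# next N-1 nodes as replicas (Dynamo's "preference list").
import bisect
import hashlib

def ring_hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

nodes = ["node-a", "node-b", "node-c", "node-d"]
ring = sorted((ring_hash(n), n) for n in nodes)
points = [p for p, _ in ring]

def preference_list(key: str, n_replicas: int = 3):
    start = bisect.bisect(points, ring_hash(key)) % len(ring)
    return [ring[(start + i) % len(ring)][1] for i in range(n_replicas)]

print(preference_list("shopping-cart:42"))  # coordinator + 2 replicas
```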
14
Q

HDFS – Architecture

A

Designed to store large amounts of data across many machines while providing high throughput and fault tolerance.
Components:
  • NameNode: manages metadata; keeps track of the directory tree, file names, block locations, and file attributes.
  • DataNodes: store the actual data blocks; they handle read and write requests from clients, report the blocks they store back to the NameNode, and perform block creation, deletion, and replication as directed by the NameNode.
  • Secondary NameNode: helps with checkpointing; it periodically merges the NameNode's namespace image with the edit log to keep the edit log from growing too large and to create a new checkpoint.
  • HDFS clients: interact with both the NameNode and DataNodes to perform file operations such as creating, reading, and writing files.
Data Management: uses large block sizes (typically 128 MB or 256 MB) to reduce metadata overhead and improve data transfer rates; designed for batch processing rather than low-latency data access.
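
Some quick arithmetic showing why large blocks keep NameNode bookkeeping small; the file size is an assumed example:

```python
# HDFS block bookkeeping: a large block size means few blocks per file,
# so the NameNode's per-file metadata stays small.
BLOCK_SIZE = 128 * 2**20          # 128 MB default block size
REPLICATION = 3                   # default replication factor

file_size = 10 * 2**30            # assumed: a 10 GB file
blocks = -(-file_size // BLOCK_SIZE)          # ceil division -> 80 blocks
stored = file_size * REPLICATION              # raw bytes on disk

print(f"{blocks} blocks, {stored / 2**30:.0f} GB stored across DataNodes")
```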

15
Q

Rack awareness

A

The practice of placing data and its replicas with knowledge of the cluster's rack topology, i.e. knowing where the data is and where the replicas should be created. A rack can hold 30-40 DataNodes. Note: with the default replication factor of 3, the stored size is 3× the raw data size.
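
A simplified sketch of HDFS's default rack-aware placement: first replica on the writer's node, second on a node in a different rack, third on another node in that second rack (names and topology below are made up):

```python
# Simplified HDFS default replica placement (illustrative).
import random

def place_replicas(writer_node, topology):
    # topology: rack name -> list of DataNodes in that rack
    writer_rack = next(r for r, nodes in topology.items() if writer_node in nodes)
    first = writer_node                                   # local to the writer
    other_rack = random.choice([r for r in topology if r != writer_rack])
    second = random.choice(topology[other_rack])          # off-rack replica
    third = random.choice([n for n in topology[other_rack] if n != second])
    return [first, second, third]

topology = {"rack1": ["dn1", "dn2"], "rack2": ["dn3", "dn4", "dn5"]}
print(place_replicas("dn1", topology))  # e.g. ['dn1', 'dn4', 'dn5']
```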

16
Q

How to cope with DataNode failures

A

Replication: replicas of a block are placed on different nodes and not all in the same rack, so a single node or rack failure never loses every copy.
The default replication factor is 3. Lost blocks can be re-replicated quickly from the surviving copies.