Networking Flashcards

1
Q

What are the stages of enterprise infrastructure?

A
  1. Monolithic app, with minimal network demands and proprietary protocols.
  2. Client-server, with high network demand inside the enterprise, applications walled within the enterprise, TCP/IP plus proprietary protocols.
  3. Web applications, with ubiquitous TCP/IP, access from anywhere, and servers broken into multiple units.
  4. Microservices, with infrastructure moved to cloud providers, servers broken into microservices, and an increase in server-to-server traffic.
2
Q

Regarding networking, what is the consequence of the increase in the performance of servers?

A

As the performance of servers increases over time, the demand for inter-server bandwidth naturally increases as well.

3
Q

Why does networking have no straightforward, horizontal scaling solution?

A

Because although doubling the leaf bandwidth is easy, if we assume that every server needs to talk to every other server, we must also deal with the bisection bandwidth.

4
Q

What is the bisection bandwidth?

A

It is the bandwidth across the narrowest cut that divides the cluster into two equal halves.

It characterizes network capacity since randomly communicating processors must send data across the middle of the network

If we assume that every server needs to talk to every other server, we need to double not just the leaf bandwidth, but the bisection bandwidth.
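The scaling pressure described above can be sketched numerically. A minimal Python sketch, assuming uniform all-to-all traffic (the per-server rate and function name are illustrative, not from the card):

```python
# Sketch: traffic crossing the bisection of a cluster under uniform
# all-to-all communication. Assumption (not from the card): each server
# sends `rate_gbps` in total, spread evenly over all other servers.

def bisection_demand_gbps(n_servers: int, rate_gbps: float) -> float:
    """Traffic crossing the cut that splits the cluster into two halves.

    With uniform all-to-all traffic, each of the n/2 servers on one side
    sends roughly half of its total output to the other side.
    """
    half = n_servers // 2
    return half * (rate_gbps / 2)

# Doubling the cluster doubles the bisection demand, not just the leaf demand:
print(bisection_demand_gbps(1024, 10.0))  # 2560.0 (Gbps)
print(bisection_demand_gbps(2048, 10.0))  # 5120.0 (Gbps)
```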

5
Q

What are the principles of designing the data center network?

A

Very scalable in order to support a very large number of servers

Minimum cost in terms of basic building blocks

Modular to reuse simple, basic modules

Reliable and resilient

6
Q

Data center networks can be classified into three main categories. What are they?

A

Switch-centric architectures, which use switches to perform packet forwarding.

Server-centric architectures, which use servers with multiple network interface cards (NICs) to act as switches in addition to performing other computational functions.

Hybrid architectures, which combine switches and servers for packet forwarding.

7
Q

What are the possible traffic flows inside a data center network?

A
  1. North-South Traffic:
    • Definition: This is the traffic that flows between external clients/users and the data center. It moves in and out of the data center.
    • Examples:
    • Client requests to a web server.
    • Data uploads or downloads from external users.
    • Path: External network (e.g., internet) → data center firewall → load balancers → application servers (and vice versa).
  2. East-West Traffic:
    • Definition: This is the traffic that flows within the data center, between servers and other infrastructure components.
    • Examples:
    • Communication between application servers and database servers.
    • Data replication between storage systems.
    • Server-to-server communications for distributed computing tasks.
    • Path: Server-to-server traffic within the same data center, often involving ToR, EoR, or MoR switches.
8
Q

What is East-West traffic generally used for?

A

Storage replication

VM Migration

Network function virtualization (NFV)

9
Q

What are the layers in a classical network

A
  1. Core Layer:
    • Function: The primary role of the core layer is to provide high-speed, reliable connectivity between different parts of the network, often connecting multiple aggregation/distribution layers.
  2. Aggregation (or Distribution) Layer:
    • Function: The aggregation layer serves as an intermediary between the access and core layers, aggregating traffic from multiple access layer switches before forwarding it to the core layer. It often handles policy enforcement, routing, and load balancing.
  3. Access Layer:
    • Function: The access layer is where end devices, such as servers, storage devices, and other endpoints, connect to the network. It is the first point of entry for data into the network infrastructure.
10
Q

What are ToR architectures?

A

Top of Rack (ToR) architecture is a common design approach in data center networks where a network switch is placed at the top (or sometimes the middle) of each server rack. This architecture simplifies cabling and improves network management and performance.

11
Q

What is the design and layout of a ToR architecture?

A
  1. Design and Layout:
    • Switch Placement: A network switch is installed at the top or middle of each server rack.
    • Server Connectivity: Servers within the rack connect directly to the ToR switch via short Ethernet cables.
    • Uplink Connections: The ToR switch connects to aggregation or core switches via uplinks, which are usually high-speed connections (e.g., 10Gbps, 40Gbps, or higher).
12
Q

What are the advantages and disadvantages of ToR architectures

A

Advantages: simpler cabling, since servers connect to a switch in the same rack and the number of cables leaving the rack is limited. The number of ports required per switch is therefore also limited.

Disadvantages: higher complexity of switch management, since one switch per rack means many switches to manage.

13
Q

What is an EoR architecture?

A

End of Row (EoR) architecture positions aggregation switches at the end of each row of racks in a corridor. Servers in a rack are connected directly to the aggregation switch in another rack.

14
Q

What are the advantages and disadvantages of EoR architectures

A

Disadvantages: since aggregation switches must have a large number of ports, cabling is more complex, and longer cables are also required.

Advantages: simpler switch management

15
Q

How can we increase bandwidth in a three-tier network?

A

Bandwidth can be increased by adding switches at the core and aggregation layers, and by using routing protocols such as Equal-Cost Multi-Path (ECMP) that equally share the traffic among different routes.
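ECMP's traffic sharing can be illustrated with a small sketch: hashing the flow's 5-tuple pins every packet of a flow to one of the equal-cost paths, which spreads load while keeping each flow's packets in order. The hash function and names here are illustrative assumptions, not from the card:

```python
# Sketch of ECMP path selection via 5-tuple hashing (illustrative names).
import hashlib

def ecmp_pick_path(src_ip, dst_ip, src_port, dst_port, proto, n_paths):
    """Pick one of n equal-cost paths; a given flow always maps to the same path."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# The same flow is always pinned to the same path (avoids packet reordering):
p1 = ecmp_pick_path("10.0.0.1", "10.0.1.9", 40000, 80, "tcp", 4)
p2 = ecmp_pick_path("10.0.0.1", "10.0.1.9", 40000, 80, "tcp", 4)
assert p1 == p2 and 0 <= p1 < 4
```

Real switches typically use a cheaper hardware hash than SHA-256; the stable flow-to-path mapping is the point.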

16
Q

What are the advantages and disadvantages of increasing the bandwidth in a three tier network?

A

Although it is a very simple solution, it can be very expensive in large data centers.

17
Q

How is a DCN Clos topology divided?

A
  1. Leaf Layer:
    • Position: The leaf layer is the first level of switches (ToR switches) that connect directly to servers and other endpoints.
  2. Spine Layer:
    • Position: The spine layer consists of high-performance, dedicated switches (aggregation switches) that form the core of the network.
18
Q

Where does the Clos topology come from? And what is its main logic?

A

Spine-leaf topologies are borrowed from the telephone world.

Given M middle-stage switches and N inputs/outputs: if M ≥ N, there is always a way to rearrange existing communications to free a path between any pair of idle input/output. Also, if M ≥ 2N − 1, there is always a free path between any pair of idle input/output without any rearrangement.
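The two Clos conditions above can be written as simple predicates (a minimal sketch; M and N as defined on the card):

```python
# Sketch of the classic Clos network conditions: M middle-stage switches,
# N inputs/outputs per edge switch.

def rearrangeably_non_blocking(m: int, n: int) -> bool:
    # A path can always be freed by rearranging existing connections.
    return m >= n

def strictly_non_blocking(m: int, n: int) -> bool:
    # A free path always exists without touching existing connections.
    return m >= 2 * n - 1

print(rearrangeably_non_blocking(8, 8))  # True
print(strictly_non_blocking(8, 8))       # False: needs M >= 15
print(strictly_non_blocking(15, 8))      # True
```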

19
Q

What is the function of the leaf?

A

Function: These switches handle traffic within the rack or row and connect to the spine switches for inter-rack communication.

20
Q

What is the function of the spine?

A

Function: Spine switches interconnect all leaf switches, providing a high-speed backbone for the network.

21
Q

What are the advantages of Clos topology

A

Use of homogenous equipment

Simple routing

The number of hops is the same for any pair of nodes

Small blast radius

22
Q

What is a point of delivery?

A

A POD or point of delivery is a module or group of network, compute, storage, and application components that work together to deliver a network service

23
Q

What are some of the advantages of POD?

A

It increases the modularity, scalability, and manageability of the data center.

24
Q

What is a camcube server centric architecture?

A

CamCube is a novel server-centric architecture designed to optimize data center network (DCN) performance, scalability, and efficiency. Developed by researchers at Microsoft Research, CamCube rethinks traditional data center designs by integrating network functionality directly into servers. Here are the key features and concepts of CamCube server-centric architecture:

Key Concepts

  1. Server-Centric Design:
    • Unlike traditional DCN architectures that rely on dedicated network switches and routers, CamCube integrates network functionality into servers themselves. Each server has multiple network interfaces, and servers directly connect to each other
25
Q

What is the topology of a CamCube architecture?

A

3D Torus Topology:
• CamCube employs a 3D torus topology, where each server is connected to six neighbors (front, back, left, right, top, and bottom) in a three-dimensional grid. This topology provides multiple paths for data transmission, enhancing fault tolerance and load balancing.
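The six-neighbor wraparound structure can be sketched as follows (the grid side k and coordinate convention are assumptions for illustration, not from the card):

```python
# Sketch: the six neighbors of a server at (x, y, z) in a 3D torus of
# side k. The modulo gives the wraparound that makes it a torus rather
# than a plain 3D mesh.

def torus_neighbors(x, y, z, k):
    return [
        ((x + 1) % k, y, z), ((x - 1) % k, y, z),  # left / right
        (x, (y + 1) % k, z), (x, (y - 1) % k, z),  # front / back
        (x, y, (z + 1) % k), (x, y, (z - 1) % k),  # top / bottom
    ]

# A corner server wraps around to the far side of the grid:
print(torus_neighbors(0, 0, 0, 3))
# Every server has exactly six distinct neighbors (for k >= 3):
assert len(set(torus_neighbors(1, 1, 1, 3))) == 6
```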

26
Q

What is a DCell

A

Scalable and cost-efficient hybrid architecture that uses switches and servers for packet forwarding

Servers inside a cell communicate through a switch; servers in different cells communicate through servers.
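DCell's recursive growth can be sketched with the server-count recurrence from the original DCell paper (the formula is not stated on the card, so treat it as a hedged addition): a level-k DCell is built from t(k-1) + 1 copies of a level-(k-1) DCell, where t(k-1) is that cell's server count.

```python
# Sketch of DCell's recursive size: DCell_0 is n servers on one switch;
# DCell_k combines t_{k-1} + 1 copies of DCell_{k-1}, so
# t_k = t_{k-1} * (t_{k-1} + 1).

def dcell_servers(n: int, k: int) -> int:
    """Servers in a DCell_k whose basic cell has n servers on one switch."""
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

print(dcell_servers(4, 0))  # 4
print(dcell_servers(4, 1))  # 20
print(dcell_servers(4, 2))  # 420 -> doubly-exponential growth
```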

27
Q

What are the drawbacks of the CamCube

A

It requires servers with multiple NICs to assemble a 3D torus network, has long paths (traffic passes through many intermediate nodes, causing high latency), and has high routing complexity.

28
Q

What are the drawbacks of the DCell

A

Long communication paths, many required NICs, and increased cabling costs

29
Q

What is a BCube

A

A hybrid and cost-efficient architecture that can scale up through recursion.

Uses BCube0 as a building block, which consists of N servers connected to an N-port switch.
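BCube's recursive scaling can be sketched with its server-count formula (from the BCube paper, not stated on the card): a BCube_k built from N-port switches holds N^(k+1) servers.

```python
# Sketch of BCube's recursive size: BCube_0 is n servers on one n-port
# switch; BCube_k is built from n copies of BCube_{k-1} plus n^k extra
# n-port switches, giving n^(k+1) servers.

def bcube_servers(n: int, k: int) -> int:
    return n ** (k + 1)

print(bcube_servers(8, 0))  # 8
print(bcube_servers(8, 1))  # 64
print(bcube_servers(8, 2))  # 512
```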

30
Q

Describe how you would connect 3072 servers using 32-port switches in a leaf-and-spine topology.

A

For 3072 servers we would need 96 leaves, with 32 servers per leaf. Each leaf would also have 32 ports to connect to the spine, and therefore we would have 32 spine switches. Since we need to connect each leaf to each spine, each spine needs 96 ports.
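The arithmetic above can be checked with a short sketch (the function name and parameter split are illustrative):

```python
# Sketch: size a leaf-spine fabric given server count and per-leaf port budget.

def leaf_spine_sizing(servers: int, servers_per_leaf: int, uplinks_per_leaf: int):
    leaves = servers // servers_per_leaf  # racks of servers
    spines = uplinks_per_leaf             # one uplink from each leaf to each spine
    spine_ports = leaves                  # each spine connects to every leaf
    return leaves, spines, spine_ports

print(leaf_spine_sizing(3072, 32, 32))  # (96, 32, 96)
```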

31
Q

It is also possible to derive the network size from the number of ports on a switch. How?

A

Given switches with 2K ports, each leaf can connect K servers and use the remaining K ports as uplinks to K spines. This gives us K spines of 2K ports each, which can therefore be connected to 2K leaves.
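The 2K-port rule yields 2K leaves × K servers per leaf = 2K² servers in total; a minimal sketch (function name is an assumption):

```python
# Sketch: maximum servers in a two-tier leaf-spine fabric where every
# switch has `ports` = 2K ports, split evenly between servers and uplinks.

def max_servers(ports: int) -> int:
    k = ports // 2      # K servers and K uplinks per leaf
    leaves = ports      # each of the K spines has 2K ports -> 2K leaves
    return leaves * k   # 2K leaves x K servers each = 2K^2 servers

print(max_servers(64))  # 2048 servers with 64-port switches
print(max_servers(32))  # 512 servers with 32-port switches
```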

32
Q

In leaf-spine networks, we have a scalability problem due to the number of ports in each switch. What is a way to solve it?

A

One way to solve it is to use a POD-based model, the fat tree: transform each spine-leaf group into a POD (point of delivery) and add a super-spine tier. This way you can increase the number of ports leaving the system without changing the switches used.

See picture in the Comp Infra album

33
Q

What is the difference between POD and virtual chassis

A

In a virtual chassis, the spine structure is modularized and replicated.

34
Q

Who reads and forwards packets in server-centric architectures?

A

The servers