Traffic Engineering Flashcards
Recall the three steps of traffic engineering: Measure, Model, and Control.
- What are the two things that need to be measured, and how could each be measured?
- What are two ways that control could be implemented?
- The topology: the connectivity and capacity of each link and router. This could be done by routers self-reporting, similar to how they exchange information in a link-state protocol, but in practice it is probably more often entered manually by a network engineer. We also need to measure the traffic, or offered load. This can be done using the “simple counters” measurement technique we learned about earlier, since we want to know how much traffic is on each part of the network but don’t necessarily need the details of specific flows.
- The “traditional” way to implement control is by adjusting link weights, which indirectly affects the routes calculated by the routing protocol. Another way to implement control is by using SDN to directly control the routes that are used on the network.
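A minimal sketch of the “traditional” control knob (assuming the networkx package; the topology and weights are made up for illustration): raise one link weight and let shortest-path routing recompute, the way tweaking IGP weights indirectly steers traffic.

```python
# Illustrative only: a tiny topology where raising one link weight
# shifts the shortest path, the way adjusting IGP weights steers traffic.
import networkx as nx

G = nx.Graph()
G.add_edge("A", "B", weight=1)
G.add_edge("B", "D", weight=1)
G.add_edge("A", "C", weight=2)
G.add_edge("C", "D", weight=1)

# Before: traffic from A to D follows A-B-D (total weight 2).
print(nx.shortest_path(G, "A", "D", weight="weight"))  # ['A', 'B', 'D']

# "Control": raise the A-B weight, e.g., to push traffic off a congested link.
G["A"]["B"]["weight"] = 5

# After: the routing protocol would now prefer A-C-D (total weight 3).
print(nx.shortest_path(G, "A", "D", weight="weight"))  # ['A', 'C', 'D']
```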
In intra-AS multipath, traffic can be split among paths with the same path length (i.e., sum of link weights along that path). In inter-AS multipath, what properties of the paths (i.e., of the BGP advertisements) need to be equal in order to allow multipath over those paths?
- LOCAL_PREF, the local preference parameter
- AS_PATH length, as determined by counting the number of ASes in the AS_PATH
- MULTI_EXIT_DISC, the MED value
- IGP metric to the NEXT_HOP, i.e., equal “hot potato” routing distance
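A minimal sketch of checking those four attributes for equality to decide multipath eligibility; the Route structure and field names below are invented for illustration, not a real BGP implementation.

```python
# Illustrative only: two BGP routes are eligible for multipath when the
# four attributes listed above are equal between them.
from dataclasses import dataclass

@dataclass
class Route:
    local_pref: int
    as_path: list     # list of AS numbers
    med: int
    igp_metric: int   # IGP distance to the NEXT_HOP ("hot potato" distance)

def multipath_eligible(r1: Route, r2: Route) -> bool:
    return (r1.local_pref == r2.local_pref
            and len(r1.as_path) == len(r2.as_path)
            and r1.med == r2.med
            and r1.igp_metric == r2.igp_metric)

r1 = Route(local_pref=100, as_path=[7018, 3356], med=0, igp_metric=10)
r2 = Route(local_pref=100, as_path=[1299, 2914], med=0, igp_metric=10)
print(multipath_eligible(r1, r2))  # True: traffic can be split across both
```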
How does using pods and pseudo-MACs improve the scalability of a Layer 2 network?
This changes flat Layer 2 addressing (MAC addresses) into hierarchical addressing (pseudo-MAC addresses). As a result, switches only need to store a forwarding entry for each host in their own pod plus one entry for each other pod, rather than an entry for every host on the entire network.
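A minimal sketch of the idea behind PortLand-style pseudo-MACs: pack pod, position, port, and VM-id fields into a 48-bit address so the pod bits form a prefix that switches can forward on. The 16.8.8.16-bit split matches the PortLand PMAC layout, but the helper function itself is illustrative.

```python
# Illustrative only: encode pod/position/port/vm-id into a 48-bit pseudo-MAC.
def make_pmac(pod: int, position: int, port: int, vmid: int) -> str:
    value = (pod << 32) | (position << 24) | (port << 16) | vmid
    raw = value.to_bytes(6, "big")
    return ":".join(f"{b:02x}" for b in raw)

# Two hosts in the same pod share a prefix, so switches in other pods need
# only one forwarding entry that covers them both.
print(make_pmac(pod=1, position=2, port=0, vmid=1))  # 00:01:02:00:00:01
print(make_pmac(pod=1, position=3, port=1, vmid=7))  # 00:01:03:01:00:07
```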
What are the advantages of using a Jellyfish topology over a traditional hierarchical data center topology?
Network load balancing – prevents bottleneck links and heavily loaded aggregation or core switches
Higher capacity – since the load is balanced, the same number of switches can reasonably support more hosts
Shorter paths – shorter average number of hops between any two hosts results in faster network performance
Incremental expansion – allows adding switches to the network without reconfiguring the existing network infrastructure or adding additional “higher-level” switches
What are the drawbacks or problems with using a Jellyfish topology?
Does not handle heterogeneous switch devices well, except when expanding the network with switches larger than those originally used.
Long cable runs between random switch pairs may be necessary, but are inconvenient and difficult to install
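A minimal sketch of the Jellyfish idea (assuming the networkx package; the switch count and port count are made up): wire the switch layer as a random regular graph and look at the average switch-to-switch hop count, which tends to be shorter than in a comparable tree.

```python
# Illustrative only: approximate a Jellyfish switch fabric with a random
# regular graph and measure the average switch-to-switch hop count.
import networkx as nx

n_switches = 64         # switches in the fabric (illustrative)
inter_switch_ports = 6  # ports per switch used for switch-to-switch links

jellyfish = nx.random_regular_graph(d=inter_switch_ports, n=n_switches, seed=1)

# Random regular graphs of this size are almost always connected.
if nx.is_connected(jellyfish):
    # Typically around 2-3 hops at this size.
    print(nx.average_shortest_path_length(jellyfish))
```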
Heterogeneous Switches
Switches that are not uniform in their properties and may differ in, for example, port counts and link speeds.
Traffic Engineering
Process of reconfiguring the network in response to changing traffic loads, to achieve some operational goal.
3 Steps of Traffic Engineering
Measure: Figure out the current traffic loads
Model: Predict how configuration changes affect the underlying paths
Control: Reconfigure the network to exert control over how traffic flows through it
Interdomain Traffic Engineering Goals
Predictability: Should be possible to predict how traffic flows will change in response to changes in network configuration
Limit Influence of Neighboring Domains: Use BGP policies to limit how neighboring ASes might change their behavior in response to policy changes
Reduce Overhead of Routing Changes: Achieve traffic engineering goals with changes to as few IP prefixes as possible.
Characteristics of a Datacenter
Multi-tenancy
Elastic resources
Flexible service management
Main Objectives of VL2
- Achieve Layer 2 semantics across the entire data center topology. This is done with name/location separation and a resolution (directory) service that plays a role similar to PortLand's Fabric Manager (see the sketch below).
- Rely on flow-based random traffic indirection (Valiant Load Balancing) to achieve uniform high capacity between servers and to balance load across the links in the topology.
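A minimal sketch of name/location separation: a directory maps an application address (AA) to the location address (LA) of the ToR switch the server currently sits behind, so a server migration only updates the directory entry. The addresses below are invented for illustration.

```python
# Illustrative only: directory service mapping application addresses (AAs)
# to the location addresses (LAs) of their current ToR switches.
directory = {
    "10.0.0.5": "20.1.1.1",   # AA of a server -> LA of its current ToR
    "10.0.0.9": "20.1.2.1",
}

def resolve(aa: str) -> str:
    """Return the LA to tunnel to; applications only ever see the AA."""
    return directory[aa]

print(resolve("10.0.0.5"))      # 20.1.1.1

# After migrating the server behind another ToR, only the directory changes:
directory["10.0.0.5"] = "20.1.3.1"
print(resolve("10.0.0.5"))      # 20.1.3.1
```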
Goals of Valiant Load Balancing
- Spread traffic evenly across the servers
- Ensure the traffic load is balanced independently of the destinations of the traffic flows
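A minimal sketch of flow-based random traffic indirection: hash each flow's 5-tuple to pick an intermediate switch, so all packets of one flow stay in order while different flows spread out regardless of where they are headed. The switch names and flow tuple are illustrative.

```python
# Illustrative only: pick a random-looking intermediate switch per flow,
# independent of the flow's destination.
import hashlib

intermediate_switches = ["int-1", "int-2", "int-3", "int-4"]

def pick_intermediate(flow_5tuple: tuple) -> str:
    """Hash the flow so all of its packets bounce off the same intermediate
    switch (preserving packet order), while different flows spread out."""
    digest = hashlib.sha256(str(flow_5tuple).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(intermediate_switches)
    return intermediate_switches[index]

flow = ("10.0.0.5", 40212, "10.0.0.9", 443, "tcp")
print(pick_intermediate(flow))   # same switch every time for this flow
```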