Lesson 9.1 - Programming SDNs Flashcards
The OpenFlow API does not provide…
Specific guarantees about the level of consistency that packets along an end-to-end path can experience
What are 2 consistency problems with SDNs?
Packet-level consistency problems
Flow-level consistency problems
Packet-level consistency problem
Updates to multiple switches along a path in a network that occur at different times may result in problems such as forwarding loops
Flow-level consistency problem
If updates to switches along an end-to-end path occur in the middle of a flow, packets from the same flow may be subjected to 2 different network states
3 steps of SDN program
- Read/monitor network state, as well as various events that may be occurring in the network
* Events may include failures, topology changes, security events, etc.
- Compute policy based on the state that the controller sees from the network
* This is roughly the role of the decision plane: deciding the behavior of the network in response to various events
- Write policy back to the switches by installing the appropriate flow table state into the switches
- Consistency problems can arise in steps 1 and 3:
* In step 1: Controller may read state from network switches at different times, resulting in an inconsistent view of the network-wide state
* In step 3: Controller may be writing policy as traffic is actively flowing through the network, which can disrupt packets along an end-to-end path or packets that should be treated consistently because they’re part of the same flow
Reading and writing network state can be challenging because OpenFlow rules are simple match-action predicates, so it can be difficult to express complex logic. Need more sophisticated predicates.
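A minimal sketch of match-action semantics, assuming a hypothetical rule structure (not actual OpenFlow message formats): each rule matches on exact header-field values and maps to a single action, with first-match-wins ordering. Expressing anything richer, such as "port 80 traffic to this host, everything else elsewhere," already takes multiple ordered rules.

```python
# Hypothetical flow table: a list of {"match": ..., "action": ...} rules.
# This is an illustration of match-action semantics, not real OpenFlow.

def matches(rule, packet):
    """A rule matches if every field it names equals the packet's value."""
    return all(packet.get(f) == v for f, v in rule["match"].items())

def forward(flow_table, packet):
    """Apply the first matching rule; a table miss goes to the controller."""
    for rule in flow_table:
        if matches(rule, packet):
            return rule["action"]
    return "send_to_controller"  # table miss

flow_table = [
    {"match": {"dst_ip": "10.0.0.2", "tcp_port": 80}, "action": "fwd_port_1"},
    {"match": {"dst_ip": "10.0.0.2"},                 "action": "fwd_port_2"},
]

assert forward(flow_table, {"dst_ip": "10.0.0.2", "tcp_port": 80}) == "fwd_port_1"
assert forward(flow_table, {"dst_ip": "10.0.0.2", "tcp_port": 22}) == "fwd_port_2"
assert forward(flow_table, {"dst_ip": "10.0.0.3"}) == "send_to_controller"
```

Note that each rule is a simple conjunction of exact matches; disjunctions or negations must be "unfolded" into several ordered rules, which is one reason richer predicates are desirable.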
3 approaches to help guarantee consistency in reading state
Predicates
Rule unfolding
Suppression
Reasons a controller might want to write policy to change state in network
- Maintenance
* Unexpected failure
* Traffic engineering
*** These all require updating state in the network switches
When writing policy changes, we want to make sure forwarding remains…
correct and consistent
When writing policy changes, we’d like to maintain the following invariants…
- No forwarding loops
* No black holes (a router or switch receives a packet and doesn’t know what to do with it)
* No security violations (traffic is going where it shouldn’t be allowed to go)
What can happen if policies are written in an inconsistent fashion?
- You could have a forwarding loop.
* If rules are installed out of order, packets may reach a switch before the new rules do.
* We need atomic updates to the entire configuration.
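The loop scenario above can be sketched with a toy two-switch topology (the names S1, S2, and dst are hypothetical, not from the lecture): if S2's new rule lands before S1's, traffic bounces between the two switches.

```python
# Sketch of a forwarding loop caused by out-of-order rule installation.
# Toy topology: each table maps a switch to its next hop.

def trace(tables, start, max_hops=6):
    """Follow next-hop rules until reaching dst or giving up (possible loop)."""
    node, hops = start, []
    while node != "dst" and len(hops) < max_hops:
        node = tables[node]
        hops.append(node)
    return hops

old = {"S1": "S2", "S2": "dst"}   # old policy: S1 -> S2 -> dst
new = {"S1": "dst", "S2": "S1"}   # new policy: S1 -> dst, S2 -> S1

assert trace(old, "S1")[-1] == "dst"   # old policy alone is loop-free
assert trace(new, "S1")[-1] == "dst"   # new policy alone is loop-free

# Inconsistent mix: S2 has the new rule, S1 still has the old one.
mixed = {"S1": old["S1"], "S2": new["S2"]}
assert "dst" not in trace(mixed, "S1")  # S1 -> S2 -> S1 -> ... forever
```

Either policy on its own is loop-free; only the transient mix of the two creates the loop, which is exactly why atomic-looking updates matter.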
Solution to inconsistent policy writing?
- Use a two-phase commit:
- Packets are either subjected to the old config on all switches, or the new config on all switches. Not a mix.
- Idea: tag the packet on ingress such that the switches maintain copies of both P1 and P2 for some time.
- Once all switches receive rules corresponding to the new policy, then incoming packets can be tagged with P2.
- Only once we know that no more packets tagged with P1 are being forwarded through the network can we remove the rules corresponding to policy P1.
- Naive version of 2-phase commit requires doing this on all switches at once, which essentially doubles the rule space requirements.
- We can limit the scope/optimize by only applying this mechanism on switches that involve the affected portions of the traffic or the affected portions of the topology.
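The two-phase commit above can be sketched as follows, assuming a hypothetical data model where each switch keeps a small map from policy-version tag to forwarding action (the class and names are illustrative, not from any real controller):

```python
# Sketch of two-phase commit for policy updates. Ingress stamps packets
# with a version tag; switches hold rules for both versions during the
# transition, so every packet sees exactly one policy end to end.

class Switch:
    def __init__(self):
        self.rules = {}  # version tag -> forwarding action

    def install(self, version, action):
        self.rules[version] = action

    def remove(self, version):
        self.rules.pop(version, None)

    def process(self, tagged_packet):
        version, _payload = tagged_packet
        return self.rules[version]  # packet sees one consistent policy

switches = [Switch(), Switch()]
for sw in switches:
    sw.install("P1", "old_path")

# Phase 1: install P2 everywhere while ingress still tags packets P1.
for sw in switches:
    sw.install("P2", "new_path")

# Phase 2: P2 is on all switches, so ingress starts tagging with P2.
pkt = ("P2", "payload")
assert all(sw.process(pkt) == "new_path" for sw in switches)

# Once in-flight P1 packets have drained, remove the P1 rules.
for sw in switches:
    sw.remove("P1")
assert all("P1" not in sw.rules for sw in switches)
```

The doubled rule space is visible in the sketch: during the transition, every switch holds both a P1 and a P2 rule, which is what the scoping optimization mitigates.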
What problems can arise from inconsistent “writes” of network state?
- Inability to respond to failures
- Forwarding loops
- A flood of traffic at the controller
- Security policy violations
Answer: forwarding loops, a flood of traffic at the controller, and security policy violations.
How do you cope with inconsistency?
- Different controllers for different switches
- Keeping a “hot spare” replica
- Keeping the old and new state on the routers/switches
- Resolving conflicts on the routers
Answer: keeping the old and new state on the routers/switches.
What is network virtualization?
- An application of SDN
- One example is Mininet
- An abstraction of the physical network where multiple logical networks can be run on the same underlying physical substrate
- Each logical network has its own view, even though it may share nodes with other logical networks, as if it were running its own private version of the network.
* Nodes need to be shared or sliced. They are typically virtual machines (VMs)
* A single link in a logical topology might actually map to multiple links in the physical topology. These virtual links are typically achieved through tunneling: for example, a packet from A->B in the logical topology might be encapsulated as a packet that travels A->X->B in the physical topology.
In network virtualization, a single link in a logical topology might actually map to multiple links in the physical topology. The mechanism to achieve these virtual links is typically through:
tunneling
For example, a packet from A->B in the logical topology in the lecture might be encapsulated as a packet that goes from A->X->B in the physical topology.
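The encapsulation can be sketched as follows, assuming a hypothetical mapping from virtual links to physical paths (the A->X->B topology matches the lecture's example; the function names are illustrative):

```python
# Sketch of tunneling: one virtual link A->B is carried over the
# physical path A->X->B by wrapping the logical packet in outer headers.

# Hypothetical mapping from a virtual link to its physical path.
virtual_to_physical = {("A", "B"): ["A", "X", "B"]}

def encapsulate(packet, src, dst):
    """Produce one encapsulated hop per physical link along the path."""
    path = virtual_to_physical[(src, dst)]
    return [
        {"outer_src": s, "outer_dst": d, "inner": packet}
        for s, d in zip(path, path[1:])
    ]

hops = encapsulate({"src": "A", "dst": "B", "data": "hello"}, "A", "B")

# The virtual network sees one hop A->B; the substrate carries two,
# and the inner (logical) packet is unchanged at every physical hop.
assert [(h["outer_src"], h["outer_dst"]) for h in hops] == [("A", "X"), ("X", "B")]
assert all(h["inner"]["dst"] == "B" for h in hops)
```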
Network virtualization can also be thought of as an analogy to…
virtual machines
- In VMs, a hypervisor arbitrates access to the underlying physical resources, providing to each virtual machine the illusion that it’s operating on its own dedicated version of the hardware.
- Similarly, with virtual networking, a network hypervisor of sorts arbitrates access to the underlying physical network among multiple virtual networks.
Why use network virtualization?
- One of the main motivations: the ossification of Internet architecture
* In particular, because IP is so pervasive, it is difficult to make fundamental changes to the underlying Internet architecture and how it operates.
* Lots of work was done in the early 2000s on network overlays, but one-size-fits-all architectures proved very difficult to deploy. So, rather than trying to replace existing network architectures, network virtualization was intended to allow for easier evolution.
- Network virtualization enables evolution by letting multiple architectures exist in parallel; we didn't have to pick a single winner to replace IP.
- In practice, network virtualization has really taken off in multi-tenant data centers, where there may be multiple tenants/applications running on a shared cluster of servers. Well-known examples of this include Amazon’s EC2, Rackspace, and things like Google App Engine.
- Service providers such as Google, Yahoo, and so forth also use network virtualization to adjust the resources devoted to any particular service at a given time.
Network virtualization has really taken off in…
Multi-tenant data centers where there may be multiple tenants/applications running on a shared cluster of servers. Well-known examples of this include Amazon’s EC2, Rackspace, and things like Google App Engine.