22 SDN Controllers Flashcards
What is an SDN controller?
A Software Defined Networking (SDN) controller centrally manages and monitors the network instead of you doing everything manually.
Traditional Network Monitoring Systems (NMS)
An NMS continuously polls your network devices using SNMP so it can pinpoint any misbehaving devices. The NMS also keeps track of how much traffic is moving through the network, and depending on the exact solution being used, everything from applications to VoIP call quality can be tracked. By default, an NMS polls the network devices every couple of minutes. Plus, when an interface goes down on a Cisco router, the router can send an SNMP trap to the NMS. Most NMS solutions can send an email or an SMS text message if someone’s on call for network support, or just display the alert so the Network Operations Center can investigate the issue.
Configuring SNMP
C3750X-SW01(config)#snmp-server community testlabRO RO
C3750X-SW01(config)#snmp-server community testlabRW RW
C3750X-SW01(config)#snmp-server host 10.20.2.115 traps testlabTRAPS
C3750X-SW01(config)#snmp-server source-interface traps vlan 310
C3750X-SW01(config)#snmp-server enable traps
C3750X-SW01(config)#do sh run | in enable trap
Network Health
NMS solutions usually show the status of your network with three colors:
■ Green - Healthy nodes with no reported problems.
■ Yellow - Nodes that are up but have reported issues like interfaces that are down, or a hardware issue like a fan that isn’t working.
■ Red - Nodes that aren’t reachable and are probably down.
NMS is configured with lots of default rules defining how a system should treat various events discovered by the SNMP polling.
Central Syslog
NMS solutions can also serve as a convenient central syslog server when troubleshooting. On the Syslog page, you can run searches to make it easier to filter exactly what you’re looking for. Syslog messages from a network device can also be used to notify the NMS that there’s a network issue. You can even configure rules that determine how the NMS reacts to whatever syslog messages you want. This lets you make things happen, like having the server run a script if the NMS receives an OSPF-related message. All we have to do to configure a Cisco router to send syslog messages to the NMS is point the router at the server and tell it what logging level to send traps at.
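As a minimal sketch, reusing the NMS address from the SNMP example above and assuming a severity level of informational, that configuration looks something like this:
C3750X-SW01(config)#logging host 10.20.2.115
C3750X-SW01(config)#logging trap informational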
Interface Information
You can easily view a graphical representation of various interface information through the interface utilization graph.
Hardware Health
When you’re troubleshooting manually, hardware health is notoriously hard to keep track of. The NMS makes life a whole lot easier by presenting us with an easy-to-read graph on our hardware health.
Network Information
The network monitoring system also offers a ton of information about what’s generally up with the network. Two graphs worth highlighting are Response Time & Packet Loss, which is great for troubleshooting those “slow Internet” phone calls, and CPU Load and Memory Utilization. The NMS also pulls in intel like CDP information to show you vitals like which devices are directly attached to the switch and which VLANs are on the switch, so it can populate other graphs. It analyzes routes within the routing table so it can determine if flapping is happening in your network, where a route appears and then disappears over and over. It culls even more advanced information too, like whether the switch is in a stack or not, revealed in the VLAN table.
Traditional Network Configuration Managers (NCMs)
An NCM manages your network configuration. Depending on the specific solution, the NCM can be the same server as your NMS, or it can be an entirely separate server with no integration. The NCM routinely backs up your configuration by connecting to each network device and copying the configuration over to the server. Network devices can also be set up to notify the NCM about any changes so the server knows to collect the new configuration. Once the config is on the server, the NCM lets you search through saved configs for keywords and compare configurations to see if there are any changes between saved versions. And you can push out configurations too! The NCM can push out simple configurations, and you can even use the NCM scripting language to make configuration templates to effect mass configuration changes across the network. Every NCM uses a different scripting engine for the template feature.
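As a rough illustration (the ${...} placeholder syntax here is purely hypothetical and varies by product), a template for standardizing an uplink might look something like this:
interface ${INTERFACE}
 description ${SITE_NAME} uplink
 ip address ${IP_ADDRESS} ${NETMASK}
 no shutdown
The NCM substitutes per-device values for the placeholders before pushing the result to each device.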
Traditional Networking
To really get what Software Defined Networking (SDN) is, you’ve got to understand how a regular router sends traffic first. When a router receives a packet, it jumps through several hoops before it can send that packet out toward its destination. Before the router can send out traffic, it has to know all the available destination routes. These routes are learned via a static route, a default route, or through a routing protocol. The router will also need an ARP entry for the next-hop IP address before it can send the traffic. The TTL on the packet is decreased by one as it passes through the router, and the IP header and Ethernet frame checksums are recalculated before the traffic is sent over the wire (a quick verification sketch follows the list below). Routers divide these different tasks into three different planes:
■ The management plane
■ The control plane
■ The data plane
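As a small sketch of checking those forwarding prerequisites (the addresses are just example values), you can confirm the route and the ARP entry for the next hop like this:
Router#show ip route 10.20.2.0
Router#show ip arp 10.20.1.1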
Management Plane
The management plane controls everything about logging into a network device, including Telnet and SSH access. SNMP is also included in the management plane, which allows Network Monitoring Systems to poll the device for information. HTTP and HTTPS are also part of this plane. APIs are considered management access too, including the REST APIs discussed back in the Automation chapter. Ports like the console port, the AUX port, and the management port are also found here.
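A minimal sketch of management-plane configuration that enables SSH access (the hostname, domain name, and credentials here are just example values) might look like this:
Router(config)#hostname R1
R1(config)#ip domain-name testlab.local
R1(config)#crypto key generate rsa modulus 2048
R1(config)#username admin privilege 15 secret S3cur3Pass
R1(config)#line vty 0 4
R1(config-line)#transport input ssh
R1(config-line)#login local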
Control Plane
The control plane is where all the protocols run and where all the decisions are made. The goal of this plane is to generate all the forwarding information needed to send the packet on toward its destination, so lots of important things happen in the control plane. Security functions like defining ACLs and NAT (deciding whether the packet’s source or destination needs to change) live here. Of course, everything to do with routing protocols like OSPF, including forming adjacencies and learning routes, occurs on this plane. ARP is also a big part of the control plane, since knowing the layer 2 address of the next hop is essential for the actual routing to occur. Other control plane protocols include things like STP, VTP, and MAC address tables on switches, as well as QoS and CDP/LLDP.
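For example (the process ID, network statement, and area are just example values), enabling OSPF so the router can form adjacencies and learn routes is purely control-plane work:
R1(config)#router ospf 1
R1(config-router)#network 10.20.1.0 0.0.0.255 area 0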
Data Plane
The data plane’s job is to take all the information presented by the control plane and use it to send the packet on its merry way. Everything that happens at the data plane directly affects traffic. Activities like encapsulating and de-encapsulating traffic as it arrives at and leaves the router, adding and removing packet headers as needed, plus actually dropping traffic that hits a deny statement on an ACL are all data plane tasks. Even the actual forwarding, where the packet moves from the inbound interface to the outbound interface, happens here as well.
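As a small sketch (the ACL number, addresses, and interface are example values), defining the ACL below is control-plane and management-plane work, but dropping the packets that match the deny entry as they cross the interface is done by the data plane:
R1(config)#access-list 110 deny ip 10.20.3.0 0.0.0.255 any
R1(config)#access-list 110 permit ip any any
R1(config)#interface GigabitEthernet0/1
R1(config-if)#ip access-group 110 in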
Southbound Interfaces
The Southbound Interface (SBI) is how the SDN controller actually talks with the network device, and there are lots of different ways it can do that depending on your specific solution. For instance, OpenDaylight, a popular open-source SDN controller, uses a protocol called OpenFlow to talk to switches. On the other hand, Meraki uses a proprietary solution right now since they manage everything themselves.
OpenFlow
Describes an industry-standard API defined by the ONF (opennetworking.org). It configures non-proprietary, white label switches and determines the flow path through the network. All configuration is done via NETCONF. OpenFlow first sends detailed and complex instructions to the control plane of the network elements in order to implement a new application policy. This is referred to as an imperative SDN model.
NETCONF
Even though not all devices support NETCONF yet, it provides a network management protocol standardized by the IETF. With the help of RPCs, you can install, manipulate, and delete the configuration of network devices using XML.
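On platforms that do support it (recent IOS-XE releases, for example; this is a sketch, not a universal command), turning on the NETCONF interface and checking for sessions can be as simple as:
Router(config)#netconf-yang
Router#show netconf-yang sessions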
onePK
This is a Cisco proprietary SBI that allows you to inspect or modify the network element configuration without hardware upgrades. It makes life easier for developers by providing software development kits for Java, C, and Python. onePK is now legacy, but it’s still possible to find it in the real world.
OpFlex
This is a southbound API used by Cisco ACI. OpFlex uses a declarative SDN model because the controller, which Cisco calls the Application Policy Infrastructure Controller (APIC), sends a more abstract, “summary policy” to the network elements. With a summary policy, the controller trusts the network elements to implement the required changes using their own control planes, since the devices use a partially centralized control plane.
SDN Solutions
Cisco APIC-EM
This was Cisco’s first real attempt at an enterprise SDN controller, and its main focus was configuring Cisco’s IWAN solution. Considered legacy these days, APIC-EM was succeeded by DNA Center.
Cisco DNA Center
This is Cisco’s main enterprise SDN controller.
Cisco ACI
This is Cisco’s Data Center focused SDN solution.
Cisco SD-WAN
This solution brings the benefits of SDN to the WAN. You’ll learn more about SD-WAN when you tackle the CCNP.
OpenDaylight
ODL is a popular open source OpenFlow controller. Cisco offers a bit of OpenFlow support, but Cisco definitely prefers their own SDN solutions due to OpenFlow limitations.
Controller-Based Architectures
Cisco SDN solutions like Digital Network Architecture (DNA) Center allow you to centrally manage your network devices’ configuration through several applications that live on the SDN controller. This is better than traditional configuration because if you need to make changes to your network, you just adjust the settings in DNA Center and they’re replicated to your network’s relevant devices. It ensures configuration is consistent everywhere at once, greatly reducing the risk of a typo when making changes manually. The downside: if you make a mistake in a template that’s being pushed to many devices, it can cause a huge problem. Controllers also give us a convenient central point for monitoring and automation since they’re usually aware of a large part of the network, if not all of it.
Campus Architecture
Switches are connected to each other in a hierarchical fashion. The upside to this approach is that troubleshooting is easy since the stuff that belongs in each layer of the model is well defined, just like how the OSI model makes it easier to understand what’s happening on the network and where. All the endpoints in the network connect to the access layer, where VLANs are assigned. Port-level features like port security or 802.1X are applied at this layer. Since access layer switches don’t have a lot of responsibilities and generally no layer 3 configuration aside from what’s needed for managing the switch, you can usually get away with cheaper layer 2 switches and save some coin. The distribution layer hosts all the SVIs and provides any IP-based services the network needs, like DHCP relay. The distribution switch uses layer 2 interfaces toward the access layer switches to terminate the VLANs, plus layer 3 interfaces to connect to the core switches, and it also runs a routing protocol to share routes with them. The core layer’s only job is providing high-speed routing between the distribution switches. It doesn’t offer any other services: it just makes sure packets get from one switch to another.
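A rough sketch of a distribution switch along those lines (the VLAN, addresses, and interface numbers are example values, and exact trunking commands vary by platform) might look like this:
DSW1(config)#interface Vlan10
DSW1(config-if)#ip address 10.10.10.2 255.255.255.0
DSW1(config-if)#ip helper-address 10.20.2.50
DSW1(config-if)#exit
DSW1(config)#interface GigabitEthernet1/0/1
DSW1(config-if)#description Layer 2 trunk to access switch
DSW1(config-if)#switchport mode trunk
DSW1(config-if)#exit
DSW1(config)#interface TenGigabitEthernet1/1/1
DSW1(config-if)#description Routed uplink to core
DSW1(config-if)#no switchport
DSW1(config-if)#ip address 10.0.0.1 255.255.255.252
DSW1(config-if)#exit
DSW1(config)#router ospf 1
DSW1(config-router)#network 10.0.0.0 0.0.0.3 area 0
DSW1(config-router)#network 10.10.10.0 0.0.0.255 area 0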
Spine/Leaf Architecture
The new and preferred architecture for controller-based networks and data centers is called CLOS, which stands for nothing other than the guy’s name who thought it up. CLOS is a spine/leaf design wherein you have two types of switches: a spine and a leaf. The leaf switch maps to the access and distribution layers in the Cisco 3-tier model and is what you connect your devices into. Each leaf switch has a high-bandwidth uplink to each spine switch. The spine switch is a lot like the core because its sole job is to provide super fast transport across the leaf switches. Because leaf switches only connect to the spine switches, not to other leaf switches, traffic is really predictable since all destinations in the fabric follow the same path: Leaf -> Spine -> Leaf. Because everything is 3 hops away, traffic is easily load balanced in the routing table via equal-cost multipath (ECMP). What’s more, it’s also very easy to expand the network. If you need more ports, just add a leaf switch. Need more bandwidth? Just add another spine switch.
SDN Network Components
One of the benefits of Software Defined Networking is that an SDN controller can abstract away the “boring stuff” so you can focus on the fun, more complex configurations. One of the ways SDN achieves this is by dividing the network into two different parts. The underlay is the physical network, focused on providing layer 3 connectivity throughout the network. The underlay typically uses the spine/leaf architecture we just discussed but can also use the campus architecture depending on the solution being used. For example, DNA Center’s Software Defined Access solution is based on a typical campus topology because it is aimed at enterprise networks. SD-Access does use slightly different names though: the access layer is called the edge, and the intermediate node is the equivalent of the distribution layer, but we don’t need to worry too much about that architecture. There is also the overlay component, which is where the services the SDN controller provides are tunneled over the underlay.
Underlay
■ MTU
■ Interface Config
■ OSPF or IS-IS Config
■ Verification
So the underlay is basically the physical network that provides connectivity so the overlay network can be built upon, or over, it. There’s usually only basic configuration on it, and its focus is on advertising the device’s loopback IP into OSPF or IS-IS (another link-state routing protocol that is beyond what we need to learn for the CCNA). Devices in the underlay tend to be cabled so they’re highly redundant, removing single points of failure and optimizing performance. One way to implement this is via a full mesh topology, where every device is connected to every other device. Even though a full mesh network provides maximum redundancy, it can get out of hand fast because of the number of links involved as your network grows.
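A minimal underlay sketch for one leaf switch (the addresses, interface, and MTU are example values, and per-interface MTU syntax varies by platform) covering the list above might look like this:
Leaf1(config)#interface Loopback0
Leaf1(config-if)#ip address 10.255.0.1 255.255.255.255
Leaf1(config-if)#exit
Leaf1(config)#interface TenGigabitEthernet1/0/1
Leaf1(config-if)#description Routed uplink to Spine1
Leaf1(config-if)#no switchport
Leaf1(config-if)#mtu 9216
Leaf1(config-if)#ip address 10.1.1.1 255.255.255.252
Leaf1(config-if)#exit
Leaf1(config)#router ospf 1
Leaf1(config-router)#router-id 10.255.0.1
Leaf1(config-router)#network 10.255.0.1 0.0.0.0 area 0
Leaf1(config-router)#network 10.1.1.0 0.0.0.3 area 0
Verification is then just a matter of checking the OSPF neighbors, for example with show ip ospf neighbor.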