Basic terms Flashcards
Compare NSX-T with NSX-V
NSX-T
decoupled from vCenter
Supports: ESXi, KVM, Bare Metal, Kubernetes, AWS and Azure
supports containers
Standalone solution
can point to vCenter in order to register hosts
NSX Manager and Controller are on the same appliance
Uses GENEVE for encapsulation
NSX-V
vCenter based; NSX Manager is registered with vCenter
Separate appliances for NSX Manager and NSX Controller
Uses the vSphere Distributed Switch
Uses VXLAN for encapsulation
NSX Management
Control Plane
Data Plane
Management Plane
Cluster of three virtual appliances
provides the user interface
pushes the desired configuration to devices
Control Plane
provided by the same NSX management cluster
dynamic state of logical routing, distributed firewall
learns topology information and pushes it down to the data plane
Data Plane
VMs
Containers
NSX Edge Nodes
NSX Transport Nodes
NSX Manager Roles
Policy
Manager
Controller
NSX Manager Cluster VIP
Each node in the management cluster has a dedicated IP, but the cluster is accessed through a VIP that points to one node at a time (the Leader Node)
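The VIP behavior can be pictured with a short sketch (hypothetical node IPs, not a real NSX API):

```python
# Sketch of a management-cluster VIP: each node keeps its dedicated IP,
# but clients always reach the current leader through one virtual IP.
class ManagementCluster:
    def __init__(self, node_ips, vip):
        self.node_ips = list(node_ips)  # dedicated IP per node
        self.vip = vip                  # the single VIP clients use
        self.leader = self.node_ips[0]  # VIP points at the leader

    def resolve_vip(self):
        """The VIP always resolves to exactly one node: the leader."""
        return self.leader

    def fail_over(self):
        """If the leader fails, the VIP moves to a surviving node."""
        survivors = [ip for ip in self.node_ips if ip != self.leader]
        self.leader = survivors[0]
        return self.leader

cluster = ManagementCluster(["10.0.0.11", "10.0.0.12", "10.0.0.13"], "10.0.0.10")
print(cluster.resolve_vip())  # leader behind the VIP: 10.0.0.11
print(cluster.fail_over())    # VIP now points at a new leader: 10.0.0.12
```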
NSX Manager Database
A distributed shared database to ensure all information is synchronized between all devices in the cluster. Replicated and distributed.
NSX Controller Functions
Logical Switching
Logical Routing
Distributed Firewall
CCP and LCP
CCP
Central Control Plane: runs on the NSX Manager and pushes information to the Local Control Plane (LCP) that runs on the transport nodes.
NSX Control Plane Sharding
Each Transport Node is controlled by one NSX Controller in the NSX Management cluster.
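One way to picture sharding is a deterministic mapping of each transport node to a single controller. This is only an illustrative hash-based sketch, not NSX's actual sharding algorithm:

```python
# Sketch of control-plane sharding: every transport node is owned by
# exactly one controller in the management cluster, and the mapping
# is stable across lookups.
import hashlib

def controller_for(transport_node: str, controllers: list) -> str:
    """Deterministically pick the one controller that owns this node."""
    digest = hashlib.sha256(transport_node.encode()).hexdigest()
    return controllers[int(digest, 16) % len(controllers)]

controllers = ["ctrl-1", "ctrl-2", "ctrl-3"]
owner = controller_for("esxi-01", controllers)
# Same node always maps to the same single controller.
assert owner == controller_for("esxi-01", controllers)
```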
Preparing Transport Nodes for NSX-T - Data Plane
Runs on:
Hypervisors
Bare Metal Servers
Transport Zone
N-VDS
TEPs
VIBs
MPA
Management Plane Agent
retrieves status of distributed firewall
retrieves statistics from Host
NSX-T Segment
Similar to a VLAN: identifies a Layer 2 segment
Spans multiple transport nodes
Like a distributed port group in vSphere
Identified by a VNI
Distributed Router
Used to route traffic between multiple segments
Spans multiple Transport Nodes
Exists on Edge Nodes
Distributed Firewall
Applies firewall rules directly on the VM Level
ARP Request without NSX
ARP Request with NSX
Without NSX you can’t have a layer 2 network that spans a layer 3 network. A router will drop the ARP broadcast.
NSX allows layer 2 extension by using the concept of overlay and underlay.
GENEVE and TEP
VMkernel port
payload - inner IP - inner MAC - VNI (GENEVE header) - outer IP - outer MAC
The VNI identifies the segment, so the receiving TEP can drop the frame into the correct segment, aka the correct logical switch
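The wrap/unwrap flow above can be sketched as follows (simplified dictionaries standing in for real GENEVE headers; segment names and TEP IPs are made up):

```python
# Sketch of GENEVE-style encapsulation: the original frame is wrapped
# with a VNI, then with the outer addresses of the source and
# destination TEPs.
def encapsulate(inner_frame: dict, vni: int, src_tep: str, dst_tep: str) -> dict:
    return {
        "outer": {"src_tep": src_tep, "dst_tep": dst_tep},  # TEP addresses
        "vni": vni,                                         # identifies the segment
        "inner": inner_frame,                               # untouched original frame
    }

def decapsulate(packet: dict, segments: dict):
    """The receiving TEP uses the VNI to drop the frame into the right segment."""
    return segments[packet["vni"]], packet["inner"]

segments = {5001: "web-segment", 5002: "db-segment"}
pkt = encapsulate({"src_mac": "aa:bb", "dst_mac": "cc:dd", "payload": "..."},
                  5001, "192.168.1.10", "192.168.1.20")
seg, frame = decapsulate(pkt, segments)
assert seg == "web-segment"          # VNI 5001 lands in the web segment
```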
Transport Zones
Identifies the scope of an NSX network
A collection of transport nodes that are connected by the GENEVE overlay
When a transport zone is created, an N-VDS is created with it.
Two types:
Overlay transport zone
each transport node can be a member of only one
forms the overlay part of the network
VLAN transport zone
used with endpoints that connect directly to VLAN-backed distributed port groups
supports 802.1q
NSX Uplink Profile
Has the settings for :
Teaming method
MTU
And more
VTEP IP Pool
Needed to allocate TEP IP addresses for nodes in the fabric
VLAN Transport Zone
We can create Segments inside it
each Segment is associated with a specific VLAN
VLAN-backed port groups are created on the ESXi hosts
Edge Nodes will connect to these segments
Transport Node Profile
collection of settings applied to host transport Node
transport zone
uplink profile
TEP Pool
Transport VLAN in Uplink Profile
The VLAN used for the underlay (TEP) connection
Logical Switching
Provided by the N-VDS Switch
NSX Controller Tables
MAC Table -> MAC to TEP mapping
ARP Table -> IP to MAC mapping
VTEP table -> VNI to TEP (IP and MAC) mapping
MAC Table
Which TEP each MAC address is reachable through
After a VM is detected, a MAC report is sent to the NSX Controller
Distributed across all Nodes in that VNI
ARP Table
ARP regular table
When an IP-to-MAC binding is detected for a VM, an IP report is sent to the NSX Controller
Replicated across all nodes in a specific VNI
Used to suppress ARP broadcasts Locally
VTEP Table
Tracks all TEPs participating in a VNI
Important for layer 2 broadcast in a VNI
Distributed to all hosts in that VNI
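The three tables, and how the ARP table enables local ARP suppression, can be sketched like this (all MACs, IPs, and VNIs are illustrative):

```python
# Sketch of the three controller tables.
mac_table = {"aa:bb:cc:00:00:01": "192.168.1.10"}      # MAC -> TEP
arp_table = {"10.0.1.5": "aa:bb:cc:00:00:01"}          # IP  -> MAC
vtep_table = {5001: ["192.168.1.10", "192.168.1.20"]}  # VNI -> TEPs

def resolve_arp(ip: str):
    """If the IP-to-MAC binding is already known, answer locally and
    suppress the ARP broadcast; otherwise fall back to flooding."""
    mac = arp_table.get(ip)
    if mac is not None:
        return mac, "suppressed"
    return None, "flood"

assert resolve_arp("10.0.1.5") == ("aa:bb:cc:00:00:01", "suppressed")
assert resolve_arp("10.0.9.9") == (None, "flood")
```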
Command to display tables
get logical-switch [uuid] mac-table
get logical-switch [uuid] arp-table
get logical-switch [uuid] vtep
BUM
Broadcast
Unknown Unicast
Multicast
BUM handling
Flooded inside the VNI using the VTEP table
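Flooding via the VTEP table amounts to head-end replication: one copy of the frame per remote TEP in the VNI. A minimal sketch, with made-up TEP IPs:

```python
# Sketch of BUM handling: replicate the frame to every other TEP that
# participates in the VNI, using the VTEP table.
def flood_bum(frame: dict, vni: int, local_tep: str, vtep_table: dict) -> list:
    copies = []
    for tep in vtep_table[vni]:
        if tep != local_tep:  # never send a copy back to ourselves
            copies.append({"dst_tep": tep, "vni": vni, "inner": frame})
    return copies

vtep_table = {5001: ["192.168.1.10", "192.168.1.20", "192.168.1.30"]}
out = flood_bum({"dst_mac": "ff:ff:ff:ff:ff:ff"}, 5001, "192.168.1.10", vtep_table)
# One copy per remote TEP in VNI 5001 (two copies here).
assert [c["dst_tep"] for c in out] == ["192.168.1.20", "192.168.1.30"]
```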
Routing without NSX
Traffic must be sent across a trunk out of the ESXi host to a physical router and then back on the correct VLAN. Same as classic inter-VLAN routing.
East West Routing with NSX-T DR
Kernel Module that runs on each host
Distributed to Hosts
IPv6 Support
Has a leg in each segment it is active on
Uses the TEP overlay to route packets to different hosts
Single Tier Routing
Each Transport Node has a T0 DR
T0 DRs are connected across a Transit Overlay Link
The Edge Node has a T0 DR and a Service Router (SR) with two connections: one to the transit network and one to the external network
Edge node has its own TEP too
Services Router
Handles N-S routing, NAT, DHCP, Load Balancing, Gateway Firewall, VPN, Bridging
Connects to the outside world via an external segment
A transit link connects the DR routers with the SR
North South Packet Walk with T0 Architecture
Packet sent from VM to DR gateway
DR routes the packet via its default route to the SR [over the TEP overlay]
SR Routes packet outside via the external network
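The DR's forwarding decision in this walk boils down to a longest-prefix route lookup: connected segments stay local, everything else follows the default route to the SR. A sketch with illustrative prefixes:

```python
# Sketch of the T0 DR decision: the DR knows its connected segments plus
# a default route pointing at the SR; anything non-local goes to the SR.
import ipaddress

dr_routes = [
    (ipaddress.ip_network("10.0.1.0/24"), "connected"),  # local segment
    (ipaddress.ip_network("0.0.0.0/0"), "SR via TEP"),   # default route to SR
]

def next_hop(dst: str) -> str:
    dst_ip = ipaddress.ip_address(dst)
    # Longest-prefix match: the most specific matching route wins.
    matches = [(net, hop) for net, hop in dr_routes if dst_ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert next_hop("10.0.1.7") == "connected"   # east-west, stays distributed
assert next_hop("8.8.8.8") == "SR via TEP"   # north-south, off to the SR
```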
NSX Multi Tier Routing Use Cases
Multi Tenant Support
Logical Separation between Provider Router and Tenant Router
Top tier is T0 gateway
Bottom tier is T1 gateway
Tenant has complete control of Tier-1 Gateway
Multi Tier Routing Connections
Service Interface : from T0 GW to VLAN segment
Router Link Interface : from T1 GW to T0 GW
T0 can also connect to overlay segments
Two Tier Routing on Same host
VM1 to Tenant 1 T1 DR
Tenant 1 T1 DR to T0 GW DR
T0 GW DR to Tenant 2 T1 DR
Two Tier Routing on Different hosts
VM1 to Tenant 1 T1 DR [inside host]
Tenant 1 T1 DR to T0 GW DR [inside host]
T0 GW DR to Tenant 2 T1 DR [TEP overlay]
Two Tier Routing External
VM1 -> Tenant 1 T1 DR
Tenant 1 T1 DR -> T0 GW DR
T0 GW DR -> T0 SR
SR High Availability Active Standby
All traffic flows through a single SR
Required for stateful services
Supported on T0 and T1
One edge node is preferred
Multiple GWs can run on each node
NSX-T Edge Nodes
Run network services that can't be distributed
North-South connectivity
Centralized services: DHCP, NAT, VPN, LB, L2 bridging, Service Interface, Gateway FW
VLAN Segments
A Layer 2 broadcast domain implemented as a traditional VLAN in the physical infrastructure.
This means traffic between two VMs on different transport nodes, attached to the same VLAN-backed segment, is carried over that VLAN on the physical network.