Networking Flashcards
OSI Physical Layer
- Is responsible for the transmission and reception of bits between a device and a physical transmission medium
- It converts these digital bits into electrical, radio, or optical signals
OSI Data Link Layer
- Provides node-to-node data transfer (a link between two directly connected nodes)
- It detects and possibly corrects any transmission errors that may occur in the physical layer
- Is divided into two sublayers: MAC and LLC
- MAC (media access control) is responsible for controlling how devices in a network gain access to a medium and permission to transmit data
- LLC (logical link control) is responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization
OSI Network Layer
- Provides the means of transferring variable-length packets from one node to another, even when the two nodes are on different networks
- It also manages data transfer rates, via flow control as a means of easing congestion
OSI Transport Layer
- Deals with the packetisation of data that comes in from the session layer so that it’s suitable for the network layer
- Provides the acknowledgement of the successful data transmission and sends the next data if no errors occurred
OSI Session Layer
- Manages connections between source and destination
- This layer is responsible for gracefully closing a session, as well as for session checkpointing and recovery in the case of interruptions
- Allows the establishment, use and termination of a connection
OSI Presentation Layer
- Makes decisions on how data is represented before it is sent across the network
- Takes care of data compression and encryption/decryption
OSI Application Layer
- The layer closest to the end user
- This layer and the user interact directly with the software application
- This layer also serves as a window for the application services to access the network and for displaying the received information to the user
TCP/IP Link Layer
- Corresponds to the OSI physical and data link layers
- Ensures that the model can send and receive data from other layers
- Defines the protocols and hardware required to connect a host to a physical network and to deliver data across it
- Packets from the Internet layer are sent down this layer for delivery within the physical network
- The destination can be another host in the network, itself, or a router for further forwarding
TCP/IP Network Layer
- Corresponds to the OSI network layer
- Performs two functions:
1) Host addressing and identification: this is accomplished with a hierarchical IP addressing system
2) Packet routing: the basic task of sending packets of data (datagrams) from source to destination by forwarding them to the next router closer to the final destination
- It also defines the protocols responsible for the logical transmission of data over the entire network: IP, ICMP, ARP
TCP/IP Transport Layer
- Corresponds to the OSI transport layer
- Is responsible for end-to-end communication and ensuring delivery of data is error-free
- Also shields the upper-layer applications from the complexities of data transfer
- Contains two protocols: UDP and TCP
TCP/IP Application Layer
- Corresponds to the OSI session, presentation and application layers
- Is responsible for process-to-process communication and controls user-interface specifications
- Contains the communications protocols and interface methods used in process-to-process communications across an IP network
CSMA/CD
- Forces the sender to listen to the wire before transmitting, to check for the presence of a digital signal
- If no other hosts are transmitting packets, the sender begins sending the frame
- The sender also monitors the wire to make sure no other hosts begin transmitting
- However, if another host begins transmitting at the same time and a collision occurs, the transmitting host sends a jam signal that causes all hosts on the network segment to stop sending data
- The CSMA/CD rules define how long the device should wait if a collision occurs
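The waiting rule mentioned above is, in classic Ethernet, truncated binary exponential backoff; a minimal sketch (the function name and slot-time model here are illustrative, not from the flashcards):

```python
import random

def backoff_slots(collision_count, max_exponent=10):
    """Truncated binary exponential backoff, as used by classic Ethernet.

    After the n-th collision, a station waits a random number of slot
    times chosen uniformly from [0, 2**min(n, max_exponent) - 1], so the
    average wait doubles with each successive collision.
    """
    exponent = min(collision_count, max_exponent)
    return random.randint(0, 2 ** exponent - 1)

# After the 1st collision: wait 0 or 1 slots; after the 3rd: 0..7 slots.
for n in (1, 3, 16):
    slots = backoff_slots(n)
    assert 0 <= slots <= 2 ** min(n, 10) - 1
```

The truncation at `max_exponent` keeps the wait bounded even after many consecutive collisions.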
Metcalfe’s Law
A network’s value increases with the square of the size of the network (in this case, the number of users)
IP address types
- Unicast (a single packet destination)
- Broadcast (one packet goes to every local host)
- Multicast (one packet goes to a group of subscribed hosts)
- Anycast (IPv6 only, a packet goes to any one of a selection of servers, usually the closest in some sense)
IP routing classes
- Local routing (within an autonomous system), requiring an interior gateway protocol
- Non-local routing (between autonomous systems), requiring an exterior gateway protocol
Classful systems
- Original architecture used until CIDR was introduced
- Divides IP addresses into one of five classes, depending on their leading bits
- Class A:
1) Most significant bit is 0
2) Next 7 bits give the network number
3) Leaves 24 bits for determining the host in any of the networks
- Class B:
1) Most significant bit sequence is 10
2) Next 14 bits give the network number
3) Leaves 16 bits for determining the host in any of the networks
- Class C:
1) Most significant bit sequence is 110
2) Next 21 bits give the network number
3) Leaves 8 bits for determining the host in any of the networks
- Class D:
1) Most significant bit sequence is 1110
2) Remaining 28 bits give the multicast group address
3) Used for multicasting (when one host sends data to more than one destination)
- Class E:
1) Most significant bit sequence is 1111
2) Reserved for future use
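Because the leading bits fix which values the first octet can take, the class follows directly from that octet; a small illustrative helper (names are my own):

```python
def ip_class(address):
    """Return the classful class (A-E) of a dotted-quad IPv4 address,
    determined by the leading bits of its first octet."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:   # leading bit 0
        return "A"
    if first_octet < 192:   # leading bits 10
        return "B"
    if first_octet < 224:   # leading bits 110
        return "C"
    if first_octet < 240:   # leading bits 1110
        return "D"
    return "E"              # leading bits 1111

assert ip_class("10.0.0.1") == "A"
assert ip_class("172.16.0.1") == "B"
assert ip_class("192.168.0.1") == "C"
assert ip_class("224.0.0.1") == "D"
```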
Motivation for introducing classful addressing
- Needed to deal with the expansion of the Internet, since the previous header could only name 256 networks
- This expansion was to cater for the trend of increasing local area networking
CIDR
- Stands for Classless Interdomain Routing
- Unlike classful routing, CIDR allows for blocks of IP addresses (typically class C) to be allocated by region and then allocated to ISPs who then assigned addresses to their customers
- CIDR has been effective in making good reuse of class A networks
- It is subnet mask-based, meaning that a variable-length subnet mask splits the IP address into a network prefix and a host address
Why CIDR?
- IP addressing used to be class-based, and allocations were based on the bit boundaries of the four octets of an IP address:
1) Class B networks were limited to 65,534 host interfaces
2) Class C networks were limited to 254 host interfaces
- This was inefficient: sites too big for a class C needed either a wasteful class B or many class C networks, each with its own route announcement, which bloated routing tables and hastened the exhaustion of available IP addresses
- As a result, the system of allocating IP addresses was no longer scalable
- Since CIDR is VLSM-based, it allows the division of a network into arbitrarily sized subnets
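The VLSM-style subdivision CIDR enables can be demonstrated with Python's standard `ipaddress` module, splitting one block into unequally sized subnets (the addresses used are the documentation range 192.0.2.0/24):

```python
import ipaddress

# A /24 block divided unequally: one /25 plus two /26s.
block = ipaddress.ip_network("192.0.2.0/24")
half, rest = block.subnets(prefixlen_diff=1)           # two /25s
quarter_a, quarter_b = rest.subnets(prefixlen_diff=1)  # split one /25 into /26s

assert str(half) == "192.0.2.0/25"
assert str(quarter_a) == "192.0.2.128/26"
assert str(quarter_b) == "192.0.2.192/26"
assert half.num_addresses == 128       # a longer prefix means fewer hosts
assert quarter_a.num_addresses == 64
```

Under classful addressing, each of these pieces would have had to be a full class C; with CIDR the prefix length can sit at any bit boundary.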
TCP header flags
- URG: urgent data
- ACK: acknowledgement field is significant
- PSH: push this data to the application as soon as possible
- RST: reset the connection
- SYN: synchronise a new connection
- FIN: finish a connection
- ECE: congestion notification
- CWR: congestion window reduced
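These flags occupy single bits of the TCP header's flags byte, so a captured value decodes with bit masks; a minimal sketch (the bit positions follow the TCP header layout):

```python
# Flag bits as laid out in the TCP header's flags byte
FLAGS = {"CWR": 0x80, "ECE": 0x40, "URG": 0x20, "ACK": 0x10,
         "PSH": 0x08, "RST": 0x04, "SYN": 0x02, "FIN": 0x01}

def decode_flags(byte):
    """Return the names of all flags set in the given flags byte."""
    return [name for name, bit in FLAGS.items() if byte & bit]

# 0x12 = SYN + ACK: the second segment of the three-way handshake
assert sorted(decode_flags(0x12)) == ["ACK", "SYN"]
# 0x11 = FIN + ACK: one common way a close is acknowledged
assert sorted(decode_flags(0x11)) == ["ACK", "FIN"]
```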
SYN
- Only the first packet sent from each end should have this flag set
- Used as the first step in establishing a 3-way handshake between hosts
ACK
- Indicates that the acknowledgement field is significant
- Acknowledges that a packet has been successfully received
- In the second step of the 3-way handshake, the receiver sends an ACK = 1 along with a SYN = 1 to notify the sender that it received the initial packet
- All packets after the initial SYN packet sent by the client should have a set ACK flag
FIN
- Indicates the last packet from the sender
- Is used to request for connection termination (when there is no more data from the sender)
RST
- Resets the connection
- Can be used to terminate the connection if the RST sender thinks there is an issue with the TCP connection
- Usually sent back to the sender when a packet arrives at a host or port that was not expecting it
URG
- Indicates that the urgent pointer field is significant
- Data covered by this flag is treated as priority (out-of-band) data
- Notifies the receiver to process the urgent data before processing all other data
ECE
- Depending on the SYN flag value, this flag either negotiates ECN capability during the 3-way handshake or signals congestion
- If SYN flag = 0, indicates to the TCP sender that there is network congestion
- If SYN flag = 1, indicates that the peer is ECN capable
CWR
- Set by the sending host to indicate it received a TCP segment with the ECE flag set and has responded with its congestion control mechanism
PSH
- All data in the buffer is pushed to the receiving application
- Indicates that the data should be passed as soon as possible
TCP reliability
- If one host sends a packet to another, the receiving host needs to send an ACK packet to inform the sender that it has received it
- Otherwise, the sender just resends the packet
- Every byte in a TCP connection is numbered
- The sequence number is initialised to some random number and represents the byte number of the first byte of data in the TCP packet sent
- The acknowledgement number is the sequence number of the next byte the receiver expects to receive
- The receiving host ACKs the client’s sequence number by incrementing it by 1
Three-way handshake
- Used to open a TCP connection
- There are three steps:
1) SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment’s sequence number to a random value A.
2) SYN-ACK: In response, the server replies with a SYN-ACK. The ACK number is set to one more than the received sequence number, and the sequence number that the server chooses for the packet is another random number, B.
3) ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received ACK value, and the ACK number is set to one more than the received sequence number
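The sequence/acknowledgement arithmetic of the three steps can be sketched as follows (plain dictionaries stand in for real segments; SYN consumes one sequence number, hence the +1):

```python
import random

# 1) SYN: client picks a random initial sequence number A.
a = random.randrange(2 ** 32)
syn = {"seq": a}

# 2) SYN-ACK: server picks its own ISN B and acknowledges A + 1.
b = random.randrange(2 ** 32)
syn_ack = {"seq": b, "ack": (syn["seq"] + 1) % 2 ** 32}

# 3) ACK: client's sequence number is the received ACK value,
#    and it acknowledges B + 1.
ack = {"seq": syn_ack["ack"], "ack": (syn_ack["seq"] + 1) % 2 ** 32}

assert ack["seq"] == (a + 1) % 2 ** 32
assert ack["ack"] == (b + 1) % 2 ** 32
```

The modulo keeps the numbers within the 32-bit sequence space, which real TCP wraps around in the same way.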
Passive open
The server must first bind to and listen at a port to open it up for connections before a client attempts to connect; this listening state exists before any active open arrives
Active open
- The client initiates a connection by sending a SYN to a listening server (the first step of the three-way handshake)
Active close
- The side that initiates the closedown process by issuing the first close() call is said to initiate an active close
Passive close
- The side that closes in response to the initiation is said to initiate a passive close
TCP segment exchange
- Uses up to four segments
- When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an ACK
- After the side that sent the first FIN has responded with the final ACK, it waits for a timeout before finally closing the connection
- A connection can be “half closed”, in which case one side has terminated its end, but the other has not
- Therefore the side that has terminated can no longer send any data into the connection, but the other side can
- The terminating side should continue reading the data until the other side terminates as well
TCP three way handshake termination
- A second way of terminating TCP connections
- Merely combines two steps into one: a host sends a FIN, the receiving host replies with a combined FIN & ACK, and the host that sent the original FIN replies with an ACK
TCP reset sending termination
- A third way of terminating TCP connections
- Only used in the case of errors, such as connection to a non-existent port
- When a host receives a RST packet, it ends the connection and discards any packets in transit
- There are no ACKs required, therefore the connection ends immediately
TCP flow control
- Adjusts how frequently packets are sent depending on the network conditions and the state of the host receiving these packets, in order to ensure reliability
- The advertised window manages the state of the receiving host
- Different from congestion control in that flow control prevents the end-node from being overwhelmed
- Uses 16 bits of window size
- The destination has a limited amount of buffer space and fills up if data is not being processed quickly enough
- After receiving a segment, the receiving host sends back to the sender a segment indicating the amount of buffer space left
- That way, the sender can slow down the rate of transfer for the buffer to free up space
Congestion window
- A TCP state variable that determines the amount of data the TCP can send into the network before receiving an ACK
- When a connection is set up, the congestion window is set to a small multiple of the MSS allowed on that connection
- It is calculated by estimating how much congestion there is on the link
Advertised window
- An adjustable field that determines the speed of transferring packets to make best use of current conditions in the receiving host
Sliding window
- Describes the range of bytes that the sender can send at one time
- The sender recomputes the availability of the buffer after every ACK it receives
Describing the sliding window
- The window size is sent in every ACK segment
- There are two edges on the sliding window:
1) The left-hand edge is defined by the acknowledgement number of the latest ACK segment
2) The right-hand edge is defined by adding the window size included in that ACK segment to the left-hand edge
- The sliding window closes as more ACKs are received and the left-hand edge advances (moves right, towards the right-hand edge)
- The window opens as the application reads data and the right-hand edge advances (moves further right)
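A small sketch of the two edges as seen from the sender, using illustrative byte numbers (the function and values are my own, not from the cards):

```python
def window_edges(last_ack, window_size):
    """Sender-side view of the sliding window from the latest ACK segment.

    last_ack:     acknowledgement number (next byte the receiver expects)
    window_size:  receive window advertised in that same ACK
    """
    left = last_ack                  # advances as ACKs arrive (window closes)
    right = last_ack + window_size   # advances as the app frees buffer space
    return left, right

# An ACK for byte 1000 with a 4096-byte advertised window:
left, right = window_edges(last_ack=1000, window_size=4096)
assert (left, right) == (1000, 5096)   # bytes 1000..5095 may be in flight
```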
Window scale field
- An optional field that can increase the receive window size allowed in TCP above its former maximum value of 65,535 bytes
- Is necessary because modern networks need to get the most out of the available bandwidth, which can only be achieved with a large window
Delayed ACKs
- Used to reduce traffic
- A host may delay sending an ACK response by up to 500 ms
- Additionally, with a stream of full-sized incoming segments, ACK responses must be sent for every second segment
- Delayed ACKs can give the application the opportunity to update the TCP receive window and also possibly to send an immediate response along with the ACK attached
Piggybacking
- Whenever a frame is received, the receiver waits and does not send the control frame (ACK) back to the sender immediately
- The receiver waits until its network layer passes in the next data packet
- The delayed acknowledgement is then attached to this outgoing data frame
Slow start
- The congestion window is initialised to a small multiple of the destination’s MSS
- A threshold value, ssthresh, is typically initialised to 64 KB
- The congestion window size is increased by one MSS with each ACK received, effectively doubling the window size each RTT
- The increase in transmission rate continues until either a loss is detected or ssthresh is reached
- If a loss event occurs, TCP assumes that it is due to network congestion and takes steps to reduce the offered load on the network
Congestion avoidance
- Starts when ssthresh is reached without any problems
- At this point, the window is increased by 1 segment for each round-trip delay time (RTT), making growth linear rather than near-exponential like slow start
- The window continues to increase until the network’s limit is reached, usually due to a timeout
- When this happens, TCP assumes this is due to network congestion
- The following steps occur:
1) The congestion window is reset to 1 MSS
2) ssthresh is set to half the congestion window size before the timeout
3) The congestion window starts increasing again, beginning with slow start as before
Fast Retransmit
- Reduces the time a sender waits before retransmitting a lost segment
- After receiving a packet, an acknowledgement is sent for the last in-order byte of data received
- When a sender receives three duplicate acknowledgements, it can be reasonably certain that the segment carrying the data that followed the last in-order byte specified in the acknowledgements was lost
- A sender with fast retransmit will then retransmit this packet immediately without waiting for its timeout
- On receipt of the re-transmitted segment, the receiver can acknowledge the last in order byte of data received
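A minimal sketch of the duplicate-ACK rule (the three-duplicate threshold is the standard one; the function and its stream-of-ACK-numbers model are illustrative):

```python
def should_fast_retransmit(ack_stream, dup_threshold=3):
    """Scan a sequence of acknowledgement numbers; return the ACK number
    whose segment should be retransmitted, or None if no triple duplicate
    is seen."""
    last = None
    dups = 0
    for ack in ack_stream:
        if ack == last:
            dups += 1
            if dups >= dup_threshold:
                return ack   # segment starting at `ack` is presumed lost
        else:
            last, dups = ack, 0
    return None

# Three duplicate ACKs for byte 2000 trigger retransmission of that segment,
# without waiting for the retransmission timeout.
assert should_fast_retransmit([1000, 2000, 2000, 2000, 2000]) == 2000
assert should_fast_retransmit([1000, 2000, 3000]) is None
```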
NAT
- Stands for Network Address Translation
- A method of mapping many local IP addresses onto one global IP address to provide Internet access to the local hosts; packet addresses are modified as they traverse the gateway
- This works by allowing a single device, usually a router, to act as a gateway, meaning that only a single unique IP address is required to represent an entire group of computers to anything outside their network
- This helps in mitigating address exhaustion
Deployment of IPv6
- Considered the ultimate solution to address exhaustion since it’s a completely new version of IP
- It was designed to have a significantly larger address space (2¹²⁸ addresses) and more simplified headers to improve processing of packets
- Routers never fragment IPv6 packets; oversized packets are dropped and an error message is sent back to the sender (IPv6 hosts are expected to use path MTU discovery to avoid fragmentation)
Solving address exhaustion
- The entire range of usable IPv4 addresses has now been depleted, despite the mitigating efforts of CIDR and NAT
- CIDR has simply extended the time before exhaustion by around 3 years
- As a result, IPv6 is being developed and most ISPs are deploying it
Issues with CIDR:
- Looking at the first octet is no longer enough to determine how many bits of an IP address represent the network ID and how many the host ID
Issues with NAT:
- In the private network, the NAT becomes the endpoint meaning that the device knows only its private IP address, which can’t be accessed from the internet (effectively breaking the end-to-end principle, which is key in the development of new applications)
- An issue of compliance: developers work around NATs by reusing port 80 (the HTTP port), which makes application traffic look like basic web browsing and so defeats port-based traffic classification
Issues with IPv6 deployment:
- Cost: it’s expensive as several software upgrades may be required, since the majority of systems won’t have IPv6 support
- Lack of demand: consumers care about access to content and services, not the underlying protocol, so ISPs cannot charge extra for IPv6
UDP
- Stands for User Datagram Protocol
- If a node wants to send UDP data, it first creates a socket, then sends the data to that socket
- If a node wants to receive UDP data, it first creates a socket on an address that is known by the node that will send the data, then reads the data from that socket
- UDP’s protocol layer is much thinner than TCP’s (an 8-byte header versus TCP’s minimum of 20 bytes)
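The send/receive pattern above maps directly onto Python's socket API; a self-contained sketch over the loopback interface:

```python
import socket

# Receiver: create a UDP socket bound to a known address.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
addr = recv_sock.getsockname()         # the address the sender must know

# Sender: create a socket and send a datagram to that address.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", addr)

# Receiver: read the datagram from its socket.
data, sender_addr = recv_sock.recvfrom(1024)
assert data == b"hello"

send_sock.close()
recv_sock.close()
```

Note there is no connect/accept step: each datagram is self-contained, which is exactly the thinness the card describes.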
WPA-PSK
- Works by firstly configuring a password, between 8 and 63 characters
- From this password (combined with the network’s SSID), the access point derives a key
- Using this key, the host device can then be authenticated in the access point
- Every device on the network shares the same password and as a result, share the same key
WPA Enterprise
- Each user self-authenticates via a server (usually a RADIUS one)
- RADIUS server authenticates users by account certificates
- Each device has its own password
- Works by assigning a long encryption key to each connected device
- The key is not visible as it’s only created when a user presents their login credentials
Wi-Fi Protected Setup
- A wireless network standard that attempts to make it easier for wireless devices to connect to a router
- Only works with networks that are encrypted with a WPA-PSK key
- Works by two ways:
1) Pressing a dedicated WPS button on the router and then on the device, which allows the device to connect without the need for a password
2) Using an eight-digit PIN that is generated by the router
- However, the PIN is easy to crack via a brute-force attack
- This is because the PIN is validated in two blocks of four digits: the first block can be brute-forced on its own (only 10,000 possibilities), and the second block separately (only 1,000 possibilities, since its last digit is a checksum)
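The arithmetic behind the attack: validating the halves independently collapses the search space from 10⁸ combined guesses to a few thousand (the eighth PIN digit is a checksum, so the second half contributes only 1,000 candidates):

```python
naive = 10 ** 8              # guessing all eight digits at once
first_half = 10 ** 4         # first four digits, validated separately
second_half = 10  ** 3       # last four digits, minus the checksum digit
split_attack = first_half + second_half

assert split_attack == 11_000
# Thousands of times fewer guesses than the naive attack:
assert naive // split_attack > 9000
```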
Carrier Sense
- The wire is listened to in order to determine whether there is a signal passing along it
- A frame cannot be transmitted if the wire is in use
Multiple access
- Allows multiple devices to share the same transmission medium to transfer data between themselves
CSMA/CA
- Stands for carrier sense multiple access with collision avoidance
- Used in WiFi
1) In wireless networks, there is no way for the sender to detect collisions the same way CSMA/CD does since the sender is only able to transmit and receive packets on the medium but is not able to sense data traversing that medium
2) Should the control message collide with another control message from another node, it means that the medium is not available for transmission and the back-off algorithm needs to be applied before attempting retransmission