Lesson 9: Installing and Configuring Security Appliances Flashcards
Firewalls
the devices principally used to implement security zones, such as intranet, demilitarized zone (DMZ), and the Internet. The basic function of a firewall is traffic filtering. A firewall resembles a quality inspector on a production line; any bad units are knocked off the line and go no farther. The firewall processes traffic according to rules; traffic that does not conform to a rule that allows it access is blocked.
There are many types of firewalls and many ways of implementing a firewall. One distinction can be made between firewalls that protect a whole network (placed inline in the network and inspecting all traffic that passes through) and firewalls that protect a single host only (installed on the host and only inspect traffic destined for that host). Another distinction can be made between border firewalls and internal firewalls. Border firewalls filter traffic between the trusted local network and untrusted external networks, such as the Internet. DMZ configurations are established by border firewalls. Internal firewalls can be placed anywhere within the network, either inline or as host firewalls, to filter traffic flows between different security zones. A further distinction can be made about what parts of a packet a particular firewall technology can inspect and operate on.
Packet filtering
describes the earliest type of network firewall. All firewalls can still perform this basic function. A packet filtering firewall is configured by specifying a group of rules, called an access control list (ACL). Each rule defines a specific type of data packet and the appropriate action to take when a packet matches the rule. An action can be either to deny (block or drop the packet, and optionally log an event) or to accept (let the packet pass through the firewall).
Another distinction that can be made is whether the firewall can control only inbound traffic or both inbound and outbound traffic. This is also often referred to as ingress and egress traffic or filtering. Controlling outbound traffic is useful because it can block applications that have not been authorized to run on the network and defeat malware, such as backdoors. Ingress and egress traffic is filtered using separate ACLs.
A packet filtering firewall can inspect the headers of IP packets. This means that rules can be based on the information found in those headers:
- IP filtering—accepting or denying traffic on the basis of its source and/or destination IP address.
- Protocol ID/type (TCP, UDP, ICMP, routing protocols, and so on).
- Port filtering/security—accepting or denying a packet on the basis of source and destination port numbers (TCP or UDP application type).
Packet filtering is a stateless technique: the firewall examines each packet in isolation, keeping no record of previously processed packets and preserving no information about the connection between two hosts. This type of filtering requires the least processing effort, but it can be vulnerable to attacks that are spread over a sequence of packets. A stateless firewall can also introduce problems in traffic flow, especially when some sort of load balancing is being used or when clients or servers need to use dynamically assigned ports.
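To illustrate the idea, here is a minimal sketch of stateless packet filtering in Python (not any vendor's implementation; the ACL, addresses, and packet fields are hypothetical). Rules are checked top-to-bottom against header fields only, and nothing is remembered between packets.

```python
# Hypothetical ACL: each rule is a set of header-field tuples plus an action.
# None acts as a wildcard. Rules are evaluated top-to-bottom; first match wins.
ACL = [
    {"action": "accept", "protocol": "tcp",  "dst_ip": "192.168.1.10", "dst_port": 443},
    {"action": "deny",   "protocol": "icmp", "dst_ip": None,           "dst_port": None},
]

def filter_packet(packet) -> bool:
    """Decide each packet in isolation, using only its header fields."""
    for rule in ACL:
        if rule["protocol"] != packet["protocol"]:
            continue
        if rule["dst_ip"] is not None and rule["dst_ip"] != packet["dst_ip"]:
            continue
        if rule["dst_port"] is not None and rule["dst_port"] != packet["dst_port"]:
            continue
        return rule["action"] == "accept"
    return False   # no rule matched: block (implicit deny)

# Each call looks at one packet only; nothing is remembered between calls.
print(filter_packet({"protocol": "tcp", "dst_ip": "192.168.1.10", "dst_port": 443}))   # True
print(filter_packet({"protocol": "icmp", "dst_ip": "192.168.1.10", "dst_port": None})) # False
```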
circuit-level stateful inspection firewall
A circuit-level stateful inspection firewall addresses these problems by maintaining stateful information about the session established between two hosts (including malicious attempts to start a bogus session). Information about each session is stored in a dynamically updated state table.
When a packet arrives, the firewall checks it to confirm whether it belongs to an existing connection. If it does not, it applies the ordinary packet filtering rules to determine whether to allow it. Once the connection has been allowed, the firewall allows traffic to pass unmonitored, in order to conserve processing effort.
A circuit-level firewall examines the TCP three-way handshake and can detect attempts to open connections maliciously (a flood guard). It also monitors packet sequence numbers and can prevent session hijacking attacks. It can respond to such attacks by blocking source IP addresses and throttling sessions.
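The following sketch, with hypothetical packet fields and a placeholder rule check, shows the general idea of a state table: packets belonging to an approved session pass without further rule evaluation, while new connections are checked against the rulebase first.

```python
# Minimal sketch of stateful inspection: a dynamically updated state table
# keyed by the connection 5-tuple (protocol, source IP/port, destination IP/port).
established = set()   # the state table

def rule_check(packet) -> bool:
    """Placeholder for the ordinary packet filtering rules (allow outbound web only)."""
    return packet["protocol"] == "tcp" and packet["dst_port"] in (80, 443)

def inspect(packet) -> bool:
    key = (packet["protocol"], packet["src_ip"], packet["src_port"],
           packet["dst_ip"], packet["dst_port"])
    if key in established:
        return True               # part of an approved session: pass without re-evaluation
    if rule_check(packet):
        established.add(key)      # record the new session in the state table
        return True
    return False                  # neither an existing session nor allowed by a rule

print(inspect({"protocol": "tcp", "src_ip": "10.0.0.5", "src_port": 50000,
               "dst_ip": "198.51.100.20", "dst_port": 443}))   # True (new session, allowed)
```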
application-aware firewall
one that can inspect the contents of packets at the application layer. For example, a web application firewall could analyze the HTTP headers and the HTML code present in HTTP packets to try to identify code that matches a pattern in its threat database. Application-aware firewalls have many different names, including application layer gateway, stateful multilayer inspection, or deep packet inspection. Application-aware devices have to be configured with separate filters for each type of traffic (HTTP and HTTPS, SMTP/POP/IMAP, FTP, and so on). Application-aware firewalls are very powerful, but they are not invulnerable. Their very complexity means that it is possible to craft DoS attacks against exploitable vulnerabilities in the firewall firmware. Also, the firewall cannot examine encrypted data packets (unless configured with an SSL inspector).
appliance firewall
a stand-alone hardware firewall that performs the function of a firewall only. The functions of the firewall are implemented on the appliance firmware. This is also a type of network-based firewall and monitors all traffic passing into and out of a network segment. This type of appliance could be implemented with routed interfaces or as a layer 2/virtual wire transparent firewall. Nowadays, the role of an advanced firewall is likely to be performed by an all-in-one or unified threat management (UTM) security appliance, combining the functions of firewall, intrusion detection, malware inspection, and web security gateway (content inspection and URL filtering).
router firewall
A router firewall is similar, except that the functionality is built into the router firmware. Most SOHO Internet router/modems have this type of firewall functionality. An enterprise-class router firewall would be able to support far more sessions than a SOHO one. Additionally, some layer 3 switches can perform packet filtering.
Firewalls can also run as software on any type of computing host. There are several types of application-based firewalls:
- Host-based firewall (or personal firewall)—implemented as a software application running on a single host designed to protect that host only.
- Application firewall—software designed to run on a server to protect a particular application only (a web server firewall, for instance, or a firewall designed to protect an SQL Server® database). This is a type of host-based firewall and would typically be deployed in addition to a network firewall.
- Network operating system (NOS) firewall—a software-based firewall running under a network server OS, such as Windows® or Linux®. The server would function as a gateway or proxy for a network segment.
Host-based firewall (or personal firewall)
tend to be program- or process-based; that is, when a program tries to initiate (in the case of outbound) or accept (inbound) a TCP/IP network connection, the firewall prompts the user to block, allow once, or allow always. Advanced configuration options allow the user to do things such as specify ports or IP scopes for particular programs (to allow access to a local network but not the Internet, for instance), block port scans, and so on.
Unlike a network firewall, a host-based firewall will usually display an alert to the user when a program is blocked, allowing the user to override the block rule or add an accept rule (if the user has sufficient permissions to reconfigure firewall settings).
One of the main drawbacks of a personal firewall is that, as software, it is open to compromise by malware. For example, there is not much point in allowing a process to connect if the process has been contaminated by malicious code, but a basic firewall has no means of determining the integrity of the process. Therefore, the trend is toward security suite software that combines the firewall with comprehensive anti-virus and intrusion detection.
web application firewall (WAF)
one designed specifically to protect software running on web servers and their backend databases from code injection and DoS attacks. WAFs use application-aware processing rules to filter traffic. The WAF can be programmed with signatures of known attacks and use pattern matching to block requests containing suspect code. The output from a WAF will be written to a log, which you can inspect to determine what threats the web application might be subject to.
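As a rough illustration of this kind of signature matching (the patterns below are simplified examples, not real WAF rules), a WAF might check request parameters against a database of regular expressions and block anything that matches:

```python
# Minimal sketch of WAF-style pattern matching against known attack signatures.
import re

SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # crude SQL injection pattern
    re.compile(r"(?i)<script\b"),               # crude cross-site scripting pattern
]

def inspect_request(params: str) -> bool:
    """Return True if the request should be blocked (and the event logged)."""
    for signature in SIGNATURES:
        if signature.search(params):
            return True
    return False

print(inspect_request("id=1 UNION SELECT password FROM users"))  # True -> block and log
print(inspect_request("id=42"))                                  # False -> allow
```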
A WAF may be deployed as an appliance or as plug-in software for a web server platform. Some examples of WAF products include:
- ModSecurity (http://www.modsecurity.org) is an open source WAF (sponsored by Trustwave) for Apache®, Nginx, and IIS.
- NAXSI (https://github.com/nbs-system/naxsi) is an open source module for the nginx web server software.
- Imperva (http://www.imperva.com) is a commercial web security offering with a particular focus on data centers. Imperva markets WAF, DDoS, and database security through its SecureSphere appliance.
proxy server
The basic function of a packet filtering network firewall is to inspect packets and determine whether to block them or allow them to pass. By contrast, a proxy server works on a store-and-forward model. Rather than inspecting traffic as it passes through, the proxy deconstructs each packet, performs analysis, then rebuilds the packet and forwards it on (providing it conforms to the rules). In fact, a proxy is a legitimate “man in the middle”! This is more secure than a firewall that performs only filtering. If a packet contains malicious content or construction that a firewall does not detect as such, the firewall will allow the packet. A proxy would erase the suspicious content in the process of rebuilding the packet. The drawback is that there is more processing to be done than with a firewall.
web security gateways
Web proxies are often also described as web security gateways because their primary functions are usually to prevent computers from being infected by viruses or Trojans from the Internet, to block spam, and to restrict web use to authorized sites, acting as a content filter.
caching engines
The main benefit of a proxy server is that client computers connect to a specified point within the perimeter network for web access. This provides for a degree of traffic management and security. In addition, most web proxy servers provide caching engines, whereby frequently requested web pages are retained on the proxy, negating the need to re-fetch those pages for subsequent requests. Some proxy servers also pre-fetch pages that are referenced in pages that have been requested. When the client computer then requests that page, the proxy server already has a local copy.
A proxy server must understand the application it is servicing. For example, a web proxy must be able to parse and modify HTTP and HTTPS commands (and potentially HTML too). Some proxy servers are application-specific; others are multipurpose. A multipurpose proxy is one configured with filters for multiple protocol types, such as HTTP, FTP, and SMTP.
Proxy servers can generally be classed as non-transparent or transparent.
- A non-transparent proxy means that the client must be configured with the proxy server's address and port number to use it. The port on which the proxy server accepts client connections is often configured as port 8080 (see the client-side sketch after this list).
- A transparent (or forced or intercepting) proxy intercepts client traffic without the client having to be reconfigured. A transparent proxy must be implemented on a switch or router or other inline network appliance.
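For example, a client using a non-transparent proxy must be told where the proxy is. The sketch below assumes a hypothetical proxy at 10.0.0.50 listening on port 8080 and uses the third-party requests library:

```python
# Minimal sketch: a client explicitly configured to use a non-transparent proxy.
# The proxy address is hypothetical; requires the third-party requests library.
import requests

proxies = {
    "http":  "http://10.0.0.50:8080",
    "https": "http://10.0.0.50:8080",
}

# All HTTP/HTTPS requests from this client are sent via the proxy on port 8080.
response = requests.get("http://example.com/", proxies=proxies, timeout=10)
print(response.status_code)
```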
reverse proxy server
A reverse proxy server provides for protocol-specific inbound traffic. For security purposes, it is inadvisable to place application servers, such as messaging and VoIP servers, in the perimeter network, where they are directly exposed to the Internet. Instead, you can deploy a reverse proxy and configure it to listen for client requests from a public network (the Internet), and create the appropriate request to the internal server on the corporate network.
Reverse proxies can publish applications from the corporate network to the Internet in this way. In addition, some reverse proxy servers can handle the encryption/decryption and authentication issues that arise when remote users attempt to connect to corporate servers, reducing the overhead on those servers. Typical applications for reverse proxy servers include publishing a web server, publishing IM or conferencing applications, and enabling POP/IMAP mail retrieval.
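A very stripped-down sketch of the reverse proxy idea is shown below: the proxy listens for requests from the public network and re-issues each one to a hypothetical internal server, relaying the answer back to the client. A real reverse proxy would also forward headers, support other HTTP methods, and handle encryption and authentication.

```python
# Minimal reverse proxy sketch using only the standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

INTERNAL_SERVER = "http://10.0.0.80"   # hypothetical internal web server being published

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Create the appropriate request to the internal server on the corporate network.
        with urlopen(INTERNAL_SERVER + self.path) as upstream:
            body = upstream.read()
            status = upstream.getcode()
        # Relay the internal server's response back to the external client.
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on the public-facing interface for client requests from the Internet.
    HTTPServer(("0.0.0.0", 8080), ReverseProxyHandler).serve_forever()
```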
rule-based management
A firewall, proxy, or content filter is an example of rule-based management. Firewall and other filtering rules are configured on the principle of least access. This is the same as the principle of least privilege; only allow the minimum amount of traffic required for the operation of valid network services and no more. The rules in a firewall’s ACL are processed top-to-bottom; the first rule that matches the traffic determines whether it is allowed or blocked, so the most specific rules are placed at the top. The final default rule is typically to block any traffic that has not matched a rule (implicit deny).
Each rule can specify whether to block or allow traffic based on several parameters, often referred to as tuples. If you think of each rule as being like a row in a database, the tuples are the columns. For example, the tuples typically include Protocol, Source (address), (Source) Port, Destination (address), (Destination) Port, and so on.
Even the simplest packet filtering firewall can be complex to configure securely. It is essential to create a written policy describing what a filter ruleset should do and to test the configuration as far as possible to ensure that the ACLs you have set up work as intended. Also test and document changes made to ACLs. Some other basic principles include:
- Block incoming requests from internal or private IP addresses (that have obviously been spoofed).
- Block incoming requests from protocols that should only be functioning at a local network level, such as ICMP, DHCP, or routing protocol traffic.
- Use penetration testing to confirm the configuration is secure. Log access attempts and monitor the logs for suspicious activity.
- Take the usual steps to secure the hardware on which the firewall is running and restrict access to its management interface.
Denial of Service (DoS) attack
causes a service at a given host to fail or to become unavailable to legitimate users. Typically, DoS attacks focus on overloading a service by using up CPU, system RAM, disk space, or network bandwidth (resource exhaustion). It is also possible for DoS attacks to exploit design failures or other vulnerabilities in application software. An example of a physical DoS attack would be cutting telephone lines or network cabling or switching off the power to a server. DoS attacks may simply be motivated by the malicious desire to cause trouble. They may also be part of a wider attack, such as the precursor to a MitM or data exfiltration attack.
Many DoS attacks attempt to deny bandwidth to web servers connected to the Internet. They focus on exploiting historical vulnerabilities in the TCP/IP protocol suite. TCP/IP was never designed for security; it assumes that all hosts and networks are trusted. Other application attacks do not need to be based on consuming bandwidth or resources. Attacks can target known vulnerabilities in software to cause them to crash; worms and viruses can render systems unusable or choke network bandwidth.
All these types of DoS attack can have severe impacts on service availability, with a consequent effect on the productivity and profitability of a company. Where a DoS attack disrupts customer-facing services, there could be severe impacts on the company’s reputation. An organization could also be presented with threats of blackmail or extortion.
Distributed DoS (DDoS) attack
Most bandwidth-directed DoS attacks are distributed. This means that the attacks are launched from multiple compromised computers.
The attacker typically uses one or more intermediary systems (handlers) to compromise hundreds, thousands, or millions of zombie (agent) PCs with DoS tools (bots), forming a botnet. To compromise a computer, the attacker must install a backdoor application that gives them access to the PC. They can then use the backdoor application to install DoS software and trigger the zombies to launch the attack at the same time.
DoS attacks might be coordinated between groups of attackers. There is growing evidence that nation states are engaging in cyber warfare, and terrorist groups have also been implicated in DoS attacks on well-known companies and government institutions. There are also hacker collectives that might target an organization as part of a campaign.
Some types of attacks simply aim to consume network bandwidth, denying it to legitimate hosts. Others cause resource exhaustion on the hosts processing requests, consuming CPU cycles and memory. This delays processing of legitimate traffic and could potentially crash the host system completely. For example, a SYN flood attack works by withholding the client’s ACK packet during TCP’s three-way handshake. Typically, the client’s IP address is spoofed, meaning that an invalid or random IP is entered so the server’s SYN/ACK packet is misdirected. A server can maintain a queue of pending connections. When it does not receive an ACK packet from the client, it resends the SYN/ACK packet a set number of times before “timing out” and giving up on the connection. The problem is that a server may only be able to manage a limited number of pending connections, which the DoS attack quickly fills up. This means that the server is unable to respond to genuine traffic.
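The following simulation (with an assumed backlog size and no timeout modeled) shows why withholding the final ACK exhausts the server's queue of pending connections:

```python
# Minimal sketch (illustrative numbers): a SYN flood fills the server's queue
# of half-open connections because the spoofed clients never send the final ACK.
BACKLOG_SIZE = 128      # assumed server limit on pending (half-open) connections
pending = []            # queue of half-open connections awaiting the client's ACK

def receive_syn(src_ip) -> bool:
    """Server receives a SYN and reserves a slot until an ACK arrives (or timeout)."""
    if len(pending) >= BACKLOG_SIZE:
        return False            # queue full: further connection attempts are refused
    pending.append(src_ip)      # slot held for the retransmit/timeout period
    return True

# Attacker sends SYNs from spoofed addresses and never ACKs, so no slot is freed
# (timeouts are not modeled here).
for i in range(BACKLOG_SIZE):
    receive_syn(f"198.51.100.{i % 254 + 1}")   # spoofed sources (TEST-NET-2 documentation range)

print(receive_syn("203.0.113.10"))  # a genuine client is now turned away -> False
```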
Servers can suffer the effects of a DDoS even when there is no malicious intent. For instance, the Slashdot effect is a sudden, temporary surge in traffic to a website that occurs when another website or other source posts a story that refers visitors to the victim website. This effect is more noticeable on smaller websites, and the increase in traffic can slow a website’s response times or make it impossible to reach altogether.
zombie (agent)
a compromised PC that has been infiltrated with a backdoor and DoS software, allowing the attacker to direct it as part of a botnet.
bots (DoS tools)
the DoS software installed on zombie PCs; the attacker triggers the bots across the botnet to launch the attack at the same time.
Distributed Reflection DoS (DRDoS) or amplification attack
A more powerful TCP SYN flood attack is a type of Distributed Reflection DoS (DRDoS) or amplification attack. In this attack, the adversary spoofs the victim’s IP address and attempts to open connections with multiple servers. Those servers direct their SYN/ACK responses to the victim server. This rapidly consumes the victim’s available bandwidth.
Smurf attack
A similar type of amplification attack can be performed by exploiting other protocols. For example, in a Smurf attack, the adversary spoofs the victim’s IP address and pings the broadcast address of a third-party network (one with many hosts; referred to as the “amplifying network”). Each host directs its echo responses to the victim server.
bogus DNS queries
The same sort of technique can be used to bombard a victim network with responses to bogus DNS queries. One of the advantages of this technique for the attacker is that while the request is small, the response to a DNS query can be made to include a lot of information, making it a very effective way of overwhelming the bandwidth of the victim network while using much more limited resources on the attacker's botnet.
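The arithmetic below uses illustrative packet sizes (assumptions, not measurements) to show how the amplification factor multiplies the attacker's bandwidth:

```python
# Illustrative arithmetic only: a small spoofed DNS query elicits a much larger
# response, which is directed at the victim (sizes are assumptions).
query_bytes = 60        # typical small UDP DNS query (assumed)
response_bytes = 3000   # large response, e.g. a query answered with extended records (assumed)

amplification_factor = response_bytes / query_bytes
print(f"Amplification factor: {amplification_factor:.0f}x")          # ~50x

# A botnet sending 10 Mb/s of spoofed queries would direct roughly 500 Mb/s at the victim.
print(f"Traffic arriving at victim: {10 * amplification_factor:.0f} Mb/s")
```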
Network Time Protocol (NTP)
The Network Time Protocol (NTP) can be abused in a similar way. NTP helps servers on a network and on the Internet to keep the correct time. It is vital for many protocols and security mechanisms that servers and clients be synchronized. One NTP query (monlist) can be used to generate a response containing a list of the last 600 machines that the NTP server has contacted. As with the DNS amplification attack, this allows a short request to direct a long response at the victim network.
blackhole
When a network is faced with a DDoS or similar flooding attack, an ISP can use either an ACL or a blackhole to drop packets for the affected IP address(es). A blackhole is an area of the network that cannot reach any other part of the network. The blackhole option is preferred, as evaluating each packet in a multi-gigabit stream against ACLs overwhelms the processing resources available. The blackhole also makes the attack less damaging to the ISP’s other customers. With both approaches, legitimate traffic is discarded along with the DDoS packets.
sinkhole routing
Another option is to use sinkhole routing so that the traffic flooding a particular IP address is routed to a different network where it can be analyzed. Potentially, some legitimate traffic could be allowed through, but the real advantage is to identify the source of the attack and devise rules to filter it. The target can then use low TTL DNS records to change the IP address advertised for the service and try to allow legitimate traffic past the flood.
load balancer
A load balancer distributes client requests across available server nodes in a farm or pool. Clients use the single name/IP address of the load balancer to connect to the servers in the farm. This provides for higher throughput or supports more connected users. A load balancer provides fault tolerance. If there are multiple servers available in a farm, all addressed by a single name/IP address via a load balancer, then if a single server fails, client requests can be routed to another server in the farm. You can use a load balancer in any situation where you have multiple servers providing the same function. Examples include web servers, front-end email servers, and web conferencing, A/V conferencing, or streaming media servers.
There are two main types of load balancers:
- Layer 4 load balancer—early instances of load balancers would base forwarding decisions on IP address and TCP/UDP port values (working at up to layer 4 in the OSI model). This type of load balancer is stateless; it cannot retain any information about user sessions.
- Layer 7 load balancer (content switch)—as web applications have become more complex, modern load balancers need to be able to make forwarding decisions based on application-level data, such as a request for a particular URL or data types like video or audio streaming. This requires more complex logic, but the processing power of modern appliances is sufficient to deal with this (see the sketch after this list).
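The sketch below illustrates the kind of content-based forwarding decision a layer 7 load balancer can make; the URL prefixes and pool names are hypothetical:

```python
# Minimal sketch of a layer 7 (content switch) forwarding decision: routing is
# based on the requested URL rather than just IP address and port.
def choose_pool(url_path: str) -> str:
    if url_path.startswith("/video/") or url_path.startswith("/audio/"):
        return "streaming-pool"       # media requests go to the streaming servers
    if url_path.startswith("/api/"):
        return "application-pool"     # API calls go to the application servers
    return "web-pool"                 # everything else goes to the general web farm

print(choose_pool("/video/launch.mp4"))  # -> streaming-pool
print(choose_pool("/index.html"))        # -> web-pool
```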
Most load balancers need to be able to provide some or all of the following features:
- Configurable load—the ability to assign a specific server in the farm for certain types of traffic or a configurable proportion of the traffic.
- TCP offload—the ability to group HTTP packets from a single client into a collection of packets assigned to a specific server.
- SSL offload—when you implement SSL/TLS to provide for secure connections, this imposes a load on the web server (or other server). If the load balancer can handle the processing of authentication and encryption/decryption, this reduces the load on the servers in the farm.
- Caching—as some information on the web servers may remain static, it is desirable for the load balancer to provide a caching mechanism to reduce load on those servers.
- Prioritization—to filter and manage traffic based on its priority.
Virtual IP (VIP) address (or addresses)
Each server node or instance needs its own IP address, but externally a load-balanced service is advertised using a Virtual IP (VIP) address (or addresses). There are different protocols available to handle virtual IP addresses and they differ in the ways that the VIP responds to ARP and ICMP, and in compatibility with services such as NAT and DNS. One of the most widely used protocols is the Common Address Redundancy Protocol (CARP). There is also Cisco’s proprietary Gateway Load Balancing Protocol (GLBP).
scheduling algorithm
The scheduling algorithm is the code and metrics that determine which node is selected for processing each incoming request. The simplest type of scheduling is called round robin; this just means picking the next node in turn. Other methods include picking the node with the fewest connections or the best response time. Each method can also be weighted, using administrator-set preferences, dynamic load information, or both.
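A minimal sketch of two common scheduling methods is shown below; the node names and connection counts are hypothetical, and weighting is omitted:

```python
# Minimal sketch (not any specific product) of two scheduling methods a load
# balancer might use to pick a node for each incoming request.
import itertools

nodes = ["web1", "web2", "web3"]               # hypothetical server farm
active_connections = {n: 0 for n in nodes}     # tracked per node

# Round robin: simply pick the next node in turn.
rr_cycle = itertools.cycle(nodes)
def round_robin():
    return next(rr_cycle)

# Fewest connections: pick the node currently handling the least work.
def fewest_connections():
    return min(nodes, key=lambda n: active_connections[n])

for _ in range(4):
    node = round_robin()
    active_connections[node] += 1
    print("round robin ->", node)

print("fewest connections ->", fewest_connections())
```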
The load balancer must also use some type of heartbeat or health check probe to verify whether each node is available and under load or not. Layer 4 load balancers can only make basic connectivity tests while layer 7 appliances can test the application’s state, as opposed to only verifying host availability.
round robin DNS (RRDNS)
Load balancing can be accomplished using software rather than dedicated hardware appliances. One example is round robin DNS (RRDNS): when a client enters a web server name in a browser, the DNS server responsible for resolving that name to an IP address returns one of several configured addresses in turn, cycling through a group configured for the purpose. This can be cost-effective, but load balancing appliances provide better fault tolerance and more efficient algorithms for distributing requests than RRDNS.
Source IP or session affinity
When a client device has established a session with a particular node in the server farm, it may be necessary to continue to use that connection for the duration of the session. Source IP or session affinity is a layer 4 approach to handling user sessions. It means that when a client establishes a session, it becomes stuck to the node that first accepted the request. This can be accomplished by hashing the IP and port information along with other scheduling metrics. This hash uniquely identifies the session and will change if a node stops responding or a node weighting is changed. This is cost-effective in terms of performance but not sticky enough for some applications. An alternative method is to cache the client IP in memory (a stick table).
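As a sketch of the layer 4 approach, the client's source IP and port can be hashed to pick a node, so the same client tuple always lands on the same node (the node names are hypothetical, and other scheduling metrics that could be folded into the hash are omitted):

```python
# Minimal sketch of layer 4 source IP/port affinity: hash the client's source
# address and port so that client is consistently "stuck" to the same node.
import hashlib

nodes = ["web1", "web2", "web3"]

def pick_node(src_ip: str, src_port: int) -> str:
    key = f"{src_ip}:{src_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return nodes[digest % len(nodes)]   # mapping changes only if the node list changes

print(pick_node("203.0.113.5", 50514))  # same client tuple -> same node every time
print(pick_node("203.0.113.5", 50514))
```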
An application-layer load balancer can use persistence to keep a client connected to a session. Persistence typically works by setting a cookie, either on the node or injected by the load balancer.