Lesson 9: Installing and Configuring Security Appliances Flashcards

1
Q

Firewalls

A

the devices principally used to implement security zones, such as intranet, demilitarized zone (DMZ), and the Internet. The basic function of a firewall is traffic filtering. A firewall resembles a quality inspector on a production line; any bad units are knocked off the line and go no farther. The firewall processes traffic according to rules; traffic that does not match a rule allowing it through is blocked.

There are many types of firewalls and many ways of implementing a firewall. One distinction can be made between firewalls that protect a whole network (placed inline in the network and inspecting all traffic that passes through) and firewalls that protect a single host only (installed on the host and only inspect traffic destined for that host). Another distinction can be made between border firewalls and internal firewalls. Border firewalls filter traffic between the trusted local network and untrusted external networks, such as the Internet. DMZ configurations are established by border firewalls. Internal firewalls can be placed anywhere within the network, either inline or as host firewalls, to filter traffic flows between different security zones. A further distinction can be made about what parts of a packet a particular firewall technology can inspect and operate on.

2
Q

Packet filtering

A

describes the earliest type of network firewall. All firewalls can still perform this basic function. A packet filtering firewall is configured by specifying a group of rules, called an access control list (ACL). Each rule defines a specific type of data packet and the appropriate action to take when a packet matches the rule. An action can be either to deny (block or drop the packet, and optionally log an event) or to accept (let the packet pass through the firewall).

Another distinction that can be made is whether the firewall can control only inbound traffic or both inbound and outbound traffic. This is also often referred to as ingress and egress traffic or filtering. Controlling outbound traffic is useful because it can block applications that have not been authorized to run on the network and defeat malware, such as backdoors. Ingress and egress traffic is filtered using separate ACLs.

3
Q

A packet filtering firewall can inspect the headers of IP packets. This means that rules can be based on the information found in those headers:

A
  • IP filtering—accepting or denying traffic on the basis of its source and/or destination IP address.
  • Protocol ID/type (TCP, UDP, ICMP, routing protocols, and so on).
  • Port filtering/security—accepting or denying a packet on the basis of source and destination port numbers (TCP or UDP application type).

Packet filtering is a stateless technique; the firewall examines each packet in isolation and keeps no record of previous packets, so it does not preserve information about the connection between two hosts. This type of filtering requires the least processing effort, but it can be vulnerable to attacks that are spread over a sequence of packets. A stateless firewall can also introduce problems in traffic flow, especially when some sort of load balancing is being used or when clients or servers need to use dynamically assigned ports.

4
Q

circuit-level stateful inspection firewall

A

A circuit-level stateful inspection firewall addresses these problems by maintaining stateful information about the session established between two hosts (including malicious attempts to start a bogus session). Information about each session is stored in a dynamically updated state table.

When a packet arrives, the firewall checks it to confirm whether it belongs to an existing connection. If it does not, it applies the ordinary packet filtering rules to determine whether to allow it. Once the connection has been allowed, the firewall allows traffic to pass unmonitored, in order to conserve processing effort.

A circuit-level firewall examines the TCP three-way handshake and can detect attempts to open connections maliciously (a flood guard). It also monitors packet sequence numbers and can prevent session hijacking attacks. It can respond to such attacks by blocking source IP addresses and throttling sessions.
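The state table logic described above can be sketched in Python. This is a toy model, not a real firewall implementation: the packet fields and the port-443 policy are hypothetical, chosen only to show the lookup-then-filter flow.

```python
def allow_by_rules(packet):
    # Placeholder for the ordinary packet filtering rules (hypothetical
    # policy: only new connections to TCP port 443 are accepted).
    return packet["proto"] == "TCP" and packet["dst_port"] == 443

class StatefulFirewall:
    def __init__(self):
        self.state_table = set()  # established sessions, keyed by 5-tuple

    def process(self, packet):
        key = (packet["proto"], packet["src_ip"], packet["src_port"],
               packet["dst_ip"], packet["dst_port"])
        if key in self.state_table:
            return "allow"             # belongs to an existing connection
        if allow_by_rules(packet):
            self.state_table.add(key)  # record the newly allowed session
            return "allow"
        return "deny"

fw = StatefulFirewall()
syn = {"proto": "TCP", "src_ip": "10.0.0.5", "src_port": 50000,
       "dst_ip": "203.0.113.10", "dst_port": 443}
print(fw.process(syn))  # allow (matches a rule; session is recorded)
print(fw.process(syn))  # allow (found in the state table)
```

A real appliance would also track TCP flags and sequence numbers and expire idle entries; the sketch shows only the core idea of checking the state table before falling back to the packet filtering rules.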

5
Q

application aware firewall

A

one that can inspect the contents of packets at the application layer. For example, a web application firewall could analyze the HTTP headers and the HTML code present in HTTP packets to try to identify code that matches a pattern in its threat database. Application aware firewalls have many different names, including application layer gateway, stateful multilayer inspection, or deep packet inspection. Application aware devices have to be configured with separate filters for each type of traffic (HTTP and HTTPS, SMTP/POP/IMAP, FTP, and so on). Application aware firewalls are very powerful, but they are not invulnerable. Their very complexity means that it is possible to craft DoS attacks against exploitable vulnerabilities in the firewall firmware. Also, the firewall cannot examine encrypted data packets (unless configured with an SSL inspector).

6
Q

appliance firewall

A

a stand-alone hardware firewall that performs the function of a firewall only. The functions of the firewall are implemented on the appliance firmware. This is also a type of network-based firewall and monitors all traffic passing into and out of a network segment. This type of appliance could be implemented with routed interfaces or as a layer 2/virtual wire transparent firewall. Nowadays, the role of advanced firewall is likely to be performed by an all-in-one or unified threat management (UTM) security appliance, combining the function of firewall, intrusion detection, malware inspection, and web security gateway (content inspection and URL filtering).

7
Q

router firewall

A

A router firewall is similar, except that the functionality is built into the router firmware. Most SOHO Internet router/modems have this type of firewall functionality. An enterprise-class router firewall would be able to support far more sessions than a SOHO one. Additionally, some layer 3 switches can perform packet filtering.

8
Q

Firewalls can also run as software on any type of computing host. There are several types of application-based firewalls:

A
  • Host-based firewall (or personal firewall)—implemented as a software application running on a single host designed to protect that host only.
  • Application firewall—software designed to run on a server to protect a particular application only (a web server firewall, for instance, or a firewall designed to protect an SQL Server® database). This is a type of host-based firewall and would typically be deployed in addition to a network firewall.
  • Network operating system (NOS) firewall—a software-based firewall running under a network server OS, such as Windows® or Linux®. The server would function as a gateway or proxy for a network segment.
9
Q

Host-based firewall (or personal firewall)

A

tend to be program- or process-based; that is, when a program tries to initiate (in the case of outbound) or accept (inbound) a TCP/IP network connection, the firewall prompts the user to block, allow once, or allow always. Advanced configuration options allow the user to do things such as specify ports or IP scopes for particular programs (to allow access to a local network but not the Internet, for instance), block port scans, and so on.

Unlike a network firewall, a host-based firewall will usually display an alert to the user when a program is blocked, allowing the user to override the block rule or add an accept rule (if the user has sufficient permissions to reconfigure firewall settings).

One of the main drawbacks of a personal firewall is that as software it is open to compromise by malware. For example, there is not much point in allowing a process to connect if the process has been contaminated by malicious code, but a basic firewall would have no means of determining the integrity of the process. Therefore, the trend is for security suite software, providing comprehensive anti-virus and intrusion detection.

10
Q

web application firewall (WAF)

A

one designed specifically to protect software running on web servers and their backend databases from code injection and DoS attacks. WAFs use application-aware processing rules to filter traffic. The WAF can be programmed with signatures of known attacks and use pattern matching to block requests containing suspect code. The output from a WAF will be written to a log, which you can inspect to determine what threats the web application might be subject to.
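Signature-based pattern matching of this kind can be illustrated with a couple of deliberately crude regular expressions. These are hypothetical signatures, far simpler than a real WAF rule set, and the request strings are made up.

```python
import re

# Illustrative WAF-style signatures (hypothetical, not production rules).
SIGNATURES = [
    re.compile(r"('|%27)\s*(or|and)\s+\d+=\d+", re.I),  # crude SQLi pattern
    re.compile(r"<script[^>]*>", re.I),                  # crude XSS pattern
]

def inspect(request_line, log):
    """Block the request and log an event if any signature matches."""
    for sig in SIGNATURES:
        if sig.search(request_line):
            log.append(f"BLOCKED: {request_line!r} matched {sig.pattern!r}")
            return "block"
    return "allow"

log = []
print(inspect("GET /item?id=42", log))         # allow
print(inspect("GET /item?id=1' OR 1=1", log))  # block
```

In practice the log entries would be written to a file or SIEM for the kind of threat analysis described above, and the rule set would be maintained from a vendor threat database rather than hand-written.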

11
Q

A WAF may be deployed as an appliance or as plug-in software for a web server platform. Some examples of WAF products include:

A
  • ModSecurity (http://www.modsecurity.org) is an open source (sponsored by Trustwave) WAF for Apache®, Nginx, and IIS.
  • NAXSI (https://github.com/nbs-system/naxsi) is an open source module for the nginx web server software.
  • Imperva (http://www.imperva.com) is a commercial web security offering with a particular focus on data centers. Imperva markets WAF, DDoS, and database security through its SecureSphere appliance.
12
Q

proxy server

A

The basic function of a packet filtering network firewall is to inspect packets and determine whether to block them or allow them to pass. By contrast, a proxy server works on a store-and-forward model. Rather than inspecting traffic as it passes through, the proxy deconstructs each packet, performs analysis, then rebuilds the packet and forwards it on (providing it conforms to the rules). In fact, a proxy is a legitimate “man in the middle”! This is more secure than a firewall that performs only filtering. If a packet contains malicious content or construction that a firewall does not detect as such, the firewall will allow the packet. A proxy would erase the suspicious content in the process of rebuilding the packet. The drawback is that there is more processing to be done than with a firewall.

13
Q

web security gateways

A

Web proxies are often also described as web security gateways because their primary functions are usually to prevent viruses or Trojans infecting computers from the Internet, block spam, and restrict web use to authorized sites, acting as a content filter.

14
Q

caching engines

A

The main benefit of a proxy server is that client computers connect to a specified point within the perimeter network for web access. This provides for a degree of traffic management and security. In addition, most web proxy servers provide caching engines, whereby frequently requested web pages are retained on the proxy, negating the need to re-fetch those pages for subsequent requests. Some proxy servers also pre-fetch pages that are referenced in pages that have been requested. When the client computer then requests that page, the proxy server already has a local copy.

A proxy server must understand the application it is servicing. For example, a web proxy must be able to parse and modify HTTP and HTTPS commands (and potentially HTML too). Some proxy servers are application-specific; others are multipurpose. A multipurpose proxy is one configured with filters for multiple protocol types, such as HTTP, FTP, and SMTP.

15
Q

Proxy servers can generally be classed as non-transparent or transparent.

A
  • A non-transparent server means that the client must be configured with the proxy server address and port number to use it. The port on which the proxy server accepts client connections is often configured as port 8080.
  • A transparent (or forced or intercepting) proxy intercepts client traffic without the client having to be reconfigured. A transparent proxy must be implemented on a switch or router or other inline network appliance.
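From the client side, the difference is that a non-transparent proxy requires explicit configuration. A Python standard-library sketch of that configuration step follows; the proxy hostname is a placeholder, not a real server.

```python
import urllib.request

# Non-transparent proxy: the client must be told the proxy's address and
# port explicitly (proxy.example.com:8080 is a hypothetical placeholder,
# using the conventional 8080 listening port). A transparent proxy would
# intercept the traffic with no client-side configuration at all.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
})
opener = urllib.request.build_opener(proxy)

# Requests made through this opener are sent via the proxy, e.g.:
# opener.open("http://example.com/")
```

The same setting is what a browser's manual proxy configuration dialog (or a PAC file) provides; a transparent proxy removes this step by intercepting traffic at a switch, router, or other inline appliance.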
16
Q

reverse proxy server

A

A reverse proxy server provides for protocol-specific inbound traffic. For security purposes, it is inadvisable to place application servers, such as messaging and VoIP servers, in the perimeter network, where they are directly exposed to the Internet. Instead, you can deploy a reverse proxy and configure it to listen for client requests from a public network (the Internet), and create the appropriate request to the internal server on the corporate network.

Reverse proxies can publish applications from the corporate network to the Internet in this way. In addition, some reverse proxy servers can handle the encryption/decryption and authentication issues that arise when remote users attempt to connect to corporate servers, reducing the overhead on those servers. Typical applications for reverse proxy servers include publishing a web server, publishing IM or conferencing applications, and enabling POP/IMAP mail retrieval.

17
Q

rule-based management

A

A firewall, proxy, or content filter is an example of rule-based management. Firewall and other filtering rules are configured on the principle of least access. This is the same as the principle of least privilege; only allow the minimum amount of traffic required for the operation of valid network services and no more. The rules in a firewall’s ACL are processed top-to-bottom. If traffic matches one of the rules, then it is allowed to pass; consequently, the most specific rules are placed at the top. The final default rule is typically to block any traffic that has not matched a rule (implicit deny).

Each rule can specify whether to block or allow traffic based on several parameters, often referred to as tuples. If you think of each rule being like a row in a database, the tuples are the columns. For example, typical tuples include Protocol, Source (address), (Source) Port, Destination (address), (Destination) Port, and so on.
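A minimal sketch of this first-match processing, with rules as rows and tuples as columns, might look as follows. The addresses and policy are hypothetical, and plain string comparison stands in for real CIDR address matching.

```python
# First-match ACL evaluation (illustrative). Each rule is a row; the tuples
# (protocol, source, destination port) are the columns. None is a wildcard.
ACL = [
    # (protocol, source,          dst_port, action)
    ("TCP",      "192.168.0.12",  None,     "deny"),   # most specific first
    ("TCP",      None,            443,      "allow"),
    ("TCP",      None,            80,       "allow"),
]

def matches(rule_val, pkt_val):
    # Simplification: exact string match; a real ACL would do CIDR matching.
    return rule_val is None or rule_val == pkt_val

def evaluate(proto, source, dst_port):
    for r_proto, r_src, r_port, action in ACL:   # processed top-to-bottom
        if (matches(r_proto, proto) and matches(r_src, source)
                and matches(r_port, dst_port)):
            return action
    return "deny"  # implicit deny: nothing matched, so block

print(evaluate("TCP", "203.0.113.9", 443))   # allow (rule 2)
print(evaluate("TCP", "192.168.0.12", 443))  # deny (specific rule 1 wins)
print(evaluate("UDP", "203.0.113.9", 53))    # deny (implicit deny)
```

Placing the host-specific deny above the general port-443 allow is the "most specific rules at the top" principle: if the order were reversed, the allow would match first and the deny would never fire.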

18
Q

Even the simplest packet filtering firewall can be complex to configure securely. It is essential to create a written policy describing what a filter ruleset should do and to test the configuration as far as possible to ensure that the ACLs you have set up work as intended. Also test and document changes made to ACLs. Some other basic principles include:

A
  • Block incoming requests from internal or private IP addresses (that have obviously been spoofed).
  • Block incoming requests from protocols that should only be functioning at a local network level, such as ICMP, DHCP, or routing protocol traffic.
  • Use penetration testing to confirm the configuration is secure. Log access attempts and monitor the logs for suspicious activity.
  • Take the usual steps to secure the hardware on which the firewall is running and use of the management interface.
19
Q

Denial of Service (DoS) attack

A

causes a service at a given host to fail or to become unavailable to legitimate users. Typically, DoS attacks focus on overloading a service by using up CPU, system RAM, disk space, or network bandwidth (resource exhaustion). It is also possible for DoS attacks to exploit design failures or other vulnerabilities in application software. An example of a physical DoS attack would be cutting telephone lines or network cabling or switching off the power to a server. DoS attacks may simply be motivated by the malicious desire to cause trouble. They may also be part of a wider attack, such as the precursor to a MitM or data exfiltration attack.

Many DoS attacks attempt to deny bandwidth to web servers connected to the Internet. They focus on exploiting historical vulnerabilities in the TCP/IP protocol suite. TCP/IP was never designed for security; it assumes that all hosts and networks are trusted. Other application attacks do not need to be based on consuming bandwidth or resources. Attacks can target known vulnerabilities in software to cause them to crash; worms and viruses can render systems unusable or choke network bandwidth.

All these types of DoS attack can have severe impacts on service availability, with a consequent effect on the productivity and profitability of a company. Where a DoS attack disrupts customer-facing services, there could be severe impacts on the company’s reputation. An organization could also be presented with threats of blackmail or extortion.

20
Q

Distributed DoS (DDoS) attack

A

Most bandwidth-directed DoS attacks are distributed. This means that the attacks are launched from multiple, compromised computers.

The attacker uses intermediary hosts (handlers) to compromise hundreds, thousands, or even millions of zombie (agent) PCs with DoS tools (bots), forming a botnet. To compromise a computer, the attacker must install a backdoor application that gives them access to the PC. They can then use the backdoor application to install DoS software and trigger the zombies to launch the attack at the same time.

DoS attacks might be coordinated between groups of attackers. There is growing evidence that nation states are engaging in cyber warfare, and terrorist groups have also been implicated in DoS attacks on well-known companies and government institutions. There are also hacker collectives that might target an organization as part of a campaign.

Some types of attacks simply aim to consume network bandwidth, denying it to legitimate hosts. Others cause resource exhaustion on the hosts processing requests, consuming CPU cycles and memory. This delays processing of legitimate traffic and could potentially crash the host system completely. For example, a SYN flood attack works by withholding the client’s ACK packet during TCP’s three-way handshake. Typically, the client’s IP address is spoofed, meaning that an invalid or random IP is entered so the server’s SYN/ACK packet is misdirected. A server can maintain a queue of pending connections. When it does not receive an ACK packet from the client, it resends the SYN/ACK packet a set number of times before “timing out” and giving up on the connection. The problem is that a server may only be able to manage a limited number of pending connections, which the DoS attack quickly fills up. This means that the server is unable to respond to genuine traffic.
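The pending-connection exhaustion in a SYN flood can be modeled with a fixed-size backlog queue. The backlog size and addresses below are illustrative only; real backlog limits and retransmission behavior vary by operating system.

```python
# Toy model of SYN flood resource exhaustion. The server keeps a fixed-size
# queue of half-open connections (SYN received, ACK not yet received).
BACKLOG_SIZE = 128
pending = []   # half-open connections awaiting the client's ACK

def receive_syn(src_ip):
    if len(pending) >= BACKLOG_SIZE:
        return "refused"       # backlog full: new clients are turned away
    pending.append(src_ip)     # half-open entry; times out only much later
    return "syn-ack sent"

# Attacker floods with spoofed sources that never complete the handshake,
# so entries are only removed after a long timeout (never, in this model).
for i in range(BACKLOG_SIZE):
    receive_syn(f"198.51.100.{i % 254}")

print(receive_syn("10.0.0.5"))  # refused: a genuine client is denied service
```

The circuit-level flood guard described earlier counters exactly this: by watching the three-way handshake, it can drop half-open entries from spoofed sources before the backlog fills.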

Servers can suffer the effects of a DDoS even when there is no malicious intent. For instance, the Slashdot effect is a sudden, temporary surge in traffic to a website that occurs when another website or other source posts a story that refers visitors to the victim website. This effect is more noticeable on smaller websites, and the increase in traffic can slow a website’s response times or make it impossible to reach altogether.

21
Q

zombie

A

agent

22
Q

bots

A

DoS tools

23
Q

Distributed Reflection DoS (DRDoS) or amplification attack

A

A more powerful TCP SYN flood attack is a type of Distributed Reflection DoS (DRDoS) or amplification attack. In this attack, the adversary spoofs the victim’s IP address and attempts to open connections with multiple servers. Those servers direct their SYN/ACK responses to the victim server. This rapidly consumes the victim’s available bandwidth.

24
Q

Smurf attack

A

A similar type of amplification attack can be performed by exploiting other protocols. For example, in a Smurf attack, the adversary spoofs the victim’s IP address and pings the broadcast address of a third-party network (one with many hosts; referred to as the “amplifying network”). Each host directs its echo responses to the victim server.

25
Q

bogus DNS queries

A

The same sort of technique can be used to bombard a victim network with responses to bogus DNS queries. One of the advantages of this technique is that while the request is small, the response to a DNS query can be made to include a lot of information, so this is a very effective way of overwhelming the bandwidth of the victim network with much more limited resources on the attacker’s botnet.
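The leverage the attacker gains can be expressed as an amplification factor: response bytes generated per byte of spoofed request. The sizes below are illustrative round numbers, not measured values; real figures depend on the query type and server configuration.

```python
# Amplification factor of a reflection attack (illustrative sizes).
request_bytes = 64       # small spoofed DNS query
response_bytes = 3072    # large answer (e.g. a query returning many records)

amplification = response_bytes / request_bytes
print(f"amplification factor: {amplification:.0f}x")  # 48x

# At this factor, a botnet sending 10 Mb/s of spoofed queries directs
# roughly 480 Mb/s of response traffic at the victim.
print(f"reflected traffic: {10 * amplification:.0f} Mb/s")
```

This arithmetic is why the technique overwhelms a victim "with much more limited resources on the attacker's botnet": the reflecting servers, not the bots, generate most of the flood.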

26
Q

Network Time Protocol (NTP)

A

The Network Time Protocol (NTP) can be abused in a similar way. NTP helps servers on a network and on the Internet to keep the correct time. It is vital for many protocols and security mechanisms that servers and clients be synchronized. One NTP query (monlist) can be used to generate a response containing a list of the last 600 machines that the NTP server has contacted. As with the DNS amplification attack, this allows a short request to direct a long response at the victim network.

27
Q

blackhole

A

When a network is faced with a DDoS or similar flooding attack, an ISP can use either an ACL or a blackhole to drop packets for the affected IP address(es). A blackhole is an area of the network that cannot reach any other part of the network. The blackhole option is preferred, as evaluating each packet in a multi-gigabit stream against ACLs overwhelms the processing resources available. The blackhole also makes the attack less damaging to the ISP’s other customers. With both approaches, legitimate traffic is discarded along with the DDoS packets.

28
Q

sinkhole routing

A

Another option is to use sinkhole routing so that the traffic flooding a particular IP address is routed to a different network where it can be analyzed. Potentially, some legitimate traffic could be allowed through, but the real advantage is to identify the source of the attack and devise rules to filter it. The target can then use low TTL DNS records to change the IP address advertised for the service and try to allow legitimate traffic past the flood.

29
Q

load balancer

A

A load balancer distributes client requests across available server nodes in a farm or pool. Clients use the single name/IP address of the load balancer to connect to the servers in the farm. This provides for higher throughput or supports more connected users. A load balancer provides fault tolerance. If there are multiple servers available in a farm, all addressed by a single name/IP address via a load balancer, then if a single server fails, client requests can be routed to another server in the farm. You can use a load balancer in any situation where you have multiple servers providing the same function. Examples include web servers, front-end email servers, and web conferencing, A/V conferencing, or streaming media servers.

30
Q

There are two main types of load balancers:

A
  • Layer 4 load balancer—early instances of load balancers would base forwarding decisions on IP address and TCP/UDP port values (working at up to layer 4 in the OSI model). This type of load balancer is stateless; it cannot retain any information about user sessions.
  • Layer 7 load balancer (content switch)—as web applications have become more complex, modern load balancers need to be able to make forwarding decisions based on application-level data, such as a request for a particular URL or data types like video or audio streaming. This requires more complex logic, but the processing power of modern appliances is sufficient to deal with this.
31
Q

Most load balancers need to be able to provide some or all of the following features:

A
  • Configurable load—the ability to assign a specific server in the farm for certain types of traffic or a configurable proportion of the traffic.
  • TCP offload—the ability to group HTTP packets from a single client into a collection of packets assigned to a specific server.
  • SSL offload—when you implement SSL/TLS to provide for secure connections, this imposes a load on the web server (or other server). If the load balancer can handle the processing of authentication and encryption/decryption, this reduces the load on the servers in the farm.
  • Caching—as some information on the web servers may remain static, it is desirable for the load balancer to provide a caching mechanism to reduce load on those servers.
  • Prioritization—to filter and manage traffic based on its priority.
32
Q

Virtual IP (VIP) address (or addresses)

A

Each server node or instance needs its own IP address, but externally a load-balanced service is advertised using a Virtual IP (VIP) address (or addresses). There are different protocols available to handle virtual IP addresses and they differ in the ways that the VIP responds to ARP and ICMP, and in compatibility with services such as NAT and DNS. One of the most widely used protocols is the Common Address Redundancy Protocol (CARP). There is also Cisco’s proprietary Gateway Load Balancing Protocol (GLBP).

33
Q

scheduling algorithm

A

The scheduling algorithm is the code and metrics that determine which node is selected for processing each incoming request. The simplest type of scheduling is called round robin; this just means picking the next node. Other methods include picking the node with fewest connections or best response time. Each method can also be weighted, using administrator set preferences or dynamic load information or both.

The load balancer must also use some type of heartbeat or health check probe to verify whether each node is available and under load or not. Layer 4 load balancers can only make basic connectivity tests while layer 7 appliances can test the application’s state, as opposed to only verifying host availability.
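The two simplest scheduling methods might be sketched as follows. The node names and connection counts are made up, and a real scheduler would combine these metrics with administrator-set weights.

```python
import itertools

# nodes maps each server node to its current connection count (illustrative).
nodes = {"web1": 3, "web2": 1, "web3": 7}

# Round robin: just pick the next node in turn, regardless of load.
rr = itertools.cycle(nodes)

def round_robin():
    return next(rr)

# Fewest connections: pick the node with the lowest current load.
def fewest_connections():
    return min(nodes, key=nodes.get)

print(round_robin(), round_robin())  # web1 web2
print(fewest_connections())          # web2
```

Weighting either method means biasing the choice, e.g. multiplying each node's connection count by an administrator-set preference or by a dynamic response-time measurement before comparing.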

34
Q

round robin DNS (RRDNS)

A

Load balancing can be accomplished using software rather than dedicated hardware appliances. One example is round robin DNS (RRDNS): when a client enters a web server name in a browser, the DNS server responsible for resolving that name to an IP address returns one of several addresses, in turn, from a group configured for the purpose. This can be cost-effective, but load balancing appliances provide better fault tolerance and more efficient algorithms for distributing requests than RRDNS.

35
Q

Source IP or session affinity

A

When a client device has established a session with a particular node in the server farm, it may be necessary to continue to use that connection for the duration of the session. Source IP or session affinity is a layer 4 approach to handling user sessions. It means that when a client establishes a session, it becomes stuck to the node that first accepted the request. This can be accomplished by hashing the IP and port information along with other scheduling metrics. This hash uniquely identifies the session and will change if a node stops responding or a node weighting is changed. This is cost-effective in terms of performance but not sticky enough for some applications. An alternative method is to cache the client IP in memory (a stick table).

An application-layer load balancer can use persistence to keep a client connected to a session. Persistence typically works by setting a cookie, either on the node or injected by the load balancer.
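Hashing the client's IP and port to pick a node can be sketched like this. The node names are hypothetical, and a real load balancer would feed other scheduling metrics (weights, health) into the hash as well.

```python
import hashlib

NODES = ["web1", "web2", "web3"]

def pick_node(src_ip, src_port):
    # Hash the client's source IP:port; the same client therefore always
    # maps to the same node while the node list is unchanged.
    key = f"{src_ip}:{src_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return NODES[digest % len(NODES)]

a = pick_node("203.0.113.9", 51000)
b = pick_node("203.0.113.9", 51000)
print(a == b)  # True: the session sticks to one node
```

Note that removing or adding a node changes `len(NODES)` and so remaps most sessions, which matches the point above that the hash changes if a node stops responding or a weighting is altered; this is why some applications need the stickier cookie-based persistence instead.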

36
Q

clustering

A

Apart from the affinity and cookie persistence methods discussed earlier, load balancing can only provide for stateless fault tolerance, as by itself it cannot provide a mechanism for transferring the state of data. If you need fault tolerance of stateful data, you must implement a clustering technology, whereby the data residing on one node (or pool) is made available to another node (or pool) seamlessly and transparently in the event of a node failure. This allows servers in the cluster to communicate session information to one another so, for example, if a user logs in on one instance, the next session can start on another instance and the new server can access the cookies or other information used to establish the login.

37
Q

back-end

A

Where load balancing provides front-end distribution of client requests, clustering is used to provide fault tolerance for back-end applications. For example, if you wanted to provide a resilient online purchasing system based around SQL Server, you might install a clustering solution to support the actual SQL databases.

38
Q

Active/Active configurations

A

consist of n nodes, all of which are processing concurrently. This allows the administrator to use the maximum capacity from the available hardware while all nodes are functional. In the event of a failover (the term used to describe the situation where a node has failed), the workload of the failed node is immediately (and transparently) shifted onto the remaining node(s). At this time, the workload on the remaining nodes is higher and performance is degraded during failover, which is a significant disadvantage.

Some applications and services will not function in a clustered environment and some sub-components of cluster-aware applications cannot run on a cluster. You will need to be aware of these restrictions when planning the cluster implementation.

39
Q

intrusion detection system (IDS)

A

An intrusion detection system (IDS) is a means of using software tools to provide real-time analysis of either network traffic or system and application logs. IDS is similar to anti-virus software but protects against a broader range of threats. A network IDS (NIDS) is basically a packet sniffer (referred to as a sensor) with an analysis engine to identify malicious traffic and a console to allow configuration of the system.

The basic functionality of a NIDS is to provide passive detection; that is, to log intrusion incidents and to display an alert at the management interface or to email the administrator account. This type of passive sensor does not slow down traffic and is undetectable by the attacker (it does not have an IP address on the monitored network segment).

A NIDS will be able to identify and log hosts and applications, and detect attack signatures, password guessing attempts, port scans, worms, backdoor applications, malformed packets or sessions, and policy violations (ports or IP addresses that are not permitted, for instance). You can use analysis of the logs to tune firewall rulesets, remove or block suspect hosts and processes from the network, or deploy additional security controls to mitigate any threats you identify.

40
Q

The main disadvantages of NIDS are:

A
  • If an attack is detected, without an effective active response option there can be a significant delay before an administrator is able to put countermeasures in place.
  • Heavy traffic, such as a large number of sessions or high load, may overload the sensor or analysis engine, causing packets to pass through uninspected. A blinding attack is a DoS aimed at the IDS with the intention of generating more incidents than the system can handle. This attack would be run in parallel with the “real” attack.
  • Training and tuning are complex, resulting in high false positive and false negative rates, especially during the initial deployment.
  • Encrypted traffic cannot be analyzed, though often the setup of an encrypted session can be monitored to ensure that it is valid.
41
Q

There are three main options for connecting a sensor to the appropriate point in the network:

A
  • SPAN (switched port analyzer)/mirror port—this means that the sensor is attached to a specially configured port on the switch that receives copies of frames addressed to nominated access ports (or all the other ports). This method is not completely reliable. Frames with errors will not be mirrored and frames may be dropped under heavy load.
  • Passive test access point (TAP)—this is a box with ports for incoming and outgoing network cabling and an inductor or optical splitter that physically copies the signal from the cabling to a monitor port. There are types for copper and fiber optic cabling. Unlike a SPAN, no logic decisions are made so the monitor port receives every frame—corrupt or malformed or not—and the copying is unaffected by load.
  • Active TAP—this is a powered device that performs signal regeneration (again, there are copper and fiber variants), which may be necessary in some circumstances. Gigabit signaling over copper wire is too complex for a passive tap to monitor and some types of fiber links may be adversely affected by optical splitting. Because it performs an active function, the TAP becomes a point of failure for the links in the event of power loss. When deploying an active TAP, it is important to use a model with internal batteries or connect it to a UPS.

A TAP will usually output two streams to monitor a full-duplex link (one channel for upstream and one for downstream). Alternatively, there are aggregation TAPs, which rebuild the streams into a single channel, but these can drop frames under very heavy load.

42
Q

Network-Based Intrusion Prevention System (NIPS)

A

Compared to the passive logging of IDS, an IPS or Network-Based Intrusion Prevention System (NIPS) can provide an active response to any network threats that it matches. One typical preventive measure is to end the TCP session, sending a spoofed TCP reset packet to the attacking host. Another option is for the sensor to apply a temporary filter on the firewall to block the attacker’s IP address (shunning). Other advanced measures include throttling bandwidth to attacking hosts, applying complex firewall filters, and even modifying suspect packets to render them harmless. Finally, the appliance may be able to run a script or third-party program to perform some other action not supported by the IPS software itself.
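The shunning response described above amounts to generating a temporary firewall rule keyed to the attacker's address. The Python sketch below builds a hypothetical iptables command string; the `shun_rule` helper and the exact command format are invented for illustration, as a real appliance applies the block through its own firewall integration.

```python
from ipaddress import ip_address

def shun_rule(attacker_ip: str, duration: int = 300) -> str:
    """Build an illustrative iptables command to drop traffic from an
    attacking host for `duration` seconds (hypothetical helper)."""
    ip = ip_address(attacker_ip)  # raises ValueError on a malformed address
    return (f"iptables -I INPUT -s {ip} -j DROP "
            f"-m comment --comment 'shun for {duration}s'")

print(shun_rule("203.0.113.50"))
```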

Some IPS provide inline, wire-speed anti-virus scanning. Their rulesets can be configured to provide user content filtering, such as blocking URLs, applying keyword-sensitive blacklists or whitelists, or applying time-based access restrictions.

IPS appliances are positioned like firewalls at the border between two network zones. As with proxy servers, the appliances are “inline” with the network, meaning that all traffic passes through them (also making them a single point-of-failure if there is no fault tolerance mechanism). This means that they need to be able to cope with high bandwidths and process each packet very quickly to avoid slowing down the network.

43
Q

in-band

A

As well as considering the placement of the sensor, when configuring an IDS/IPS you need to consider how it will provide event reporting and alerting. The management channel could use the same network as the link being monitored (in-band). This is less secure because the alerts might be detected by an adversary and intercepted or blocked.

44
Q

out-of-band

A

An out-of-band link offers better security. This might be established using separate cabling infrastructure or using the same cabling and physical switches but a separate VLAN for the management channel. You may also be implementing a complex architecture where the feeds from multiple sensors are aggregated by a security information and event management (SIEM) server and backend database. This architecture should use dedicated network links for both security and performance (the link utilization is likely to be very high).

45
Q

host-based IDS (HIDS)

A

A host-based IDS (HIDS) captures information from a single host, such as a server, router, or firewall. Some organizations may configure HIDS on each client workstation. HIDS come in many different forms with different capabilities. The core ability is to capture and analyze log files, but more sophisticated systems can also monitor OS kernel files, monitor ports and network interfaces, and process data and logs generated by specific applications, such as HTTP or FTP.

Installing HIDS/HIPS is simply a case of choosing which hosts to protect, then installing and configuring the software. There will also normally be a reporting and management server to control the agent software on the hosts.

46
Q

Host-based Intrusion Prevention System (HIPS)

A

A Host-based Intrusion Prevention System (HIPS) with active response can act to preserve the system in its intended state. This means that the software can prevent system files from being modified or deleted, prevent services from being stopped, log off unauthorized users, and filter network traffic.

47
Q

The main advantage of HIDS/HIPS is that they can be much more application specific than NIDS. For example, HIDS/HIPS can analyze encrypted traffic (once it has been decrypted on the host) and it is easier to train the system to recognize normal traffic. The main disadvantages of HIDS/HIPS are:

A
  • The software is installed on the host and, therefore, detectable. This means that it is vulnerable to attack by malware.
  • The software also consumes CPU, memory, and disk resources on the host.

HIDS/HIPS software produces similar output to an anti-malware scanner. If the software detects a threat, it may just log the event or display an alert. The log should show you which process initiated the event and what resources on the host were affected. You can use the log to investigate whether the suspect process is authorized or should be removed from the host.

48
Q

analysis engine

A

In both network and host intrusion detection, the analysis engine is the component that scans and interprets the traffic captured by the sensor or agent with the purpose of identifying suspicious traffic. The analysis engine determines whether any given event should be classed as an incident (or violation of the security policy or standard). The analysis engine is programmed with a set of rules that it uses to drive its decision-making process. There are several methods of formulating the ruleset.

49
Q

Signature-based detection (or pattern-matching)

A

means that the engine is loaded with a database of attack patterns or signatures. If traffic matches a pattern, then the engine generates an incident.

The signatures and rules (often called plug-ins or feeds) powering intrusion detection need to be updated regularly to provide protection against the latest threat types. Commercial software requires a paid-for subscription to obtain the updates. It is important to ensure that the software is configured to update only from valid repositories, ideally using a secure connection method, such as HTTPS.
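Signature matching can be pictured as running each payload against a database of patterns. The sketch below uses two invented, heavily simplified regular expressions; real signature feeds are far more precise.

```python
import re

# Illustrative signature database: name -> pattern (not real attack signatures)
SIGNATURES = {
    "sql-injection": re.compile(r"(?i)union\s+select"),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def match_signatures(payload: str) -> list:
    """Return the names of any signatures the payload matches."""
    return [name for name, pat in SIGNATURES.items() if pat.search(payload)]

print(match_signatures("GET /page?id=1 UNION SELECT password FROM users"))
```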

50
Q

Behavioral-based detection (or statistical- or profile-based detection)

A

means that the engine is trained to recognize baseline “normal” traffic or events. Anything that deviates from this baseline (outside a defined level of tolerance) generates an incident. The idea is that the software will be able to identify “zero day” attacks (those for which the exploit has not been detected or published).
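The baseline idea can be sketched statistically: learn the mean and spread of a metric during normal operation, then flag observations outside a tolerance band. The traffic figures below are invented for the example.

```python
from statistics import mean, stdev

def is_anomalous(baseline, observation, tolerance=3.0):
    """Flag an observation more than `tolerance` standard deviations
    from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observation - mu) > tolerance * sigma

# e.g. connections per minute observed during normal operation
baseline = [100, 110, 95, 105, 102, 98, 107, 101]
print(is_anomalous(baseline, 104))   # within normal variation
print(is_anomalous(baseline, 400))   # far outside the baseline
```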

51
Q

heuristics

A

The analysis engine does not keep a record of everything that has happened and then try to match new traffic to a precise record of what has gone before. It uses heuristics (meaning to learn from experience) to generate a statistical model of what the baseline looks like. It may develop several profiles to model network use at different times of the day. This means that the system generates false positives and false negatives until it has had time to improve its statistical model of what is “normal.”

52
Q

Anomaly-based detection

A

Often behavioral- and anomaly-based detection are taken to mean the same thing (in the sense that the engine detects anomalous behavior). Anomaly-based detection can also be taken to mean specifically looking for irregularities in the use of protocols. For example, the engine may check packet headers or the exchange of packets in a session against RFC standards and generate an alert if they deviate from strict RFC compliance.
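A toy example of this kind of check: certain TCP flag combinations never occur in RFC-compliant traffic but are used by scanning tools, so an engine can flag them without any signature. The checks below are a simplified sketch, not an exhaustive compliance test.

```python
def flags_anomalous(flags: frozenset) -> bool:
    """Return True for TCP flag combinations that deviate from normal
    RFC 793 usage (null scan, SYN+FIN, SYN+RST)."""
    if not flags:
        return True                 # no flags set: a "null" scan
    if {"SYN", "FIN"} <= flags:
        return True                 # open and close in the same segment
    if {"SYN", "RST"} <= flags:
        return True                 # connect and abort together
    return False

print(flags_anomalous(frozenset({"SYN"})))          # normal connection open
print(flags_anomalous(frozenset({"SYN", "FIN"})))   # scanner fingerprinting
```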

53
Q

Analytics

A

the process of reviewing the events and incidents that trigger IDS/IPS. The aim is to ensure that only (or mostly) genuine incidents are being recorded, and conversely that incidents are not going unreported. A false positive is where legitimate behavior is identified as an incident. Conversely, a false negative is where malicious traffic is not identified. High volumes of false positives can blind the incident response team, which can also result in attacks going undetected. Consequently, IDS/IPS requires a high degree of tuning to work optimally.

54
Q

behavior-based monitoring

A

Most IDS/IPS use a combination of detection methods, but there are advantages and disadvantages to each. The two principal vulnerabilities of signature detection are that the protection is only as good as the last signature update and that no protection is provided against threats that cannot be matched in the pattern database. Another issue is that it is difficult to configure pattern matching that can detect attacks based on a complex series of communications.

These vulnerabilities are addressed by behavior-based monitoring or behavior-based detection, which can be effective at detecting previously unknown threats. Heuristic, profile-based detection is usually harder to set up and generates more false positives and false negatives than 1:1 pattern matching.

55
Q

anti-virus scanner

A

or intrusion prevention system works by identifying when processes or scripts are executed and intercepting (or hooking) the call to scan the code first. If the code matches a signature of known malware or exhibits malware-like behavior that matches a heuristic profile, the scanner will prevent execution and attempt to take the configured action on the host file (clean, quarantine, erase, and so on). An alert will be displayed to the user and the action will be logged (and also may generate an administrative alert). The malware will normally be tagged using a vendor proprietary string and possibly by a CME (Common Malware Enumeration) identifier. These identifiers can be used to research the symptoms of and methods used by the malware. This may help to confirm the system is fully remediated and to identify whether other systems have been infected. It is also important to trace the source of the infection and ensure that it is blocked to prevent repeat attacks and outbreaks.

56
Q

Unified threat management (UTM)

A

Unified threat management (UTM) refers to a system that centralizes various security controls—firewall, anti-malware, network intrusion prevention, spam filtering, content inspection, etc.—into a single appliance. In addition, UTM security appliances usually include a single console from which you can monitor and manage various defense settings.

UTM was created in response to several difficulties that administrators face in deploying discrete security systems; namely, managing several complex platforms as well as meeting the significant cost requirements. UTM systems help to simplify the security process by being tied to only one vendor and requiring only a single, streamlined application to function. This makes management of your organization’s network security easier, as you no longer need to be familiar with or know the quirks of each individual security implementation.

Nevertheless, UTM has its downsides. When defense is unified under a single system, this creates the potential for a single point of failure that could affect an entire network. Distinct security systems, if they fail, might only compromise that particular avenue of attack. Additionally, UTM systems can struggle with latency issues if they are subject to too much network activity.

57
Q

When installing software from other sources, a file integrity check can be performed manually using tools such as the following:

A
  • certutil -hashfile File Algorithm—this is a built-in Windows command, where File is the input and Algorithm is one of MD5, SHA1, SHA256, or SHA512. You have to compare the value obtained to the published fingerprint manually (or by using a shell script).
  • File Checksum Integrity Verifier (fciv)—this is a downloadable Windows utility that can be used as an alternative to certutil. You can use the -v switch to compare the target with the value stored in a file, add thumbprints to an XML database, and check to see if the hash of a target file matches one stored in the database.
  • md5sum | sha1sum | sha256sum | sha512sum—Linux tools to calculate the fingerprint of a file supplied as the argument. You can also use the -c switch to compare the input file with a source file containing the pre-computed hash.
  • gpg—if a Linux source file has been signed, you need to use the publisher’s public key and the gpg utility to verify the signature.
58
Q

File integrity monitoring (FIM)

A

There is also the case that files already installed could have been compromised. File integrity monitoring (FIM) software audits key system files to make sure they match the authorized versions. In Windows, the Windows File Protection service runs automatically and the System File Checker (sfc) tool can be used manually to verify OS system files. Tripwire® (https://www.tripwire.com) and OSSEC (http://www.ossec.net) are examples of multi-platform tools with options to protect a wider range of applications. FIM functionality is built into HIDS/HIPS suites too.
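The core FIM technique, recording authorized hashes and re-checking them later, can be sketched as follows. For illustration the file contents are passed in as bytes rather than read from disk; a real tool walks the file system.

```python
import hashlib

def record_baseline(files: dict) -> dict:
    """Record the authorized SHA-256 hash of each monitored file.
    `files` maps path -> content bytes (stand-in for reading from disk)."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def detect_changes(files: dict, known_good: dict) -> list:
    """Report files whose current hash no longer matches the baseline."""
    return [path for path, data in files.items()
            if hashlib.sha256(data).hexdigest() != known_good.get(path)]

good = record_baseline({"/bin/login": b"original binary"})
print(detect_changes({"/bin/login": b"trojaned binary"}, good))
```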

59
Q

Malware

A

Malware is often able to evade detection by automated scanners. Analysis of SIEM and intrusion detection logs might reveal suspicious network connections, or a user may observe unexplained activity or behavior on a host. When you identify symptoms such as these, but the AV scanner or UTM appliance does not report an infection, you will need to analyze the host for malware using advanced tools.

60
Q

Sysinternals

A

There is a plethora of advanced analysis and detection utilities, but the starting point for most technicians is Sysinternals (https://docs.microsoft.com/sysinternals). Sysinternals is a suite of tools designed to assist with troubleshooting issues with Windows.

When hunting for a malicious process using a tool such as Process Explorer (part of Sysinternals), you need to be able to filter out the legitimate activity generated by normal operation of the computer and look for the signs that could identify a process as suspicious. APT-type malware is typically introduced by a dropper application. To infect the system, the malware author must be able to run the dropper with appropriate privileges, either by tricking the user into running it or by exploiting a vulnerability to execute code without authorization. The malware will then try to deliver a payload covertly, usually by performing code injection against a valid process. The advantage of compromising a valid process is that the code runs with the permissions and identity of the host process, which can allow it to pass through firewall ACLs.

61
Q

Given the potential exploit techniques, to locate a malicious process you may be looking for a process name that you do not recognize or for a process with a valid name that is suspicious in other respects:

A
  • Look for unrecognized process names, especially names that mimic a legitimate system process (scvhost, for instance, instead of svchost) or randomly generated names. You can use the Search Online function to look up known processes.
  • Look for processes with no icon, version information, description, or company name and for processes that are unsigned (especially a process with a company name like Microsoft Corporation that is also unsigned).
  • Examine processes hosted by the service host executable (svchost.exe) and other Windows utilities (explorer.exe, notepad.exe, taskmgr.exe, iexplore.exe, and so on). Look closely at processes that do not have a valid parent/child relationship with the principal Windows processes.
  • When you find a suspect process, examine how it is interacting with the registry, the file system, and the network.
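The look-alike check in the first bullet can be approximated in code: compare a process name against a list of known system processes and flag near misses. The process list and similarity threshold below are illustrative.

```python
from difflib import SequenceMatcher

# Well-known Windows process names that malware commonly imitates
KNOWN = ["svchost.exe", "explorer.exe", "lsass.exe", "csrss.exe"]

def suspicious_lookalike(name, threshold=0.8):
    """Return the legitimate process a name appears to mimic
    (e.g. scvhost.exe vs svchost.exe), or None."""
    name = name.lower()
    if name in KNOWN:
        return None
    for legit in KNOWN:
        if SequenceMatcher(None, name, legit).ratio() >= threshold:
            return legit
    return None

print(suspicious_lookalike("scvhost.exe"))  # mimics svchost.exe
print(suspicious_lookalike("svchost.exe"))  # legitimate name
```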
62
Q

data exfiltration

A

In a workplace where mobile devices with huge storage capacity proliferate and high bandwidth network links are readily available, attempting to prevent the loss of data by controlling the types of storage devices allowed to connect to PCs and networks can be impractical. Unauthorized copying or retrieval of data from a system is referred to as data exfiltration. Data exfiltration attacks are one of the primary means for attackers to retrieve valuable data, such as Personally Identifiable Information (PII) or payment information, often destined for later sale on the black market.

63
Q

Data exfiltration can take place via a wide variety of mechanisms, including:

A
  • Copying the data to removable media or other device with storage, such as USB drive, the memory card in a digital camera, or a smartphone.
  • Using a network protocol, such as HTTP, FTP, SSH, email, or Instant Messaging (IM)/chat. A sophisticated adversary might use a Remote Access Trojan (RAT) to perform transfer of data over a non-standard network port or a packet crafter to transfer data over a standard port in a non-standard way. The adversary may also use encryption to disguise the data being exfiltrated.
  • By communicating it orally over a telephone, cell phone, or Voice over IP (VoIP) network. Cell phone text messaging is another possibility.
  • Using a picture or video of the data—if text information is converted to an image format it is very difficult for a computer-based detection system to identify the original information from the image data.
64
Q

While some of these mechanisms are simple to mitigate through the use of security tools, others may be much less easily defeated. You can protect data using mechanisms and security controls that you have examined previously:

A
  • Ensure that all sensitive data is encrypted at rest. If the data is transferred outside the network, it will be mostly useless to the attacker without the decryption key.
  • Create and maintain offsite backups of data that may be targeted for destruction or ransom.
  • Ensure that systems storing or transmitting sensitive data are implementing access controls. Check to see if access control mechanisms are granting excessive privileges to certain accounts.
  • Restrict the types of network channels that attackers can use to transfer data from the network to the outside. Disconnect systems storing archived data from the network.
  • Train users about document confidentiality and the use of encryption to store and transmit data securely. This should also be backed up by HR and auditing policies that ensure staff are trustworthy.

Even if you apply these policies and controls diligently, there are still risks to data from insider threats and Advanced Persistent Threat (APT) malware. Consequently, a class of security control software has been developed to apply access policies directly to data, rather than just the host or network on which data is located.

65
Q

Data loss prevention (DLP) products

A

scan content in structured formats, such as a database with a formal access control model, or unstructured formats, such as email or word processing documents. These products use some sort of dictionary database or algorithm (regular expression matching) to identify confidential data. The transfer of content to removable media, such as USB devices, or by email, IM, or even social media, can then be blocked if it does not conform to a predefined policy.
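A minimal sketch of the regular-expression matching such products use. The two patterns are deliberately simplified stand-ins; production DLP dictionaries are far more robust (for example, validating card numbers with the Luhn check).

```python
import re

# Illustrative patterns for confidential data (simplified, not production-grade)
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit-card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def policy_violations(text: str) -> list:
    """Return the confidential data classes found in outbound content."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(policy_violations("Customer SSN 123-45-6789 attached"))
```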

66
Q

The transfer of content to removable media, such as USB devices, or by email, IM, or even social media, can then be blocked if it does not conform to a predefined policy. Such solutions will usually consist of the following components:

A
  • Policy server—to configure confidentiality rules and policies, log incidents, and compile reports.
  • Endpoint agents—to enforce policy on client computers, even when they are not connected to the network.
  • Network agents—to scan communications at network borders and interface with web and messaging servers to enforce policy.
67
Q

Cloud-based DLP

A

Cloud-based DLP extends the protection mechanisms to cloud storage services, using either a proxy to mediate access or the cloud service provider’s API to perform scanning and policy enforcement. As an example, SkyHigh Networks’ cloud-based DLP (https://www.skyhighnetworks.com/cloud-data-loss-prevention) can integrate with Symantec’s on-premises DLP (https://www.symantec.com/products/data-loss-prevention) to apply the same policies across different infrastructures.

68
Q

Remediation is the action the DLP software takes when it detects a policy violation. The following remediation mechanisms are typical:

A
  • Alert only—the copying is allowed, but the management system records an incident and may alert an administrator.
  • Block—the user is prevented from copying the original file but retains access to it. The user may or may not be alerted to the policy violation, but it will be logged as an incident by the management engine.
  • Quarantine—access to the original file is denied to the user (or possibly any user). This might be accomplished by encrypting the file in place or by moving it to a quarantine area in the file system.
  • Tombstone—the original file is quarantined and replaced with one describing the policy violation and how the user can release it again.

When it is configured to protect a communications channel such as email, DLP remediation might take place using client-side or server-side mechanisms. For example, some DLP solutions prevent the actual attaching of files to the email before it is sent. Others might scan the email attachments and message contents, and then strip out certain data or stop the email from reaching its destination.

69
Q

Information Rights Management (IRM)

A

As another example of data protection and information management solutions, Microsoft® provides an Information Rights Management (IRM) feature in their Office productivity suite, SharePoint document collaboration services, and Exchange messaging server. IRM works with the Active Directory Rights Management Services (RMS) or the cloud-based Azure Information Protection.

70
Q

These rights management technologies provide administrators with the following functionality:

A
  • Assign file permissions for different document roles, such as author, editor, or reviewer.
  • Restrict printing and forwarding of documents, even when sent as file attachments.
  • Restrict printing and forwarding of email messages.
71
Q

Logs

A

one of the most valuable sources of security information. A log can record both authorized and unauthorized uses of a resource or privilege. Logs function both as an audit trail of actions and (if monitored regularly) provide a warning of intrusion attempts.

Each log can be assigned a category to indicate its severity. For example, in Windows, system and application events are defined as informational, warning, or critical, while audit events are categorized as success or fail. This classification is one way to spot anomalies within logged events more easily and prioritize incidents for troubleshooting.

72
Q

Log review

A

a critical part of security assurance. Only referring to the logs following a major incident is missing the opportunity to identify threats and vulnerabilities early and to respond proactively. Software designed to assist with security logging and alerting is often described as security information and event management (SIEM). The core function of a SIEM tool is to aggregate logs from multiple sources. In addition to logs from Windows and Linux-based hosts, this could include switches, routers, firewalls, IDS sensors, vulnerability scanners, malware scanners, Data Loss Prevention (DLP) systems, and databases.

73
Q

correlation

A

The second critical function of SIEM (and the principal factor distinguishing it from basic log management) is that of correlation. This means that the SIEM software can link individual events or data points (observables) into a meaningful indicator of risk, or Indicator of Compromise (IOC). Correlation can then be used to drive an alerting system. Finally, SIEM can provide a long-term retention function and be used to demonstrate regulatory compliance.

74
Q

first task for SIEM

A

to aggregate data outputs from multiple sources. This is obviously a complex process if the sources use different formats for data output. Some tools are oriented toward using eXtensible Markup Language (XML)-formatted output. This provides a self-describing file format that can be imported more easily. Most data sources are vendor-specific, however, so SIEM solutions need a way of standardizing the information from these different sources.

75
Q

SIEM software features

A

collectors or connectors to store and interpret (or parse) the logs from different types of systems (host, firewall, IDS sensor, and so on), and to account for differences between vendor implementations. A collector would usually be implemented as plug-in code written for the SIEM and would scan and parse each event as it was submitted to the SIEM over the network. A collector might also be implemented as a software agent running on the device. The agent would parse the logs generated by the device and establish the network connection back to the SIEM. Usually, parsing will be accomplished using regular expressions tailored to each log file format to identify attributes and content that can be mapped to standard fields in the SIEM’s reporting and analysis tools. The SIEM system might also be able to deploy its own sensors to collect network traffic.
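As an example of the parsing step, a collector for failed SSH logins might map a syslog-style line to standard fields with a regular expression. The log format and field names here are illustrative.

```python
import re

# Regex tailored to one log format (a simplified syslog-style sshd line)
SSH_FAIL = re.compile(
    r"(?P<timestamp>\w{3} +\d+ [\d:]+) (?P<host>\S+) sshd\[\d+\]: "
    r"Failed password for (?P<user>\S+) from (?P<src_ip>[\d.]+)"
)

def parse(line):
    """Map a raw log line to the SIEM's standard fields, or None."""
    m = SSH_FAIL.search(line)
    return m.groupdict() if m else None

event = parse("Mar  3 10:15:02 web01 sshd[4721]: "
              "Failed password for root from 203.0.113.9 port 53122 ssh2")
print(event)
```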

76
Q

correlation engine

A

The sensors and collectors gathering data can be separate from the main SIEM server hosting the correlation engine. On enterprise networks, this data is likely to be stored on a storage area network (SAN), rather than directly on the SIEM server, as local storage is unlikely to be able to cope with the volume of data that will be collected.

77
Q

Event log

A

records things that occur within an operating system (the System event log in Windows, for instance) or a software application (Windows’ Application log). These logs are used to diagnose errors and performance problems.

78
Q

Audit log

A

records the use of system privileges, such as creating a user account or modifying a file. Security logging needs to be configured carefully, as over-logging can reduce the effectiveness of auditing by obscuring genuinely important events with thousands of routine notifications and consuming disk resources on the server.

79
Q

Security log

A

this is another way of describing an audit log. The audit log in Windows Event Viewer is called the Security log.

80
Q

Access log

A

server applications such as Apache can log each connection or request for a resource. This log is typically called the access log.

81
Q

baseline

A

A baseline establishes (in security terms) the expected pattern of operation for a server or network. As well as baselining the server configuration, you can also take a baseline performance measurement. Significant variation from the baseline could be an indicator of attack or other security breach. Remember that server usage will change during the day and there may be known, expected events that cause utilization to go up (scanning for viruses or running Windows Update, for instance). Your baseline should identify typical usage patterns so that it is easier to spot anything genuinely out of the ordinary. Most operating systems provide some tools for this process, and most server vendors ship equipment with their own monitoring software, or you can use third-party tools. Remember that changes to the system require a new baseline to be taken.

82
Q

Thresholds

A

Thresholds are points of reduced or poor performance or change in configuration (compared to the baseline) that generate an administrative alert. Examples include low disk space; high memory, CPU, or network utilization; server chassis intrusion; failed logins; and so on. Setting thresholds is a matter of balance. On the one hand, you do not want performance to deteriorate to the point that it affects user activity; on the other, you do not want to be overwhelmed by performance alerts.
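Threshold checking reduces to comparing sampled counters against configured limits. A sketch with invented counter names and limits:

```python
# Illustrative thresholds compared against a performance baseline
THRESHOLDS = {
    "free_disk_pct": ("below", 10),
    "cpu_pct": ("above", 90),
    "failed_logins_per_hour": ("above", 25),
}

def check_thresholds(sample: dict) -> list:
    """Return the counters in `sample` that breach their threshold."""
    alerts = []
    for counter, (direction, limit) in THRESHOLDS.items():
        value = sample.get(counter)
        if value is None:
            continue
        if (direction == "above" and value > limit) or \
           (direction == "below" and value < limit):
            alerts.append(counter)
    return alerts

print(check_thresholds({"free_disk_pct": 4, "cpu_pct": 35,
                        "failed_logins_per_hour": 60}))
```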

83
Q

Some of the key performance counters to watch for in terms of detecting security-related intrusions or attacks are:

A
  • Free disk space—rapid decreases in available disk space could be caused by malware or illegitimate use of a server (as a peer-to-peer file sharing host, for instance).
  • High CPU or network utilization—this could have many causes but could indicate the presence of a worm, Trojan, or peer-to-peer file sharing software.
  • Memory leak—a process that takes memory without subsequently freeing it up could be a legitimate but faulty application or could be a worm or other type of malware. To detect a memory leak, look for decreasing Available Bytes and increasing Committed Bytes.
  • Page file usage—high page file utilization could be caused by insufficient physical memory but otherwise could indicate malware.
  • Account activity—any unusual activity in the areas of account creation, allocation of rights, logon attempts, and so on might be suspicious.
  • Out-of-hours utilization—if you can discount scheduled activities, such as backup or virus scanning, any sort of high utilization when employees are not working is suspicious.
84
Q

automated alert or alarm

A

If a threshold is exceeded (a trigger), some sort of automated alert or alarm notification must take place. A low priority alert may simply be recorded in a log.

85
Q

logs and events anomalies

A

A high priority alarm might make some sort of active notification, such as emailing a system administrator or triggering a physical alarm signal. This allows administrators to identify and troubleshoot serious logs and events anomalies promptly. All alerting systems suffer from the problems of false positives and false negatives. False positives overwhelm resources while false negatives mean that security administrators are exposed to threats without being aware of them. This means that the rules used to trigger alerting must be carefully drafted and tuned to avoid either over-alerting or under-reporting.

86
Q

log analysis

A

One of the features of log analysis and reporting software should be to identify trends. It is difficult to spot a trend by examining each event in a log file. Instead, you need software to chart the incidence of particular types of events and show how the number or frequency of those events changes over time. Examples could include:

  • Increasing amounts of malware activity.
  • Failure of hosts to obtain security patches.
  • Increasing bandwidth usage/reducing performance.

Analyzing trends can help to further tune the alerting ruleset. An alerting ruleset could be based on identifiers found in single events or on a sequence or pattern of events.
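The trend-spotting idea above, counting events of one type per time bucket rather than reading entries one by one, can be sketched as follows. The event format (ISO timestamp plus type string) and function name are assumptions for the example:

```python
from collections import Counter
from datetime import datetime

def daily_event_counts(events, event_type):
    """Bucket matching log events by calendar day, so a rising trend
    (e.g. increasing malware detections) stands out in a way that
    reading individual entries would hide."""
    counts = Counter()
    for timestamp, etype in events:
        if etype == event_type:
            day = datetime.fromisoformat(timestamp).date()
            counts[day] += 1
    return dict(sorted(counts.items()))
```

Charting the returned day-to-count mapping over weeks is what reveals the trends listed above, and the same per-bucket counts can feed back into the alerting ruleset.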

87
Q

Write Once, Read Many (WORM) media

A

For computer logs to be accepted as an audit trail, they must be shown to be tamper-proof (or tamper-evident). It is particularly important to secure logs against tampering by rogue administrative accounts, as this would be a means for an insider threat to cover his or her tracks. Log files should be writable only by system processes or by secure accounts that are separate from other administrative accounts. Log files should be configured to be "append only" so that existing entries cannot be modified. Another option is for the log to be written to a remote server over a secure communications link. Alternatively, log files could be written to Write Once, Read Many (WORM) media. WORM technology used to mean optical drives, such as CD-R and DVD-R. There are now magnetic WORM drives and RAID arrays developed for secure logging solutions by companies such as EMC.
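The "append only" configuration mentioned above can be illustrated at the file-descriptor level. This is a minimal sketch: opening with `O_APPEND` forces every write to the end of the file, so existing entries cannot be overwritten through this descriptor. The function name and permission mode are illustrative, and full tamper-resistance still requires OS-level ACLs or WORM media as the card describes:

```python
import os

def append_log_entry(path, entry):
    """Append one entry to a log opened append-only.

    O_APPEND makes the kernel position every write at end-of-file,
    preventing accidental or casual overwrites of earlier entries.
    Mode 0o640: owner read/write, group read, no world access.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o640)
    try:
        os.write(fd, (entry.rstrip("\n") + "\n").encode())
    finally:
        os.close(fd)
```

On Linux, the filesystem-level equivalent is the append-only attribute (`chattr +a`), which also blocks truncation by ordinary writers.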

88
Q

A SIEM will assist log maintenance with the following functions:

A

• Time synchronization—logs may be collected from appliances in different geographic locations and, consequently, may be configured with different time zones. This can cause problems when correlating events and analyzing logs. A SIEM may be able to normalize events to the same time zone.

Note: Offsetting the time zone to provide consistent reporting is one thing, but the appliances across the network must be synchronized to the same time in the first place. This is usually achieved using a Network Time Protocol (NTP) server.

• Event deduplication—some errors may cause hundreds or thousands of identical error messages to spawn, temporarily blinding the reporting mechanisms of the SIEM system. Event deduplication means that this type of event storm is identified as a single event.
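Both SIEM functions on this card, time-zone normalization and event deduplication, can be sketched together. Assuming events arrive as offset-aware ISO timestamps plus a message string (the format and function name are illustrative):

```python
from datetime import datetime, timezone

def normalize_and_dedupe(events):
    """Normalize per-appliance timestamps to UTC and collapse identical
    messages into a single event with a repeat count, so an event storm
    of thousands of duplicates reports as one event."""
    merged = {}
    for ts, message in events:
        utc = datetime.fromisoformat(ts).astimezone(timezone.utc)
        if message in merged:
            merged[message]["count"] += 1
        else:
            merged[message] = {"first_seen": utc, "count": 1}
    return merged
```

With timestamps normalized to UTC, two appliances in different time zones reporting the same fault correlate correctly; without normalization they would appear hours apart.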

89
Q

Follow these guidelines when configuring network security technologies:

A
  • Familiarize yourself with the common devices that comprise a network, as well as the specific security concerns for each device.
  • Incorporate security gateways in the network to better control the state of traffic that enters and leaves the private network.
  • Implement network scanning technology, such as protocol and packet analyzers, to stay up-to-date on the state of traffic in your network.
  • Implement network intrusion detection systems to help you identify unwanted network behavior.
  • Be aware of the risks of using an active intrusion prevention device, especially false positives.
  • Consider implementing DLP solutions to prevent the unwanted loss or leakage of sensitive data.
  • Consider using a UTM to streamline the management of network security devices.
  • Be aware of the risks involved in UTM, especially as it may become a single point of failure.
  • Consider incorporating SIEM technology in the organization to aggregate and correlate network event data.