Sy06 Exam Braindumps 251-300 Flashcards

1
Q

Which of the following environments utilizes dummy data and is MOST likely to be installed locally on a system that allows code to be assessed directly and modified easily with each build?

a. Production
b. Test
c. Staging
d. Development

A

d. Development

Explanation:

The development environment is where software developers write and modify code, typically hosted locally on individual developer systems or on a dedicated development server.
In this environment, dummy data (synthetic or placeholder data) is commonly used for testing and debugging purposes.
Developers have the flexibility to assess and modify code easily during the development process, often using version control systems to manage changes.
Unlike production, test, and staging environments, the development environment is less formal and is focused on iterative development and debugging, making it suitable for direct code assessment and frequent modifications.
2
Q

An analyst receives multiple alerts for beaconing activity for a host on the network. After analyzing the activity, the analyst observes the following activity:

-A user enters comptia.org into a web browser.
-The website that appears is not the comptia.org site.
-The website is a malicious site from the attacker.
-Users in a different office are not having this issue.

Which of the following types of attacks was observed?

a. On-path attack
b. DNS poisoning
c. Locator (URL) redirection
d. Domain hijacking

A

(Braindump: c. Locator (URL) redirection)
The braindump's reasoning: DNS poisoning would affect all users in the office, so C is the only correct answer here.

Domain Name System (DNS) poisoning happens when fake information is entered into the cache of a domain name server, resulting in DNS queries producing an incorrect reply and sending users to the wrong website. DNS poisoning also goes by the terms “DNS spoofing” and “DNS cache poisoning.”

(Community : B 79%, C 21%)

ChatGPT:
B. DNS poisoning

Here’s why:

DNS poisoning (also known as DNS spoofing) occurs when a malicious actor alters DNS records to redirect traffic from a legitimate site to a malicious one. In this case, when the user enters "comptia.org" into the web browser, they are redirected to a malicious site instead of the legitimate CompTIA website. This redirection happens because the DNS resolution process has been compromised, resulting in the wrong IP address being returned for the legitimate domain.

Let’s briefly review the other options:

On-path attack (formerly known as man-in-the-middle attack): This involves intercepting and potentially altering communication between two parties. While it could result in redirection to a malicious site, it typically involves active interception rather than altering DNS records.

Locator (URL) redirection: This involves changing the URL or using a URL shortener to redirect users to a different site. This is usually done at the URL level rather than manipulating DNS records.

Domain hijacking: This involves taking control of the entire domain registration, allowing the attacker to control DNS settings and web content for that domain. This scenario specifically indicates that only some users (in a different office) are experiencing the issue, which suggests a localized DNS poisoning rather than full domain control.

Therefore, the most likely attack observed is DNS poisoning.

3
Q

Which of the following in the incident response process is the BEST approach to improve the speed of the identification phase?

a. Activate verbose logging in all critical assets.
b. Tune monitoring in order to reduce false positive rates.
c. Redirect all events to multiple syslog servers.
d. Increase the number of sensors present on the environment.

A

b. Tune monitoring in order to reduce false positive rates.

Explanation:

Tuning monitoring to reduce false positive rates involves configuring monitoring systems to more accurately detect and alert on genuine security incidents while minimizing false alarms.
False positives can slow down the identification phase by diverting attention and resources from genuine incidents.
By fine-tuning monitoring rules and thresholds, security analysts can focus on relevant alerts, leading to quicker identification of potential security incidents.
This approach is generally more effective than simply increasing logging verbosity, redirecting events to multiple syslog servers, or adding more sensors, as those actions may increase data volume without necessarily improving the relevance or accuracy of alerts.
4
Q

A security administrator is analyzing the corporate wireless network. The network only has two access points running on channels 1 and 11. While using airodump-ng, the administrator notices other access points are running with the same corporate ESSID on all available channels and with the same BSSID of one of the legitimate access points. Which of the following attacks is happening on the corporate network?

a. On-path
b. Evil twin
c. Jamming
d. Rogue access point
e. Disassociation

A

b. Evil Twin attack.

Here’s why:

Evil Twin attack: In this type of attack, a malicious actor sets up a rogue wireless access point (AP) with the same SSID (network name) and often the same BSSID (MAC address) as a legitimate AP in the vicinity. The goal is to trick users and devices into connecting to the rogue AP instead of the legitimate one. Once connected, the attacker can intercept traffic, conduct man-in-the-middle attacks, or lure users into providing sensitive information.

In your case:

The presence of multiple access points with the same ESSID (corporate network name) and the same BSSID as one of the legitimate access points suggests that someone has set up rogue APs to mimic the corporate network.
These rogue APs are broadcasting on all available channels, not just the corporate channels 1 and 11, to attract devices across the wireless spectrum.
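The detection logic the administrator applied can be sketched in code. A minimal, hypothetical example (the BSSIDs, channels, and the ESSID `CorpWiFi` are all made up, not real capture output) that flags corporate-ESSID beacons not matching a known-good BSSID/channel pair:

```python
# Hypothetical sketch: flag evil-twin indicators in airodump-ng-style rows.
# The BSSIDs, channels, and ESSID below are illustrative, not real data.

LEGIT = {"aa:bb:cc:dd:ee:01": 1, "aa:bb:cc:dd:ee:02": 11}  # BSSID -> channel

def find_evil_twins(scan_rows, corporate_essid, legit=LEGIT):
    """Return (bssid, channel) rows advertising the corporate ESSID that do
    not match a known-good BSSID/channel pair."""
    suspects = []
    for bssid, channel, essid in scan_rows:
        if essid == corporate_essid and legit.get(bssid) != channel:
            suspects.append((bssid, channel))
    return suspects

rows = [
    ("aa:bb:cc:dd:ee:01", 1, "CorpWiFi"),   # legitimate AP on channel 1
    ("aa:bb:cc:dd:ee:01", 6, "CorpWiFi"),   # same BSSID, wrong channel: twin
    ("aa:bb:cc:dd:ee:02", 11, "CorpWiFi"),  # legitimate AP on channel 11
    ("ff:ee:dd:cc:bb:aa", 3, "CorpWiFi"),   # unknown BSSID: twin
]
print(find_evil_twins(rows, "CorpWiFi"))
```

A cloned BSSID on an unexpected channel is a strong twin indicator precisely because a single physical radio normally advertises one BSSID on one channel.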
5
Q

When implementing automation with IoT devices, which of the following should be considered FIRST to keep the network secure?

a. Z-Wave compatibility
b. Network range
c. Zigbee configuration
d. Communication protocols

A

d. Communication protocols

Here’s why:

IoT devices communicate using various protocols such as Wi-Fi, Bluetooth, Zigbee, Z-Wave, and others. The choice of communication protocol significantly impacts the security and functionality of IoT deployments:

Security Features: Different protocols offer varying levels of security features. For example, Zigbee and Z-Wave are designed with mesh networking capabilities and built-in encryption, which can enhance security compared to protocols like Bluetooth, which may require additional security measures.

Interoperability: Ensuring that IoT devices can communicate securely and effectively with each other and with central management systems often depends on selecting compatible communication protocols.

Network Segmentation: Understanding how these protocols operate and interact with other network components is crucial for proper network segmentation and isolation of IoT devices from critical network resources.

Considering communication protocols first allows you to establish a secure foundation for IoT deployments, addressing encryption, authentication, and interoperability requirements from the outset. Therefore, the correct answer is d. Communication protocols.

6
Q

An organization is concerned that its hosted web servers are not running the most updated version of the software. Which of the following would work BEST to help identify potential vulnerabilities?

a. hping3 -S comptia-org -p 80
b. nc -l -v comptia.org -p 80
c. nmap comptia.org -p 80 -sV
d. nslookup -port=80 comptia.org

A

c. nmap comptia.org -p 80 -sV.

Here’s why:

nmap is a versatile network scanning tool that can scan hosts for open ports and services.
-p 80 specifies that the scan should target port 80, which is commonly used for HTTP traffic.
-sV enables version detection, which allows nmap to determine the versions of services running on open ports.

By running this command (nmap comptia.org -p 80 -sV), the organization can discover the versions of software running on the web servers hosted at comptia.org. This information is crucial for identifying whether the software versions are up-to-date or if there are known vulnerabilities associated with the detected versions.

Therefore, c. nmap comptia.org -p 80 -sV is the most appropriate choice to help identify potential vulnerabilities related to outdated software versions on the organization’s hosted web servers.
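Conceptually, `-sV` works by grabbing service banners and parsing out version strings. A toy sketch of that parsing step (the HTTP response text is fabricated; real nmap matches against a large probe and signature database, not just the `Server:` header):

```python
# Toy sketch of the idea behind nmap -sV: read a service banner and pull out
# the advertised version string. The response below is made up.

def server_header(raw_response):
    """Extract the Server: header from a raw HTTP response."""
    for line in raw_response.split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

banner = (
    "HTTP/1.1 200 OK\r\n"
    "Server: Apache/2.4.41 (Ubuntu)\r\n"
    "Content-Type: text/html\r\n\r\n"
)
print(server_header(banner))  # Apache/2.4.41 (Ubuntu)
```

A version string like this can then be checked against vulnerability databases for known CVEs affecting that release.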

7
Q

A news article states hackers have been selling access to IoT camera feeds. Which of the following is the MOST likely reason for this issue?

a. Outdated software
b. Weak credentials
c. Lack of encryption
d. Backdoors

A

b. Weak credentials.

Here’s why:

IoT devices, including cameras, are often deployed with default or weak credentials (such as username/password combinations like admin/admin or no password at all). These devices are frequently targeted by attackers who scan the internet for devices with known vulnerabilities. Once they find devices with weak credentials, hackers can gain unauthorized access to them.

Selling access to IoT camera feeds implies that hackers have compromised these devices and are exploiting them for financial gain or malicious purposes. Weak credentials make it easy for attackers to brute-force their way into these devices or exploit default credentials that haven’t been changed by users.

While other factors like a. Outdated software, c. Lack of encryption, and d. Backdoors can also contribute to IoT vulnerabilities, weak credentials remain one of the most exploited weaknesses in IoT security, making it the most likely reason in this scenario. Strengthening credentials by using strong, unique passwords and implementing multi-factor authentication (MFA) can significantly mitigate this risk.
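The weakness is easy to illustrate: attackers simply try a short list of factory defaults against exposed devices. A minimal sketch (the credential list here is a tiny illustrative sample; real scanners use much larger wordlists):

```python
# Illustrative check of one credential pair against well-known IoT defaults.
# The list below is a tiny sample for demonstration only.

DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"),
                 ("admin", "1234"), ("root", "")}

def is_default(username, password):
    return (username, password) in DEFAULT_CREDS

print(is_default("admin", "admin"))        # True: factory default
print(is_default("admin", "S7r0ng!pass"))  # False: unique password
```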

8
Q

A company wants to build a new website to sell products online. The website will host a storefront application that will allow visitors to add products to a shopping cart and pay for the products using a credit card. Which of the following protocols would be the MOST secure to implement?

a. SSL
b. SFTP
c. SNMP
d. TLS

A

d. TLS (Transport Layer Security).

Here’s why TLS is the best choice:

Encryption: TLS provides encryption of data in transit between the user's browser and the web server. This ensures that sensitive information such as credit card details cannot be intercepted or read by unauthorized parties.

Data Integrity: TLS ensures data integrity, meaning that the data cannot be altered or tampered with while it is being transmitted over the network.

Authentication: TLS supports server authentication, which allows the client (user's browser) to verify the identity of the server it is communicating with. This helps prevent man-in-the-middle attacks where an attacker might impersonate the server.

Compatibility: TLS is widely supported across browsers and devices, making it suitable for ensuring secure communications between clients and servers on the web.

While SSL (Secure Sockets Layer) was its predecessor, TLS has superseded SSL due to various security vulnerabilities found in SSL over time. Therefore, TLS is the current standard for securing web communications and would be the most secure protocol to implement for the new e-commerce website.
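On the implementation side, modern TLS libraries expose these guarantees as defaults. A small sketch using Python's standard `ssl` module (no connection is made; this only shows the client-side settings that enforce encryption, authentication, and a refusal of legacy SSL/early TLS):

```python
# Minimal sketch of client-side TLS settings using Python's standard ssl
# module. No network connection is made here.
import ssl

ctx = ssl.create_default_context()            # secure defaults
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL and early TLS

print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: server must present a valid cert
print(ctx.check_hostname)                     # True: cert name must match the host
```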

9
Q

An IT manager is estimating the mobile device budget for the upcoming year. Over the last five years, the number of devices that were replaced due to loss, damage, or theft steadily increased by 10%. Which of the following would BEST describe the estimated number of devices to be replaced next year?

a. ALE
b. ARO
c. RPO
d. SLE

A

b. ARO (Annualized Rate of Occurrence).

Here’s why:

Annualized Rate of Occurrence (ARO) refers to the expected frequency with which a loss event (such as device replacement due to loss, damage, or theft) is anticipated to occur in a given year. It is derived from historical data and trends.

In this scenario, the IT manager is using the five-year trend of 10% annual growth in replacements due to loss, damage, or theft. Projecting that trend forward yields the expected number of devices to be replaced next year, which is the ARO.

ALE (Annualized Loss Expectancy) is related but calculates the expected monetary loss from the occurrence of such events over a year, factoring in the cost of each incident and the ARO.

RPO (Recovery Point Objective) concerns disaster recovery and the acceptable amount of data loss, while SLE (Single Loss Expectancy) is the expected cost of a single occurrence; neither describes an expected count of events per year.
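Projecting the count and folding in a per-device cost is simple arithmetic under the standard risk formulas. A worked sketch with made-up numbers (the 100-device baseline and $500 per-device cost are assumptions):

```python
# Worked example with made-up numbers: project next year's replacement count
# (the ARO) from 10% annual growth, then fold in a per-device cost (the SLE)
# to get the ALE. The 100-device and $500 figures are assumptions.

def projected_aro(last_year_count, growth=0.10):
    return last_year_count * (1 + growth)

def ale(sle, aro):
    return sle * aro  # ALE = SLE x ARO

aro_next = projected_aro(100)          # ~110 devices expected next year
print(round(aro_next, 2))              # 110.0
print(round(ale(500.0, aro_next), 2))  # 55000.0
```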
10
Q

An organization is repairing the damage after an incident. Which of the following controls is being implemented?

a. Detective
b. Preventive
c. Corrective
d. Compensating

A

c. Corrective.

Corrective controls are implemented after an incident has occurred to mitigate the impact or restore affected systems and data to their normal operational state. These controls aim to fix the root cause of the incident and prevent its recurrence.

Detective controls are used to detect and identify incidents or anomalies as they occur or shortly after.

Preventive controls are implemented to prevent incidents from occurring in the first place.

Compensating controls are alternative measures implemented when an original control cannot be applied effectively.

In the context of repairing damage after an incident, the organization is actively addressing the consequences and remedying the situation, which aligns with corrective controls.

11
Q

A Chief Executive Officer’s (CEO) personal information was stolen in a social-engineering attack. Which of the following sources would reveal if the CEO’s personal information is for sale?

a. Automated information sharing
b. Open-source intelligence
c. The dark web
d. Vulnerability databases

A

c. The dark web.

The dark web is a part of the internet that is not indexed by traditional search engines and is often used for illicit activities, including the sale of stolen personal information.
Open-source intelligence (OSINT) can also be used to gather information from publicly available sources, but it typically focuses on publicly accessible data rather than illicit activities.
Automated information sharing and vulnerability databases are more related to technical information sharing and security vulnerabilities, rather than the sale of personal information.
12
Q

Which of the following typically uses a combination of human and artificial intelligence to analyze event data and take action without intervention?

a. TTP
b. OSINT
c. SOAR
d. SIEM

A

c. SOAR (Security Orchestration, Automation, and Response).

SOAR platforms integrate security orchestration, automation, and response capabilities to automate incident response workflows.
They utilize a combination of human and artificial intelligence (AI) to analyze event data, automate repetitive tasks, and orchestrate responses to security incidents.

TTP (Tactics, Techniques, and Procedures) refer to the behavior and actions of threat actors.
OSINT (Open-Source Intelligence) refers to intelligence collected from publicly available sources.
SIEM (Security Information and Event Management) systems collect, correlate, and analyze security event data but typically require human intervention for action.
13
Q

A security analyst has been tasked with creating a new WiFi network for the company. The requirements received by the analyst are as follows:

-Must be able to differentiate between users connected to WiFi
-The encryption keys need to change routinely without interrupting the users or forcing reauthentication
-Must be able to integrate with RADIUS
-Must not have any open SSIDs

Which of the following options BEST accommodates these requirements?

a. WPA2-Enterprise
b. WPA3-PSK
c. 802.11n
d. WPS

A

a. WPA2-Enterprise

Here’s how WPA2-Enterprise aligns with each requirement:

Must be able to differentiate between users connected to WiFi:
WPA2-Enterprise uses 802.1X authentication, which allows for individual user authentication through a RADIUS server. Each user has unique credentials, enabling differentiation.

The encryption keys need to change routinely without interrupting the users or forcing reauthentication:
WPA2-Enterprise supports dynamic key exchange using protocols like EAP-TLS or PEAP, which allows for frequent key rotation without requiring users to reauthenticate.

Must be able to integrate with RADIUS:
WPA2-Enterprise is designed to work with a RADIUS server for centralized authentication, authorization, and accounting.

Must not have any open SSIDs:
An open SSID is one that accepts connections without authentication or encryption. WPA2-Enterprise requires 802.1X authentication before access is granted, so no SSID it protects is ever open.

Therefore, WPA2-Enterprise is the most suitable choice that meets all the specified requirements effectively.

14
Q

A security administrator is trying to determine whether a server is vulnerable to a range of attacks. After using a tool, the administrator obtains the following output:

HTTP/1.0 200 OK
Content-Type: text/html
Server: Apache

root:s9fyf983#:0:1:System Operator:/:/bin/bash
daemon:*:1:1::/tmp:
user1:f1@su3FF:183:100:user:/home/users/user1:/bin/bash

Which of the following attacks was successfully implemented based on the output?

a. Memory leak
b. Race conditions
c. SQL injection
d. Directory traversal

A

d. Directory traversal

Directory traversal attacks exploit insufficient input validation in web applications to access directories and files outside the intended directory. In this scenario, the HTTP response contains the server's /etc/passwd file (accounts root, daemon, and user1, with password fields, home directories, and shells), indicating the attacker was able to read a system file outside the web root.
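The underlying flaw can be shown in a few lines: a handler that joins user input onto a web root without validation lets `../` sequences escape it. A hypothetical sketch (paths are illustrative; `posixpath` is used so the example behaves identically on any OS):

```python
# Hypothetical sketch of the flaw: joining user input onto a web root without
# validation lets "../" sequences climb out of it. Paths are illustrative.
import posixpath  # POSIX path rules, OS-independent for this demo

BASE = "/var/www/html"

def vulnerable_path(user_input):
    # Naive join: nothing stops the request from escaping BASE.
    return posixpath.normpath(posixpath.join(BASE, user_input))

def safe_path(user_input):
    candidate = posixpath.normpath(posixpath.join(BASE, user_input))
    # Reject anything that resolves outside the web root.
    if not candidate.startswith(BASE + "/"):
        return None
    return candidate

print(vulnerable_path("../../../etc/passwd"))  # /etc/passwd, escaped the root
print(safe_path("../../../etc/passwd"))        # None, rejected
print(safe_path("index.html"))                 # /var/www/html/index.html
```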

Memory leak: This involves unintended memory access that can lead to unauthorized data disclosure, but the provided output does not directly indicate a memory leak.

Race conditions: These occur when the outcome of a process depends on the timing of other uncontrollable events. The output does not indicate race conditions.

SQL injection: This attack targets databases by injecting malicious SQL code into queries, but the output does not show SQL-related errors or database content.


15
Q

A Chief Security Officer is looking for a solution that can reduce the occurrence of customers receiving errors from back-end infrastructure when systems go offline unexpectedly. The security architect would like the solution to help maintain session persistence. Which of the following would BEST meet the requirements?

a. Reverse proxy
b. NIC teaming
c. Load balancer
d. Forward proxy

A

c. Load balancer

Load balancers distribute incoming network traffic across multiple servers to ensure no single server gets overwhelmed, thereby reducing the impact of server outages. They also help maintain session persistence by ensuring that once a client establishes a session with a server, subsequent requests from that client are routed back to the same server. This prevents disruptions in user sessions even if individual servers go offline or are taken out of service temporarily.
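One common persistence mechanism is source-IP hashing, where the client address deterministically selects a backend. A toy sketch (the backend hostnames are made up; production load balancers often use cookies or consistent hashing instead):

```python
# Toy sketch of session persistence via source-IP hashing: the same client IP
# always maps to the same backend while the pool is unchanged. Hostnames are
# made up for illustration.
import hashlib

BACKENDS = ["web-01", "web-02", "web-03"]

def pick_backend(client_ip, backends=BACKENDS):
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# Repeated requests from one client land on the same server:
print(pick_backend("203.0.113.7") == pick_backend("203.0.113.7"))  # True
```

Hashing by source IP breaks down when many clients sit behind one NAT address, which is one reason cookie-based persistence is also widely used.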

a. Reverse proxy:

A reverse proxy is used to handle incoming client requests and distribute them to the appropriate backend servers. While it can help with load distribution and offloading certain tasks from servers, it does not inherently provide session persistence across servers during outages. It focuses more on routing and enhancing security rather than maintaining session state.

b. NIC teaming:

NIC teaming (or network interface card teaming) involves combining multiple network interfaces into a single logical interface to increase bandwidth and redundancy. However, it primarily deals with network interface redundancy and load balancing at the network level, not at the application level where session persistence is managed.

d. Forward proxy:

A forward proxy is used to retrieve data from other servers on behalf of a client. It does not provide load balancing or session persistence features for backend infrastructure. Its primary role is to enhance security and control outbound traffic from clients within a network.

(Braindump: a. Reverse proxy)

16
Q

Which of the following should an organization consider implementing in the event executives need to speak to the media after a publicized data breach?

a. Incident response plan
b. Business continuity plan
c. Communication plan
d. Disaster recovery plan

A

c. Communication plan

Explanation:

When executives need to speak to the media after a publicized data breach, it’s crucial to have a well-defined communication plan in place. A communication plan outlines how information about the incident will be communicated both internally and externally, including to the media, customers, stakeholders, and the public. It ensures that messaging is consistent, accurate, and timely, helping to manage the organization’s reputation and maintain public trust during a crisis.

17
Q

A well-known organization has been experiencing attacks from APTs. The organization is concerned that custom malware is being created and emailed into the company or installed on USB sticks that are dropped in parking lots. Which of the following is the BEST defense against this scenario?

a. Configuring signature-based antivirus to update every 30 minutes
b. Enforcing S/MIME for email and automatically encrypting USB drives upon insertion
c. Implementing application execution in a sandbox for unknown software
d. Fuzzing new files for vulnerabilities if they are not digitally signed

A

c. Implementing application execution in a sandbox for unknown software.

Explanation:

Sandboxing: This technique involves running potentially risky or unknown software in an isolated environment (sandbox) where its behavior can be observed without affecting the rest of the system. Sandboxing helps prevent malware from spreading or causing harm by containing its activities within a controlled environment.

Email and USB Defense: While options like configuring antivirus to update frequently (option a) and using encryption (option b) are beneficial, they do not directly mitigate the risk of custom malware that may evade signature-based detection or encryption measures. Sandboxing, on the other hand, focuses on behavior analysis, which is crucial for detecting and containing new and unknown threats, such as custom malware.
18
Q

A company is implementing BYOD and wants to ensure all users have access to the same cloud-based services. Which of the following would BEST allow the company to meet this requirement?

a. IaaS
b. PaaS
c. MaaS
d. SaaS

A

d. SaaS (Software as a Service).

Explanation:

SaaS (Software as a Service): This cloud computing model allows users to access software applications that are hosted and maintained by a third-party provider via the internet. Users can access these applications from any device with an internet connection, making it ideal for a BYOD environment where users bring their own devices. SaaS applications ensure consistency in service access across different devices and platforms, as the applications are centrally managed and accessed through a web browser or client application.

Other Options:
    IaaS (Infrastructure as a Service): Provides virtualized computing resources over the internet, such as virtual machines and storage, but does not directly address application access consistency.
    PaaS (Platform as a Service): Provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the underlying infrastructure. While it supports application development and deployment, it does not necessarily ensure uniform access to third-party SaaS applications.
    MaaS (Mobile as a Service): Generally refers to mobile device management services, which focus on managing mobile devices, apps, and content, but do not directly provide uniform access to cloud-based services.

Therefore, SaaS is the most suitable option for ensuring consistent access to cloud-based services across various devices in a BYOD environment.

19
Q

During a recent security incident at a multinational corporation, a security analyst found the following logs for an account called user:

Account Login location Time (UTC) Message
user New York 9:00 am Login: user, successful
user Los Angeles 9:01 am Login: user, successful
user Sao Paulo 9:05 am Login: user, successful
user Munich 9:12 am Login: user, successful

Which of the following account policies would BEST prevent attackers from logging in as user?

a. Impossible travel time
b. Geofencing
c. Time-based logins
d. Geolocation

A

a. Impossible travel time

Here’s why:

Impossible travel time: This policy detects and prevents logins from locations that are geographically too far apart to be possible within a given timeframe. In this case, the logins from New York, Los Angeles, Sao Paulo, and Munich all occurring within minutes of each other indicate a clear case of impossible travel time. Implementing this policy would block such suspicious login attempts based on the unrealistic travel times between the locations.

Let’s briefly review the other options:

Geofencing: Geofencing restricts access based on predefined geographic boundaries. While this can be effective for limiting logins to certain locations, it wouldn't specifically address the issue of multiple, simultaneous logins from different locations.

Time-based logins: This policy restricts logins to certain times of the day. While it can enhance security, it wouldn't prevent logins from different locations in quick succession unless those times were also unrealistic for travel.

Geolocation: Geolocation tracks the physical location from which a login attempt is made and can be used to enhance security by verifying if the login location is typical for the user. However, it does not specifically address the scenario of multiple rapid logins from different locations unless combined with an impossible travel time check.

Therefore, the impossible travel time policy is the most appropriate and effective in preventing the kind of suspicious login activity described in the logs.
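The policy reduces to a speed check: the distance between the two login locations divided by the time between them. A sketch using the haversine great-circle formula (the city coordinates and the 900 km/h threshold, roughly airliner speed, are assumptions):

```python
# Sketch of an impossible-travel check: flag a second login if covering the
# distance between the two locations would require an implausible speed.
# Coordinates and the 900 km/h threshold are illustrative assumptions.
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(loc_a, loc_b, minutes_apart, max_kmh=900.0):
    km = distance_km(*loc_a, *loc_b)
    hours = minutes_apart / 60.0
    return hours <= 0 or km / hours > max_kmh

nyc, la = (40.71, -74.01), (34.05, -118.24)
print(impossible_travel(nyc, la, minutes_apart=1))    # True: ~3900 km in 1 minute
print(impossible_travel(nyc, la, minutes_apart=360))  # False: 6 hours is plausible
```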

20
Q

An organization is tuning SIEM rules based off of threat intelligence reports. Which of the following phases of the incident response process does this scenario represent?

a. Lessons learned
b. Eradication
c. Recovery
d. Preparation

A

d. Preparation.

Explanation:

Preparation in the incident response process involves activities such as enhancing monitoring and detection capabilities, which includes tuning SIEM rules based on threat intelligence. By leveraging threat intelligence reports, organizations can proactively adjust their detection mechanisms (such as SIEM rules) to better identify and respond to potential threats.

This phase focuses on readiness and involves actions taken before an incident occurs to improve the organization's ability to detect, respond to, and recover from security incidents effectively.

Other Options Explained:

Lessons learned: This phase occurs after an incident has been resolved, where the organization conducts a retrospective analysis to identify what went well and what could be improved in handling similar incidents in the future.

Eradication: This phase involves the removal of threats from affected systems and networks during an active incident response process.

Recovery: This phase follows eradication and involves restoring affected systems and data to normal operation after an incident.
21
Q

The database administration team is requesting guidance for a secure solution that will ensure confidentiality of cardholder data at rest only in certain fields in the database schema. The requirement is to substitute a sensitive data field with a non-sensitive field that is rendered useless if a data breach occurs. Which of the following is the BEST solution to meet the requirement?

a. Tokenization
b. Masking
c. Full disk encryption
d. Mirroring

A

a. Tokenization

Tokenization is a data security technique that replaces sensitive data with non-sensitive placeholders called tokens. These tokens are randomly generated and are mapped to the original sensitive data through a tokenization system. If a data breach occurs and the database is compromised, the tokens alone are useless without the corresponding tokenization system to decrypt them back into meaningful data. This approach ensures that sensitive cardholder data remains protected and unusable to unauthorized parties.

Why Tokenization is the Best Choice:

Confidentiality: Tokenization maintains the confidentiality of sensitive data by replacing it with tokens that have no exploitable value.

Usefulness in Breach Scenarios: In the event of a data breach, even if the database is compromised, the tokens cannot be reverse-engineered back into the original sensitive data without access to the tokenization system.

Specific Field Encryption: Tokenization allows for selective substitution of sensitive fields in the database schema while maintaining the integrity and relational aspects of the data.

Other Options Explained:

Masking: Masking hides sensitive data by replacing some or all characters (such as with asterisks), but it is intended for display protection; the full value is typically still stored and recoverable by the application, so it does not replace the stored field the way tokenization does.

Full Disk Encryption: This encrypts all data on the disk or storage device, but it does not selectively protect specific fields within a database schema. It protects against physical theft or unauthorized access to the storage device itself rather than individual data fields.

Mirroring: This is a technique used for redundancy and fault tolerance, ensuring data availability and recovery but does not directly address confidentiality or substitution of sensitive data fields.
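The substitution described above can be sketched as a toy token vault (a production system would run this as a separate, hardened tokenization service and strongly protect the mapping store):

```python
# Toy token vault illustrating tokenization: the database stores only the
# opaque token; the original PAN is recoverable only through the vault.
import secrets

class TokenVault:
    def __init__(self):
        self._forward = {}  # PAN -> token
        self._reverse = {}  # token -> PAN

    def tokenize(self, pan):
        if pan in self._forward:
            return self._forward[pan]
        token = "tok_" + secrets.token_hex(8)  # random, carries no card data
        self._forward[pan] = token
        self._reverse[token] = pan
        return token

    def detokenize(self, token):
        return self._reverse[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t.startswith("tok_"))                     # True: stored value is opaque
print(vault.detokenize(t))                      # original PAN, via the vault only
print(vault.tokenize("4111111111111111") == t)  # True: mapping is stable
```

Because the token is random rather than derived from the card number, a stolen table of tokens reveals nothing without access to the vault itself.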
22
Q

A company’s security team received notice of a critical vulnerability affecting a high-profile device within the web infrastructure. The vendor patch was just made available online but has not yet been regression tested in development environments. In the interim, firewall rules were implemented to reduce the access to the interface affected by the vulnerability. Which of the following controls does this scenario describe?

a. Deterrent
b. Compensating
c. Detective
d. Preventive

A

b. Compensating controls.

Compensating controls are security measures put in place to compensate for the lack of or the inadequacy of other controls. In this case:

Vendor Patch Availability: A critical vulnerability has been identified, and the vendor has released a patch, but it has not yet been tested in development environments.

Implementation of Firewall Rules: To mitigate the risk posed by the vulnerability while awaiting the tested patch, firewall rules are deployed. These rules restrict access to the vulnerable interface, thereby reducing the attack surface and the likelihood of exploitation.

By implementing firewall rules, the security team is applying a compensating control to temporarily mitigate the risk associated with the unpatched vulnerability. This approach helps to maintain a level of security until the proper patching process can be completed, ensuring that the organization’s systems are protected from potential exploitation. Therefore, the correct answer is:

b. Compensating

This control supplements the existing security measures until a more comprehensive solution (in this case, a properly tested vendor patch) can be safely deployed

23
Q

A security analyst is reviewing the following command-line output:

Internet Address Physical Address Type
192.168.1.1 aa-bb-cc-00-11-22 dynamic
192.168.1.2 aa-bb-cc-00-11-22 dynamic
192.168.1.3 aa-bb-cc-00-11-22 dynamic
192.168.1.4 aa-bb-cc-00-11-22 dynamic
192.168.1.5 aa-bb-cc-00-11-22 dynamic
—output omitted—
192.168.1.251 aa-bb-cc-00-11-22 dynamic
192.168.1.252 aa-bb-cc-00-11-22 dynamic
192.168.1.253 aa-bb-cc-00-11-22 dynamic
192.168.1.254 aa-bb-cc-00-11-22 dynamic
192.168.1.255 ff-ff-ff-ff-ff-ff static

Which of the following is the analyst observing?

a. ICMP spoofing
b. URL redirection
c. MAC address cloning
d. DNS poisoning

A

c. MAC address cloning.

MAC address cloning occurs when a device impersonates another by using its MAC address. In this case, all dynamically assigned IP addresses (192.168.1.1 to 192.168.1.254) are associated with the same MAC address (“aa-bb-cc-00-11-22”), suggesting that multiple devices are using the same MAC address, which is abnormal and could indicate a security issue or misconfiguration.
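
The anomaly in the output can be spotted programmatically. Below is a small sketch (sample data abbreviated from the question) that flags any MAC address claimed by more dynamic IP entries than a chosen threshold:

```python
from collections import defaultdict

# Abbreviated sample of the ARP-style output from the question.
ARP_OUTPUT = """\
192.168.1.1 aa-bb-cc-00-11-22 dynamic
192.168.1.2 aa-bb-cc-00-11-22 dynamic
192.168.1.3 aa-bb-cc-00-11-22 dynamic
192.168.1.255 ff-ff-ff-ff-ff-ff static
"""

def suspicious_macs(arp_text: str, threshold: int = 2):
    """Flag MACs that appear in more than `threshold` dynamic entries."""
    ips_per_mac = defaultdict(list)
    for line in arp_text.splitlines():
        ip, mac, entry_type = line.split()
        if entry_type == "dynamic":  # the static broadcast entry is expected
            ips_per_mac[mac].append(ip)
    return {mac: ips for mac, ips in ips_per_mac.items() if len(ips) > threshold}

flagged = suspicious_macs(ARP_OUTPUT)
assert list(flagged) == ["aa-bb-cc-00-11-22"]  # one MAC claims many IPs
```

A healthy ARP cache maps each IP to a distinct MAC; one MAC answering for the whole subnet is the red flag.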

24
Q

A company was recently breached. Part of the company’s new cybersecurity strategy is to centralize the logs from all security devices. Which of the following components forwards the logs to a central source?

a. Log enrichment
b. Log aggregation
c. Log parser
d. Log collector

A

(Community D 92%)
d. Log collector

A log collector gathers logs from multiple sources, such as firewalls, intrusion detection systems (IDS), servers, and other network devices. It then forwards these logs to a centralized location, typically a SIEM (Security Information and Event Management) system or a log management server. This centralized collection allows for efficient monitoring, analysis, and correlation of security events across the organization’s IT infrastructure.

a. Log enrichment: Log enrichment involves enhancing the raw log data with additional contextual information to make it more meaningful and actionable. This could include adding geolocation data, user information, or categorizing logs based on severity levels.

b. Log aggregation: Log aggregation refers to the process of collecting and consolidating log data from various sources into a centralized repository. It involves gathering logs from multiple devices or systems to facilitate centralized storage and analysis.

c. Log parser: A log parser is a tool or component that interprets and extracts relevant information from raw log data. It parses the logs to identify key fields and normalize them into a structured format that can be further processed or analyzed.

In the context of the question, while all these components are important for managing and analyzing logs, d. Log collector specifically addresses the task of gathering logs from distributed sources and forwarding them to a centralized location. This centralization is crucial for effective monitoring and management of security events across the organization.
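
The collector's role can be sketched in a few lines (class names here are illustrative, not a real product's API): it gathers log lines from several local sources and forwards each one to a central store, which in practice would be a SIEM or log management server.

```python
class CentralStore:
    """Stand-in for a SIEM / central log server."""
    def __init__(self):
        self.events = []

    def receive(self, source: str, line: str):
        self.events.append((source, line))

class LogCollector:
    """Gathers logs from multiple sources and forwards them to a central store."""
    def __init__(self, store: CentralStore):
        self.store = store

    def collect_and_forward(self, sources: dict):
        for name, lines in sources.items():
            for line in lines:
                self.store.receive(name, line)  # forwarding step

store = CentralStore()
LogCollector(store).collect_and_forward({
    "firewall": ["DENY tcp 10.0.0.5:445"],
    "ids": ["ALERT beaconing host 10.0.0.7"],
})
assert len(store.events) == 2  # both sources now live in one place
```

Aggregation, enrichment, and parsing would then operate on the centralized `store.events`, downstream of collection.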

25
Q

Which of the following is the MOST likely reason for securing an air-gapped laboratory HVAC system?

a. To avoid data leakage
b. To protect surveillance logs
c. To ensure availability
d. To facilitate third-party access

A

a. To avoid data leakage

Air-gapping a system means physically isolating it from other networks that could potentially be accessed by unauthorized parties. In the context of a laboratory HVAC system, securing it with an air gap helps prevent sensitive data, such as environmental control settings or operational data, from leaking to unauthorized individuals or systems. This isolation reduces the risk of data breaches or unauthorized access attempts.

26
Q

A user forwarded a suspicious email to the security team. Upon investigation, a malicious URL was discovered. Which of the following should be done FIRST to prevent other users from accessing the malicious URL?

a. Configure the web content filter for the web address.
b. Report the website to threat intelligence partners.
c. Set the SIEM to alert for any activity to the web address.
d. Send out a corporate communication to warn all users of the malicious email.

A

a. Configure the web content filter for the web address.

Configuring the web content filter to block access to the malicious URL ensures that users within the organization are protected immediately. By adding the URL to the web content filter’s blacklist, access attempts to the malicious site can be intercepted and blocked at the network level, thereby preventing potential infections or further compromise.

The other options are also important actions but would typically follow after the immediate mitigation of blocking the URL:

b. Report the website to threat intelligence partners: This is important for broader industry awareness and potential mitigation efforts but doesn't directly prevent access for internal users.

c. Set the SIEM to alert for any activity to the web address: This helps in monitoring and detecting if any user attempts to access the URL after it has been blocked.

d. Send out a corporate communication to warn all users of the malicious email: This is crucial for awareness and educating users about the threat, but it doesn't immediately prevent access to the URL.
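
The core of a web content filter's URL block is a hostname match against a blocklist. A minimal sketch (the blocked domain is hypothetical), covering the domain and its subdomains:

```python
from urllib.parse import urlparse

BLOCKLIST = {"malicious.example.com"}  # hypothetical domain from the reported email

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Block the listed domain itself and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

assert is_blocked("http://malicious.example.com/login")
assert is_blocked("https://cdn.malicious.example.com/payload")
assert not is_blocked("https://comptia.org")
```

A production filter would also normalize case, handle IP-literal URLs, and sync the blocklist from threat feeds, but the blocking decision itself is this simple.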
27
Q

A systems analyst is responsible for generating a new digital forensics chain-of-custody form. Which of the following should the analyst include in this documentation? (Choose two.)

a. The order of volatility
b. A CRC32 checksum
c. The provenance of the artifacts
d. The vendor’s name
e. The date and time
f. A warning banner

A

(Community CE 86%)
c. The provenance of the artifacts
e. The date and time

These elements are essential for ensuring a clear and traceable record of the evidence handling process.

c. The provenance of the artifacts: This includes details on where the evidence was collected, who collected it, and how it has been handled. This information is crucial for establishing the authenticity and integrity of the evidence.

e. The date and time: This provides a timestamp for when the evidence was collected or transferred, which is critical for establishing a timeline of events and maintaining the integrity of the chain-of-custody.
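
A single row of such a form can be modeled as a record combining the two required elements, provenance and a timestamp (field names here are illustrative):

```python
from datetime import datetime, timezone

def custody_entry(artifact_id: str, collected_by: str, location: str, action: str) -> dict:
    """One row of a chain-of-custody form: provenance plus date and time."""
    return {
        "artifact_id": artifact_id,
        "provenance": {"collected_by": collected_by, "location": location},
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when it happened
    }

entry = custody_entry("HDD-0042", "J. Analyst", "Server room 2", "seized")
assert entry["provenance"]["collected_by"] == "J. Analyst"
assert "timestamp" in entry
```

Appending one such entry per handoff yields the traceable, time-ordered record the chain of custody requires.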
28
Q

An organization is migrating several SaaS applications that support SSO. The security manager wants to ensure the migration is completed securely. Which of the following application integration aspects should the organization consider before focusing on the underlying implementation details? (Choose two.)

a. The back-end directory source
b. The identity federation protocol
c. The hashing method
d. The encryption method
e. The registration authority
f. The certificate authority

A

a. The back-end directory source
b. The identity federation protocol

These aspects are critical for ensuring secure and seamless integration of the applications with the SSO infrastructure.

a. The back-end directory source: This is important to determine how user identities and credentials are managed and stored. It ensures compatibility and secure communication between the SSO system and the SaaS applications.

b. The identity federation protocol: This defines how authentication and authorization data are exchanged between the SSO system and the SaaS applications. Common protocols include SAML, OAuth, and OpenID Connect, and choosing the right protocol is essential for secure and efficient integration.

By focusing on these two aspects, the organization can ensure a secure and interoperable SSO implementation for the migrated SaaS applications.

29
Q

A security analyst has been tasked with finding the maximum amount of data loss that can occur before ongoing business operations would be impacted. Which of the following terms BEST defines this metric?

a. MTTR
b. RTO
c. RPO
d. MTBF

A

c. RPO (Recovery Point Objective)

RPO (Recovery Point Objective): This metric specifies the maximum acceptable amount of data loss measured in time. It represents the point in time to which data must be recovered to resume normal business operations after a disruption. The RPO determines how often backups should be made to ensure that data can be restored to an acceptable state in the event of a data loss incident.

MTTR (Mean Time to Repair):

Definition: The average time required to repair a system or component and restore it to normal operation after a failure.
Usage: MTTR focuses on the efficiency and speed of the repair process, not on the amount of data loss or backup frequency.

RTO (Recovery Time Objective):

Definition: The maximum acceptable amount of time that a system, application, or function can be down after a failure or disaster before there is an unacceptable impact on business operations.
Usage: RTO is about the duration of downtime that can be tolerated, not about data loss or how recent the data needs to be.

MTBF (Mean Time Between Failures):

Definition: The average time between failures of a system or component.
Usage: MTBF is used to predict the reliability and lifespan of systems, indicating how frequently failures might occur, but it does not directly relate to data loss or recovery objectives.

In summary:

RPO focuses on the maximum acceptable data loss.
RTO focuses on the maximum acceptable downtime.
MTTR focuses on the average time to repair and restore.
MTBF focuses on the average time between failures.
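
The RPO-to-backup-frequency relationship can be shown with simple arithmetic: in the worst case, a failure occurs just before the next scheduled backup, so the backup interval is the maximum possible data loss.

```python
def max_data_loss_hours(backup_interval_hours: float) -> float:
    """Worst case: the failure hits just before the next scheduled backup."""
    return backup_interval_hours

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Backups must run at least as often as the RPO allows data to be lost."""
    return max_data_loss_hours(backup_interval_hours) <= rpo_hours

# With a 4-hour RPO, backing up every 6 hours is non-compliant; every 4 hours is.
assert not meets_rpo(6, rpo_hours=4)
assert meets_rpo(4, rpo_hours=4)
```

This is why the RPO directly drives backup scheduling, while RTO, MTTR, and MTBF say nothing about how recent the recovered data must be.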
30
Q

The IT department’s on-site developer has been with the team for many years. Each time an application is released, the security team is able to identify multiple vulnerabilities. Which of the following would BEST help the team ensure the application is ready to be released to production?

a. Limit the use of third-party libraries.
b. Prevent data exposure queries.
c. Obfuscate the source code.
d. Submit the application to QA before releasing it.

A

d. Submit the application to QA before releasing it.
Explanation:

Quality Assurance (QA):

Purpose: QA teams are responsible for systematically testing the application to identify and fix bugs, vulnerabilities, and other issues before the application is released to production. This includes performing various types of testing such as functional testing, performance testing, and security testing.
Benefit: Submitting the application to QA ensures that it undergoes a thorough review process, which can catch issues that the development team might have missed. This step is crucial for improving the overall quality and security of the application.

Why Other Options Are Less Effective:

a. Limit the use of third-party libraries:

While limiting third-party libraries can reduce the attack surface and dependencies, it doesn't directly address the specific vulnerabilities in the application code written by the developer.

b. Prevent data exposure queries:

Preventing data exposure queries is a good practice to protect sensitive data, but it is just one aspect of security and doesn't cover all possible vulnerabilities in the application.

c. Obfuscate the source code:

Obfuscation can make it harder for attackers to understand the code, but it doesn't eliminate vulnerabilities. It is more of a deterrent than a solution for improving the quality or security of the code.

Summary:

Submitting the application to a dedicated QA team for thorough testing is the most effective way to ensure that it is ready for production. QA teams can perform comprehensive testing, including security testing, which will help identify and resolve vulnerabilities before the application is released.

31
Q

During a security incident investigation, an analyst consults the company’s SIEM and sees an event concerning high traffic to a known, malicious command-and-control server. The analyst would like to determine the number of company workstations that may be impacted by this issue. Which of the following can provide this information?

a. WAF logs
b. DNS logs
c. System logs
d. Application logs

A

(Community B 80%, C 20%)

DNS logs can contain a record of every query and response. They can show the IP addresses and domain names your systems should or shouldn’t be communicating with, reveal malware calling out to its command-and-control server, or expose data transfers to non-company locations. This is one of the reasons DNS logs are some of the most valuable logs to import into a SIEM system.

(Brain dump)
b. DNS logs
Explanation:

DNS Logs:

Purpose: DNS logs record queries made by workstations to resolve domain names to IP addresses.
Benefit: By reviewing DNS logs, the analyst can identify which workstations have attempted to resolve the domain name of the command-and-control server. This helps in determining which workstations have potentially communicated with the malicious server.

Why Other Options Are Less Effective:

a. WAF logs:

Purpose: Web Application Firewall (WAF) logs monitor and filter HTTP traffic to and from web applications.
Limitation: While WAF logs can detect malicious web traffic, they are less likely to capture traffic to command-and-control servers unless the traffic involves web applications.

c. System logs:

Purpose: System logs contain information about operating system events such as user logins, system errors, and application events.
Limitation: System logs typically do not track network traffic or DNS queries, making them less useful for identifying communication with a command-and-control server.

d. Application logs:

Purpose: Application logs record events and errors related to specific applications running on the system.
Limitation: Unless the malicious traffic is related to a specific application and that application logs such network activity, these logs are unlikely to provide comprehensive information about which workstations are affected.

Summary:

DNS logs are the most relevant and useful for identifying workstations that have attempted to communicate with a known, malicious command-and-control server. They provide clear evidence of which machines have resolved the server’s domain name, indicating potential infection or compromise.
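
The analysis itself is a simple log scan. A sketch against a hypothetical DNS query log (the log format and C2 domain are invented for illustration) that returns the set of workstations which resolved the malicious name:

```python
# Hypothetical DNS query log: "timestamp client_ip query_name"
DNS_LOG = """\
2024-05-01T10:00:01 10.0.0.11 c2.evil.example
2024-05-01T10:00:05 10.0.0.12 www.comptia.org
2024-05-01T10:01:09 10.0.0.11 c2.evil.example
2024-05-01T10:02:30 10.0.0.15 c2.evil.example
"""

def clients_querying(log_text: str, malicious_domain: str) -> set:
    """Return the distinct client IPs that queried the given domain."""
    hits = set()
    for line in log_text.splitlines():
        _ts, client, qname = line.split()
        if qname == malicious_domain:
            hits.add(client)
    return hits

impacted = clients_querying(DNS_LOG, "c2.evil.example")
assert impacted == {"10.0.0.11", "10.0.0.15"}  # two workstations affected
```

The size of the returned set is exactly the "number of impacted workstations" the analyst is after.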

32
Q

A company has a flat network that is deployed in the cloud. Security policy states that all production and development servers must be segmented. Which of the following should be used to design the network to meet the security requirements?

a. CASB
b. VPC
c. Perimeter network
d. WAF

A

b. VPC (Virtual Private Cloud)
Explanation:

VPC (Virtual Private Cloud):

Purpose: A VPC allows you to create isolated network segments within a cloud provider's infrastructure.
Benefit: You can create multiple subnets within a VPC and control the traffic between them using network access control lists (ACLs) and security groups. This enables the segmentation of production and development environments, aligning with the security policy.

Why Other Options Are Less Effective:

a. CASB (Cloud Access Security Broker):

Purpose: A CASB provides visibility and control over data and security policies for cloud services.
Limitation: While a CASB enhances security for cloud services, it does not inherently provide network segmentation capabilities required to isolate production and development servers.

c. Perimeter network:

Purpose: A perimeter network, or DMZ (Demilitarized Zone), is used to expose external-facing services to the internet while protecting the internal network.
Limitation: A perimeter network focuses on securing external access rather than internal segmentation between different environments such as production and development.

d. WAF (Web Application Firewall):

Purpose: A WAF protects web applications by filtering and monitoring HTTP traffic.
Limitation: While a WAF is useful for protecting web applications from specific types of attacks, it does not provide the necessary network segmentation to isolate production and development servers.

Summary:

A VPC is the most suitable choice for designing a network that requires segmentation of production and development servers in a cloud environment. It allows for the creation of isolated subnets and provides the necessary tools to control and secure traffic between them, ensuring compliance with the security policy.
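
The segmentation intent can be expressed as a policy check over the two subnets' CIDR blocks (the address ranges below are illustrative, not from the question). Within a VPC, security groups and network ACLs would enforce an equivalent rule:

```python
import ipaddress

# Illustrative CIDR blocks for two subnets inside one VPC.
PROD = ipaddress.ip_network("10.0.1.0/24")
DEV = ipaddress.ip_network("10.0.2.0/24")

def allowed(src_ip: str, dst_ip: str) -> bool:
    """Deny any traffic crossing between the production and development segments."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    crosses = (src in PROD and dst in DEV) or (src in DEV and dst in PROD)
    return not crosses

assert allowed("10.0.1.10", "10.0.1.20")      # prod -> prod: permitted
assert not allowed("10.0.1.10", "10.0.2.20")  # prod -> dev: blocked
```

A flat network has no such boundary to evaluate; carving the VPC into subnets is what makes the rule enforceable.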

33
Q

A new plug-and-play storage device was installed on a PC in the corporate environment. Which of the following safeguards will BEST help to protect the PC from malicious files on the storage device?

a. Change the default settings on the PC.
b. Define the PC firewall rules to limit access.
c. Encrypt the disk on the storage device.
d. Plug the storage device in to the UPS.

A

(Community A: 52%, C : 42%)
a. Change the default settings on the PC.
Option A protects the PC. Modifying the default settings on the PC, such as disabling AutoRun and enabling automatic scanning of connected devices before use, and preventing unauthorized software execution, can protect it from malicious files on storage devices.

Option B is incorrect. Firewalls control network traffic, they don’t directly protect against malicious files.
Option C is incorrect! FDE protects data on the storage device from unauthorized use, but we want to protect the PC from malicious files, not the storage device.
Option D is irrelevant, it’s related to protection against power outages.

ChatGPT :
a. Change the default settings on the PC.
Explanation:

Change the default settings on the PC:

Purpose: By changing the default settings, you can control how the PC responds to plug-and-play devices. This includes disabling auto-run features and ensuring that any new storage device is scanned for malware before files are accessed.
Benefit: This minimizes the risk of automatically executing malicious files from the storage device and provides a layer of protection by requiring manual intervention or a malware scan before the device can be used.

Why Other Options Are Less Effective:

b. Define the PC firewall rules to limit access:

Purpose: Firewall rules control network traffic but do not directly address threats posed by files on a local storage device.
Limitation: While useful for network security, firewall rules do not prevent malicious files from being executed once the storage device is connected.

c. Encrypt the disk on the storage device:

Purpose: Encryption protects data on the storage device from unauthorized access.
Limitation: Encryption does not prevent malicious files from being executed. If the device is already infected, encryption does not mitigate the risk of the malware spreading.

d. Plug the storage device into the UPS:

Purpose: A UPS (Uninterruptible Power Supply) provides power backup and protects against power surges.
Limitation: This has no impact on the security of files on the storage device. It only ensures power stability for the PC.

Summary:

Changing the default settings on the PC to disable auto-run features and ensure thorough scanning of the storage device before access provides the best protection against malicious files. This proactive approach helps prevent malware from automatically executing and spreading from the plug-and-play device.

(Braindump : c. Encrypt the disk on the storage device.)

34
Q

A company is adopting a BYOD policy and is looking for a comprehensive solution to protect company information on user devices. Which of the following solutions would BEST support the policy?

a. Mobile device management
b. Full-device encryption
c. Remote wipe
d. Biometrics

A

a. Mobile device management
Explanation:

Mobile device management (MDM):

Purpose: MDM solutions provide comprehensive management and security for mobile devices. They enable administrators to enforce security policies, deploy applications, monitor devices, and protect data.
Features:
    Security Policies: Enforce password policies, encryption, and other security measures.
    Remote Management: Remotely wipe or lock devices if they are lost or stolen.
    Application Control: Control which applications can be installed and used.
    Data Protection: Separate personal and company data, ensuring company information is secure.

Why Other Options Are Less Comprehensive:

b. Full-device encryption:

Purpose: Encrypts the entire device to protect data from unauthorized access.
Limitation: While important for data security, it does not provide comprehensive management or other protective measures needed for BYOD policies.

c. Remote wipe:

Purpose: Allows administrators to remotely delete data from a lost or stolen device.
Limitation: This is a useful feature but only one aspect of device management. It does not cover the full range of security and management needs.

d. Biometrics:

Purpose: Provides secure authentication using fingerprint, facial recognition, or other biometric methods.
Limitation: Enhances security for device access but does not address data protection, application control, or other management aspects.

Summary:

Mobile Device Management (MDM) provides a comprehensive approach to managing and securing user devices in a BYOD environment. It offers a wide range of features essential for protecting company information, enforcing security policies, and ensuring compliance with organizational standards.

35
Q

A company wants to modify its current backup strategy to minimize the number of backups that would need to be restored in case of data loss. Which of the following would be the BEST backup strategy to implement?

a. Incremental backups followed by differential backups
b. Full backups followed by incremental backups
c. Delta backups followed by differential backups
d. Incremental backups followed by delta backups
e. Full backups followed by differential backups

A

e. Full backups followed by differential backups
Explanation:

Full Backups:

Purpose: A full backup captures the entire data set, creating a complete copy of all data.
Benefit: Simplifies the restoration process as it provides a single, comprehensive backup.

Differential Backups:

Purpose: A differential backup captures all the data that has changed since the last full backup.
Benefit: During restoration, you only need the last full backup and the most recent differential backup, minimizing the number of backups to restore.

Why This Strategy is Best:

Restoration Process: To restore data, you start with the full backup and then apply the latest differential backup. This reduces the complexity and time required for restoration compared to incremental backups, where you would need to restore the last full backup and every subsequent incremental backup in sequence.
Backup Frequency: Differential backups can be taken more frequently than full backups, providing a good balance between backup size and restoration ease.

Why Other Options Are Less Effective:

a. Incremental backups followed by differential backups:

This is an uncommon and complex strategy that would not simplify restoration.

b. Full backups followed by incremental backups:

Requires restoring the full backup plus each incremental backup since the last full backup, increasing the number of restores needed.

c. Delta backups followed by differential backups:

Delta backups capture only changes at the byte level, which can be very efficient but also more complex and not as widely used as incremental and differential backups.

d. Incremental backups followed by delta backups:

This is a complex strategy with a potentially high number of backups to restore, making the process cumbersome.

Summary:

Full backups followed by differential backups provide an efficient and simplified restoration process, minimizing the number of backups that need to be restored in the event of data loss.
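
The restoration-count difference between the two strategies can be worked through directly:

```python
def restore_chain(strategy: str, backups_since_full: int) -> int:
    """Number of backup sets that must be restored after a failure.

    'differential': the full backup plus only the latest differential.
    'incremental' : the full backup plus every incremental since the full.
    """
    if strategy == "differential":
        return 1 + (1 if backups_since_full else 0)
    if strategy == "incremental":
        return 1 + backups_since_full
    raise ValueError(f"unknown strategy: {strategy}")

# Failure six days after the weekly full backup:
assert restore_chain("differential", 6) == 2  # full + newest differential
assert restore_chain("incremental", 6) == 7   # full + six incrementals
```

The trade-off is that each differential grows larger over the week, while incrementals stay small; the question asks only to minimize restores, which differentials win.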

36
Q

The compliance team requires an annual recertification of privileged and non-privileged user access. However, multiple users who left the company six months ago still have access. Which of the following would have prevented this compliance violation?

a. Account audits
b. AUP
c. Password reuse
d. SSO

A

a. Account audits
Explanation:

Account Audits:

Purpose: Regularly auditing user accounts ensures that access rights are up-to-date and aligned with current business needs.
Process: During account audits, administrators can identify inactive or terminated user accounts and promptly revoke their access rights.
Benefits:
    Compliance: Helps in maintaining compliance with organizational policies and regulatory requirements that mandate regular access reviews.
    Security: Reduces the risk of unauthorized access by ensuring that only active employees have access to company resources.
    Efficiency: Streamlines access management processes by keeping user access permissions current.

Why Other Options Are Less Appropriate:

b. AUP (Acceptable Use Policy):

AUP defines acceptable behavior for users but does not directly manage user access or perform audits.

c. Password Reuse:

Password reuse policies aim to prevent users from using the same passwords across multiple accounts but do not address the issue of terminated users retaining access.

d. SSO (Single Sign-On):

SSO simplifies access management by allowing users to authenticate once to access multiple applications, but it does not directly manage or audit user accounts.

Summary:

Account audits are crucial for ensuring that access rights are aligned with current employment status and business needs. By conducting regular audits, organizations can proactively identify and address compliance violations such as terminated users retaining access rights, thereby enhancing overall security and compliance posture.
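
The audit step that would have caught this violation amounts to comparing active accounts against HR status (the names and data structures here are illustrative):

```python
# Illustrative HR roster and account list.
employees = {"alice": "active", "bob": "terminated", "carol": "active"}
accounts = {"alice", "bob", "carol", "dave"}  # 'dave' has no HR record at all

def audit_accounts(accounts: set, employees: dict) -> list:
    """Return accounts that should be disabled: terminated or unknown users."""
    return sorted(a for a in accounts
                  if employees.get(a, "terminated") != "active")

assert audit_accounts(accounts, employees) == ["bob", "dave"]
```

Run on a schedule (or at every recertification), this comparison surfaces departed users like the ones in the scenario before they become a compliance finding.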

37
Q

A company recently experienced a data breach and the source was determined to be an executive who was charging a phone in a public area. Which of the following would MOST likely have prevented this breach?

a. A firewall
b. A device PIN
c. A USB data blocker
d. Biometrics

A

c. A USB data blocker
Explanation:

USB Data Blocker:

Purpose: USB data blockers prevent data exchange when a device is plugged into an unknown or potentially compromised USB port.
Functionality: They allow charging while blocking the data transfer pins, preventing unauthorized access and data theft.
Scenario Relevance: In the described scenario, if the executive had used a USB data blocker while charging the phone in a public area, it would have prevented unauthorized access and potential data breach through the USB connection.

Why Other Options Are Less Appropriate:

a. Firewall:

Firewalls are network security devices that monitor and control incoming and outgoing network traffic. They are not directly related to preventing unauthorized data access via USB connections.

b. Device PIN:

A device PIN provides security for unlocking the device itself but does not protect against unauthorized data transfer when connected to a potentially compromised USB port.

d. Biometrics:

Biometrics (such as fingerprint or facial recognition) provide authentication for unlocking the device but do not prevent unauthorized data transfer through a USB connection.

Summary:

Using a USB data blocker is the most effective measure in this scenario because it directly addresses the specific threat of unauthorized data access and transfer when charging devices in public or untrusted environments. This solution helps mitigate the risk of data breaches stemming from compromised USB ports or chargers, which are common attack vectors in public spaces.

38
Q

The manager who is responsible for a data set has asked a security engineer to apply encryption to the data on a hard disk. The security engineer is an example of a __________.

a. data controller.
b. data owner.
c. data custodian.
d. data processor.

A

c. data custodian.
Explanation:

Data Controller: Typically refers to the individual or entity that determines the purposes and means of processing personal data.
Data Owner: Refers to the individual or entity that has ultimate responsibility and control over the data, including its use and protection.
Data Custodian: Refers to the individual or entity responsible for the storage, maintenance, and protection of data. They manage and handle the data according to the instructions provided by the data owner or controller.
Data Processor: Refers to an entity or person that processes data on behalf of the data controller, following their instructions.

In this case, the security engineer is directly handling and securing the data by applying encryption on the hard disk. This role aligns with that of a data custodian, who is responsible for safeguarding data and ensuring its integrity and security as per organizational policies and guidelines.

39
Q

An organization with a low tolerance for user inconvenience wants to protect laptop hard drives against loss or data theft. Which of the following would be the MOST acceptable?

a. SED
b. HSM
c. DLP
d. TPM

A

a. SED (Self-Encrypting Drive).
Explanation:

SED (Self-Encrypting Drive): SEDs automatically encrypt data on the drive using hardware-based encryption, which is transparent to the user and does not significantly impact performance. They protect data at rest, ensuring that if a laptop is lost or stolen, the data on the drive remains encrypted and inaccessible without the proper authentication credentials.

HSM (Hardware Security Module): HSMs are used for cryptographic key management and secure transaction processing. While they can be part of a security solution, they are not typically used directly to protect laptop hard drives against loss or data theft.

DLP (Data Loss Prevention): DLP solutions monitor and control the movement of sensitive data within and outside of an organization's network. They focus on preventing accidental or intentional data leaks rather than protecting data on individual laptop hard drives.

TPM (Trusted Platform Module): TPM is a hardware-based security feature that stores cryptographic keys and provides secure boot capabilities. It enhances platform security but is more focused on ensuring the integrity of the system's boot process and securing cryptographic operations rather than specifically protecting data on hard drives.

Therefore, considering the requirement for minimal user inconvenience and effective protection against data loss or theft from laptop hard drives, SED (Self-Encrypting Drive) would be the most suitable choice.

40
Q

After segmenting the network, the network manager wants to control the traffic between the segments. Which of the following should the manager use to control the network traffic?

a. A DMZ
b. A VPN
c. A VLAN
d. An ACL

A

d. An ACL (Access Control List).
Explanation:

DMZ (Demilitarized Zone): A DMZ is a network segment that separates an internal network from an external network, typically used to host services accessible from the internet while adding an extra layer of security.

VPN (Virtual Private Network): A VPN creates a secure encrypted tunnel over a public network (usually the internet), allowing remote users to securely connect to a private network as if they were directly connected to it.

VLAN (Virtual Local Area Network): A VLAN logically divides a physical network into multiple virtual networks, enabling isolation of traffic within different segments of the network. However, VLANs alone do not provide traffic control between segments; they segregate traffic.

ACL (Access Control List): An ACL is a set of rules configured on a network device (such as a router or firewall) that filters traffic based on various criteria (such as source/destination IP addresses, ports, protocols). ACLs allow or deny traffic between network segments based on defined rules, providing granular control over network traffic flow.

Since the goal is to control traffic between already segmented networks, ACLs are the most appropriate choice. They enable the network manager to specify exactly what traffic is allowed or denied between different segments based on specific criteria, thereby enhancing security and network management.
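
The first-match-wins logic of an ACL can be sketched in a few lines. This is a toy illustration, not real router configuration; the rules, networks, and ports are made-up examples:

```python
# Minimal sketch of ACL-style rule evaluation (hypothetical rules, not an
# actual router config). Rules are checked in order, the first match wins,
# and an implicit "deny" applies when nothing matches.
from ipaddress import ip_address, ip_network

ACL = [
    # (action, source network, destination network, destination port)
    ("permit", "10.1.0.0/24", "10.2.0.0/24", 443),   # allow web traffic between segments
    ("deny",   "10.1.0.0/24", "10.2.0.0/24", None),  # block everything else between them
]

def evaluate(src: str, dst: str, port: int) -> str:
    for action, src_net, dst_net, dst_port in ACL:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and (dst_port is None or dst_port == port)):
            return action
    return "deny"  # implicit deny at the end of every ACL

print(evaluate("10.1.0.5", "10.2.0.9", 443))  # permit
print(evaluate("10.1.0.5", "10.2.0.9", 22))   # deny
```

Real ACLs on routers and firewalls follow the same ordered, first-match evaluation, which is why rule order matters when writing them.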

41
Q

Which of the following BEST describes when an organization utilizes a ready-to-use application from a cloud provider?

a. IaaS
b. SaaS
c. PaaS
d. XaaS

A

b. SaaS (Software as a Service)
Explanation:

IaaS (Infrastructure as a Service): Provides virtualized computing resources over the internet. Users typically manage their own operating systems, applications, and data, but the cloud provider manages the infrastructure, such as virtualization, servers, storage, and networking.

SaaS (Software as a Service): Delivers software applications over the internet on a subscription basis. The applications are hosted and managed by the service provider. Users access the application through a web browser without needing to install or maintain any software locally.

PaaS (Platform as a Service): Provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an application.

XaaS (Everything as a Service): A broad term that encompasses all types of services delivered over the internet, including IaaS, SaaS, PaaS, and other types of services.

In this context, since the organization is utilizing a ready-to-use application directly from a cloud provider, SaaS is the most appropriate choice as it directly addresses the consumption of software applications delivered as a service over the internet.

42
Q

Which of the following BEST helps to demonstrate integrity during a forensic investigation?

a. Event logs
b. Encryption
c. Hashing
d. Snapshots

A

c. Hashing
Explanation:

Event logs: Event logs record events that occur on a system or network, providing a chronological record of activities. While they are crucial for tracking actions and incidents, they do not directly ensure the integrity of data itself.

Encryption: Encryption is used to protect data confidentiality by converting plaintext into ciphertext, making it unreadable without the correct decryption key. While encryption protects data from unauthorized access, it does not provide a means to verify the integrity of the data itself.

Hashing: Hashing involves generating a fixed-size output (hash value) from input data of any size. The hash value uniquely represents the input data. Hash functions are designed so that even a small change in the input data results in a significantly different hash value. During a forensic investigation, hashing is used to verify data integrity. If the hash value of the original data matches the hash value calculated during the investigation, it confirms that the data has not been altered.

Snapshots: Snapshots capture the state of a system or data at a specific point in time. They are useful for restoring systems to a previous state but do not inherently verify the integrity of data.

Therefore, hashing is the most appropriate choice as it provides a reliable method to verify data integrity during forensic investigations by comparing hash values of original and acquired data to ensure data integrity and detect any unauthorized changes.
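
The verification step described above is easy to demonstrate. This sketch uses SHA-256 from Python's standard library; the "disk image" contents are placeholders:

```python
# Sketch of integrity verification with hashing, as used in forensics:
# the same input always yields the same digest, and any change to the
# input produces a completely different digest.
import hashlib

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"disk image contents"       # placeholder evidence
acquired = b"disk image contents"       # forensic copy, unchanged
tampered = b"disk image Contents"       # one character altered

assert sha256_digest(original) == sha256_digest(acquired)  # integrity confirmed
assert sha256_digest(original) != sha256_digest(tampered)  # alteration detected
print(sha256_digest(original)[:16])  # first bytes of the evidence digest
```

In practice the digest of the original evidence is recorded at acquisition time, and any later copy is re-hashed and compared against it.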

43
Q

Which of the following would be MOST effective to contain a rapidly spreading attack that is affecting a large number of organizations?

a. Machine learning
b. DNS sinkhole
c. Blocklist
d. Honeypot

A

(Community : B 100%)
b. DNS sinkhole

A DNS (Domain Name System) sinkhole is a technique used to redirect malicious or unwanted traffic to a non-existent or controlled destination. By redirecting the traffic to a sinkhole, organizations can effectively contain the spread of the attack by preventing the malicious traffic from reaching its intended targets.

When an attack is rapidly spreading and affecting multiple organizations, a DNS sinkhole can be deployed at a network level to block access to malicious domains or IP addresses associated with the attack. This prevents infected systems from establishing connections with the attacker’s infrastructure, effectively containing the attack and limiting its impact on other organizations.


Explanation:

Machine learning: While machine learning can be effective in identifying patterns and anomalies, it is more suitable for detecting and predicting threats rather than containing ongoing attacks.

DNS sinkhole: A DNS sinkhole redirects malicious traffic to a designated IP address, effectively isolating the attack and preventing infected devices from communicating with malicious domains or servers. This containment mechanism can significantly reduce the impact of a widespread attack by disrupting the malicious infrastructure's ability to operate.

Blocklist: A blocklist contains known malicious IP addresses, domains, or URLs. While it can prevent access to known threats, it may not be sufficient to contain a rapidly spreading attack unless the specific malicious entities are well-documented and can be quickly added to the blocklist.

Honeypot: A honeypot is a decoy system designed to lure attackers and gather information about their methods. While useful for learning about attack techniques, honeypots do not directly contain or mitigate ongoing attacks affecting a large number of organizations.

Therefore, in the context of containing a rapidly spreading attack across multiple organizations, DNS sinkhole is the most appropriate choice as it can quickly disrupt the attack’s infrastructure and prevent further spread by redirecting malicious traffic to a controlled environment.
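
The redirection described above can be illustrated with a toy resolver. The domain list, sinkhole address, and DNS answers below are all hypothetical:

```python
# Toy illustration of DNS sinkholing (hypothetical domain list and sinkhole
# address): known-malicious names resolve to a controlled IP instead of the
# attacker's infrastructure, so infected hosts never reach their C2 server.
MALICIOUS_DOMAINS = {"evil-c2.example", "malware-drop.example"}  # assumed blocklist
SINKHOLE_IP = "10.99.99.99"  # controlled monitoring address (assumed)

def resolve(domain: str, real_dns: dict) -> str:
    if domain in MALICIOUS_DOMAINS:
        return SINKHOLE_IP                   # redirect instead of answering truthfully
    return real_dns.get(domain, "NXDOMAIN")  # normal resolution otherwise

real_dns = {"intranet.example": "192.0.2.50"}  # stand-in zone data
print(resolve("evil-c2.example", real_dns))    # sinkholed
print(resolve("intranet.example", real_dns))   # answered normally
```

A side benefit is that the sinkhole address can be monitored: any host that queries it is almost certainly infected, which helps identify compromised machines.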

44
Q

A network administrator has been alerted that web pages are experiencing long load times. After determining it is not a routing or DNS issue, the administrator logs in to the router, runs a command, and receives the following output:

CPU 0 percent busy, from 300 sec ago
1 sec ave: 99 percent busy
5 sec ave: 97 percent busy
1 min ave: 83 percent busy

Which of the following is the router experiencing?

a. DDoS attack
b. Memory leak
c. Buffer overflow
d. Resource exhaustion

A

d. Resource exhaustion
Explanation:

DDoS attack: While a Distributed Denial of Service (DDoS) attack can lead to resource exhaustion, the output specifically indicates high CPU usage, which could be a result of many types of resource exhaustion scenarios, not just DDoS attacks.

Memory leak: This typically causes gradual degradation over time and leads to running out of memory rather than immediate high CPU usage.

Buffer overflow: This usually results in program crashes or unexpected behavior rather than high CPU usage.

Resource exhaustion: This is a general term for when a system’s resources (CPU, memory, etc.) are maxed out, leading to performance issues. The router's CPU is extremely busy (99% in the last second, 97% in the last 5 seconds, and 83% over the last minute), which fits the description of resource exhaustion.

Given the high CPU usage in the output, the router is likely experiencing resource exhaustion.
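
A quick way to reason about the output is that load is high across every averaging window, not just a one-second spike. This sketch parses the router output shown above; the 80% threshold is an illustrative assumption, not a vendor-defined value:

```python
# Sketch: parse the router's CPU output and flag sustained high utilization
# as likely resource exhaustion (the 80% threshold is an assumption).
import re

output = """CPU 0 percent busy, from 300 sec ago
1 sec ave: 99 percent busy
5 sec ave: 97 percent busy
1 min ave: 83 percent busy"""

averages = {m.group(1): int(m.group(2))
            for m in re.finditer(r"(\d+ (?:sec|min)) ave: (\d+) percent", output)}

# Sustained load across short and long windows suggests exhaustion,
# not a momentary spike.
exhausted = all(pct > 80 for pct in averages.values())
print(averages)   # {'1 sec': 99, '5 sec': 97, '1 min': 83}
print(exhausted)  # True
```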

45
Q

The Chief Executive Officer (CEO) of an organization would like staff members to have the flexibility to work from home anytime during business hours, including during a pandemic or crisis. However, the CEO is concerned that some staff members may take advantage of the flexibility and work from high-risk countries while on holiday or outsource work to a third-party organization in another country. The Chief Information Officer (CIO) believes the company can implement some basic controls to mitigate the majority of the risk. Which of the following would be BEST to mitigate the CEO’s concerns? (Choose two.)

a. Geolocation
b. Time-of-day restrictions
c. Certificates
d. Tokens
e. Geotagging
f. Role-based access controls

A

a. Geolocation
e. Geotagging

a. Geolocation

Geolocation can restrict access based on the geographical location of the user. This would ensure that employees can only access the company’s network from approved locations and prevent access from high-risk countries.

e. Geotagging

Geotagging can provide additional data about the location of the user’s device when they access company resources. This can help verify that employees are working from approved locations and not from high-risk areas or unauthorized countries.
Explanation:

Time-of-day restrictions: This would limit when employees can access the system, not where they access it from.
Certificates: While certificates enhance security, they do not address the location concerns.
Tokens: Tokens can help with authentication but do not mitigate concerns about location.
Role-based access controls: RBAC controls access based on user roles but does not address the issue of location or third-party outsourcing.

Using geolocation and geotagging together provides a robust approach to ensuring employees are working from permissible locations, thus addressing the CEO’s concerns effectively.

(Braindump : a. Geolocation
b. Time-of-day restrictions)
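
A geolocation check at login time can be sketched as follows. The country allowlist and the IP-to-country table are assumptions standing in for a real GeoIP database or a cloud provider's conditional-access feature:

```python
# Minimal sketch of a geolocation-based access check (allowlist and
# lookup table are assumptions; real deployments query a GeoIP database).
ALLOWED_COUNTRIES = {"US", "CA"}    # approved work locations (assumed)
IP_TO_COUNTRY = {                   # stand-in for a GeoIP lookup
    "203.0.113.10": "US",
    "198.51.100.7": "KP",           # example high-risk origin
}

def allow_login(ip: str) -> bool:
    country = IP_TO_COUNTRY.get(ip, "UNKNOWN")
    return country in ALLOWED_COUNTRIES  # deny unknown or high-risk origins

print(allow_login("203.0.113.10"))  # True
print(allow_login("198.51.100.7"))  # False
```

Note the default-deny behavior: an IP that cannot be geolocated is rejected rather than allowed, which is the safer posture for the CEO's concern.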

46
Q

While checking logs, a security engineer notices a number of end users suddenly downloading files with the .tar.gz extension. Closer examination of the files reveals they are PE32 files. The end users state they did not initiate any of the downloads. Further investigation reveals the end users all clicked on an external email containing an infected MHT file with an href link a week prior. Which of the following is MOST likely occurring?

a. A RAT was installed and is transferring additional exploit tools.
b. The workstations are beaconing to a command-and-control server.
c. A logic bomb was executed and is responsible for the data transfers.
d. A fileless virus is spreading in the local network environment.

A

a. A RAT was installed and is transferring additional exploit tools.
Explanation:

A RAT (Remote Access Trojan) can be installed on users' workstations when they click on an infected link or file. Once installed, the RAT allows an attacker to remotely control the infected machines and download additional malicious files or tools. The sudden download of PE32 files (Portable Executable files for Windows) disguised as .tar.gz (a compressed archive format) suggests that additional exploit tools or payloads are being downloaded to the compromised systems.

The workstations are beaconing to a command-and-control server: While beaconing to a C2 server is a common activity for compromised machines, the specific context of downloading PE32 files disguised as .tar.gz indicates more than just beaconing; it suggests active downloading of malicious files, which is more aligned with RAT behavior.

A logic bomb was executed and is responsible for the data transfers: Logic bombs are typically time-based or event-based triggers that execute malicious code. They do not generally involve downloading files from the internet unless specifically designed to do so.

A fileless virus is spreading in the local network environment: Fileless malware operates in memory and does not leave files on the disk, making it harder to detect. The scenario described involves downloading and identifying PE32 files, which are not indicative of fileless malware behavior.

Given the symptoms and actions described, the most likely occurrence is that a RAT was installed and is now being used to transfer additional exploit tools to the compromised systems.
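
The "closer examination" step in the question comes down to checking file magic bytes: a genuine gzip archive begins with bytes 1F 8B, while a Windows PE32 executable begins with the "MZ" DOS header. A minimal sketch:

```python
# Sketch of how an analyst might confirm that files named *.tar.gz are
# actually PE32 executables: check the leading magic bytes rather than
# trusting the file extension.
def identify(data: bytes) -> str:
    if data[:2] == b"MZ":
        return "PE32 executable (Windows)"
    if data[:2] == b"\x1f\x8b":
        return "gzip archive"
    return "unknown"

print(identify(b"MZ\x90\x00\x03"))    # PE32 executable (Windows)
print(identify(b"\x1f\x8b\x08\x00"))  # gzip archive
```

Tools such as the Unix `file` command perform the same kind of signature check, which is why extension-based filtering alone is unreliable.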

47
Q

A business is looking for a cloud service provider that offers a la carte services, including cloud backups, VM elasticity, and secure networking. Which of the following cloud service provider types should the business engage?

a. IaaS
b. PaaS
c. XaaS
d. SaaS

A

a. IaaS
Explanation:

IaaS (Infrastructure as a Service): This type of cloud service provider offers fundamental computing resources such as virtual machines (VMs), storage, and networking on a pay-as-you-go basis. Businesses can select specific services they need (a la carte), such as cloud backups, VM elasticity (scaling up or down as needed), and secure networking. This flexibility allows businesses to tailor their infrastructure according to their specific requirements.

PaaS (Platform as a Service): This provides a platform allowing customers to develop, run, and manage applications without dealing with the underlying infrastructure. PaaS is more focused on application development and may not offer the level of control over backups, VM management, and networking that the business is seeking.

XaaS (Anything as a Service): This is a broad category that includes any type of service delivered over the internet. While it encompasses IaaS, PaaS, and SaaS, it's not a specific service model and doesn't directly address the specific needs described.

SaaS (Software as a Service): This delivers software applications over the internet, on a subscription basis. SaaS providers manage the infrastructure and platforms that run the applications, but it does not provide the level of control over backups, VM management, and networking that IaaS offers.

Therefore, for a business looking for cloud backups, VM elasticity, and secure networking on an a la carte basis, IaaS is the most appropriate choice.

48
Q

A research company discovered that an unauthorized piece of software has been detected on a small number of machines in its lab. The researchers collaborate with other machines using port 445 and on the Internet using port 443. The unauthorized software is starting to be seen on additional machines outside of the lab and is making outbound communications using HTTPS and SMB. The security team has been instructed to resolve the problem as quickly as possible while causing minimal disruption to the researchers. Which of the following contains the BEST course of action in this scenario?

a. Update the host firewalls to block outbound SMB.
b. Place the machines with the unapproved software in containment.
c. Place the unauthorized application in a blocklist.
d. Implement a content filter to block the unauthorized software communication.

A

b. Place the machines with the unapproved software in containment.
Explanation:

Containment: Isolating the machines with the unauthorized software is the quickest and most effective way to prevent the spread of the software while allowing the researchers to continue their work. This action minimizes disruption by keeping unaffected machines operational while containing the potential threat.

Update the host firewalls to block outbound SMB: This could help to some extent, but it would not address the outbound HTTPS communications, which could still allow the unauthorized software to communicate and spread.

Place the unauthorized application in a blocklist: This can be part of the solution, but it may not be as immediate or effective in stopping the spread if the software is already running on multiple machines. Moreover, creating and distributing a blocklist could take some time and may not be comprehensive if the unauthorized software has multiple variants.

Implement a content filter to block the unauthorized software communication: This could also help, but it might require more time to configure properly and may not immediately stop the spread if the software uses encrypted communications that are harder to filter effectively.

By containing the affected machines, the security team can prevent further spread and analyze the software in a controlled environment, then proceed with additional steps such as removal, blocklisting, or firewall updates as necessary.

49
Q

A security analyst has been reading about a newly discovered cyberattack from a known threat actor. Which of the following would BEST support the analyst’s review of the tactics, techniques, and protocols the threat actor was observed using in previous campaigns?

a. Security research publications
b. The MITRE ATT&CK framework
c. The Diamond Model of Intrusion Analysis
d. The Cyber Kill Chain

A

b. The MITRE ATT&CK framework
Explanation:

The MITRE ATT&CK framework: This framework provides a comprehensive and detailed knowledge base of adversary tactics, techniques, and procedures (TTPs) used in cyberattacks. It is specifically designed to help security professionals understand and analyze the methods used by threat actors in previous campaigns, making it the best tool for the analyst's needs.

Security research publications: While valuable, they may not provide a systematic and structured analysis of tactics and techniques across different threat actors and campaigns.

The Diamond Model of Intrusion Analysis: This model is useful for understanding the relationships between adversaries, capabilities, infrastructure, and victims, but it does not provide the same detailed catalog of TTPs as the MITRE ATT&CK framework.

The Cyber Kill Chain: This model outlines the stages of a cyberattack but does not provide the same level of detail on specific tactics, techniques, and protocols used by threat actors as the MITRE ATT&CK framework does.

50
Q

A security analyst is hardening a network infrastructure. The analyst is given the following requirements:

*Preserve the use of public IP addresses assigned to equipment on the core router.
*Enable “in transport” encryption protection to the web server with the strongest ciphers.

Which of the following should the analyst implement to meet these requirements? (Choose two.)

a. Configure VLANs on the core router.
b. Configure NAT on the core router.
c. Configure BGP on the core router.
d. Enable AES encryption on the web server.
e. Enable 3DES encryption on the web server.
f. Enable TLSv2 encryption on the web server.

A

(Community : BF 63%, CF 29%)

(Braindump : b. Configure NAT on the core router. f. Enable TLSv2 encryption on the web server.)

d. Enable AES encryption on the web server.
f. Enable TLSv2 encryption on the web server.

Here’s why these choices are appropriate:

Enable AES encryption on the web server: AES (Advanced Encryption Standard) is a strong encryption algorithm widely used to secure data in transit. Enabling AES ensures that data between clients and the web server is encrypted with a highly secure cipher.

Enable TLSv2 encryption on the web server: TLS (Transport Layer Security) 1.2 or higher is the current standard for secure communication over networks; the option's "TLSv2" is best read as TLSv1.2, since there is no protocol formally named TLSv2. It provides robust encryption, ensuring that data in transit is protected with the strongest available protocol versions.

Let’s briefly review the other options:

Configure VLANs on the core router: VLANs are used to segment network traffic but do not address the requirements of preserving public IP addresses or enabling encryption.

Configure NAT on the core router: Network Address Translation (NAT) would change the public IP addresses to private ones for internal use, which does not meet the requirement of preserving the use of public IP addresses.

Configure BGP on the core router: Border Gateway Protocol (BGP) is used for routing between autonomous systems on the internet. It does not address encryption or the specific requirement of using public IP addresses on equipment.

Enable 3DES encryption on the web server: While 3DES is an encryption algorithm, it is considered weaker and less efficient compared to AES. Thus, it does not meet the requirement of using the strongest ciphers.

Therefore, enabling AES encryption and TLSv2 on the web server will meet the requirements of providing strong “in transport” encryption and preserving the use of public IP addresses.
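
The combination of both chosen answers maps directly to a server-side TLS configuration: require TLS 1.2 or newer and restrict the cipher list to AES-GCM suites. This sketch uses Python's `ssl` module; the certificate and key paths are placeholders:

```python
# Sketch of strong in-transit encryption for a web service: require
# TLS 1.2+ and restrict ciphers to AES-GCM suites with forward secrecy.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS 1.0/1.1
ctx.set_ciphers("ECDHE+AESGCM")               # AES in GCM mode, ECDHE key exchange
# ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Note that 3DES suites are excluded automatically by this cipher string, matching the explanation above for why option e is wrong.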