601-650 Flashcards

1
Q

Which of the following has been implemented when a host-based firewall on a legacy Linux system allows connections from only specific internal IP addresses?

A. Compensating control
B. Network segmentation
C. Transfer of risk
D. SNMP traps

A

A. Compensating control
(Community: A 64%, B 36%)

Here’s why:

Compensating Control: This is a security measure that is implemented to satisfy the requirements of a security policy that cannot be met with the primary control. In this case, allowing connections only from specific internal IP addresses serves as a compensating control to enhance security on a legacy system that may not support more modern security measures.

Network Segmentation: This involves dividing a network into smaller segments or subnetworks to improve security and manageability. While controlling access based on IP addresses can be part of network segmentation, the described scenario is specifically about a host-based firewall, not the overall network architecture.

Transfer of Risk: This refers to shifting the risk to another party, often through insurance or outsourcing. Implementing a host-based firewall rule does not transfer risk; it mitigates risk.

SNMP Traps: Simple Network Management Protocol (SNMP) traps are notifications sent from a network device to a management system, indicating a significant event. They are not related to firewall rules or access control.

Therefore, the implementation of specific internal IP address allowances in a host-based firewall on a legacy Linux system is best described as a Compensating Control.

(Brain dump: B. Network segmentation )
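The effect of the rule the scenario describes can be mimicked in a short Python sketch (the IP addresses below are hypothetical, chosen only for illustration):

```python
import ipaddress

# Hypothetical allowlist of internal hosts permitted to reach the legacy server
ALLOWED_SOURCES = {
    ipaddress.ip_address("10.0.5.10"),
    ipaddress.ip_address("10.0.5.11"),
}

def connection_permitted(src_ip: str) -> bool:
    """Mimic the host-based firewall rule: accept only allowlisted sources."""
    return ipaddress.ip_address(src_ip) in ALLOWED_SOURCES

print(connection_permitted("10.0.5.10"))   # True
print(connection_permitted("192.0.2.44"))  # False
```

Because the legacy OS cannot support the preferred controls (e.g. current patches or modern endpoint protection), restricting connection sources stands in for them, which is exactly what makes the rule a compensating control.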

2
Q

An attacker tricks a user into providing confidential information. Which of the following describes this form of malicious reconnaissance?

A. Phishing
B. Social engineering
C. Typosquatting
D. Smishing

A

B. Social engineering

Here’s why:

Phishing: Phishing is a specific type of social engineering attack where the attacker sends fraudulent emails or messages that appear to come from a legitimate source, with the aim of tricking the user into providing confidential information. While phishing is a subset of social engineering, the broader term is more applicable to the general act of tricking users.

Social Engineering: This encompasses a wide range of manipulative tactics used to deceive individuals into divulging confidential information. It can include phishing, pretexting, baiting, and other methods where the attacker exploits human psychology.

Typosquatting: Typosquatting involves registering domain names that are similar to legitimate websites, often relying on common typing errors made by users. It aims to deceive users into visiting the malicious website but does not directly involve tricking users into providing information through direct interaction.

Smishing: Smishing is a form of phishing that involves sending fraudulent SMS (text) messages to trick users into providing confidential information. It is a specific type of social engineering attack using SMS.

Given that the question refers to the broader concept of tricking a user into providing confidential information, Social Engineering is the most accurate answer.

3
Q

A large bank with two geographically dispersed data centers is concerned about major power disruptions at both locations. Every day each location experiences very brief outages that last for a few seconds. However, during the summer a high risk of intentional under-voltage events that could last up to an hour exists, particularly at one of the locations near an industrial smelter. Which of the following is the best solution to reduce the risk of data loss?

A. Dual supply
B. Generator
C. PDU
D. Daily backups

A

B. Generator

Here’s why:

Dual Supply: Having a dual power supply ensures redundancy in the power source, but if both sources are affected by the same power disruptions or intentional under-voltage events, this solution alone may not be sufficient.

Generator: A generator can provide a reliable backup power source during extended power outages or intentional under-voltage events. This ensures that critical systems can continue to operate for longer periods, thereby reducing the risk of data loss.

PDU (Power Distribution Unit): While a PDU helps in distributing power to multiple devices efficiently, it does not address the issue of power outages or under-voltage events.

Daily Backups: While daily backups are essential for data recovery and minimizing data loss, they do not prevent disruptions in real-time operations or data loss that can occur between backups during extended power outages.

Given the scenario with frequent brief outages and the high risk of longer intentional under-voltage events, implementing a Generator would provide continuous power during these disruptions, thereby ensuring the operation of data centers and reducing the risk of data loss.

4
Q

Which of the following examples would be best mitigated by input sanitization?

A. <script>alert("Warning!");</script>
B. nmap -p- 10.11.1.130
C. Email message: “Click this link to get your free gift card.”
D. Browser message: “Your connection is not private.”

A

A. <script>alert("Warning!");</script>

Here’s why:

<script>alert("Warning!");</script>: This is an example of a Cross-Site Scripting (XSS) attack, where malicious scripts are injected into web pages viewed by other users. Input sanitization can effectively prevent such attacks by ensuring that any user-provided input is properly encoded and does not contain executable code.

nmap -p- 10.11.1.130: This is a command used for network scanning. While it represents a potential security risk, it is more related to network security and should be mitigated through network access controls and monitoring rather than input sanitization.

Email message: "Click this link to get your free gift card.": This example represents a phishing attack. Mitigating phishing typically involves user education, email filtering, and anti-phishing technologies rather than input sanitization.

Browser message: "Your connection is not private.": This message indicates a potential issue with an SSL/TLS certificate or a man-in-the-middle attack. Addressing this issue involves ensuring proper SSL/TLS configurations and certificate management, not input sanitization.

Therefore, input sanitization is most directly applicable to mitigating the risks associated with the alert(“Warning!”); example.
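As a concrete illustration, here is a minimal sanitization step using Python's standard-library `html.escape` (the payload is the one from option A; output encoding is one common sanitization technique, not the only one):

```python
import html

# Untrusted input containing a script-injection attempt
user_input = '<script>alert("Warning!");</script>'

# Encoding HTML metacharacters before rendering prevents the browser
# from interpreting the input as executable script
sanitized = html.escape(user_input)
print(sanitized)
# &lt;script&gt;alert(&quot;Warning!&quot;);&lt;/script&gt;
```

Rendered in a page, the escaped string displays as literal text instead of running as JavaScript.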

5
Q

An organization would like to store customer data on a separate part of the network that is not accessible to users on the main corporate network. Which of the following should the administrator use to accomplish this goal?

A. Segmentation
B. Isolation
C. Patching
D. Encryption

A

A. Segmentation

Here’s why:

Segmentation: Network segmentation involves dividing a network into smaller, distinct subnetworks or segments. By doing this, an administrator can isolate customer data on a separate network segment that is not accessible from the main corporate network, thereby enhancing security and reducing the risk of unauthorized access.

Isolation: While similar to segmentation, isolation typically refers to completely separating systems to ensure they have no connectivity. While effective, it is often more extreme and less flexible than segmentation, which allows for controlled and secure interactions between segments if needed.

Patching: Patching involves updating software to fix vulnerabilities and improve security. While important for maintaining the security of systems, patching does not address the specific need to separate and restrict access to customer data within the network.

Encryption: Encryption protects data by making it unreadable to unauthorized users. While crucial for data security, especially for data at rest and in transit, it does not solve the issue of network access control and separation of data storage.

Therefore, Segmentation is the best approach to achieve the goal of storing customer data on a separate part of the network that is not accessible to users on the main corporate network.

(Brain dump: B. Isolation)

6
Q

A company is adding a clause to its AUP that states employees are not allowed to modify the operating system on mobile devices. Which of the following vulnerabilities is the organization addressing?

A. Cross-site scripting
B. Buffer overflow
C. Jailbreaking
D. Side loading

A

C. Jailbreaking

Here’s why:

Cross-site scripting (XSS): This vulnerability involves injecting malicious scripts into web pages viewed by other users. It is not directly related to modifying the operating system on mobile devices.

Buffer overflow: This vulnerability occurs when more data is written to a buffer than it can hold, potentially leading to arbitrary code execution. While serious, it is not specifically addressed by prohibiting modifications to the operating system on mobile devices.

Jailbreaking: Jailbreaking is the process of removing restrictions imposed by the operating system on mobile devices, typically to allow the installation of unauthorized applications or modifications. Prohibiting modifications to the operating system directly addresses the risk of jailbreaking, which can expose devices to security vulnerabilities and malicious software.

Side loading: Side loading refers to installing applications on a device from unofficial sources outside of the official app store. While related to security, it is not directly about modifying the operating system itself, but rather about how applications are installed.

Therefore, the organization is addressing the vulnerability of Jailbreaking by adding this clause to its AUP.

7
Q

A company is expanding its threat surface program and allowing individuals to security-test the company’s internet-facing application. The company will compensate researchers based on the vulnerabilities discovered.
Which of the following best describes the program the company is setting up?

A. Open-source intelligence
B. Bug bounty
C. Red team
D. Penetration testing

A

B. Bug bounty

Here’s why:

Open-source intelligence (OSINT): OSINT involves collecting and analyzing publicly available information to identify potential security threats. It does not typically involve compensating individuals for discovering specific vulnerabilities in an application.

Bug bounty: A bug bounty program invites individuals, often external security researchers, to test the company's applications for vulnerabilities. The company compensates these researchers based on the vulnerabilities they discover and report. This description matches the scenario provided.

Red team: A red team consists of security professionals who simulate real-world attacks to test the defenses of an organization. While red teaming is a comprehensive security testing approach, it is usually conducted by an internal or contracted team rather than through an open program with external participants.

Penetration testing: Penetration testing involves authorized, simulated attacks on a system to identify security weaknesses. Penetration tests are typically conducted by hired security professionals or specialized firms, not through an open call to external researchers with compensation based on discovered vulnerabilities.

Therefore, the program described in the scenario is a Bug bounty program.

8
Q

An organization experiences a cybersecurity incident involving a command-and-control server. Which of the following logs should be analyzed to identify the impacted host? (Choose two.)

A. Application
B. Authentication
C. Error
D. Network
E. Firewall
F. System

A

(Community: DE 63%, DF 19%, BD 19%)

Network logs (Option D): These logs can help identify network connections to the command-and-control server and provide information about source IP addresses (the impacted host) and destination IP addresses (the command-and-control server).

Firewall logs (Option E): Firewall logs also track network traffic and can provide valuable information about source and destination IP addresses, helping identify the impacted host and its communication with the command-and-control server.

ChatGPT:
D. Network
E. Firewall

Here’s why:

Network Logs: These logs contain information about network traffic, including connections made to and from internal and external IP addresses. Analyzing network logs can help identify communications between the impacted host and the command-and-control server.

Firewall Logs: Firewall logs record allowed and blocked network traffic. These logs can provide insights into unauthorized or suspicious connections to external IP addresses associated with the C2 server, helping to pinpoint the affected host.

The other options are less relevant in this context:

Application Logs: These logs track events related to specific applications, which might not directly reveal communication with a C2 server unless the application itself is compromised.

Authentication Logs: These logs track login attempts and authentication events. While useful for identifying unauthorized access, they are less likely to directly reveal communication with a C2 server.

Error Logs: These logs capture errors and warnings generated by various systems and applications. They are not typically focused on network activity or communications.

System Logs: These logs include a variety of system-level events and operations. While they can be useful for understanding overall system activity, they are less likely to directly indicate communications with a C2 server compared to network and firewall logs.

Therefore, Network and Firewall logs are the most relevant for identifying the impacted host in a command-and-control server incident.
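To make the idea concrete, here is a sketch of scanning firewall log lines for internal hosts that contacted a known C2 address (the log format, addresses, and indicator are invented for illustration):

```python
# Hypothetical firewall log lines in the form "src_ip -> dst_ip action"
log_lines = [
    "10.0.1.15 -> 93.184.216.34 ALLOW",
    "10.0.1.22 -> 198.51.100.7 ALLOW",   # contacting the known C2 address
    "10.0.1.15 -> 203.0.113.9 DENY",
]

# Indicator of compromise, e.g. from a threat-intelligence feed
KNOWN_C2 = {"198.51.100.7"}

# Collect source IPs (the impacted hosts) that communicated with the C2 server
impacted_hosts = {
    line.split(" -> ")[0]
    for line in log_lines
    if line.split(" -> ")[1].split()[0] in KNOWN_C2
}
print(impacted_hosts)  # {'10.0.1.22'}
```

Real firewall and network logs use richer formats (syslog, NetFlow, etc.), but the analysis step is the same: match destination addresses against known C2 indicators and pull out the source hosts.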

9
Q

An administrator assists the legal and compliance team with ensuring information about customer transactions is archived for the proper time period. Which of the following data policies is the administrator carrying out?

A. Compromise
B. Retention
C. Analysis
D. Transfer
E. Inventory

A

B. Retention

Here’s why:

Compromise: This term generally refers to a security incident where data has been accessed or altered without authorization. It is not related to archiving information for a specific time period.

Retention: Data retention policies define how long different types of data should be kept before being deleted or archived. Ensuring that customer transaction information is archived for the proper time period is directly related to data retention.

Analysis: Data analysis involves examining data to extract useful information and insights. This does not involve archiving data for specific time periods.

Transfer: Data transfer policies govern the movement of data between locations or systems. This is not directly related to the archiving of data for compliance purposes.

Inventory: Data inventory involves keeping a catalog of all data assets within an organization. While important for understanding what data exists, it does not address how long data should be kept or archived.

Therefore, the administrator is carrying out a Retention policy by ensuring information about customer transactions is archived for the proper time period.
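A retention check can be sketched in a few lines (the 7-year period is a hypothetical value; actual periods come from the applicable regulation or policy):

```python
from datetime import date, timedelta

# Hypothetical retention period of 7 years for transaction records
RETENTION_PERIOD = timedelta(days=7 * 365)

def must_retain(transaction_date: date, today: date) -> bool:
    """Retention policy check: keep records younger than the retention period."""
    return today - transaction_date < RETENTION_PERIOD

print(must_retain(date(2020, 1, 1), date(2024, 1, 1)))  # True: still in period
print(must_retain(date(2010, 1, 1), date(2024, 1, 1)))  # False: eligible for disposal
```

In practice the same logic drives both sides of a retention policy: records inside the period must be preserved, and records outside it are disposed of on schedule.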

10
Q

Which of the following are the MOST likely vectors for the unauthorized or unintentional inclusion of vulnerable code in a software company’s final software releases? (Choose two.)

A. Unsecure protocols
B. Use of penetration-testing utilities
C. Weak passwords
D. Included third-party libraries
E. Vendors/supply chain
F. Outdated anti-malware software

A

D. Included third-party libraries
E. Vendors/supply chain

Here’s why:

Unsecure protocols: While using insecure protocols can lead to vulnerabilities, they are not typically vectors for the inclusion of vulnerable code in software releases themselves. They are more related to communication security.

Use of penetration-testing utilities: These tools are used to test for vulnerabilities, not introduce them. While improper use could expose vulnerabilities, they are not a primary vector for including vulnerable code in final software releases.

Weak passwords: Weak passwords are a security risk for unauthorized access but are not directly related to the inclusion of vulnerable code in software releases.

Included third-party libraries: Using third-party libraries is a common practice in software development. However, these libraries can contain vulnerabilities or malicious code that can be unintentionally included in the final software release.

Vendors/supply chain: The software supply chain involves multiple vendors and sources that contribute to the final product. Vulnerabilities or malicious code can be introduced through these external entities, either intentionally or unintentionally.

Outdated anti-malware software: While having outdated anti-malware software can increase the risk of infection and attacks, it is not directly related to the inclusion of vulnerable code in software releases.

Therefore, the most likely vectors are Included third-party libraries and Vendors/supply chain.

11
Q

An organization has a growing workforce that is mostly driven by additions to the sales department. Each newly hired salesperson relies on a mobile device to conduct business. The Chief Information Officer (CIO) is wondering if the organization may need to scale down just as quickly as it scaled up. The CIO is also concerned about the organization’s security and customer privacy. Which of the following would be BEST to address the CIO’s concerns?

A. Disallow new hires from using mobile devices for six months.
B. Select four devices for the sales department to use in a CYOD model.
C. Implement BYOD for the sales department while leveraging the MDM.
D. Deploy mobile devices using the COPE methodology.

A

(Brain dump: C. Implement BYOD for the sales department while leveraging the MDM.)

(Community: C 78%, D 17%)
BYOD (Bring Your Own Device) allows employees to use their own mobile devices for work, which can reduce costs and increase productivity. MDM software can be used to manage and secure the devices, ensuring that they are compliant with policies and that sensitive data is protected.

(ChatGPT)
D. Deploy mobile devices using the COPE (Corporate-Owned, Personally-Enabled) methodology.

Here’s why COPE is a suitable choice:

COPE Methodology: COPE allows the organization to maintain control over the mobile devices while still enabling flexibility for employees to use them for both work and personal purposes. This model ensures that the devices are owned and managed by the organization, which enhances security and privacy controls.

Security and Privacy: With COPE, the organization can enforce security policies and configurations on the devices, ensuring they meet compliance requirements and protect customer data. This approach mitigates risks associated with using personal devices (BYOD) or random device selection (CYOD).

Scalability: COPE provides scalability because the organization can manage a fleet of corporate-owned devices more effectively as the workforce grows or shrinks. It allows for streamlined provisioning, management, and decommissioning of devices as needed.

Comparatively:

Disallowing new hires from using mobile devices for six months (Option A) may hinder productivity and isn’t scalable or sustainable in a modern business environment where mobility is essential.

Selecting four devices for the sales department in a Choose Your Own Device (CYOD) model (Option B) could limit flexibility and might not address security concerns as effectively as COPE.

Implementing Bring Your Own Device (BYOD) for the sales department (Option C) introduces more security risks due to the diversity of devices and potential lack of control over employee-owned devices, even with Mobile Device Management (MDM).

Therefore, deploying mobile devices using the COPE methodology is the best option to balance security, privacy, scalability, and flexibility for the growing sales department within the organization.

12
Q

A company has limited storage space available and an online presence that cannot be down for more than four hours. Which of the following backup methodologies should the company implement to allow for the FASTEST database restore time in the event of a failure, while being mindful of the limited available storage space?

A. Implement full tape backups every Sunday at 8:00 p.m. and perform nightly tape rotations.
B. Implement differential backups every Sunday at 8:00 p.m. and nightly incremental backups at 8:00 p.m.
C. Implement nightly full backups every Sunday at 8:00 p.m.
D. Implement full backups every Sunday at 8:00 p.m. and nightly differential backups at 8:00 p.m.

A

D. Implement full backups every Sunday at 8:00 p.m. and nightly differential backups at 8:00 p.m.

Here’s why this option is the most suitable:

Full Backups: Performing full backups every Sunday ensures that a complete copy of the database is captured weekly. This serves as a baseline for recovery operations.

Nightly Differential Backups: Differential backups capture all changes made since the last full backup. They are faster to perform and require less storage space compared to full backups. In the event of a failure, restoring from a full backup followed by the latest differential backup allows for faster recovery compared to incremental backups.

Fastest Restore Time: With this approach, the company can restore the database by first restoring the last full backup and then applying the latest differential backup. This minimizes the restore time because only the changes since the last full backup need to be applied.

Limited Storage Space: While full backups consume more storage space, performing them weekly and complementing them with nightly differentials strikes a balance between storage requirements and recovery speed. Incremental backups, as in option B, would require applying all incremental backups since the last full backup during a restore, potentially lengthening the restore time compared to differential backups.

Therefore, option D (full backups every Sunday and nightly differential backups) is the best choice for achieving the fastest database restore time while managing limited storage space effectively.
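The restore-chain difference can be shown numerically (the day counts are illustrative; the point is how many backup sets must be restored after a failure late in the week):

```python
# Full backup Sunday; failure occurs Friday after five nightly backups.
days_since_full = 5  # Mon..Fri

# Differential scheme: restore the full backup plus only the latest differential
differential_sets_to_restore = 1 + 1

# Incremental scheme: restore the full backup plus every incremental since Sunday
incremental_sets_to_restore = 1 + days_since_full

print(differential_sets_to_restore)  # 2
print(incremental_sets_to_restore)   # 6
```

Fewer sets to apply means a shorter restore window, which is what keeps the outage under the four-hour limit.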

13
Q

A startup company is using multiple SaaS and IaaS platforms to stand up a corporate infrastructure and build out a customer-facing web application. Which of the following solutions would be BEST to provide security, manageability, and visibility into the platforms?

A. SIEM
B. DLP
C. CASB
D. SWG

A

C. CASB (Cloud Access Security Broker)

Here’s why CASB is the most appropriate choice:

Security: CASB solutions are specifically designed to provide security controls and policies for cloud applications and services. They enable the enforcement of security policies such as data encryption, access control, and threat protection across multiple cloud platforms.

Manageability: CASB solutions offer centralized visibility and management of cloud services. They provide insights into user activities, data usage patterns, and potential security threats across all integrated cloud platforms.

Visibility: CASB solutions enhance visibility by monitoring and auditing activities across SaaS and IaaS platforms. They provide detailed logs and reports that help identify compliance issues, anomalous behavior, and unauthorized access attempts.

Comparatively:

SIEM (Security Information and Event Management) solutions focus on aggregating and analyzing security event data from various sources across an organization's IT infrastructure. While valuable for overall security monitoring, they may not provide specific controls or visibility tailored for cloud applications and services.

DLP (Data Loss Prevention) solutions are crucial for protecting sensitive data by monitoring and preventing its unauthorized disclosure. While important, DLP primarily focuses on data protection rather than managing and securing cloud services.

SWG (Secure Web Gateway) solutions are used for controlling and securing web traffic, including enforcing security policies for web browsing and filtering malicious content. However, SWG solutions are not designed to provide comprehensive security and management capabilities for cloud applications and services.

Therefore, CASB would be the best solution to provide security, manageability, and visibility into the SaaS and IaaS platforms used by the startup company.

14
Q

Which of the following technical controls is BEST suited for the detection and prevention of buffer overflows on hosts?

A. DLP
B. HIDS
C. EDR
D. NIPS

A

(Community: C 82%, B 18%)
C. “…buffer overflows on hosts.” EDR is Endpoint Detection and Response; HIDS cannot prevent buffer overflows.

ChatGPT
C. EDR (Endpoint Detection and Response)

Here’s why EDR is the most appropriate choice:

Endpoint Detection and Response (EDR): EDR solutions are designed to monitor and analyze endpoint activities in real time. They can detect suspicious behaviors, including those indicative of buffer overflow attacks, by monitoring system calls, process executions, and memory usage.

Buffer Overflows: Buffer overflows occur when a program writes data beyond the allocated buffer size, potentially leading to code execution or crashes that attackers can exploit. EDR solutions can detect abnormal memory usage patterns and unauthorized changes to memory segments, which are indicative of buffer overflow attempts.

Comparatively:

DLP (Data Loss Prevention) focuses on preventing the unauthorized transmission of sensitive data and is not specifically designed to detect or prevent buffer overflow attacks.

HIDS (Host-based Intrusion Detection System) can detect unauthorized access or changes to files, but it may not provide the detailed behavioral analysis needed to detect and prevent buffer overflows.

NIPS (Network Intrusion Prevention System) is oriented towards monitoring network traffic and preventing network-based attacks, such as DDoS or network-based exploits, rather than detecting and preventing buffer overflows on individual hosts.

Therefore, EDR is the best technical control suited for the detection and prevention of buffer overflows on hosts, due to its capabilities in real-time monitoring, behavioral analysis, and response to endpoint activities.

15
Q

A consultant is configuring a vulnerability scanner for a large, global organization in multiple countries. The consultant will be using a service account to scan systems with administrative privileges on a weekly basis, but there is a concern that hackers could gain access to the account and pivot throughout the global network.
Which of the following would be BEST to help mitigate this concern?

A. Create different accounts for each region, each configured with push MFA notifications.
B. Create one global administrator account and enforce Kerberos authentication.
C. Create different accounts for each region, limit their logon times, and alert on risky logins.
D. Create a guest account for each region, remember the last ten passwords, and block password reuse.

A

(Brain dump: C. Create different accounts for each region, limit their logon times, and alert on risky logins.)

(Community: C 70%, A 30%)
C: it’s a service account, so push MFA can’t be used.

ChatGPT
A. Create different accounts for each region, each configured with push MFA notifications.

Here’s why this option is the most effective:

Multiple Accounts: By creating separate accounts for each region, the organization limits the scope of access. Compromising one account would not grant access to systems in other regions, thereby reducing the potential impact of a breach.

MFA (Multi-Factor Authentication) with Push Notifications: Enforcing MFA adds an additional layer of security beyond just a password. Push notifications enhance security by requiring a user to approve access attempts on their mobile device, reducing the risk of unauthorized access even if the account credentials are compromised.

Regional Segmentation: Limiting accounts to specific regions helps contain potential breaches and reduces the attack surface across the global network.

Comparatively:

Option B (Create one global administrator account and enforce Kerberos authentication) increases the risk because compromising this single global account would provide unrestricted access across all regions.

Option C (Create different accounts for each region, limit their logon times, and alert on risky logins) is better than having a single global account but does not explicitly address the need for multi-factor authentication, which is highly effective in preventing unauthorized access.

Option D (Create a guest account for each region, remember the last ten passwords, and block password reuse) focuses on password management rather than access control and does not address the need for separate accounts with regional segmentation and MFA.

Therefore, Option A (Create different accounts for each region, each configured with push MFA notifications) is the best choice to mitigate the concern of unauthorized access to the vulnerability scanner service account and potential pivoting throughout the global network.

16
Q

While troubleshooting a firewall configuration, a technician determines that a “deny any” policy should be added to the bottom of the ACL. The technician updates the policy, but the new policy causes several company servers to become unreachable. Which of the following actions would prevent this issue?

A. Documenting the new policy in a change request and submitting the request to change management
B. Testing the policy in a non-production environment before enabling the policy in the production network
C. Disabling any intrusion prevention signatures on the “deny any” policy prior to enabling the new policy
D. Including an “allow any” policy above the “deny any” policy

A

B. Testing the policy in a non-production environment before enabling the policy in the production network.

Here’s why this option is the most appropriate:

Testing in Non-Production Environment: Before implementing any major changes, especially ones involving a "deny any" policy which blocks all traffic, it is crucial to test the changes in a controlled, non-production environment. This allows the technician to observe any unintended consequences, such as servers becoming unreachable, without impacting live services.

Change Management: While documenting the change request (Option A) and submitting it to change management are important steps for maintaining documentation and accountability, they do not directly prevent the immediate impact on production servers. Testing in a non-production environment provides an opportunity to identify and resolve issues before they affect live services.

Disabling Intrusion Prevention Signatures (Option C) is not relevant in this context as it pertains to a different security control and does not address the root cause of the servers becoming unreachable due to the "deny any" policy.

Including an "allow any" policy above the "deny any" policy (Option D) would negate the effect of the "deny any" policy and compromise security, as it would allow all traffic regardless of ACL rules.

Therefore, testing the policy in a non-production environment (Option B) is the most effective action to take to prevent the issue of servers becoming unreachable while troubleshooting firewall configurations. This approach ensures that changes are validated and any unintended consequences are addressed before applying them in a production network.
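
The first-match behavior described above can be illustrated with a minimal sketch (hypothetical rule format, not any vendor's API) showing how a bottom "deny any" rule silently blocks anything not explicitly allowed above it, which is exactly what testing in a non-production environment would catch:

```python
# Minimal sketch of first-match ACL evaluation. The rule format and
# addresses are hypothetical, chosen only to illustrate rule ordering.

def evaluate(acl, packet):
    """Return the action of the first rule matching the packet's destination."""
    for action, dst in acl:
        if dst == "any" or dst == packet["dst"]:
            return action
    return "deny"  # implicit deny if nothing matches

# ACL that allows the workstation subnet but is missing an allow rule
# for the server subnet before the new bottom "deny any":
acl = [
    ("allow", "10.0.1.0/24"),   # workstations
    ("deny", "any"),            # newly added bottom rule
]

print(evaluate(acl, {"dst": "10.0.1.0/24"}))   # allow
print(evaluate(acl, {"dst": "10.0.2.0/24"}))   # deny - servers unreachable
```

Running traffic samples like these through a lab copy of the ACL would reveal the unreachable server subnet before the policy ever touches production.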

17
Q

A network technician is installing a guest wireless network at a coffee shop. When a customer purchases an item, the password for the wireless network is printed on the receipt so the customer can log in. Which of the following will the technician MOST likely configure to provide the highest level of security with the least amount of overhead?

A. WPA-EAP
B. WEP-TKIP
C. WPA-PSK
D. WPS-PIN

A

C. WPA-PSK (Wi-Fi Protected Access - Pre-Shared Key)

Here’s why WPA-PSK is the most suitable choice:

Ease of Configuration: WPA-PSK is straightforward to configure and manage. The same pre-shared key (password) is used for all clients accessing the guest wireless network, which aligns with the requirement of printing the password on receipts for customers.

Security: While WEP (Wired Equivalent Privacy) and WPS (Wi-Fi Protected Setup) are less secure and have known vulnerabilities, WPA-PSK provides stronger security through encryption and authentication mechanisms. It ensures that unauthorized users cannot easily access the network without the shared key.

Overhead: WPA-PSK does not require additional infrastructure or server-side configuration (unlike WPA-EAP, which involves a RADIUS server for authentication), making it a low-overhead solution suitable for a coffee shop environment.

Comparatively:

WPA-EAP (Wi-Fi Protected Access - Extensible Authentication Protocol) involves a more complex setup with a RADIUS server and provides individualized credentials for each user, which is more suitable for enterprise environments rather than a coffee shop setting.

WEP-TKIP (Wired Equivalent Privacy - Temporal Key Integrity Protocol) is an outdated protocol with significant security weaknesses and should not be used for securing modern wireless networks.

WPS-PIN (Wi-Fi Protected Setup - Personal Identification Number) is a simplified method for connecting devices to a wireless network but is also vulnerable to attacks and not suitable for providing the highest level of security.

Therefore, WPA-PSK is the best choice to provide a balance of security, ease of configuration, and minimal overhead for securing the guest wireless network at the coffee shop.
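
The "pre-shared" part of WPA-PSK can be shown concretely: per IEEE 802.11i, the passphrase and SSID are run through PBKDF2-HMAC-SHA1 (4096 iterations, 256-bit output) to derive the Pairwise Master Key. The sketch below uses a hypothetical passphrase and SSID; every customer typing the password from the receipt derives the same PMK:

```python
# Sketch of WPA/WPA2-PSK Pairwise Master Key derivation (IEEE 802.11i):
# PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes).
# The passphrase and SSID below are made up for illustration.

import hashlib

def wpa_psk_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

pmk = wpa_psk_pmk("CoffeeShopGuest!", "CoffeeShopWiFi")
print(pmk.hex())
print(len(pmk))   # 32 bytes (256 bits)
```

Because the derivation is deterministic, all clients share one key, which is what keeps the overhead low compared with per-user credentials under WPA-EAP.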

18
Q

Which of the following ISO standards is certified for privacy?

A. ISO 9001
B. ISO 27002
C. ISO 27701
D. ISO 31000

A

C. ISO 27701

Here’s why:

ISO 27701: This standard specifies requirements and provides guidance for establishing, implementing, maintaining, and continually improving a Privacy Information Management System (PIMS). It is an extension to ISO 27001 (Information Security Management System) and ISO 27002 (Code of practice for information security controls), focusing specifically on privacy management.

ISO 9001: This standard pertains to quality management systems and does not specifically address privacy management.

ISO 27002: This standard provides guidelines and best practices for information security controls, but it does not focus exclusively on privacy management.

ISO 31000: This standard provides principles and guidelines for risk management, applicable across various organizational contexts, but it does not specifically address privacy management.

Therefore, ISO 27701 is the ISO standard that is certified for privacy, as it specifically deals with Privacy Information Management Systems (PIMS) and extends the requirements of ISO 27001 to include privacy controls.

19
Q

An organization suffered an outage, and a critical system took 90 minutes to come back online. Though there was no data loss during the outage, the expectation was that the critical system would be available again within 60 minutes. Which of the following is the 60-minute expectation an example of?

A. MTBF
B. RPO
C. MTTR
D. RTO

A

D. RTO (Recovery Time Objective)

Here’s why:

Recovery Time Objective (RTO): RTO is the targeted duration within which a business process or system must be restored after a disruption (such as an outage or failure) in order to avoid unacceptable consequences. In this case, the organization expected the critical system to be back online within 60 minutes to meet business continuity requirements.

Mean Time Between Failures (MTBF): MTBF refers to the average time between failures of a system or component. It measures reliability rather than recovery time.

Recovery Point Objective (RPO): RPO specifies the maximum tolerable amount of data loss in time before a disruption. It focuses on data recovery rather than system recovery time.

Mean Time to Repair (MTTR): MTTR measures the average time taken to repair a failed component or system and restore it to normal operation. It is a measured metric describing actual repair performance across incidents, whereas the 60-minute figure in the scenario is a target set in advance, which makes it an objective (RTO) rather than a measurement.

Therefore, RTO accurately describes the expectation of having the critical system back online within 60 minutes after the outage, aligning with business continuity and operational requirements.
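
The distinction can be reduced to a one-line comparison: RTO is the target, the 90 minutes is the observed recovery time, and the incident breached the objective. A quick sketch using the scenario's numbers:

```python
# RTO vs. actual recovery time, using the values from the scenario.
# RTO is a pre-agreed target; the recovery time is what was observed.

rto_minutes = 60          # target: critical system back within 60 minutes
actual_recovery = 90      # observed: system took 90 minutes to come back

rto_met = actual_recovery <= rto_minutes
print(f"RTO met: {rto_met}, exceeded by {actual_recovery - rto_minutes} minutes")
# RTO met: False, exceeded by 30 minutes
```

(RPO would instead be compared against data loss, which was zero here, so the RPO was met even though the RTO was not.)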

20
Q

A security analyst needs to be proactive in understanding the types of attacks that could potentially target the company’s executives. Which of the following intelligence sources should the security analyst review?

A. Vulnerability feeds
B. Trusted automated exchange of indicator information
C. Structured threat information expression
D. Industry information-sharing and collaboration groups

A

D. Industry information-sharing and collaboration groups

Here’s why this option is the most appropriate:

Industry Information-Sharing and Collaboration Groups: These groups facilitate the exchange of threat intelligence and best practices among organizations within the same industry or sector. They often provide timely information about emerging threats, attack trends, and specific targeting tactics observed in the industry. This can help the security analyst stay informed about potential threats targeting executives and tailor defenses accordingly.

Comparatively:

Vulnerability Feeds (Option A) primarily provide information about software vulnerabilities and patches rather than specific threat intelligence related to targeted attacks on executives.

Trusted Automated eXchange of Indicator Information (TAXII) (Option B) is a transport protocol for sharing indicators of compromise (IOCs) and threat data between trusted entities in an automated way. While useful for detecting known threats, it does not focus specifically on executive-level threats.

Structured Threat Information Expression (STIX) (Option C) is a standardized language for expressing threat information but does not in itself provide intelligence specific to executive-level attacks.

Therefore, industry information-sharing and collaboration groups are the most relevant intelligence sources for the security analyst to understand and mitigate potential threats targeting the company’s executives effectively.

21
Q

An enterprise has hired an outside security firm to conduct penetration testing on its network and applications.
The firm has agreed to pay for each vulnerability that is discovered. Which of the following BEST represents the type of testing that will occur?

A. Bug bounty
B. Black-box
C. Gray-box
D. White-box
E. Red-team

A

A. Bug bounty

Here’s why:

Bug Bounty: Bug bounty programs involve inviting external security researchers or firms to discover vulnerabilities in an organization's systems or applications. These researchers are typically incentivized by monetary rewards for each valid vulnerability they report. This approach encourages independent testing and discovery of vulnerabilities in a controlled and incentivized manner.

Penetration Testing Types Comparison:

Black-box (Option B): This type involves testing where the tester has no prior knowledge of the network or systems being tested. Payment for vulnerabilities is not a defining characteristic of black-box testing.

Gray-box (Option C): Gray-box testing involves partial knowledge of the internal workings of the systems being tested. Like black-box testing, it does not typically involve payment per vulnerability as part of the arrangement.

White-box (Option D): White-box testing provides full knowledge of the systems, including internal architecture and source code. It is not typically associated with external firms paying for vulnerabilities but rather a comprehensive assessment by internal or contracted security teams.

Red-team (Option E): Red-team exercises simulate realistic attacks to evaluate the effectiveness of security defenses. While they may involve external firms, they are not structured around payment per vulnerability discovered as in bug bounty programs.

Therefore, bug bounty best represents the type of testing where an outside security firm is hired to discover vulnerabilities and is paid for each vulnerability found during the testing process.

22
Q

A security analyst needs to determine how an attacker was able to use User3 to gain a foothold within a company’s network. The company’s lockout policy requires that an account be locked out for a minimum of 15 minutes after three unsuccessful attempts. While reviewing the log files, the analyst discovers the following:

3/16/20 3:31:10 AM Audit Failure : CompanyNetwork\User1 Unknown username or bad password
3/16/20 3:31:11 AM Audit Failure : CompanyNetwork\User1 Unknown username or bad password
3/16/20 3:31:12 AM Audit Failure : CompanyNetwork\User1 Unknown username or bad password
3/16/20 3:31:13 AM Audit Failure : CompanyNetwork\User1 locked out
3/16/20 3:31:14 AM Audit Failure : CompanyNetwork\User2 Unknown username or bad password
3/16/20 3:31:15 AM Audit Failure : CompanyNetwork\User2 Unknown username or bad password
3/16/20 3:31:16 AM Audit Failure : CompanyNetwork\User2 Unknown username or bad password
3/16/20 3:31:18 AM Audit Failure : CompanyNetwork\User2 locked out
3/16/20 3:31:19 AM Audit Failure : CompanyNetwork\User3 Unknown username or bad password
3/16/20 3:32:40 AM Audit Failure : CompanyNetwork\User4 Unknown username or bad password
3/16/20 3:33:25 AM Audit Failure : CompanyNetwork\User4 Successful logon

Which of the following attacks MOST likely occurred?

A. Dictionary
B. Credential-stuffing
C. Password-spraying
D. Brute-force

A

C. Password-spraying

Here’s the reasoning:

Password-spraying: In a password-spraying attack, the attacker tries one (or a small number of) common passwords against many user accounts, staying below each account's failure threshold so that lockout policies are not triggered and the activity is harder to detect.

In the provided logs:

User1 experiences three consecutive failed login attempts followed by a lockout.
User2 experiences three consecutive failed login attempts followed by a lockout.
User3 experiences one failed login attempt.

User4’s log shows a single failed attempt followed almost immediately by a successful logon, indicating the sprayed password was correct for that account and gave the attacker access.

Reasoning against other options:

Dictionary (Option A): Dictionary attacks involve trying multiple passwords for a single user account, which would likely result in triggering the lockout policy after several failed attempts. This is not consistent with the observed pattern in the logs.

Credential-stuffing (Option B): Credential-stuffing involves replaying known username/password pairs obtained from previous breaches. The logs show generic failed attempts spread across many accounts rather than the replay of specific breached credential pairs, so this is a less precise fit.

Brute-force (Option D): Brute-force attacks involve trying many possible combinations of passwords until the correct one is found. They usually result in triggering lockout policies due to the high number of failed attempts, which is not evident in the provided logs.

Therefore, based on the log entries and the behavior described, password-spraying is the most likely type of attack that occurred, where the attacker tried a single password across multiple user accounts to avoid triggering account lockout policies.
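
An analyst's rule of thumb from the logs above can be sketched as a per-account failure count: many failures on one account suggests brute force or dictionary attacks, while one or two failures across many accounts suggests spraying. The usernames below are taken from the scenario's log:

```python
# Sketch: distinguishing brute force from password spraying by counting
# failed logon attempts per account, using the accounts from the log.

from collections import Counter

failures = [
    "User1", "User1", "User1",   # three failures -> lockout triggered
    "User2", "User2", "User2",   # three failures -> lockout triggered
    "User3",                     # single failure, below lockout threshold
    "User4",                     # single failure, then successful logon
]

per_account = Counter(failures)
LOCKOUT_THRESHOLD = 3
sprayed = [u for u, n in per_account.items() if n < LOCKOUT_THRESHOLD]

print(per_account)
print("accounts probed below lockout threshold:", sprayed)
```

User3 and User4 fit the spraying pattern cleanly; the attacker appears to have adjusted tactics after locking out the first two accounts.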

23
Q

Users have been issued smart cards that provide physical access to a building. The cards also contain tokens that can be used to access information systems. Users can log in to any thin client located throughout the building and see the same desktop each time. Which of the following technologies are being utilized to provide these capabilities? (Choose two.)

A. COPE
B. VDI
C. GPS
D. TOTP
E. RFID
F. BYOD

A

B. VDI (Virtual Desktop Infrastructure)

VDI allows users to access a centralized desktop environment (virtual desktop) from any thin client within the building. This ensures consistency of the desktop environment regardless of which thin client is used.

E. RFID (Radio Frequency Identification)

RFID is used in the smart cards issued to users for physical access to the building. These cards also contain tokens that can be used for accessing information systems, likely through a form of authentication mechanism (such as TOTP - Time-Based One-Time Passwords).

Here’s a breakdown of the options:

COPE (Corporate-Owned, Personally Enabled): COPE refers to a corporate-owned device that users can personalize to some extent. It's not directly related to the described capabilities of accessing a consistent desktop environment and using smart cards.

VDI (Virtual Desktop Infrastructure): VDI allows for centralized management of desktops, providing users with the same desktop experience from any location within the building.

GPS (Global Positioning System): GPS is used for location tracking and navigation, not related to desktop virtualization or smart card access.

TOTP (Time-Based One-Time Passwords): TOTP is a form of two-factor authentication that can be used with tokens stored on smart cards for accessing information systems.

RFID (Radio Frequency Identification): RFID is used in the smart cards for physical access and potentially for authentication purposes.

BYOD (Bring Your Own Device): BYOD allows employees to use personal devices for work, which is not directly related to the described capabilities of accessing a consistent desktop environment and using smart cards.

Therefore, VDI and RFID are the technologies being utilized to provide users with the capability to access a consistent desktop environment from any thin client and use smart cards for physical access and system authentication.

24
Q

A security engineer needs to implement an MDM solution that complies with the corporate mobile device policy.
The policy states that in order for mobile users to access corporate resources on their devices, the following requirements must be met:
Mobile device OSs must be patched up to the latest release.
A screen lock must be enabled (passcode or biometric).
Corporate data must be removed if the device is reported lost or stolen.
Which of the following controls should the security engineer configure? (Choose two.)

A. Containerization
B. Storage segmentation
C. Posturing
D. Remote wipe
E. Full-device encryption
F. Geofencing

A

C. Posturing

Posturing refers to the ability to assess and enforce compliance with security policies on mobile devices. It typically involves checking the device’s compliance status, such as OS patch level and screen lock status, before allowing access to corporate resources.

D. Remote wipe

Remote wipe allows the security team to remotely erase corporate data from a device that is reported lost or stolen, thereby ensuring that sensitive information does not fall into unauthorized hands.

Here’s how these controls align with the policy requirements:

Mobile device OSs must be patched up to the latest release: Posturing capabilities can verify the patch level of the device OS and ensure that it meets the latest release requirements before granting access to corporate resources.

A screen lock must be enabled (passcode or biometric): Posturing can check if a screen lock (passcode or biometric) is enabled on the device. Additionally, device management settings can enforce this requirement.

Corporate data must be removed if the device is reported lost or stolen: Remote wipe functionality allows the security team to remotely delete corporate data from a device that is lost or stolen, maintaining data security and compliance with the policy.

Comparatively:

Containerization (Option A): Containerization isolates corporate data in a separate encrypted container on the device but does not directly enforce OS patching or screen lock requirements.

Storage segmentation (Option B): In a mobile device context, this refers to separating corporate data from personal data on the device's storage. It protects data at rest but does not enforce OS patching, screen locks, or the removal of corporate data from a lost or stolen device.

Full-device encryption (Option E): While important for data security, full-device encryption does not specifically address the policy requirements related to OS patching, screen lock, and remote data wipe.

Geofencing (Option F): Geofencing defines geographical boundaries for device usage but is not directly related to enforcing OS patching, screen lock, or remote data wipe policies.

Therefore, posturing and remote wipe are the controls that the security engineer should configure to ensure compliance with the corporate mobile device policy.

25
Q

A systems administrator needs to implement an access control scheme that will allow an object’s access policy to be determined by its owner. Which of the following access control schemes BEST fits the requirements?

A. Role-based access control
B. Discretionary access control
C. Mandatory access control
D. Attribute-based access control

A

B. Discretionary access control (DAC)

Here’s why DAC is the best fit for the requirements:

Discretionary Access Control (DAC): In DAC, access decisions are based on the discretion of the object's owner (or creator). The owner of an object can determine who has access to the object and what permissions those users have. This aligns directly with the requirement that the object's access policy is determined by its owner.

Role-based access control (RBAC) assigns access rights based on roles rather than individual users or owners. While roles can be assigned certain permissions, the owner does not necessarily dictate access policies directly.

Mandatory Access Control (MAC) is typically used in environments where access control is centrally determined based on security labels or classifications, not by individual owners.

Attribute-based access control (ABAC) uses attributes of users, objects, and the environment to make access control decisions. While flexible, ABAC does not explicitly tie access control decisions to an object's owner as in DAC.

Therefore, Discretionary Access Control (DAC) is the access control scheme that best allows the owner of an object to determine its access policy, making it the most suitable choice for this scenario.
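
The defining property of DAC, that the owner holds and delegates the access policy, can be illustrated with a minimal sketch (the class and usernames are hypothetical, for illustration only):

```python
# Minimal illustration of discretionary access control: the object's
# owner is the only principal who can change its access policy.

class Document:
    def __init__(self, owner: str):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # owner starts with full rights

    def grant(self, requester: str, user: str, perms: set):
        # The DAC principle: only the owner may modify the access policy.
        if requester != self.owner:
            raise PermissionError("only the owner can grant access")
        self.acl.setdefault(user, set()).update(perms)

    def can(self, user: str, perm: str) -> bool:
        return perm in self.acl.get(user, set())

doc = Document(owner="alice")
doc.grant("alice", "bob", {"read"})    # owner delegates read access
print(doc.can("bob", "read"))          # True
print(doc.can("bob", "write"))         # False
```

Under MAC, by contrast, the `grant` decision would be made by a central policy engine comparing security labels, not by the object's owner.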

26
Q

Which of the following security concepts should an e-commerce organization apply for protection against erroneous purchases?

A. Privacy
B. Availability
C. Integrity
D. Confidentiality

A

C. Integrity

Here’s why:

Integrity: Integrity ensures that data remains accurate, complete, and unaltered. In the context of an e-commerce organization, integrity mechanisms help prevent unintended changes to transactional data, ensuring that purchases are processed accurately and errors are minimized. This can involve techniques such as data validation, checksums, and transaction logging to detect and prevent unauthorized or erroneous changes to purchase orders and transaction records.

While availability (B) ensures that systems and resources are accessible and reliable for legitimate users, confidentiality (D) protects sensitive information from unauthorized access, and privacy (A) ensures that personal information is handled appropriately, integrity specifically addresses the accuracy and reliability of transactional data, which is crucial in preventing erroneous purchases in an e-commerce setting.
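
One of the integrity mechanisms mentioned above, a checksum over the transaction record, can be sketched as follows (the order-record format is hypothetical, chosen only to illustrate the technique):

```python
# Sketch of an integrity check on an order record using a SHA-256 digest:
# if the record is altered in transit or storage, the recomputed hash no
# longer matches the stored one, so the erroneous purchase can be rejected.

import hashlib

def digest(record: str) -> str:
    return hashlib.sha256(record.encode()).hexdigest()

order = "order_id=1001;item=widget;qty=2;total=19.98"
stored_hash = digest(order)  # computed when the order is created

# Later, before processing, verify the record has not been changed:
tampered = order.replace("qty=2", "qty=200")
print(digest(order) == stored_hash)      # True  - record intact, process it
print(digest(tampered) == stored_hash)   # False - reject the purchase
```

In practice this would be combined with input validation on order creation and an HMAC or digital signature rather than a bare hash, so an attacker cannot simply recompute the digest after tampering.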

27
Q

A network administrator needs to build out a new datacenter, with a focus on resiliency and uptime. Which of the following would BEST meet this objective? (Choose two.)

A. Dual power supply
B. Off-site backups
C. Automatic OS upgrades
D. NIC teaming
E. Scheduled penetration testing
F. Network-attached storage

A

A. Dual power supply
D. NIC teaming
Explanation:

A. Dual power supply: Ensures that if one power supply fails, the other can take over, thereby preventing downtime due to power supply failure.

D. NIC teaming: Involves combining multiple network interface cards (NICs) to act as a single interface, providing redundancy and improved network performance. If one NIC fails, the other can handle the traffic, thus enhancing network resiliency and uptime.

Rationale for Other Options:

B. Off-site backups: Important for data recovery and protection against disasters, but it doesn't directly contribute to the immediate resiliency and uptime of the data center.

C. Automatic OS upgrades: While keeping systems up to date is important, automatic upgrades can sometimes introduce instability or require reboots, which may impact uptime.

E. Scheduled penetration testing: Crucial for security but does not directly contribute to the physical resiliency and uptime of the data center.

F. Network-attached storage (NAS): Useful for centralized storage, but it doesn't directly address network or power resiliency, which are critical for uptime.

Summary:

Dual power supplies and NIC teaming are directly related to ensuring the resiliency and uptime of a data center by providing redundancy for power and network connectivity, respectively.

28
Q

A company’s Chief Information Officer (CIO) is meeting with the Chief Information Security Officer (CISO) to plan some activities to enhance the skill levels of the company’s developers. Which of the following would be MOST suitable for training the developers?

A. A capture-the-flag competition
B. A phishing simulation
C. Physical security training
D. Basic awareness training

A

A. A capture-the-flag competition

Here’s why:

Capture-the-flag (CTF) competition: CTFs are cybersecurity competitions where participants solve challenges related to various aspects of cybersecurity, such as cryptography, web security, reverse engineering, and more. Developers can greatly benefit from CTFs as they involve hands-on problem-solving that reinforces technical skills and encourages critical thinking and collaboration.

Phishing simulation (Option B): Phishing simulations are primarily focused on educating users about identifying and avoiding phishing attacks. While important for overall security awareness, they may not directly enhance the technical skills of developers unless the simulations are tailored to include aspects of secure coding practices or specific developer-related security challenges.

Physical security training (Option C): Physical security training focuses on securing physical assets and facilities. While important for comprehensive security awareness, it is not directly related to enhancing the technical skills of developers.

Basic awareness training (Option D): Basic awareness training typically covers foundational security concepts and practices. While useful for general understanding, it may not provide the depth of technical skill enhancement that developers would benefit from.

Therefore, a capture-the-flag competition would be the most suitable option for training developers, as it provides practical, hands-on experience that can significantly improve their cybersecurity skills in areas relevant to their roles.

29
Q

A cybersecurity administrator has a reduced team and needs to operate an on-premises network and security infrastructure efficiently. To help with the situation, the administrator decides to hire a service provider. Which of the following should the administrator use?

A. SDP
B. AAA
C. IaaS
D. MSSP
E. Microservices

A

D. MSSP (Managed Security Service Provider)

Here’s why MSSP is the appropriate choice:

Managed Security Service Provider (MSSP): MSSPs offer outsourced monitoring and management of security devices and systems. They can provide 24/7 security monitoring, threat detection and response, vulnerability management, and other security services. This helps alleviate the workload on the internal team while benefiting from specialized expertise and resources of the MSSP.

Let’s briefly consider the other options for clarity:

SDP (Software-Defined Perimeter): SDP is a security framework that dynamically creates a "segment of one" between the user and the resources they access. It's a technology rather than a service provider.

AAA (Authentication, Authorization, and Accounting): AAA refers to a framework for controlling access to computer resources. It does not relate directly to hiring a service provider for managing network and security infrastructure.

IaaS (Infrastructure as a Service): IaaS provides virtualized computing resources over the internet. It’s a cloud computing service model, not related to managed security services.

Microservices: Microservices are a software development technique for building applications as a suite of small, loosely coupled services. This is unrelated to hiring a service provider for network and security infrastructure management.

Therefore, MSSP (Managed Security Service Provider) is the best choice for the cybersecurity administrator to efficiently manage the on-premises network and security infrastructure with a reduced team.

30
Q

Which of the following threat vectors would appear to be the most legitimate when used by a malicious actor to impersonate a company?

A. Phone call
B. Instant message
C. Email
D. Text message

A

C. Email

Here’s why:

Email: Malicious actors commonly use phishing emails to impersonate legitimate companies. They can craft emails that appear to come from trusted sources such as official company email addresses or domains. These emails often contain convincing language, logos, and links that mimic legitimate communication from the company. Email is a widely used vector for distributing phishing attempts because it can be mass-mailed to many targets, increasing the likelihood of success.

While phone calls (Option A), instant messages (Option B), and text messages (Option D) can also be used for impersonation, emails are often perceived as more legitimate due to their ability to closely replicate official communication formats and styles. They can include headers, footers, and other elements that mimic the company’s branding and communication standards, making them potentially more convincing to recipients. Therefore, email is the most likely threat vector to appear legitimate when used for impersonation by a malicious actor targeting a company.

31
Q

The SOC is reviewing processes and procedures after a recent incident. The review indicates it took more than 30 minutes to determine that quarantining an infected host was the best course of action. This allowed the malware to spread to additional hosts before it was contained. Which of the following would be BEST to improve the incident response process?

A. Updating the playbooks with better decision points
B. Dividing the network into trusted and untrusted zones
C. Providing additional end-user training on acceptable use
D. Implementing manual quarantining of infected hosts

A

A. Updating the playbooks with better decision points

Here’s why:

Updating the playbooks with better decision points: Playbooks are procedural documents that guide incident response teams through various steps to detect, respond to, and recover from security incidents. By updating the playbooks with better decision points, the SOC can provide clearer guidance on when and how to quarantine infected hosts effectively. This ensures that incident responders have specific criteria and actions to follow, reducing the time it takes to make critical decisions during an incident.

Let’s briefly consider the other options for clarity:

Dividing the network into trusted and untrusted zones (Option B): While network segmentation can help isolate critical systems from less trusted areas, it doesn't directly address the process of improving incident response decision-making or speed.

Providing additional end-user training on acceptable use (Option C): End-user training is important for overall security awareness, but it focuses more on preventing incidents rather than improving incident response processes after an incident has occurred.

Implementing manual quarantining of infected hosts (Option D): Manual quarantining may introduce delays and human error, especially in large environments. Automation or clearer decision points in playbooks are generally more effective for improving response times and accuracy.

Therefore, updating the playbooks with better decision points is the best option to enhance the incident response process by providing clearer, more effective guidance to SOC analysts, thereby reducing response times and mitigating the impact of incidents like malware spreading to additional hosts.

32
Q

A security administrator checks the table of a network switch, which shows the following output:

VLAN Physical Address Type Port
1 001a:42ff:5113 Dynamic GE0/5
1 0faa:abcf:ddee Dynamic GE0/5
1 c6a9:6b16:758e Dynamic GE0/5
1 a3aa:b6a3:1212 Dynamic GE0/5
1 8025:2ad8:bfac Dynamic GE0/5
1 b839:f995:a00a Dynamic GE0/5

Which of the following is happening to this switch?

A. MAC flooding
B. DNS poisoning
C. MAC cloning
D. ARP poisoning

A

A. MAC flooding

Here’s why:

MAC flooding: MAC flooding is an attack in which an attacker floods a switch with frames carrying bogus source MAC addresses. Once the switch's MAC address (CAM) table is full, the switch fails open and forwards frames out of all ports, behaving like a hub. This allows the attacker to intercept network traffic intended for other hosts, potentially enabling various follow-on attacks.

In the scenario described:

The VLAN is 1, indicating it's the default VLAN.
Multiple MAC addresses (all dynamic) are associated with the same port (GE0/5).

This behavior suggests that the switch’s MAC address table is being flooded with multiple MAC addresses on the same port, which is characteristic of MAC flooding attacks.

Let’s briefly consider the other options for clarity:

DNS poisoning (Option B): DNS poisoning involves corrupting DNS cache data to redirect DNS requests. It does not involve manipulation of MAC addresses on a switch.

MAC cloning (Option C): MAC cloning refers to copying a legitimate MAC address to impersonate a device on a network. This scenario involves multiple different MAC addresses, not cloning a single MAC address.

ARP poisoning (Option D): ARP poisoning involves manipulating ARP (Address Resolution Protocol) tables to associate attacker's MAC address with legitimate IP addresses. It does not directly involve flooding the switch's MAC address table.

Therefore, based on the provided information, MAC flooding is the most likely activity occurring on the switch, where multiple dynamic MAC addresses are associated with the same port, potentially disrupting normal network operation and security.
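
The detection heuristic is simple enough to sketch: count distinct dynamic MAC addresses per port and flag ports exceeding what an access port should ever learn. The table entries below are taken from the question; the threshold is a hypothetical value (real switches enforce this with port security limits):

```python
# Sketch: counting distinct MAC addresses per switch port from the
# question's table. An unusually high count on one access port is a
# classic MAC-flooding indicator.

from collections import defaultdict

mac_table = [
    ("GE0/5", "001a:42ff:5113"),
    ("GE0/5", "0faa:abcf:ddee"),
    ("GE0/5", "c6a9:6b16:758e"),
    ("GE0/5", "a3aa:b6a3:1212"),
    ("GE0/5", "8025:2ad8:bfac"),
    ("GE0/5", "b839:f995:a00a"),
]

macs_per_port = defaultdict(set)
for port, mac in mac_table:
    macs_per_port[port].add(mac)

THRESHOLD = 3  # hypothetical limit for a single access port
suspect = [p for p, macs in macs_per_port.items() if len(macs) > THRESHOLD]
print("possible MAC flooding on:", suspect)   # ['GE0/5']
```

This mirrors what a switch's port-security feature does in hardware: cap the number of learned MACs per port and shut down or restrict a port that exceeds the limit.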

33
Q

An organization just experienced a major cyberattack incident. The attack was well coordinated, sophisticated, and highly skilled. Which of the following targeted the organization?

A. Shadow IT
B. An insider threat
C. A hacktivist
D. An advanced persistent threat

A

D. An advanced persistent threat (APT)

Here’s why:

Advanced Persistent Threat (APT): APTs are characterized by attackers who have the capability and resources to conduct prolonged and targeted attacks against specific organizations. They typically employ advanced techniques, including reconnaissance, social engineering, and zero-day exploits to gain unauthorized access and remain undetected within the target network for extended periods. APT attackers are often state-sponsored or financially motivated groups with significant technical expertise and resources.

Let’s briefly review the other options for clarity:

Shadow IT (Option A): Shadow IT refers to IT systems or solutions used within an organization without explicit approval or oversight. It does not describe a type of sophisticated cyberattack.

An insider threat (Option B): An insider threat involves individuals within the organization who misuse their access privileges to compromise security. While insider threats can be sophisticated, they are not necessarily characterized by external coordination and highly skilled attackers.

A hacktivist (Option C): Hacktivists are individuals or groups who use hacking techniques for political or social activism. Their attacks are driven by ideology and are generally less coordinated and less sophisticated than APT campaigns.

Therefore, given the description of the cyberattack as well-coordinated, sophisticated, and highly skilled, the most appropriate choice is D. An advanced persistent threat (APT), which aligns with the characteristics of a targeted and persistent cyber threat actor with advanced capabilities and resources.

34
Q

A smart switch has the ability to monitor electrical levels and shut off power to a building in the event of a power surge or other fault situation. The switch was installed on a wired network in a hospital and is monitored by the facilities department via a cloud application. The security administrator isolated the switch on a separate VLAN and set up a patching routine. Which of the following steps should also be taken to harden the smart switch?

A. Set up an air gap for the switch.
B. Change the default password for the switch.
C. Place the switch in a Faraday cage.
D. Install a cable lock on the switch.

A

B. Change the default password for the switch.

Here’s why this is the correct choice:

Changing Default Passwords: Smart switches often come with default passwords that are well-known and easily exploited if not changed. By setting a strong, unique password, you prevent unauthorized access to the switch, which is crucial for security.
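As an illustrative sketch (the credential list below is hypothetical, not drawn from any vendor documentation), an audit script can flag devices that are still using well-known factory defaults:

```python
# Hypothetical sample of well-known factory defaults; real audits
# consult published default-credential databases for the exact model.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
}

def uses_default_credentials(username, password):
    """Flag a device whose current login still matches a factory default."""
    return (username, password) in KNOWN_DEFAULTS

print(uses_default_credentials("admin", "admin"))     # True
print(uses_default_credentials("admin", "7x!Qm#2r"))  # False
```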

Now, let’s evaluate the other options:

A. Set up an air gap for the switch: While physically isolating the switch can enhance security, it's not always practical in a networked environment where monitoring and control via a cloud application are required. Moreover, VLAN isolation already provides a level of segregation.

C. Place the switch in a Faraday cage: Faraday cages are used to shield electromagnetic interference and would not directly address security concerns related to access control or password protection.

D. Install a cable lock on the switch: While physically securing the switch with a cable lock can prevent physical theft or tampering, it does not directly address the need for securing access through passwords, which is more critical in a networked environment.

Therefore, B. Change the default password for the switch is the best step to take alongside VLAN isolation and regular patching routines to enhance the security of the smart switch in the hospital’s network.

35
Q

A company recently set up an e-commerce portal to sell its products online. The company wants to start accepting credit cards for payment, which requires compliance with a security standard. Which of the following standards must the company comply with before accepting credit cards on its e-commerce platform?

A. PCI DSS
B. ISO 22301
C. ISO 27001
D. NIST CSF

A

A. PCI DSS (Payment Card Industry Data Security Standard)

Here’s why PCI DSS is the correct choice:

PCI DSS is specifically designed to ensure the secure handling of credit card information during transactions. It sets forth requirements for organizations that handle credit card payments to protect cardholder data, maintain a secure network, implement strong access control measures, regularly monitor and test networks, and maintain an information security policy.

Now, let’s briefly explain why the other options are not correct:

B. ISO 22301: ISO 22301 is a standard for business continuity management systems, focusing on ensuring organizations can continue operating during and after disruptions. It does not specifically address the security requirements for handling credit card transactions.

C. ISO 27001: ISO 27001 is a widely recognized international standard for information security management systems (ISMS). While it covers comprehensive information security practices, it does not specifically focus on the requirements for securing credit card transactions as PCI DSS does.

D. NIST CSF (NIST Cybersecurity Framework): The NIST CSF provides a framework to improve cybersecurity across various sectors but does not have specific guidelines for handling credit card transactions similar to PCI DSS.

Therefore, A. PCI DSS is the standard that the company must comply with to ensure the security of credit card transactions on its e-commerce platform.

36
Q

A security analyst is investigating an incident that was first reported as an issue connecting to network shares and the Internet. While reviewing logs and tool output, the analyst sees the following:

IP address Physical address
10.0.0.1 00-18-21-ad-24-bc
10.0.0.114 01-31-a3-cd-23-ab
10.0.0.115 00-18-21-ad-24-bc
10.0.0.116 00-19-08-ba-07-da
10.0.0.117 01-12-21-ca-11-ad

Which of the following attacks has occurred?

A. IP conflict
B. Pass-the-hash
C. MAC flooding
D. Directory traversal
E. ARP poisoning

A

E. ARP poisoning

Here’s the reasoning:

ARP poisoning (also known as ARP spoofing) involves manipulating ARP (Address Resolution Protocol) tables on a local area network to associate the attacker's MAC address with the IP address of another device (often the default gateway). This can lead to traffic being redirected through the attacker's machine, allowing for interception and potential modification of data.

In the given logs:

IP address 10.0.0.1 is associated with MAC address 00-18-21-ad-24-bc.
IP address 10.0.0.115 is also associated with MAC address 00-18-21-ad-24-bc.

This shows two different IP addresses (10.0.0.1 and 10.0.0.115) resolving to the same MAC address (00-18-21-ad-24-bc). A duplicated MAC across multiple IPs is the classic signature of ARP poisoning, where the attacker sends falsified ARP replies so that the attacker's MAC address becomes associated with a legitimate IP address on the network, typically the default gateway (here, 10.0.0.1).
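The same check can be automated. The following Python sketch uses the ARP table from the log output above and reports any MAC address that claims more than one IP:

```python
from collections import defaultdict

# ARP table from the incident (IP -> MAC), as shown in the log output.
arp_table = {
    "10.0.0.1":   "00-18-21-ad-24-bc",
    "10.0.0.114": "01-31-a3-cd-23-ab",
    "10.0.0.115": "00-18-21-ad-24-bc",
    "10.0.0.116": "00-19-08-ba-07-da",
    "10.0.0.117": "01-12-21-ca-11-ad",
}

def duplicate_macs(table):
    """Return MACs that claim more than one IP, a classic ARP-poisoning sign."""
    by_mac = defaultdict(list)
    for ip, mac in table.items():
        by_mac[mac].append(ip)
    return {mac: sorted(ips) for mac, ips in by_mac.items() if len(ips) > 1}

print(duplicate_macs(arp_table))
# {'00-18-21-ad-24-bc': ['10.0.0.1', '10.0.0.115']}
```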

Let’s briefly review why the other options are less likely:

A. IP conflict: While IP conflict could cause connectivity issues, it typically involves two legitimate devices with the same IP address, each having its own MAC address. The provided scenario shows multiple IP addresses being associated with the same MAC address, which is not typical of a straightforward IP conflict.

B. Pass-the-hash: This attack involves capturing hashed credentials from a compromised system and reusing them to authenticate as that user elsewhere. It's not related to the MAC and IP address associations observed in the logs.

C. MAC flooding: MAC flooding involves overwhelming a switch's MAC address table to facilitate a man-in-the-middle attack. The scenario does involve MAC addresses, but the issue here is inconsistent IP-to-MAC address mappings rather than flooding the switch's MAC table.

D. Directory traversal: This attack involves accessing files and directories outside the intended directory structure. It's unrelated to the ARP and MAC address issues described in the logs.

Therefore, based on the provided information, E. ARP poisoning is the most likely attack that has occurred, given the discrepancies between IP and MAC address associations observed in the logs.

37
Q

Which of the following allows for functional test data to be used in new systems for testing and training purposes to protect the real data?

A. Data encryption
B. Data masking
C. Data deduplication
D. Data minimization

A

B. Data masking

Here’s why:

Data masking involves transforming sensitive data in a way that it looks realistic but is not real. This allows functional testing, training, and other non-production purposes to proceed without exposing real sensitive information. For example, masking credit card numbers by replacing digits with other characters or symbols.
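A minimal Python sketch of the idea. Keeping only the last four digits is one common masking convention, not a mandated format:

```python
def mask_card_number(pan: str) -> str:
    """Replace all but the last four digits so the value stays realistic
    in format but reveals nothing sensitive."""
    digits = [c for c in pan if c.isdigit()]
    return "*" * (len(digits) - 4) + "".join(digits[-4:])

print(mask_card_number("4111111111111111"))  # ************1111
```

The masked values can then populate test and training databases with realistic-looking records while the real cardholder data never leaves production.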

Let’s briefly explain why the other options are not correct:

A. Data encryption: While data encryption secures data by encoding it in a way that only authorized parties can decrypt and access it, it does not inherently protect the use of functional test data. Encrypted data, when decrypted, still reveals real sensitive information.

C. Data deduplication: Data deduplication eliminates duplicate copies of data to reduce storage requirements. It does nothing to protect real data used for testing purposes.

D. Data minimization: Data minimization involves reducing the amount of personal data collected and processed to only what is necessary for the specified purpose. It is a principle of data protection but does not address the specific need to protect real data when using functional test data.

Therefore, B. Data masking is the method that allows for functional test data to be used in new systems for testing and training purposes while protecting the real data from exposure.

38
Q

A company recently moved sensitive videos between on-premises, company-owned websites. The company then learned the videos had been uploaded and shared to the Internet. Which of the following would MOST likely allow the company to find the cause?

A. Checksums
B. Watermarks
C. Order of volatility
D. A log analysis
E. A right-to-audit clause

A

D. A log analysis
Explanation:

Checksums: Useful for verifying data integrity but won't directly identify the cause of videos being uploaded and shared.

Watermarks: Typically used for identifying ownership or origin of content but may not reveal the cause of unauthorized sharing.

Order of volatility: Refers to the sequence in which data should be preserved or collected during an investigation. While important, it's more about preserving evidence than identifying the cause of the incident.

A log analysis: This involves reviewing logs from various systems (such as web servers, file servers, access logs) to trace the movement and access of the videos. It can help identify who accessed the videos, from where, and possibly how they were shared online.

A right-to-audit clause: This could potentially allow the company to audit the systems and logs of third parties involved, but it's more about legal rights than direct technical investigation.

Therefore, D. A log analysis would be the most appropriate step to investigate and identify the cause of the sensitive videos being uploaded and shared on the internet after their movement between company-owned websites.
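As an illustration, a log analysis can start as simply as filtering access logs for the sensitive resource. The log lines and filenames below are hypothetical:

```python
# Hypothetical web-server access-log lines; real entries would come
# from the company's own web and file servers.
log_lines = [
    '10.0.0.12 - alice [01/Jul/2024:10:01:11] "GET /videos/board-meeting.mp4" 200',
    '10.0.0.14 - bob   [01/Jul/2024:10:03:52] "GET /index.html" 200',
    '198.51.100.7 - -  [01/Jul/2024:10:05:30] "GET /videos/board-meeting.mp4" 200',
]

def accesses_of(resource, lines):
    """Return the log lines that touched the sensitive resource."""
    return [line for line in lines if resource in line]

for hit in accesses_of("/videos/board-meeting.mp4", log_lines):
    print(hit)
```

Correlating the matching entries (who accessed the videos, from which addresses, and when) is what points the investigation at the cause of the leak.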

39
Q

A public relations team will be taking a group of guests on a tour through the facility of a large e-commerce company. The day before the tour, the company sends out an email to employees to ensure all whiteboards are cleaned and all desks are cleared. The company is MOST likely trying to protect against:

A. loss of proprietary information.
B. damage to the company’s reputation.
C. social engineering.
D. credential exposure.

A

(Community : A 75%, D 25%)
The company is likely taking precautions to avoid the accidental disclosure of sensitive or proprietary information. Whiteboards and desks can often contain information regarding internal processes, projects, or data that the company would not want to be disclosed to the public or competitors. By cleaning whiteboards and clearing desks, the company is reducing the risk of unintentional exposure of proprietary information during the facility tour.

ChatGPT :
A. loss of proprietary information.
Explanation:

Loss of proprietary information: Cleaning whiteboards and clearing desks can help prevent sensitive or proprietary information from being inadvertently exposed to visitors who are not authorized to see such information. This precaution ensures that confidential details or strategic plans are not visible, thereby protecting the company's proprietary information.

Damage to the company's reputation: While keeping desks and whiteboards tidy can indirectly protect the company's reputation by presenting a professional image, the primary concern here seems to be about preventing the exposure of sensitive information rather than directly guarding against reputational damage.

Social engineering: While tidying up could mitigate some risks associated with social engineering by reducing visible information that could be exploited, this is not the primary focus in this scenario.

Credential exposure: Cleaning desks and whiteboards would not directly address the exposure of credentials; this is more related to secure handling of access credentials and passwords.

Therefore, the most likely reason for the company’s action is to prevent loss of proprietary information by ensuring that sensitive details are not visible during the facility tour.

40
Q

Which of the following BEST explains the reason why a server administrator would place a document named password.txt on the desktop of an administrator account on a server?

A. The document is a honeyfile and is meant to attract the attention of a cyberintruder.
B. The document is a backup file if the system needs to be recovered.
C. The document is a standard file that the OS needs to verify the login credentials.
D. The document is a keylogger that stores all keystrokes should the account be compromised.

A

A. The document is a honeyfile and is meant to attract the attention of a cyberintruder.

Here’s why:

Honeyfile: A honeyfile is a deceptive file intentionally placed in a system to attract and monitor unauthorized access attempts. By naming a file such as password.txt, the administrator is creating a file that appears to contain sensitive information (like passwords), enticing potential intruders to interact with it. This allows the administrator to detect and monitor unauthorized access attempts and potentially gather information about the intruder's methods.

Let’s briefly review why the other options are less likely in this context:

B. Backup file: A backup file would typically not be named password.txt unless it were specifically designated for storing passwords, which would be a highly unusual and insecure practice.

C. Standard OS file: Operating systems do not typically require a file named password.txt on the desktop for login credential verification. Verification processes are usually handled through more secure and internal mechanisms.

D. Keylogger: A keylogger is malicious software used to record keystrokes without the user's knowledge. A keylogger would not be openly placed as a file named password.txt on the desktop, as this would not effectively capture keystrokes covertly.

Therefore, A. The document is a honeyfile and is meant to attract the attention of a cyberintruder aligns with the strategy of using deception to monitor and potentially thwart unauthorized access attempts on a server.

41
Q

A security analyst needs to generate a server certificate to be used for 802.1X and secure RDP connections.
The analyst is unsure what is required to perform the task and solicits help from a senior colleague. Which of the following is the FIRST step the senior colleague will most likely tell the analyst to perform to accomplish this task?

A. Create an OCSP.
B. Generate a CSR.
C. Create a CRL.
D. Generate a .pfx file.

A

B. Generate a CSR (Certificate Signing Request).

Here’s why:

CSR (Certificate Signing Request): A CSR is a request sent to a Certificate Authority (CA) to apply for a digital identity certificate. It includes information such as the organization's details and the public key that will be included in the certificate.

When setting up a server certificate for 802.1X (used for secure network authentication) and secure RDP connections (Remote Desktop Protocol), the CSR is the initial step because it defines the details and requirements for the certificate that will be issued by the CA. The CSR is typically generated on the server where the certificate will be installed, ensuring that the certificate is tied to the correct server and its public key.

Let’s briefly review why the other options are less likely as the FIRST step:

A. Create an OCSP: OCSP (Online Certificate Status Protocol) is used to check the revocation status of a certificate. It comes into play after the certificate has been issued and is used to ensure its validity. Creating OCSP is not the initial step in requesting a certificate.

C. Create a CRL (Certificate Revocation List): A CRL is a list of certificates that have been revoked by the CA before their expiration date. Like OCSP, creating a CRL is not the initial step in requesting a new certificate.

D. Generate a .pfx file: A .pfx (Personal Information Exchange) file is a format for storing a certificate and its associated private key. While .pfx files are used to store and transfer certificates, generating a .pfx file comes after obtaining the server certificate, which is initiated by generating a CSR.

Therefore, B. Generate a CSR is the correct first step to initiate the process of obtaining a server certificate for 802.1X and secure RDP connections.

42
Q

When selecting a technical solution for identity management, an architect chooses to go from an in-house solution to a third-party SaaS provider. Which of the following risk management strategies is this an example of?

A. Acceptance
B. Mitigation
C. Avoidance
D. Transference

A

D. Transference

Here’s why:

Transference in risk management involves shifting the responsibility for managing a risk to another party. By opting for a third-party SaaS provider for identity management, the architect is transferring the operational and security risks associated with maintaining an in-house solution to the SaaS provider. This includes aspects such as data security, compliance, scalability, and ongoing maintenance.

Let’s briefly review the other options to clarify why they are less applicable in this scenario:

A. Acceptance: Acceptance involves acknowledging the existence of a risk without taking specific action to mitigate it. Transitioning to a SaaS provider is an active decision to address the risks associated with in-house management rather than simply accepting them.

B. Mitigation: Mitigation involves taking actions to reduce the impact or likelihood of a risk. While moving to a SaaS provider may reduce certain risks compared to an in-house solution, the primary action here is transferring the risk rather than mitigating it directly.

C. Avoidance: Avoidance means eliminating the risk altogether by avoiding the activity that gives rise to the risk. Transitioning to a SaaS provider does not avoid identity management; rather, it changes how it is managed.

Therefore, D. Transference accurately describes the risk management strategy in this scenario where the architect chooses to move identity management responsibilities from an in-house solution to a third-party SaaS provider.

43
Q

Which of the following describes the BEST approach for deploying application patches?

A. Apply the patches to systems in a testing environment, then to systems in a staging environment, and finally to production systems.

B. Test the patches in a staging environment, develop against them in the development environment, and then apply them to the production systems.

C. Test the patches in a test environment, apply them to the production systems, and then apply them to a staging environment.

D. Apply the patches to the production systems, apply them in a staging environment, and then test all of them in a testing environment.

A

A. Apply the patches to systems in a testing environment, then to systems in a staging environment, and finally to production systems.

Here’s why this approach is generally considered the best practice:

Testing Environment: Patches should first be applied to a testing environment where their impact can be thoroughly assessed. This allows for testing against various configurations and potential interactions with existing systems and applications without risking disruption to production.

Staging Environment: After successful testing in the testing environment, the patches should be applied to a staging environment that closely mimics the production environment. This further verifies their compatibility and ensures that the deployment process itself is well understood and can be replicated in production.

Production Systems: Finally, once the patches have been successfully tested in both testing and staging environments, they can be applied to production systems. This staged deployment approach minimizes the risk of unforeseen issues affecting critical operations and allows for any necessary adjustments based on the testing and staging experiences.

Let’s briefly review why the other options are less optimal:

B. Test the patches in a staging environment, develop against them in the development environment, and then apply them to the production systems: This option suggests developing against patches after testing in staging, which could lead to complications if the patches behave differently in development than in staging or production.

C. Test the patches in a test environment, apply them to the production systems, and then apply them to a staging environment: Applying patches directly to production before testing in staging increases the risk of disruptions or issues that could have been identified and mitigated in a staging environment.

D. Apply the patches to the production systems, apply them in a staging environment, and then test all of them in a testing environment: This approach risks deploying untested patches directly to production, which can lead to downtime or other critical issues if the patches have unforeseen consequences.

Therefore, A. Apply the patches to systems in a testing environment, then to systems in a staging environment, and finally to production systems is the best approach for deploying application patches, ensuring thorough testing and minimizing the risk of disruption to production systems.

44
Q

A small business just recovered from a ransomware attack against its file servers by purchasing the decryption keys from the attackers. The issue was triggered by a phishing email and the IT administrator wants to ensure it does not happen again. Which of the following should the IT administrator do FIRST after recovery?

A. Scan the NAS for residual or dormant malware and take new daily backups that are tested on a frequent basis.
B. Restrict administrative privileges and patch all systems and applications.
C. Rebuild all workstations and install new antivirus software.
D. Implement application whitelisting and perform user application hardening.

A

B. Restrict administrative privileges and patch all systems and applications.

Here’s why:

Restrict Administrative Privileges: Limiting administrative privileges reduces the likelihood of malware being able to execute with elevated permissions, which can mitigate the impact of future attacks, including ransomware.

Patch All Systems and Applications: Ensuring all systems and applications are up-to-date with the latest security patches closes known vulnerabilities that attackers often exploit, such as those used in phishing attacks to deliver ransomware.

Let’s briefly review why the other options are less suitable as the FIRST action:

A. Scan the NAS for residual or dormant malware and take new daily backups that are tested on a frequent basis: While scanning for malware and implementing robust backup strategies are important measures, they are secondary to addressing immediate vulnerabilities through privilege restriction and patching.

C. Rebuild all workstations and install new antivirus software: While rebuilding workstations and installing antivirus software are good practices for enhancing security posture, they are not as immediately critical as restricting privileges and patching to prevent immediate re-exploitation.

D. Implement application whitelisting and perform user application hardening: Application whitelisting and user application hardening are effective security measures but are typically implemented after foundational security measures like privilege restriction and patching.

Therefore, B. Restrict administrative privileges and patch all systems and applications should be the FIRST action taken by the IT administrator to mitigate the risk of another ransomware attack following recovery from the phishing-triggered incident.

45
Q

After entering a username and password, an administrator must draw a gesture on a touch screen. Which of the following demonstrates what the administrator is providing?

A. Multifactor authentication
B. Something you can do
C. Biometrics
D. Two-factor authentication

A

B. Something you can do

(Community B 53%, D 28%, A 19%)
Explanation:

Multifactor authentication (MFA) typically involves combining two or more different factors for authentication, such as something you know (password), something you have (smartphone token), or something you are (biometric trait).

Something you can do refers to an authentication factor that relies on actions or behaviors that are specific to the user. In this case, drawing a gesture on a touch screen is a unique action that the user can perform, and it serves as an additional factor to verify the user's identity.

Biometrics typically refers to using physiological or behavioral characteristics (such as fingerprints, retina scans, or voice recognition) for authentication, which is not directly related to drawing a gesture on a touch screen.

Two-factor authentication (2FA) refers specifically to using two different factors for authentication, such as a password (something you know) and a smartphone token (something you have), but it doesn't directly address the action of drawing a gesture.

Therefore, the administrator is providing something you can do by drawing a gesture on the touch screen, which adds an additional layer of authentication beyond just the username and password (something you know).

46
Q

An analyst visits an Internet forum looking for information about a tool. The analyst finds a thread that appears to contain relevant information. One of the posts says the following:

Hello everyone,
I am having the same problem with my server. Can you help me?
onload=sqlexec();

Thank you,
Joe

Which of the following BEST describes the attack that was attempted against the forum readers?

A. SQLi attack
B. DLL attack
C. XSS attack
D. API attack

A

C. XSS (Cross-Site Scripting) attack

Here’s why:

XSS (Cross-Site Scripting) attack: This type of attack occurs when a malicious script is injected into a web application. Here, the script is embedded in the forum post itself; the onload=sqlexec(); fragment is an event handler meant to run sqlexec() in each reader's browser when the post is rendered (the forum has evidently stripped the surrounding HTML tag from the quoted post). Such a script could steal session cookies, redirect users to malicious websites, or exploit browser vulnerabilities to execute further code.
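Forum software defends against this by escaping user-supplied content before rendering it. A minimal Python sketch, assuming the payload originally sat in an HTML onload attribute (the surrounding body tag is a hypothetical reconstruction, since the forum stripped it):

```python
import html

# Hypothetical reconstruction of the original markup around the
# visible onload=sqlexec(); fragment.
malicious = '<body onload=sqlexec();>'

# Escaping user-supplied content before rendering turns the markup
# into inert text, so no reader's browser ever executes it.
safe = html.escape(malicious)
print(safe)  # &lt;body onload=sqlexec();&gt;
```

Output encoding like this, applied consistently wherever user input is echoed back into a page, is the primary defense against stored XSS.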

Let’s briefly review why the other options are less likely:

A. SQLi (SQL Injection) attack: SQL injection attacks target databases through input fields that allow attackers to manipulate SQL queries. The provided script is not attempting to directly interact with a database through SQL queries.

B. DLL (Dynamic Link Library) attack: DLL attacks involve exploiting vulnerabilities in dynamically linked libraries, which is not evident from the provided script.

D. API attack: API attacks involve exploiting vulnerabilities in APIs (Application Programming Interfaces), which is not directly related to executing malicious JavaScript in a web browser.

Therefore, the attack described in the forum post is most accurately categorized as a C. XSS (Cross-Site Scripting) attack, where the intention is to execute unauthorized JavaScript code on the victims’ browsers who visit the forum and view the post containing the script.

47
Q

A root cause analysis reveals that a web application outage was caused by one of the company’s developers uploading a newer version of the third-party libraries that were shared among several applications. Which of the following implementations would be BEST to prevent this issue from reoccurring?

A. CASB
B. SWG
C. Containerization
D. Automated failover

A

(Community : C 82%, B 18%)
C. Containerization

Here’s why containerization is the best choice:

Containerization involves encapsulating an application and its dependencies into a container, ensuring that it runs uniformly and consistently across different computing environments. By containerizing applications, each application and its dependencies, including third-party libraries, are isolated from other applications and their environments.

Preventing Unintended Updates: With containerization, each application can have its own set of containers with specific versions of third-party libraries. This isolation prevents unintended updates to shared libraries from affecting other applications. Developers can specify and control which versions of libraries are used by each application independently.

Let’s briefly review why the other options are less suitable:

A. CASB (Cloud Access Security Broker): CASBs are used for enforcing security policies across cloud services. While useful for securing cloud-based applications and data, CASBs do not directly address the issue of inadvertent updates to shared libraries.

B. SWG (Secure Web Gateway): SWGs provide security controls for web traffic, including monitoring and filtering traffic for security threats. However, they do not prevent inadvertent updates to shared libraries within applications.

D. Automated Failover: Automated failover systems are designed to automatically switch to a standby system or resource when the primary system fails. While important for ensuring availability, failover mechanisms do not prevent application outages caused by unintended updates to shared libraries.

Therefore, C. Containerization is the most effective implementation to prevent issues caused by inadvertent updates to shared third-party libraries, ensuring application stability and consistency across environments.

48
Q

The Chief Security Officer (CSO) at a major hospital wants to implement SSO to help improve security in the environment and protect patient data, particularly at shared terminals. The Chief Risk Officer (CRO) is concerned that training and guidance have not been provided to frontline staff, and a risk analysis has not been performed. Which of the following is the MOST likely cause of the CRO’s concerns?

A. SSO would simplify username and password management, making it easier for hackers to guess accounts.
B. SSO would reduce password fatigue, but staff would still need to remember more complex passwords.
C. SSO would reduce the password complexity for frontline staff.
D. SSO would reduce the resilience and availability of systems if the identity provider goes offline.

A

D. SSO would reduce the resilience and availability of systems if the identity provider goes offline.

Here’s why this is the case:

Resilience and Availability: Single Sign-On (SSO) relies on an identity provider (IdP) to authenticate users across multiple applications and services. If the IdP experiences downtime or becomes unavailable, users may be unable to access the systems they need to perform their tasks. This dependency on a single point of authentication introduces a potential single point of failure that could disrupt critical hospital operations, affecting patient care and overall system availability.

This concern aligns with the CRO’s focus on risk analysis and ensuring that frontline staff are adequately trained and guided on the implications and potential risks associated with implementing SSO. It emphasizes the need to assess and mitigate risks related to system resilience and availability, especially in a healthcare environment where uninterrupted access to patient data and critical applications is essential.
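The single-point-of-failure dependency can be illustrated with a short sketch (hypothetical names, not a real SSO library): every application delegates authentication to one identity provider, so an IdP outage blocks logins everywhere at once.

```python
class IdentityProviderDown(Exception):
    """Raised when the SSO identity provider cannot be reached."""

def sso_login(username: str, idp_online: bool) -> str:
    # All applications delegate authentication to the single IdP,
    # so an IdP outage blocks access to every one of them at once.
    if not idp_online:
        raise IdentityProviderDown("IdP unreachable: no application can authenticate")
    return f"session-token-for-{username}"

# With the IdP up, one login serves every connected application.
token = sso_login("nurse01", idp_online=True)

# With the IdP down, authentication fails everywhere: the single point of failure.
try:
    sso_login("nurse01", idp_online=False)
except IdentityProviderDown:
    pass
```

In practice this risk is mitigated with redundant IdP nodes and documented fallback procedures, which is exactly the kind of planning a risk analysis would surface.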

Let’s briefly review why the other options are less likely to be the cause of the CRO’s concerns:

A. SSO would simplify username and password management, making it easier for hackers to guess accounts: While this is a valid security consideration, it is not directly related to the CRO's concerns about risk analysis, training, and system resilience.

B. SSO would reduce password fatigue, but staff would still need to remember more complex passwords: This option addresses usability concerns rather than the specific risks highlighted by the CRO related to system availability and the impact on frontline operations.

C. SSO would reduce the password complexity for frontline staff: While reducing password complexity can impact security, it does not directly address the concerns about training, guidance, and risk analysis related to system resilience.

Therefore, D. SSO would reduce the resilience and availability of systems if the identity provider goes offline is the most likely cause of the CRO’s concerns regarding the implementation of SSO in the hospital environment.

49
Q

A software developer needs to perform code-execution testing, black-box testing, and non-functional testing on a new product before its general release. Which of the following BEST describes the tasks the developer is conducting?

A. Verification
B. Validation
C. Normalization
D. Staging

A

B. Validation

Here’s why:

Validation in software testing refers to the process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. This includes ensuring that the software meets functional and non-functional requirements through various testing methodologies.

Let’s briefly define the terms in the context provided:

Code-execution testing: This typically involves unit testing, integration testing, and system testing to verify that the code behaves as expected and meets functional requirements.

Black-box testing: This testing method examines the functionality of an application without peering into its internal structures or workings. It verifies that the software functions correctly from the end-user's perspective.

Non-functional testing: This includes testing aspects of the software that do not relate to specific behaviors or functions, such as performance testing, usability testing, security testing, and scalability testing.

These activities collectively aim to validate the software’s functionality, performance, and compliance with requirements before its general release to ensure it meets the intended specifications and user expectations.
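As a rough illustration of the black-box idea, a test exercises only the inputs and observable outputs of a function (here a hypothetical `calculate_discount`), with no knowledge of its internal implementation:

```python
def calculate_discount(price: float, is_member: bool) -> float:
    """Hypothetical function under test: members get 10% off."""
    return round(price * 0.9, 2) if is_member else price

# Black-box tests: feed inputs, check outputs against the specification only.
assert calculate_discount(100.0, is_member=True) == 90.0
assert calculate_discount(100.0, is_member=False) == 100.0
assert calculate_discount(0.0, is_member=True) == 0.0
```

Because the checks are written purely against specified behavior, they validate that the product meets user-facing requirements rather than verifying internal design, which is the distinction the question is testing.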

Therefore, B. Validation best describes the tasks the developer is conducting in preparation for the product’s general release.

50
Q

Joe, a user at a company, clicked an email link that led to a website that infected his workstation. Joe was connected to the network, and the virus spread to the network shares. The protective measures failed to stop this virus, and it has continued to evade detection. Which of the following should a security administrator implement to protect the environment from this malware?

A. Install a definition-based antivirus.
B. Implement an IDS/IPS.
C. Implement a heuristic behavior-detection solution.
D. Implement CASB to protect the network shares.

A

(Brain dump: C. Implement a heuristic behavior-detection solution. )

(Community : C 72%, B 24%.
“The protective measures failed to stop this virus, and it has continued to evade detection”… Heuristic scanning (or analysis) does not rely on a virus dictionary. Instead, it looks for suspicious behavior: anything that does not fit into a “normal” pattern of behavior for the operating system (OS) and applications running on the system being protected.)

ChatGPT:
B. Implement an IDS/IPS (Intrusion Detection System/Intrusion Prevention System).

Here’s why:

IDS/IPS: These systems are designed to monitor network traffic for suspicious activity or known patterns of malicious behavior. An IDS detects potential security breaches and alerts the administrator, while an IPS can proactively block or mitigate detected threats. This capability is crucial in identifying and stopping malware propagation within the network, such as the one that infected Joe's workstation and spread to network shares.

Let’s briefly review why the other options are less suitable in this context:

A. Install a definition-based antivirus: While antivirus software is essential, especially for detecting known malware signatures, the scenario suggests that the malware evaded existing protective measures, indicating it might be a new or unknown variant. Definition-based antivirus alone may not be effective against sophisticated or zero-day attacks.

C. Implement a heuristic behavior-detection solution: Heuristic analysis can detect previously unknown malware by identifying suspicious patterns or behaviors. However, its effectiveness depends on the sophistication of the malware and the robustness of the heuristic algorithms. In this scenario, where the malware has evaded detection, it may suggest the need for broader network monitoring capabilities provided by an IDS/IPS.

D. Implement CASB to protect the network shares: Cloud Access Security Broker (CASB) solutions are typically used to secure cloud services and applications. While they can monitor and control access to cloud-based resources, they are not directly applicable to protecting on-premises network shares from malware that has already infected the internal network.

Therefore, B. Implement an IDS/IPS is the best choice to enhance the network’s capability to detect and prevent further spread of malware that has already infiltrated the network shares after infecting Joe’s workstation.
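The behavior-based approach favored by the community (option C) can be sketched in a few lines. The event names and weights below are hypothetical: the point is that each process is scored by how far its actions deviate from a normal baseline, with no signature lookup involved.

```python
# Hypothetical behavior weights: higher means more suspicious.
SUSPICIOUS_WEIGHTS = {
    "writes_to_network_share": 2,
    "modifies_registry_run_key": 3,
    "spawns_encoded_powershell": 4,
    "reads_user_documents": 1,
}
ALERT_THRESHOLD = 5  # tuned against a baseline of normal activity

def behavior_score(observed_events: list) -> int:
    """Sum the weights of observed behaviors; unknown events score 0."""
    return sum(SUSPICIOUS_WEIGHTS.get(event, 0) for event in observed_events)

def is_suspicious(observed_events: list) -> bool:
    # No signature dictionary is consulted: only the pattern of behavior
    # matters, which is why heuristics can catch previously unseen malware.
    return behavior_score(observed_events) >= ALERT_THRESHOLD

# A process writing to shares and launching encoded PowerShell trips the alert.
print(is_suspicious(["writes_to_network_share", "spawns_encoded_powershell"]))  # True
```

Real heuristic engines are far more sophisticated (sandboxing, machine-learned baselines), but the contrast with definition-based antivirus (option A) is the same: behavior, not a known signature, drives detection.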