Braindumps.701-750 Flashcards

1
Q

A recent malware outbreak across a subnet included successful rootkit installations on many PCs, ensuring persistence by rendering remediation efforts ineffective. Which of the following would best detect the presence of a rootkit in the future?

A. FDE
B. NIDS
C. EDR
D. DLP

A

C. EDR (Endpoint Detection and Response)

Explanation:

Endpoint Detection and Response (EDR) solutions are specifically designed to detect and respond to advanced threats, including rootkits. EDR tools continuously monitor endpoints for suspicious activity and provide detailed visibility into the behavior of applications and processes, making them highly effective at detecting rootkits. They can identify anomalies, provide alerting, and facilitate in-depth investigations, which are crucial for detecting and addressing rootkits that aim to achieve persistence.

The other options do not directly address the detection of rootkits:

A. FDE (Full Disk Encryption): FDE encrypts the data on the disk, but it does not detect or prevent rootkits.
B. NIDS (Network Intrusion Detection System): NIDS monitors network traffic for suspicious activity, but it may not be effective in detecting rootkits, which operate at the endpoint level.
D. DLP (Data Loss Prevention): DLP focuses on preventing data breaches by monitoring and controlling data transfers, but it is not designed to detect rootkits.

Therefore, EDR is the best choice for detecting the presence of rootkits in the future.

2
Q

An organization is building a single virtual environment that will host customer applications and data that require availability at all times. The data center that is hosting the environment will provide generator power and ISP services. Which of the following is the best solution to support the organization’s requirement?

A. NIC teaming
B. Cloud backups
C. A load balancer appliance
D. UPS

A

Community: D 62%, C 24%
ChatGPT:
D. UPS (Uninterruptible Power Supply)

Explanation:

In the scenario described, where the organization’s single virtual environment hosting customer applications and data requires constant availability, a UPS (Uninterruptible Power Supply) is the best solution to support this requirement. Here’s why:

Power Continuity: A UPS provides continuous power supply to critical infrastructure, such as servers and networking equipment, during electrical power disruptions or outages. This ensures that the virtual environment remains operational even if there are short-term power interruptions.

Protection Against Power Surges: UPS units also protect against power spikes and surges, which can damage sensitive electronic equipment.

Smooth Transition to Generator Power: In cases where the primary power source fails and backup generators kick in, a UPS acts as a bridge to ensure there is no interruption in power supply during the transition.

Stabilization of Power Quality: UPS systems also regulate and stabilize the voltage and frequency of the power supplied to the virtual environment, which can help prevent issues that may arise from inconsistent power quality.

While options like NIC teaming, cloud backups, and load balancer appliances are important in their respective contexts (network redundancy, data protection, and traffic distribution for high availability), they do not directly address the need for continuous power supply and protection against outages, which is crucial for maintaining availability in the described environment. Therefore, UPS is the most appropriate solution for ensuring continuous availability of the virtual environment in this scenario.

(Braindump: C. A load balancer appliance)

3
Q

A new company wants to avoid channel interference when building a WLAN. The company needs to know the radio frequency behavior, identify dead zones, and determine the best place for access points. Which of the following should be done first?

A. Configure heat maps.
B. Utilize captive portals.
C. Conduct a site survey.
D. Install Wi-Fi analyzers.

A

C. Conduct a site survey.

Explanation:

When setting up a WLAN (Wireless Local Area Network), especially to avoid channel interference and ensure optimal coverage and performance, the first step should be to conduct a site survey. Here’s why:

Radio Frequency Behavior: A site survey involves analyzing the radio frequency (RF) behavior in the physical space where the WLAN will be deployed. This includes assessing existing RF interference, noise sources, and other signals that could affect WLAN performance.

Identify Dead Zones: By conducting a site survey, you can identify areas with poor or no coverage (dead zones) where additional access points may be required to ensure comprehensive coverage.

Optimal Access Point Placement: The survey helps in determining the best locations for installing access points (APs) to maximize coverage while minimizing interference between APs. It takes into account factors like building layout, construction materials, and potential sources of interference.

Heat Maps and Wi-Fi Analyzers: Heat maps and Wi-Fi analyzers are tools used during and after the site survey to visualize signal strength and interference levels. However, these tools are utilized effectively after the initial site survey has been conducted to validate the survey findings and fine-tune the WLAN deployment.

Therefore, conducting a site survey is the crucial first step to gather data on RF behavior, identify dead zones, and strategically place access points to optimize WLAN performance and minimize interference.

4
Q

The following IP information was provided to internal auditors to help assess organizational security:

User traffic subnet : 10.2.5.0/16
File Server : 10.2.2.7
Internal Linux Web server : 10.3.9.9
SQL Server : 10.3.15.82
HR server : 10.9.8.14
Firewall : 10.1.1.1

Which of the following tools would most likely be used to perform network reconnaissance and help understand what is accessible to all users? (Choose two.)

A. ipconfig
B. ping
C. chmod
D. netstat
E. traceroute
F. route

A

Community: (DE 47%, BE 45%)
D. netstat shows what is listening.
E. traceroute finds the path.
ping only checks host availability, not the services running on it.

(ChatGPT & Braindump: B, E)
B. ping

Ping is used to determine whether a specific IP address is reachable and to measure the round-trip time for packets sent from the local host to a destination computer.

E. traceroute

Traceroute is used to display the route and measure transit delays of packets across an IP network. It shows the path packets take to reach a specified destination.

Here’s how these tools apply to the scenario provided:

Ping: It can be used to determine if the servers (File Server, Internal Linux Web Server, SQL Server, HR Server) are reachable from the user traffic subnet (10.2.5.0/16). For example, by pinging 10.2.2.7 (File Server) from a machine in the user traffic subnet, one can verify if it responds, indicating accessibility.

Traceroute: This tool can be used to trace the path packets take from the user traffic subnet to each of the servers (File Server, Internal Linux Web Server, SQL Server, HR Server). It helps in understanding the network hops and potential access points between the user subnet and these servers.

Tools that are less relevant or not typically used for this purpose in this context:

A. ipconfig: This command displays the IP configuration for Windows machines, focusing on local network settings rather than external network reconnaissance.
C. chmod: This command is used to change file permissions in Unix-like operating systems, not for network reconnaissance.
D. netstat: Netstat displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. While useful for network troubleshooting, it doesn’t provide the path or accessibility understanding like ping and traceroute.
F. route: This command shows/manipulates the IP routing table. It helps manage routing tables, not for network reconnaissance in terms of path and accessibility checking like ping and traceroute.

Therefore, ping and traceroute are the most appropriate tools for performing network reconnaissance and understanding what resources are accessible from the user traffic subnet based on the IP information provided.
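As a rough illustration, the sweep the auditors would run can be sketched in Python. The server names and addresses come from the question's exhibit; the flag handling is an assumption about the underlying OS tools (Linux `ping -c`/`traceroute` vs. Windows `ping -n`/`tracert`):

```python
import platform

# Servers from the audit exhibit
SERVERS = {
    "File Server": "10.2.2.7",
    "Internal Linux Web server": "10.3.9.9",
    "SQL Server": "10.3.15.82",
    "HR server": "10.9.8.14",
}

def probe_commands(ip):
    """Build a one-packet ping plus a path trace for one target.

    Assumes Linux-style tools ("-c", "traceroute"); Windows would use
    "-n" and "tracert" instead.
    """
    windows = platform.system() == "Windows"
    count_flag = "-n" if windows else "-c"
    tracer = "tracert" if windows else "traceroute"
    return [["ping", count_flag, "1", ip], [tracer, ip]]

# Print the reconnaissance plan; pass each list to subprocess.run() to execute
for name, ip in SERVERS.items():
    for cmd in probe_commands(ip):
        print(name, "->", " ".join(cmd))
```

Running ping first establishes which servers answer from the user subnet; traceroute then reveals the hops (and the firewall at 10.1.1.1) along each path.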

5
Q

A software company adopted the following processes before releasing software to production:
Peer review
Static code scanning
Signing
A considerable number of vulnerabilities are still being detected when code is executed on production. Which of the following security tools can improve vulnerability detection on this environment?

A. File integrity monitoring for the source code
B. Dynamic code analysis tool
C. Encrypted code repository
D. Endpoint detection and response solution

A

B. Dynamic code analysis tool

Here’s why a dynamic code analysis tool is the most appropriate choice:

Complement to Static Analysis: While static code scanning (static analysis) helps identify vulnerabilities by analyzing the code without executing it, dynamic code analysis (dynamic analysis) examines the code during runtime. It can detect vulnerabilities that may only manifest during execution or interaction with other components.

Detection of Runtime Vulnerabilities: Dynamic analysis tools can detect issues such as memory leaks, input validation flaws, insecure configurations, and other runtime-specific vulnerabilities. These might not be evident during static analysis but can affect the application when it is running in a production environment.

Continuous Monitoring: Unlike static analysis, which is typically performed during development or before deployment, dynamic analysis can provide continuous monitoring of the application in its operational state. This helps in detecting vulnerabilities that might emerge over time or due to changes in the environment.

Feedback Loop for Developers: Dynamic analysis tools often provide real-time feedback to developers about vulnerabilities discovered during runtime. This allows for quicker identification and remediation of issues, improving overall application security.

Therefore, implementing a dynamic code analysis tool would enhance the company’s ability to detect vulnerabilities in the production environment, complementing the existing static code scanning and peer review processes.
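The difference between the two analysis styles can be seen with a toy harness. `parse_port` and the inputs below are made up for illustration, but the core idea — execute the code and record what actually fails — is what dynamic analysis tools automate:

```python
def parse_port(value: str) -> int:
    """Toy function with flaws that only surface at runtime."""
    port = int(value)          # raises ValueError on non-numeric input
    assert 0 < port < 65536    # range flaw only appears when executed
    return port

def fuzz(func, inputs):
    """Minimal dynamic harness: run the target and collect runtime failures."""
    findings = []
    for item in inputs:
        try:
            func(item)
        except Exception as exc:
            findings.append((item, type(exc).__name__))
    return findings

print(fuzz(parse_port, ["80", "abc", "70000"]))
# → [('abc', 'ValueError'), ('70000', 'AssertionError')]
```

A static scanner reading the source may rate `parse_port` as clean; feeding it hostile input at runtime exposes both defects immediately, which is why dynamic analysis catches what the company's existing pipeline misses.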

6
Q

A security analyst needs to harden access to a network. One of the requirements is to authenticate users with smart cards. Which of the following should the analyst enable to best meet this requirement?

A. CHAP
B. PEAP
C. MS-CHAPv2
D. EAP-TLS

A

D. EAP-TLS (Extensible Authentication Protocol - Transport Layer Security)

Here’s why EAP-TLS is suitable for smart card authentication:

Mutual Authentication: EAP-TLS supports mutual authentication, which means both the client (user with a smart card) and the server (network or authentication server) authenticate each other using digital certificates. This aligns well with the security requirements for smart card-based authentication.

Certificate-based Authentication: Smart cards typically store digital certificates that authenticate the user to the network. EAP-TLS facilitates the use of these certificates for secure authentication without transmitting sensitive information like passwords over the network.

Strong Security: EAP-TLS utilizes TLS (Transport Layer Security), which provides strong encryption and integrity protection during the authentication process. This ensures that the authentication exchange between the client and server is secure against eavesdropping and tampering.

In contrast, the other options:

CHAP (Challenge Handshake Authentication Protocol) and MS-CHAPv2 (Microsoft Challenge Handshake Authentication Protocol version 2) are primarily used for password-based authentication and do not directly support smart card authentication.

PEAP (Protected Extensible Authentication Protocol) is an authentication protocol that supports various inner authentication methods, including EAP-TLS. However, PEAP itself does not provide the direct support needed for smart card-based authentication; rather, it encapsulates other EAP methods like EAP-TLS within a secure tunnel.

Therefore, EAP-TLS is the most appropriate choice to enable smart card authentication while ensuring strong security and compliance with smart card deployment requirements.
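For concreteness, a wpa_supplicant network block for EAP-TLS looks roughly like the sketch below. The SSID, identity, and certificate paths are hypothetical, and in an actual smart card deployment the private key would typically be referenced through the card's PKCS#11 interface rather than a file on disk:

```
network={
    ssid="CorpNet"
    key_mgmt=WPA-EAP
    eap=TLS
    identity="user@corp.example"
    ca_cert="/etc/certs/ca.pem"
    client_cert="/etc/certs/user.pem"
    private_key="/etc/certs/user.key"
}
```

Note there is no password field: authentication rests entirely on the certificate and its private key, which is exactly what smart card login requires.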

7
Q

A penetration-testing firm is working with a local community bank to create a proposal that best fits the needs of the bank. The bank’s information security manager would like the penetration test to resemble a real attack scenario, but it cannot afford the hours required by the penetration-testing firm. Which of the following would best address the bank’s desired scenario and budget?

A. Engage the penetration-testing firm’s red-team services to fully mimic possible attackers.
B. Give the penetration tester data diagrams of core banking applications in a known-environment test.
C. Limit the scope of the penetration test to only the system that is used for teller workstations.
D. Provide limited networking details in a partially known-environment test to reduce reconnaissance efforts.

A

D. Provide limited networking details in a partially known-environment test to reduce reconnaissance efforts.
Explanation:

A. Engage the penetration-testing firm's red-team services to fully mimic possible attackers:
    While this option would provide a very realistic attack scenario, red-team services are typically very comprehensive and can be quite costly due to the extensive time and effort required to fully mimic attackers. This might not fit within the bank's budget constraints.

B. Give the penetration tester data diagrams of core banking applications in a known-environment test:
    This would save time on reconnaissance but may not fully resemble a real attack scenario because the tester would have more information than a typical attacker would.

C. Limit the scope of the penetration test to only the system that is used for teller workstations:
    Limiting the scope too much might not provide a comprehensive assessment of the bank's overall security posture, and could miss critical vulnerabilities in other parts of the network.

D. Provide limited networking details in a partially known-environment test to reduce reconnaissance efforts:
    This option strikes a balance between providing a realistic attack scenario and controlling costs. By giving limited details, the penetration testers can focus on testing without spending excessive time on reconnaissance, thereby simulating a more realistic attack within a constrained budget.

Conclusion:

Option D allows the bank to simulate a realistic attack scenario while controlling costs by reducing the time spent on the reconnaissance phase. This approach aligns well with the bank’s need to balance realism with budget constraints.

8
Q

A security analyst is reviewing SIEM logs during an ongoing attack and notices the following:

http://company.com/get.php?f=/etc/passwd
https://company.com/..%2F..%2F..%2F..%2Fetc%2Fshadow
https://company.com/../../../../etc/passwd

Which of the following best describes the type of attack?

A. SQLi
B. CSRF
C. API attacks
D. Directory traversal

A

D. Directory traversal

Explanation:

Directory traversal attacks involve an attacker manipulating a URL to access files and directories that are stored outside the web root folder. The aim is to access restricted files, such as configuration files, password files, or other sensitive information on the server.

The log entries show attempts to access the /etc/passwd and /etc/shadow files by including sequences like ../../../../, which is a typical pattern in directory traversal attacks. This pattern attempts to navigate up the directory structure to reach the root directory and then access the specified files.

The other options are not applicable based on the provided log entries:

A. SQLi (SQL injection): This involves inserting or manipulating SQL queries in input fields to execute arbitrary SQL commands. There are no SQL commands or database interactions in the provided logs.
B. CSRF (Cross-Site Request Forgery): This exploits the trust that a web application has in a user's browser, typically involving actions made on behalf of an authenticated user. The logs do not indicate any actions being performed on behalf of a user.
C. API attacks: These involve exploiting vulnerabilities in an application's API. The logs do not show interactions with an API or attempts to exploit API endpoints.

Thus, the attack observed in the logs is best categorized as a directory traversal attack.
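A server-side check for these patterns can be sketched in Python. The web root path is a made-up example, and real web frameworks ship hardened versions of this logic — this just shows why both the percent-encoded and plain payloads from the log resolve outside the web root:

```python
import posixpath
from urllib.parse import unquote

def is_traversal(url_path: str, web_root: str = "/var/www/html") -> bool:
    """Return True if the requested path escapes the web root."""
    decoded = url_path
    for _ in range(3):              # repeated decode catches ..%2F-style encoding
        decoded = unquote(decoded)
    candidate = posixpath.normpath(posixpath.join(web_root, decoded.lstrip("/")))
    return not (candidate == web_root or candidate.startswith(web_root + "/"))

print(is_traversal("/..%2F..%2F..%2F..%2Fetc%2Fshadow"))  # → True
print(is_traversal("/../../../../etc/passwd"))            # → True
print(is_traversal("/index.html"))                        # → False
```

Both SIEM payloads normalize to paths under /etc, outside the root; the first log entry (`get.php?f=/etc/passwd`) is the related file-inclusion variant, where an absolute path is passed as a parameter instead.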

9
Q

A certificate vendor notified a company that recently invalidated certificates may need to be updated. Which of the following mechanisms should a security administrator use to determine whether the certificates installed on the company’s machines need to be updated?

A. SCEP
B. OCSP
C. CSR
D. CRL

A

(Community: B 65%, D 35%)
OCSP (Online Certificate Status Protocol) is a protocol used to check the revocation status of digital certificates. When a certificate vendor notifies a company that certificates may need to be updated, it often implies that there could be a revocation or expiration issue with the certificates. OCSP allows the security administrator to check the real-time status of the certificates by querying the issuing Certificate Authority (CA) or an OCSP responder to verify whether the certificates are still valid.
CRL (Certificate Revocation List) is a list of revoked certificates published by a CA. While CRLs contain information about revoked certificates, they may not always provide real-time status updates, unlike OCSP.

ChatGPT & Braindump:
D. CRL (Certificate Revocation List)

Explanation:

CRL (Certificate Revocation List): A CRL is a list of certificates that have been revoked by the certificate authority (CA) before their expiration date and should no longer be trusted. By checking the CRL, the security administrator can determine if any of the installed certificates have been revoked and therefore need to be updated.

The other options are not directly related to checking the status of installed certificates:

A. SCEP (Simple Certificate Enrollment Protocol): This is used for certificate enrollment and issuing certificates, not for checking their revocation status.
B. OCSP (Online Certificate Status Protocol): OCSP is a protocol used to obtain the revocation status of a single certificate in real-time. While OCSP can be used to check the revocation status, it is more commonly used for checking individual certificates on the fly rather than reviewing all installed certificates. However, OCSP could be a complementary method if real-time checks are needed.
C. CSR (Certificate Signing Request): A CSR is a request sent to a CA to apply for a digital identity certificate. It does not pertain to checking the status of existing certificates.

Therefore, the CRL is the most appropriate mechanism for determining whether the certificates need to be updated.

10
Q

A recent vulnerability scan revealed multiple servers have non-standard ports open for applications that are no longer in use. The security team is working to ensure all devices are patched and hardened. Which of the following would the security team perform to ensure the task is completed with minimal impact to production?

A. Enable HIDS on all servers and endpoints.
B. Disable unnecessary services.
C. Configure the deny list appropriately on the NGFW.
D. Ensure the antivirus is up to date.

A

B. Disable unnecessary services.

Explanation:

Disabling unnecessary services will directly address the issue of non-standard ports being open for applications that are no longer in use. This action will help close security gaps and reduce the attack surface without significantly impacting production, as it targets only the services and applications that are confirmed to be no longer needed.

Here’s why the other options are less suitable in this specific scenario:

A. Enable HIDS on all servers and endpoints: While Host-based Intrusion Detection Systems (HIDS) are important for monitoring and detecting suspicious activities, enabling them does not directly address the issue of unnecessary services and open ports.

C. Configure the deny list appropriately on the NGFW: Configuring a deny list on the Next-Generation Firewall (NGFW) can help block traffic to and from the non-standard ports. However, this does not remove the underlying issue of unnecessary services running on the servers. It is better to disable those services entirely to reduce the risk.

D. Ensure the antivirus is up to date: Keeping antivirus software up to date is important for overall security, but it does not address the specific issue of open ports and unnecessary services directly.
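On a Linux host, the remediation reduces to stopping and disabling each confirmed-unused unit. A small sketch that emits the commands for an administrator to review before execution — the service names are hypothetical stand-ins for the applications found in the scan:

```python
# Services confirmed unused during the vulnerability-scan review (hypothetical)
UNUSED_SERVICES = ["telnet.socket", "rsh.service", "legacy-app.service"]

def hardening_plan(services):
    """Build systemctl stop/disable command lines for review before running."""
    plan = []
    for svc in services:
        plan.append(["systemctl", "stop", svc])     # close the listening port now
        plan.append(["systemctl", "disable", svc])  # keep it closed after reboot
    return plan

for cmd in hardening_plan(UNUSED_SERVICES):
    print(" ".join(cmd))
```

Generating the list first and reviewing it (rather than piping it straight into subprocess.run) is what keeps the impact to production minimal.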
11
Q

An employee fell for a phishing scam, which allowed an attacker to gain access to a company PC. The attacker scraped the PC’s memory to find other credentials. Without cracking these credentials, the attacker used them to move laterally through the corporate network. Which of the following describes this type of attack?

A. Privilege escalation
B. Buffer overflow
C. SQL injection
D. Pass-the-hash

A

D. Pass-the-hash

Explanation:

In a pass-the-hash attack, an attacker extracts hashed credentials from one computer and uses them to authenticate on other computers within the network without cracking the hash. This method allows the attacker to move laterally through the network by using the hashed credentials to gain access to other systems.

Here’s why the other options are less suitable:

A. Privilege escalation: This involves gaining higher-level permissions than those initially granted, which is not the primary focus of the attack described.
B. Buffer overflow: This involves exploiting a program's vulnerability to execute arbitrary code, which does not match the scenario described.
C. SQL injection: This involves manipulating SQL queries to gain unauthorized access to a database, which is not related to the attack on credentials in this scenario.
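Why the stolen hash works without cracking can be shown with a toy model. Real Windows lateral movement abuses NTLM hashes inside the authentication protocol itself, so the SHA-256 store here is purely an illustrative stand-in, not the actual mechanism:

```python
import hashlib

# Server-side credential store: only the hash is kept (toy model)
STORED = {"alice": hashlib.sha256(b"S3cret!").hexdigest()}

def login_with_password(user: str, password: str) -> bool:
    """Legitimate login: hash the typed password, compare to the store."""
    return STORED.get(user) == hashlib.sha256(password.encode()).hexdigest()

def login_with_hash(user: str, presented_hash: str) -> bool:
    """Pass-the-hash: the attacker skips the hashing step entirely and
    presents the hash scraped from memory as-is."""
    return STORED.get(user) == presented_hash

stolen = STORED["alice"]                        # scraped from the first PC's memory
print(login_with_password("alice", "S3cret!"))  # → True
print(login_with_hash("alice", stolen))         # → True, no cracking needed
```

Because authentication ultimately compares hash to hash, possessing the hash is as good as possessing the password for any system that accepts it.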
12
Q

Which of the following is a common source of unintentional corporate credential leakage in cloud environments?

A. Code repositories
B. Dark web
C. Threat feeds
D. State actors
E. Vulnerability databases

A

A. Code repositories

Explanation:

Code repositories, such as those hosted on platforms like GitHub, GitLab, or Bitbucket, are a common source of unintentional corporate credential leakage. Developers sometimes accidentally commit credentials, API keys, or other sensitive information to these repositories. If these repositories are public or improperly secured, unauthorized individuals can access and exploit this information. This makes code repositories a significant risk factor for credential leakage in cloud environments.

Here’s why the other options are less suitable:

B. Dark web: The dark web is a place where leaked credentials may be traded, but it is not a source of unintentional leakage.
C. Threat feeds: These provide information about potential threats but are not a source of credential leakage.
D. State actors: These are entities that might exploit leaked credentials, not a source of unintentional leakage.
E. Vulnerability databases: These catalog vulnerabilities but do not typically contain leaked credentials directly.
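Tools such as gitleaks and truffleHog automate exactly this search over repository history. A stripped-down version of the idea, with regexes that only approximate real credential formats:

```python
import re

# Rough approximations of common credential formats (illustrative, not exhaustive)
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
}

def scan_text(text: str):
    """Return (pattern_name, match) pairs for anything resembling a secret."""
    return [
        (name, match.group(0))
        for name, pattern in PATTERNS.items()
        for match in pattern.finditer(text)
    ]

print(scan_text('config = {"api_key": "Zx9Qw8Er7Ty6Ui5Op4As"}'))
```

Running a scanner like this in pre-commit hooks or CI catches credentials before they ever reach a shared (let alone public) repository.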
13
Q

A company is designing the layout of a new data center so it will have an optimal environmental temperature.
Which of the following must be included? (Choose two.)

A. An air gap
B. A cold aisle
C. Removable doors
D. A hot aisle
E. An IoT thermostat
F. A humidity monitor

A

B. A cold aisle
D. A hot aisle

Explanation:

To achieve optimal environmental temperature in a data center, it is crucial to design the layout with proper airflow management. The “cold aisle/hot aisle” configuration is a standard best practice in data center design for maintaining appropriate cooling and temperature control. Here’s a detailed look at why these options are the most relevant:

B. A cold aisle: This is where the cold air is supplied to cool the front of the equipment racks. The cold air is typically directed from the raised floor or directly from cooling units.

D. A hot aisle: This is where the hot air expelled from the back of the equipment racks is collected. The hot aisle is usually aligned with the return air pathways to the cooling units.

By arranging equipment in cold aisle/hot aisle configurations, a data center can ensure that cold air is efficiently used to cool equipment and that hot air is effectively removed from the environment, maintaining an optimal temperature.

Here’s why the other options are less relevant in this context:

A. An air gap: While useful in certain cooling strategies, it is not a standard or primary method for data center cooling.

C. Removable doors: These are not typically related to temperature control but more to physical access and maintenance.

E. An IoT thermostat: While helpful for monitoring, it is not a design element of the data center layout.

F. A humidity monitor: Important for overall environmental control, but not specifically related to the design layout for temperature management.

Thus, for maintaining optimal environmental temperature, incorporating a cold aisle and a hot aisle is essential.

14
Q

A privileged user at a company stole several proprietary documents from a server. The user also went into the log files and deleted all records of the incident. The systems administrator has just informed investigators that other log files are available for review. Which of the following did the administrator most likely configure that will assist the investigators?

A. Memory dumps
B. The syslog server
C. The application logs
D. The log retention policy

A

B. The syslog server

Explanation:

A syslog server is typically used to collect and store log data from multiple devices and systems in a centralized location. By configuring a syslog server, the systems administrator ensures that log data is copied and stored separately from the local system logs. This makes it much harder for a privileged user to cover their tracks completely by deleting local log files, as the logs would still be available on the syslog server.

Here’s a more detailed look at why this is the most likely helpful configuration:

A. Memory dumps: These are snapshots of the system’s memory at a point in time and are not typically used for storing ongoing log files. They are more useful for diagnosing crashes or debugging applications.

C. The application logs: While these could be useful, they are often stored on the same server where the application is running. If the user deleted local logs, the application logs on the same server might also be deleted.

D. The log retention policy: This ensures logs are kept for a certain period, but it does not prevent logs from being deleted locally if not combined with central logging.

By having a syslog server configured, log entries are sent to a centralized and often secure location where they can be reviewed even if the local logs have been tampered with or deleted. This setup is essential for incident response and forensic investigations.
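With Python's standard library, shipping each log record off-box looks roughly like this. The collector hostname is hypothetical, and production setups typically use TLS-wrapped syslog rather than plain UDP:

```python
import logging
import logging.handlers

def build_audit_logger(collector="logs.corp.example", port=514):
    """Attach a syslog handler so every record is copied off the local host.

    Records shipped to the collector survive even if an attacker later
    wipes the local log files.
    """
    logger = logging.getLogger("audit")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(collector, port))  # UDP by default
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

audit = build_audit_logger("127.0.0.1")  # loopback address just for demonstration
audit.info("file server: document export by privileged user")
```

The key property for investigators is that the copy leaves the machine at emit time, before the privileged user has any opportunity to tamper with it.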

15
Q

Local guidelines require that all information systems meet a minimum security baseline to be compliant. Which of the following can security administrators use to assess their system configurations against the baseline?

A. SOAR playbook
B. Security control matrix
C. Risk management framework
D. Benchmarks

A

D. Benchmarks

Explanation:

Benchmarks, such as those provided by the Center for Internet Security (CIS) or other similar organizations, offer detailed guidance and best practices for securing various information systems and ensuring they meet a minimum security baseline. These benchmarks can be used to assess system configurations against a set of standardized security controls and best practices.

Here’s a more detailed look at why benchmarks are the most appropriate choice:

A. SOAR playbook: A Security Orchestration, Automation, and Response (SOAR) playbook is used to automate and orchestrate incident response activities, not specifically for assessing system configurations against a security baseline.

B. Security control matrix: While a security control matrix can help map controls to various requirements, it is not specifically used to assess system configurations. It’s more of a tool for tracking and ensuring all necessary controls are implemented.

C. Risk management framework: This provides a structured approach to managing risk and includes processes for identifying, assessing, and mitigating risks. However, it is broader and more strategic, not specifically focused on assessing system configurations against a baseline.

Benchmarks provide practical and specific criteria for evaluating whether systems comply with security standards and baselines. They typically include detailed configuration settings and recommendations for securing various types of systems and applications, making them the best tool for this purpose.
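In spirit, a benchmark assessment is a diff between observed settings and the published baseline. A toy version, with made-up setting names loosely in the style of CIS benchmark checks:

```python
# Made-up baseline entries, loosely in the spirit of CIS benchmark checks
BASELINE = {
    "password_complexity": "enabled",
    "smbv1": "disabled",
    "audit_logging": "enabled",
}

def assess(current: dict) -> dict:
    """Return every setting that deviates from the baseline."""
    return {
        key: {"expected": expected, "actual": current.get(key, "missing")}
        for key, expected in BASELINE.items()
        if current.get(key) != expected
    }

print(assess({"password_complexity": "enabled", "smbv1": "enabled"}))
```

Real benchmark tooling (e.g., CIS-CAT or OpenSCAP-based scanners) works the same way at scale: collect the live configuration, compare each item against the baseline, and report the deviations.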

16
Q

A company’s public-facing website, https://www.organization.com, has an IP address of 166.18.75.6. However, over the past hour the SOC has received reports of the site’s homepage displaying incorrect information. A quick nslookup search shows https://www.organization.com is pointing to 151.191.122.115. Which of the following is occurring?

A. DoS attack
B. ARP poisoning
C. DNS spoofing
D. NXDOMAIN attack

A

C. DNS spoofing

Explanation:

DNS spoofing (also known as DNS cache poisoning) is an attack where the attacker corrupts the DNS resolver cache by inserting false DNS information. This results in redirecting traffic from the legitimate IP address to a malicious IP address. In this case, the company’s website should resolve to 166.18.75.6, but it is resolving to 151.191.122.115, indicating that a DNS spoofing attack is likely occurring.

Here’s why the other options are less appropriate:

A. DoS attack: A Denial of Service (DoS) attack aims to make a service unavailable by overwhelming it with traffic. This does not typically involve changing DNS records or redirecting traffic.

B. ARP poisoning: Address Resolution Protocol (ARP) poisoning involves sending falsified ARP messages over a local network to associate the attacker's MAC address with the IP address of a legitimate device. This attack is localized to a specific network and would not explain the incorrect IP address being resolved over the internet.

D. NXDOMAIN attack: This attack targets the non-existence of domains, causing legitimate queries to fail by returning an NXDOMAIN (non-existent domain) response. This does not explain why the domain resolves to an incorrect IP address.

The incorrect IP address pointing to the company’s domain strongly suggests that DNS spoofing is occurring.
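A monitoring check for this condition is straightforward to sketch: pin the expected records and compare what the resolver returns. The resolver is injected here so the logic can be exercised without live DNS; `socket.gethostbyname` would be the real default:

```python
import socket

# Expected (pinned) records, taken from the question's exhibit
PINNED = {"www.organization.com": {"166.18.75.6"}}

def dns_looks_spoofed(hostname: str, resolver=socket.gethostbyname) -> bool:
    """Return True when the resolved address is not among the pinned ones."""
    resolved = resolver(hostname)
    return resolved not in PINNED.get(hostname, set())

# Simulate the incident with a fake resolver returning the rogue address
print(dns_looks_spoofed("www.organization.com",
                        resolver=lambda host: "151.191.122.115"))  # → True
```

A scheduled job running this comparison would have alerted the SOC the moment the record flipped to 151.191.122.115.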

17
Q

An employee receives an email stating the employee won the lottery. The email includes a link that requests a name, mobile phone number, address, and date of birth be provided to confirm the employee’s identity before sending the prize. Which of the following best describes this type of email?

A. Spear phishing
B. Whaling
C. Phishing
D. Vishing

A

C. Phishing

Explanation:

Phishing is a type of cyberattack where attackers send fraudulent emails or messages that appear to be from legitimate sources, attempting to trick recipients into providing personal information such as names, phone numbers, addresses, and dates of birth.

Here’s why the other options are less appropriate:

A. Spear phishing: This is a targeted form of phishing aimed at a specific individual or organization, often using personalized information. The given scenario does not indicate that the email is targeted specifically at the employee using personal details, but rather a generic "you won the lottery" scam.

B. Whaling: This is a type of phishing attack that specifically targets high-profile individuals such as executives or senior management. The scenario does not indicate that the email targets a high-profile individual.

D. Vishing: This stands for "voice phishing" and involves fraudulent phone calls to obtain personal information. The scenario involves an email, not a phone call.

Thus, the scenario described is best categorized as phishing.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
18
Q

A company currently uses passwords for logging in to company-owned devices and wants to add a second authentication factor. Per corporate policy, users are not allowed to have smartphones at their desks. Which of the following would meet these requirements?

A. Smart card
B. PIN code
C. Knowledge-based question
D. Secret key

A

A. Smart card

Explanation:

In the scenario where users are not allowed to have smartphones at their desks but need a second authentication factor, a smart card is a suitable solution. Smart cards are physical devices that typically contain embedded chips capable of storing cryptographic keys and certificates. Here’s why it fits the requirement:

Smart card: A smart card can be inserted into a card reader attached to the device or computer. It provides the second factor of authentication by requiring something the user has (the physical card) in addition to the password (something the user knows).

PIN code: While a PIN code can be used with a smart card for authentication, it alone doesn't satisfy the requirement for a second factor because it falls under "something the user knows," similar to a password.

Knowledge-based question: This is also something the user knows, typically used as a password recovery or reset method rather than a second factor for regular authentication.

Secret key: This is typically used in cryptographic contexts but doesn't provide a second factor of authentication on its own without additional components like a smart card or token.

Therefore, a smart card is the most appropriate choice for adding a second authentication factor while adhering to the policy that prohibits smartphones at desks.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

The Chief Technology Officer of a local college would like visitors to utilize the school’s Wi-Fi but must be able to associate potential malicious activity to a specific person. Which of the following would best allow this objective to be met?

A. Requiring all new on-site visitors to configure their devices to use WPS
B. Implementing a new SSID for every event hosted by the college that has visitors
C. Creating a unique PSK for every visitor when they arrive at the reception area
D. Deploying a captive portal to capture visitors’ MAC addresses and names

A

D. Deploying a captive portal to capture visitors’ MAC addresses and names

Explanation:

A captive portal is a web page that intercepts and redirects a user’s attempt to access the network. It requires the user to complete certain actions before granting access, such as agreeing to terms of service or providing authentication credentials. Here’s how it fits the scenario described:

Capturing visitor information: A captive portal can be configured to collect specific information from visitors, such as their MAC addresses and names, before granting access to the school's Wi-Fi network. This information can help in associating potential malicious activity with specific individuals if needed for investigation purposes.

Compliance with policies: By requiring visitors to go through the captive portal, the college ensures that each visitor is identifiable, which aligns with the CTO's objective of associating network activity with specific persons.

Now, let’s briefly consider why the other options may not be as suitable:

A. Requiring all new on-site visitors to configure their devices to use WPS: While WPS (Wi-Fi Protected Setup) is a convenient method for device configuration, it does not inherently provide identification of individual users or capture their information for tracking purposes.

B. Implementing a new SSID for every event hosted by the college that has visitors: Creating new SSIDs for each event can be cumbersome to manage and does not directly solve the problem of identifying and associating malicious activity with specific users.

C. Creating a unique PSK for every visitor when they arrive at the reception area: While this approach can provide individualized access credentials, it does not inherently capture visitor information needed for tracking and associating network activity.

Therefore, deploying a captive portal to capture visitors’ MAC addresses and names is the best option to meet the CTO’s objective of allowing visitors to use the school’s Wi-Fi while being able to associate potential malicious activity with specific individuals.
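The record-keeping step a captive portal performs can be sketched as follows. This is a hypothetical illustration of the data model only (field names, the MAC address, and the visitor name are made up); a real portal would sit in front of the Wi-Fi network and gate access on this registration.

```python
import time

registrations = {}  # MAC address -> visitor details

def register_visitor(mac: str, name: str) -> None:
    """Record who is behind a given MAC before granting network access."""
    registrations[mac.lower()] = {"name": name, "granted_at": time.time()}

def lookup(mac: str):
    """Map an observed MAC back to the visitor who registered it."""
    return registrations.get(mac.lower())

register_visitor("AA:BB:CC:DD:EE:FF", "J. Smith")
print(lookup("aa:bb:cc:dd:ee:ff")["name"])  # J. Smith
```

When the SOC later sees suspicious traffic from a MAC address, this mapping is what allows it to be attributed to a specific person.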

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

Which of the following is most likely associated with introducing vulnerabilities on a corporate network by the deployment of unapproved software?

A. Hacktivists
B. Script kiddies
C. Competitors
D. Shadow IT

A

D. Shadow IT

Explanation:

Shadow IT refers to the use of IT systems, software, and services within an organization without explicit approval or oversight from the IT department or management. When employees or departments deploy unapproved software or services, it can introduce several vulnerabilities and risks to the corporate network, including:

Security Vulnerabilities: Unapproved software may not undergo rigorous security testing or updates, leading to vulnerabilities that can be exploited by attackers.

Data Loss: Some shadow IT solutions may not have adequate data protection measures, leading to potential data breaches or leaks.

Compliance Risks: Using unapproved software may violate organizational policies, industry regulations, or legal requirements, exposing the organization to compliance risks.

Interoperability Issues: Shadow IT solutions may not integrate well with existing corporate systems, leading to operational disruptions or compatibility issues.

In contrast, the other options are less directly associated with introducing vulnerabilities through unapproved software deployment:

A. Hacktivists: Typically focus on political or social causes rather than deploying unapproved software within a corporate network.

B. Script Kiddies: Inexperienced individuals who use existing scripts or tools to exploit vulnerabilities but are not directly associated with introducing software to the network.

C. Competitors: While competitors may engage in corporate espionage or targeted attacks, they are less likely to introduce vulnerabilities by deploying unapproved software directly.

Therefore, shadow IT is the most likely option associated with introducing vulnerabilities on a corporate network due to the unauthorized deployment of software or services.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

A cybersecurity incident response team at a large company receives notification that malware is present on several corporate desktops. No known indicators of compromise have been found on the network. Which of the following should the team do first to secure the environment?

A. Contain the impacted hosts.
B. Add the malware to the application blocklist.
C. Segment the core database server.
D. Implement firewall rules to block outbound beaconing.

A

A. Contain the impacted hosts.

Here’s why:

Containment is crucial to prevent further spread of the malware within the corporate network. By isolating or quarantining the impacted desktops, the team can limit the malware's ability to communicate or spread to other systems.

Adding the malware to the application blocklist (option B) might be a subsequent step but not the immediate priority. Before blocking the malware, the team needs to ensure they understand its behavior and impact fully.

Segmenting the core database server (option C) and implementing firewall rules to block outbound beaconing (option D) are important measures for preventing future incidents and minimizing damage but are not as urgent as containing the current infection.

Therefore, A. Contain the impacted hosts is the most immediate action the team should take to secure the environment and prevent further harm.

22
Q

An administrator receives the following network requirements for a data integration with a third-party vendor:

Port 443 allowed OUTGOING to www.vendorsite.com
Port 21 allowed OUTGOING to fs1.vendorsite.com
Port 22 allowed OUTGOING to fs2.vendorsite.com
Port 8080 allowed OUTGOING to www2.vendorsite.com

Which of the following is the most appropriate response for the administrator to send?

A. FTP is an insecure protocol and should not be used.
B. Port 8080 is a non-standard port and should be blocked.
C. SSH protocol version 1 is obsolete and should not be used.
D. Certificate stapling on port 443 is a security risk that should be mitigated.

A

A. FTP is an insecure protocol and should not be used.
Explanation:

A. FTP is an insecure protocol and should not be used:
    FTP (Port 21) is considered insecure because it transmits data, including credentials, in plain text. It is recommended to use a more secure protocol like FTPS (FTP Secure) or SFTP (SSH File Transfer Protocol) instead.

B. Port 8080 is a non-standard port and should be blocked:
    While Port 8080 is indeed a non-standard port for HTTP traffic, it is often used for web services and applications. Blocking it without understanding its specific use case may not be appropriate.

C. SSH protocol version 1 is obsolete and should not be used:
    While this is true, the requirement only mentions using Port 22 for SSH, not specifying SSH version 1. Most modern systems use SSH version 2, so this response may not address the immediate security concern.

D. Certificate stapling on port 443 is a security risk that should be mitigated:
Certificate stapling (OCSP stapling) is actually a security improvement for TLS/SSL connections: the server includes a cached OCSP response in the handshake, reducing the need for clients to contact the Certificate Authority (CA) directly and improving both performance and privacy. This is not a relevant or accurate concern in the context of the provided requirements.

Conclusion:

Given the context of the network requirements, the primary concern should be the use of FTP, which is inherently insecure. Therefore, the administrator should recommend against using FTP and suggest a more secure alternative.
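The administrator's review can be sketched as a simple rule check that flags requested ports whose default protocols transmit credentials in cleartext. This is an illustrative helper, not a real firewall tool; the port/host pairs are the ones from the question.

```python
# Ports whose default protocols send credentials and data in cleartext.
INSECURE = {21: "FTP", 23: "Telnet", 80: "HTTP"}

def review(rules):
    """Return a warning for each rule that uses a cleartext protocol."""
    return [f"port {p} ({INSECURE[p]}) to {host}: insecure, use SFTP/FTPS or TLS"
            for p, host in rules if p in INSECURE]

requested = [(443, "www.vendorsite.com"), (21, "fs1.vendorsite.com"),
             (22, "fs2.vendorsite.com"), (8080, "www2.vendorsite.com")]
for warning in review(requested):
    print(warning)  # only the port-21 FTP rule is flagged
```

Only the FTP rule trips the check, which matches the recommended response: suggest SFTP or FTPS in place of FTP.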

23
Q

A security administrator manages five on-site APs. Each AP uses different channels on a 5GHz network. The administrator notices that another access point with the same corporate SSID on an overlapping channel was created. Which of the following attacks most likely occurred?

A. Jamming
B. NFC attacks
C. Disassociation
D. Bluesnarfing
E. Evil twin

A

E. Evil twin.

Here’s why:

An evil twin attack involves an attacker setting up a rogue wireless access point with the same SSID as a legitimate network. In this case, the attacker has created an access point with the same corporate SSID on an overlapping channel.

This attack aims to trick users and devices into connecting to the rogue AP instead of the legitimate one. Once connected, the attacker can intercept traffic, perform man-in-the-middle attacks, and potentially gain unauthorized access to sensitive information.

Jamming (option A) involves disrupting wireless communication by emitting radio frequency signals to interfere with legitimate signals, but it does not create a rogue AP.

NFC attacks (option B), disassociation (option C), and bluesnarfing (option D) are unrelated to the creation of a rogue AP with the same SSID.

Therefore, based on the scenario described, E. Evil twin is the most likely attack that occurred.
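How the administrator noticed the rogue AP can be sketched as an inventory comparison: any AP broadcasting the corporate SSID from a BSSID that is not in the approved list is a possible evil twin. The SSID, BSSIDs, and channels below are made-up example values.

```python
# BSSIDs of the five managed, legitimate access points (illustrative).
APPROVED_BSSIDS = {"00:11:22:33:44:01", "00:11:22:33:44:02",
                   "00:11:22:33:44:03", "00:11:22:33:44:04",
                   "00:11:22:33:44:05"}

def find_evil_twins(ssid: str, observed):
    """Return observed APs that advertise our SSID from an unknown BSSID."""
    return [(bssid, channel) for s, bssid, channel in observed
            if s == ssid and bssid not in APPROVED_BSSIDS]

scan = [("corp-net", "00:11:22:33:44:01", 36),   # legitimate AP
        ("corp-net", "DE:AD:BE:EF:00:01", 40)]   # same SSID, unknown BSSID
print(find_evil_twins("corp-net", scan))  # [('DE:AD:BE:EF:00:01', 40)]
```

Wireless intrusion detection systems automate exactly this kind of scan-versus-inventory comparison.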

24
Q

A security team has been alerted to a flood of incoming emails that have various subject lines and are addressed to multiple email inboxes. Each email contains a URL shortener link that is redirecting to a dead domain. Which of the following is the best step for the security team to take?

A. Create a blocklist for all subject lines.
B. Send the dead domain to a DNS sinkhole.
C. Quarantine all emails received and notify all employees.
D. Block the URL shortener domain in the web proxy.

A

(Community : B 55%, D 45%)
B. Send the dead domain to a DNS sinkhole.

Blocking the URL shortener domain in the web proxy (option D) could be an effective measure to prevent access to the malicious URLs. However, it’s important to consider the possibility that the attackers could use different URL shortener services or domains in the future. Therefore, while blocking the specific URL shortener domain may provide temporary relief, it might not address the root cause of the issue comprehensively.

On the other hand, sending the dead domain to a DNS sinkhole (option B) effectively prevents any traffic attempting to reach the malicious domain, regardless of the method used to access it. This approach is more proactive and robust, as it targets the destination domain itself rather than relying on blocking specific intermediaries. It provides a broader defense against potential future threats involving similar tactics

ChatGPT & Braindump :
D. Block the URL shortener domain in the web proxy.

Here’s the reasoning:

The emails contain URL shortener links that redirect to a dead domain. Despite the dead domain, the URL shortener itself could still potentially be used for malicious purposes in the future.

Blocking the URL shortener domain in the web proxy prevents users within the organization from accessing any malicious content that might be associated with the URL shortener links, even if the current links redirect to a dead domain.

Creating a blocklist for all subject lines (option A) might not be effective as attackers can easily change subject lines to evade detection.

Sending the dead domain to a DNS sinkhole (option B) is less effective in this case because the immediate threat lies with the URL shortener domain, not the dead domain it redirects to.

Quarantining all emails (option C) and notifying all employees could be considered, but blocking the URL shortener domain is a more direct and proactive step to prevent potential threats.

Therefore, D. Block the URL shortener domain in the web proxy is the best immediate action for the security team to take to mitigate the risk posed by these emails.
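The proxy-side block can be sketched as a hostname match against a blocklist. This assumes the proxy can match on the request's hostname; `short.example` is a stand-in for the real shortener domain.

```python
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"short.example"}  # the URL shortener's domain

def is_blocked(url: str) -> bool:
    """Deny any request whose host is (or is a subdomain of) a blocked domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://short.example/abc123"))    # True  (blocked)
print(is_blocked("https://www.legit.example/page"))  # False (allowed)
```

Matching subdomains as well as the bare domain prevents trivial evasion via hosts like `cdn.short.example`.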

25
Q

https://exampracticetests.com/c/SY0-601/comptia-security-sy0-601-question725/

Simulation
A newly purchased corporate WAP needs to be configured in the MOST secure manner possible.

INSTRUCTIONS

Please click on the below items on the network diagram and configure them accordingly:

  • WAP
  • DHCP Server
  • AAA Server
  • Wireless Controller
  • LDAP Server

WAP connected to :
DHCP SERVER 192.168.60.10
AAA SERVER 192.168.1.20
WIRELESS CONTROLLER 172.28.1.30
LDAP SERVER 10.10.20.20

Wireless Access Point : Basic Wireless Settings
Wireless Network Mode : MIXED, B ONLY, G ONLY
Wireless Network Name (SSID) : DEFAULT
Wireless Channel: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
Wireless SSID Broadcast : enable , disable

Wireless Access Point : Wireless Security
Security Mode: Disabled, WEP, WPA Enterprise, WPA Personal, WPA2 Enterprise, WPA2 Personal, RADIUS

DHCP Server
IP : 192.168.60.10
Netmask 255.255.255.0
DG : 192.168.60.1
Range : 10.50.7.0 - 10.50.8.255
DNS Server : 192.168.30.4, 192.168.40.4
Reserved : A1-27-CA-23-45-76-E3 10.50.7.5
Reserved : B3-47-A3-18-E7-7D-E2 10.50.7.6
Domain : corporatenet
Port : 67

AAA Server
IP : 192.168.1.20
Netmask 255.255.255.0
DG : 192.168.1.1
Secret : corporatenet
Realm : wirelessnet
Port : 1812

Wireless Controller
IP : 172.28.1.30
Netmask 255.255.255.0
DG : 172.28.1.1
Admin User : root
Admin Password : corporatenett
WPA Key : supersecret
Port : 1212

LDAP Server
IP : 10.10.20.20
Netmask 255.255.255.0
DG : 10.10.20.1
Domain : corporatenet
Tree Name : wirelessnet
Bind Password : secretpass
Port : 389

A

Select the following in the wireless access point:
Wireless Network Mode: MIXED
Wireless Channel: 11
Security Mode: WPA2 Enterprise

26
Q

DRAG DROP
A security engineer is setting up passwordless authentication for the first time.

INSTRUCTIONS
Drag and drop the MINIMUM set of commands to set this up and verify that it works. Commands may only be used once, and not all will be used.
If at any time you would like to bring back the initial state of the simulation, please click the Reset All button.

Select and Place:

Commands :
chmod 777 ~/.ssh/authorized_keys
openssl x509 -in ~/.ssh/id_rsa.pub -text -noout
ssh root@server
scp ~/.ssh/id_rsa user@server:.ssh/authorized_keys
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub user@server
ssh -i ~/.ssh/id_rsa user@server

SSH Client ?

A

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub user@server
ssh -i ~/.ssh/id_rsa user@server

27
Q

A data administrator is configuring authentication for a SaaS application and would like to reduce the number of credentials employees need to maintain. The company prefers to use domain credentials to access new SaaS applications. Which of the following methods would allow this functionality?

A. SSO
B. LEAP
C. MFA
D. PEAP

A

A. SSO (Single Sign-On)

Here’s why:

Single Sign-On (SSO) allows users to authenticate once with their domain credentials (like Active Directory credentials) and then gain access to multiple applications without needing to re-enter their credentials for each application.

SSO integrates with existing identity management systems, such as Active Directory, to provide a seamless login experience across different applications.

LEAP (option B) and PEAP (option D) are authentication protocols typically used in wireless networks and are not related to SaaS application authentication.

MFA (Multi-Factor Authentication) (option C) enhances security by requiring multiple forms of verification for access, but it does not necessarily reduce the number of credentials employees need to maintain for different applications.

Therefore, A. SSO is the method that aligns with the goal of using domain credentials to access new SaaS applications while reducing the credential management overhead for employees.

28
Q

Which of the following would be best suited for constantly changing environments?

A. RTOS
B. Containers
C. Embedded systems
D. SCADA

A

B. Containers

Here’s why:

Containers are lightweight, portable, and scalable units of software that package up code and all its dependencies. They provide consistent environments across different infrastructure setups, making them ideal for rapidly changing environments where scalability and flexibility are crucial.

Containers can be quickly deployed, updated, and scaled horizontally, making them well-suited for dynamic environments that require frequent changes or updates.

RTOS (Real-Time Operating System) (option A) is designed for systems that require precise timing and predictable response times, typically in embedded systems and critical applications, but it is not specifically geared towards constantly changing environments.

Embedded systems (option C) are specialized systems designed to perform specific tasks within constrained environments, often with limited resources and fixed functionality. They are not inherently suited for constantly changing environments.

SCADA (Supervisory Control and Data Acquisition) (option D) systems are used for monitoring and controlling industrial processes, and while they can handle real-time data and updates, they are typically deployed in stable and controlled environments rather than constantly changing ones.

Therefore, B. Containers are the best choice among the options provided for environments that undergo frequent changes and updates.

29
Q

A newly identified network access vulnerability has been found in the OS of legacy IoT devices. Which of the following would best mitigate this vulnerability quickly?

A. Insurance
B. Patching
C. Segmentation
D. Replacement

A

C. Segmentation
Explanation:

A. Insurance:
    While insurance can help mitigate financial losses after an incident, it does not directly address the technical vulnerability itself.

B. Patching:
    Patching is generally the best long-term solution to fix vulnerabilities. However, for legacy IoT devices, patches may not be readily available, or applying them might require significant time and resources.

C. Segmentation:
    Segmentation involves isolating the vulnerable devices from the rest of the network. This can be done quickly and effectively to prevent potential exploitation of the vulnerability from spreading or affecting critical parts of the network.

D. Replacement:
    Replacing the legacy IoT devices is another effective solution but it is likely to be time-consuming and costly, making it less suitable for a quick mitigation.

Conclusion:

Segmentation is the quickest and most effective immediate measure to mitigate the vulnerability in legacy IoT devices. It can limit the potential damage by isolating these devices from the main network and other critical systems, thereby reducing the risk of exploitation.
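The isolation that segmentation provides can be sketched with the standard-library `ipaddress` module: the legacy IoT devices live in their own subnet, and any flow from that subnet into the corporate network is denied at the boundary. The subnet addresses are illustrative examples.

```python
import ipaddress

IOT_SEGMENT = ipaddress.ip_network("10.99.0.0/24")   # isolated IoT VLAN
CORP_SEGMENT = ipaddress.ip_network("10.10.0.0/16")  # main corporate network

def allowed(src: str, dst: str) -> bool:
    """Deny any flow from the IoT segment into the corporate segment."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return not (s in IOT_SEGMENT and d in CORP_SEGMENT)

print(allowed("10.99.0.15", "10.10.3.7"))  # False: IoT -> corp is blocked
print(allowed("10.10.3.7", "10.10.4.8"))   # True:  corp internal traffic
```

In practice this policy would be enforced by VLAN configuration and firewall rules rather than application code, but the decision logic is the same.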

30
Q

The local administrator account for a company’s VPN appliance was unexpectedly used to log in to the remote management interface. Which of the following would have prevented this from happening?

A. Using least privilege
B. Changing the default password
C. Assigning individual user IDs
D. Implementing multifactor authentication

A

(Community B 44%, D 33%, A 22%)

B. Changing the default password

Here’s why:

Changing the default password is a fundamental security practice that helps prevent unauthorized access using default credentials. Many devices, including VPN appliances, come with default passwords that are well-known and easily exploited if not changed.

Using least privilege (option A) involves granting users only the permissions necessary to perform their tasks, but it does not directly prevent unauthorized access if default credentials are used.

Assigning individual user IDs (option C) helps in accountability and tracking user actions but does not prevent unauthorized access if default credentials are used.

Implementing multifactor authentication (option D) adds an additional layer of security by requiring multiple forms of verification, but it would not have prevented access if the default password was used without any additional authentication factors.

Therefore, B. Changing the default password is the measure that would have directly prevented the unexpected use of the local administrator account to log into the VPN appliance’s remote management interface.

31
Q

https://exampracticetests.com/c/SY0-601/comptia-security-sy0-601-question731/

HOTSPOT
You are a security administrator investigating a potential infection on a network.
INSTRUCTIONS
Click on each host and firewall. Review all logs to determine which host originated the infection and then identify if each remaining host is clean or infected.

If at any time you would like to bring back the initial state of the simulation, please click the Reset All button.
Hot Area:

R&D Network connects to firewall (Int A)
Engineering Network connects to firewall (Int B)
Internet connected to a firewall (Int C)

R&D Network (int A)
192.168.10.22 Origin, Infected, Clean
192.168.10.37 Origin, Infected, Clean
192.168.10.41 Origin, Infected, Clean

Engineering Network (int B)
10.10.9.12 Origin, Infected, Clean
10.10.9.18 Origin, Infected, Clean

A

R&D Network (int A)
192.168.10.22 Infected
192.168.10.37 Clean
192.168.10.41 Clean

Engineering Network (int B)
10.10.9.12 Clean
10.10.9.18 Infected

32
Q

Which of the following describes the exploitation of an interactive process to gain access to restricted areas?

A. Persistence
B. Port scanning
C. Privilege escalation
D. Pharming

A

C. Privilege escalation

Here’s why:

Privilege escalation refers to the exploitation of a vulnerability or flaw in an interactive process to gain higher privileges than originally authorized. This allows an attacker to access restricted areas or perform actions that are typically not allowed under normal circumstances.

Persistence (option A) refers to techniques used by attackers to maintain access to a system or network over an extended period, often through malware or backdoors.

Port scanning (option B) is the process of scanning a computer or network to find open ports and assess the security posture.

Pharming (option D) is a cyber attack where an attacker redirects a website's traffic to a fake website to steal information.

Therefore, the exploitation of an interactive process to gain access to restricted areas is best described by C. Privilege escalation.

33
Q

https://exampracticetests.com/c/SY0-601/comptia-security-sy0-601-question733/

SIMULATION

A company recently added a DR site and is redesigning the network. Users at the DR site are having issues browsing websites.

INSTRUCTIONS
Click on each firewall to do the following :
1. Deny cleartext web traffic
2. Ensure secure management protocols are used
3. Resolve issues at the DR site

The ruleset order cannot be modified due to outside constraints.

A

https://exampracticetests.com/c/SY0-601/comptia-security-sy0-601-question733/

34
Q

DRAG DROP
A data owner has been tasked with assigning proper data classifications and destruction methods for various types of data contained within the environment.

INSTRUCTIONS
From the options below, drag each item to its appropriate classification as well as the MOST appropriate form of disposal.
If at any time you would like to bring back the initial state of the simulation, please click the Reset All button.

Select and Place:

Drag & Drop

  1. Bound copies of internal audit reports from a private company
  2. Copies of financial audit reports from exchange-traded organizations on a flash drive
  3. Database containing driver’s license information on a reusable backup tape
  4. Decommissioned mechanical hard drive containing application source code.
  5. Employee records on an SSD.
  6. Paper-based customer records, which include medical data

Drop into the following :
Data Classification :
PII
PHI
Intellectual Property
Corporate Confidential
Public

Data Destruction Method :
Degaussing and Multi-Pass Wipe
Physical Destruction via Shredding

A

Community :
PII » 3
PHI » 6
IP » 4
CC » 1, 5
Public » 2
—————–
Degaussing » 3, 4
Shredding » 1, 2, 5, 6

(Brain dump)
Data Classification :
PII - 5. Employee records on an SSD.
PHI - 6. Paper-based customer records, which include medical data
Intellectual Property - 2. Copies of financial audit reports from exchange-traded organizations on a flash drive
Corporate Confidential - 2. Copies of financial audit reports from exchange-traded organizations on a flash drive
Public - 1. Bound copies of internal audit reports from a private company

Data Destruction Method :
Degaussing and Multi-Pass Wipe - 3. Database containing driver’s license information on a reusable backup tape
Physical Destruction via Shredding - 4. Decommissioned mechanical hard drive containing application source code.

35
Q

A security analyst is assessing several company firewalls. Which of the following tools would the analyst most likely use to generate custom packets to use during the assessment?

A. hping
B. Wireshark
C. PowerShell
D. netstat

A

A. hping

Here’s why:

hping is a command-line tool that allows users to generate custom ICMP, UDP, and TCP packets. It is particularly useful for crafting packets with specific characteristics or payloads to test firewall rules and responses.

Wireshark (option B) is a network protocol analyzer that captures and displays packets but does not generate custom packets.

PowerShell (option C) is a scripting language and command-line shell used for task automation and configuration management in Windows environments, but it is not primarily used for generating custom packets.

netstat (option D) is a command-line tool used to display network connections, routing tables, interface statistics, etc., but it does not generate packets.

Therefore, A. hping is the tool that the security analyst would most likely use to generate custom packets for assessing company firewalls.

36
Q

A local business was the source of multiple instances of credit card theft. Investigators found that most payments at this business were made at self-service kiosks. Which of the following is the most likely cause of the exposed credit card information?

A. Insider threat
B. RAT
C. Backdoor
D. Skimming
E. NFC attack

A

D. Skimming

Here’s why:

Skimming involves the unauthorized capture of credit card information by attaching a device to a legitimate payment terminal or ATM. In the case of self-service kiosks, criminals can install skimming devices or overlays on card readers to capture card details as customers swipe or insert their cards.

Skimming devices are designed to be discreet and capture card data, including the card number, expiration date, and sometimes the cardholder's name. This information can then be used for fraudulent transactions or sold on the dark web.

Insider threat (option A) refers to malicious or negligent actions by individuals within an organization who misuse their access to cause harm, but skimming typically involves external criminals exploiting vulnerabilities in payment systems.

RAT (Remote Access Trojan) (option B) and backdoor (option C) are types of malicious software or techniques that can allow unauthorized access to systems but are not typically associated with credit card theft at self-service kiosks.

NFC attack (option E) involves exploiting vulnerabilities in Near Field Communication technology to gain unauthorized access or steal information, but it's less likely in this context compared to physical skimming devices on kiosks.

Therefore, D. Skimming is the most likely cause of the exposed credit card information at the self-service kiosks in the local business.

37
Q

An employee recently resigned from a company. The employee was responsible for managing and supporting weekly batch jobs over the past five years. A few weeks after the employee resigned, one of the batch jobs failed and caused a major disruption. Which of the following would work best to prevent this type of incident from reoccurring?

A. Job rotation
B. Retention
C. Outsourcing
D. Separation of duties

A

(Brain dump: A. Job rotation)
(Community : A 71%, D 29%)

D. Separation of duties

Here’s why:

Separation of duties involves dividing responsibilities among multiple employees so that no single person has complete control over a critical process or function. In this case, if responsibilities for managing and supporting batch jobs had been divided among multiple employees or roles, the resignation of one person would not have led to such a significant disruption.

Job rotation (option A) involves moving employees through different roles or responsibilities to provide them with experience and prevent burnout, but it does not directly address the issue of dependency on a single person for critical tasks.

Retention (option B) focuses on keeping employees within the organization but does not necessarily prevent disruptions caused by the departure of a key individual.

Outsourcing (option C) involves contracting tasks or services to external parties, which may not always be feasible or effective for managing internal processes like batch jobs.

Therefore, D. Separation of duties is the most effective measure to prevent incidents caused by the sudden departure of an employee who had critical responsibilities, such as managing and supporting batch jobs.

38
Q

Following a recent security breach, an analyst discovered that user permissions were added when joining another part of the organization but were not removed from existing groups. Which of the following policies would help to correct these issues in the future?

A. Service accounts
B. Account audits
C. Password complexity
D. Lockout policy

A

B. Account audits

Here’s why:

Account audits involve regular reviews of user accounts, permissions, and access rights. By conducting periodic audits, organizations can identify and remove unnecessary permissions that users may have accumulated over time, such as permissions from previous roles or departments.

In the scenario described, if account audits had been performed regularly, the organization would have detected that permissions were not being properly managed when users moved between different parts of the organization. This would allow for timely removal of unnecessary permissions and help prevent security breaches caused by excessive access.

Service accounts (option A) are accounts used by services or applications rather than individual users and are managed differently from user accounts.

Password complexity (option C) and lockout policy (option D) are measures related to password security and account access control but do not directly address the issue of improper permission management when users change roles or departments.

Therefore, B. Account audits is the policy that would help correct the issue of permissions not being removed from existing groups after users join another part of the organization.
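The audit described above boils down to diffing the groups a user currently holds against the groups their current role justifies. The following is a minimal sketch of that idea; the role names, group names, and users are hypothetical examples, not part of the question:

```python
# Minimal account-audit sketch: flag group memberships that linger after a
# role change. ROLE_GROUPS, group names, and users are illustrative.

ROLE_GROUPS = {
    "finance": {"finance-ro", "erp-users"},
    "engineering": {"git-users", "build-servers"},
}

users = [
    # jdoe moved from finance to engineering but kept an old group
    {"name": "jdoe", "role": "engineering",
     "groups": {"git-users", "build-servers", "finance-ro"}},
    {"name": "asmith", "role": "finance",
     "groups": {"finance-ro", "erp-users"}},
]

def audit(users, role_groups):
    """Return {username: groups not justified by the user's current role}."""
    findings = {}
    for user in users:
        expected = role_groups.get(user["role"], set())
        extras = user["groups"] - expected
        if extras:
            findings[user["name"]] = extras
    return findings

print(audit(users, ROLE_GROUPS))  # jdoe still holds finance-ro
```

A real audit would pull memberships from a directory service rather than a hard-coded list, but the core check is the same set difference.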

39
Q

Which of the following ensures an organization can continue to do business with minimal interruption in the event of a major disaster?

A. Business recovery plan
B. Incident response plan
C. Communication plan
D. Continuity of operations plan

A

D. Continuity of operations plan

Here’s why:

Continuity of operations plan (COOP) is designed to ensure that essential functions of an organization can continue to operate or be quickly restored in the event of a major disruption, such as a natural disaster, cyber attack, or other emergencies.

A COOP typically includes strategies, policies, and procedures to maintain essential services, manage resources, and mitigate risks during and after a disaster, thereby minimizing disruption to business operations.

Business recovery plan (option A) focuses on recovering and restoring business operations after a disruption has occurred, rather than ensuring continuous operation during the disaster itself.

Incident response plan (option B) outlines actions to be taken immediately when an incident occurs to contain, mitigate, and recover from the incident, but it does not necessarily ensure continuous business operations.

Communication plan (option C) outlines how communication will be managed during an incident or disaster, which is important but not sufficient alone to ensure continuity of operations.

Therefore, D. Continuity of operations plan is the option that best ensures an organization can continue to do business with minimal interruption in the event of a major disaster.

40
Q

In a rush to meet an end-of-year business goal, the IT department was told to implement a new business application. The security engineer reviews the attributes of the application and decides the time needed to perform due diligence is insufficient from a cybersecurity perspective. Which of the following BEST describes the security engineer’s response?

A. Risk tolerance
B. Risk acceptance
C. Risk importance
D. Risk appetite

A

(Community : B 66%, A 24%)

Choose A, Risk Tolerance.

The security engineer’s response indicates a decision not to proceed because the risk exceeds what is acceptable from a cybersecurity perspective, despite pressure to meet a business goal. This aligns with “risk tolerance”: the maximum level of risk an organization is willing to bear for a particular activity or decision.


So, the BEST description of the security engineer’s response is:

A. Risk tolerance

41
Q

A security team created a document that details the order in which critical systems should be brought back online after a major outage. Which of the following documents did the team create?

A. Communication plan
B. Incident response plan
C. Data retention policy
D. Disaster recovery plan

A

D. Disaster recovery plan

Here’s why:

A disaster recovery plan (DRP) is a structured approach outlining the steps and procedures to recover and restore critical systems and operations after a major disruption or disaster. It includes specific details such as the sequence in which systems should be restored, the roles and responsibilities of team members, communication protocols, and recovery time objectives (RTOs).

The order in which critical systems should be brought back online is a key component of a disaster recovery plan, ensuring that the organization can resume essential operations in a structured and efficient manner.

Communication plan (option A) focuses on how communication will be managed during and after an incident or disaster.

Incident response plan (option B) outlines actions to be taken immediately when an incident occurs to contain, mitigate, and recover from the incident.

Data retention policy (option C) defines how long data should be retained and the procedures for managing data throughout its lifecycle, which is unrelated to the sequence of system restoration.

Therefore, D. Disaster recovery plan is the document that the security team created to specify the order for bringing critical systems back online after a major outage.

42
Q

A company wants to reconfigure an existing wireless infrastructure. The company needs to ensure the projected WAP placement will provide proper signal strength to all workstations. Which of the following should the company use to best fulfill the requirements?

A. Network diagram
B. WPS
C. 802.1X
D. Heat map

A

D. Heat map

Here’s why:

A heat map is a graphical representation that shows the signal strength and coverage areas of wireless access points (WAPs) within a given space. It helps visualize where the signal is strong, weak, or non-existent, allowing for optimal placement of WAPs to ensure adequate coverage for all workstations.

By using a heat map, the company can plan and adjust the placement of WAPs to minimize dead zones and ensure that all areas where workstations are located receive sufficient Wi-Fi signal strength.

Network diagram (option A) provides a schematic representation of the network topology and devices but does not specifically show signal strength or coverage areas of WAPs.

WPS (Wi-Fi Protected Setup) (option B) is a method to simplify the process of connecting devices to a wireless network but does not provide information on signal strength.

802.1X (option C) is an IEEE standard for network access control, specifically for port-based authentication, and is unrelated to determining Wi-Fi signal coverage.

Therefore, D. Heat map is the tool that the company should use to best ensure proper signal strength to all workstations when reconfiguring their wireless infrastructure.

43
Q

A company wants to pragmatically grant access to users who have the same job. Which of the following access controls should the company most likely use?

A. Role-based
B. Need-to-know
C. Mandatory
D. Discretionary

A

A. Role-based

Here’s why:

Role-based access control (RBAC) is a widely used access control model where access decisions are based on the roles assigned to users within an organization. Users are grouped into roles based on their job functions, responsibilities, and authorization needs.

RBAC simplifies access management by granting permissions to roles rather than to individual users. This approach ensures that users who share the same job responsibilities receive consistent access permissions, reducing administrative overhead and enhancing security.

Need-to-know (option B) is a principle that restricts access to sensitive information to only those users who need it to perform their job duties. It is often used in environments where access must be strictly controlled based on specific information requirements rather than job roles.

Mandatory access control (option C) is a stricter access control model where access decisions are based on security labels assigned to subjects and objects. It is typically used in environments with high security requirements, such as military or government settings.

Discretionary access control (option D) allows users to determine who can access their resources and is based on the discretion of the resource owner. It provides flexibility but may not be as structured or pragmatic for managing access based on job roles.

Therefore, A. Role-based access control is the most appropriate choice for pragmatically granting access to users who have the same job within a company.
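The RBAC model described above can be sketched in a few lines: permissions attach to roles, and users are granted roles, so everyone with the same job gets the same access. The role, user, and permission names below are illustrative, not from any product:

```python
# Toy role-based access control: permissions belong to roles, users get roles.

ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "view_tickets"},
    "admin": {"reset_password", "view_tickets", "manage_users"},
}

USER_ROLES = {"alice": "helpdesk", "bob": "admin"}

def is_allowed(user, permission):
    """A user is allowed an action if their role carries that permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "manage_users"))  # False
print(is_allowed("bob", "manage_users"))    # True
```

Note that changing what a job function can do means editing one role, not touching every individual user account, which is the administrative advantage the explanation refers to.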

44
Q

A security engineer is working to address the growing risks that shadow IT services are introducing to the organization. The organization has taken a cloud-first approach and does not have an on-premises IT infrastructure. Which of the following would best secure the organization?

A. Upgrading to a next-generation firewall
B. Deploying an appropriate in-line CASB solution
C. Conducting user training on software policies
D. Configuring double key encryption in SaaS platforms

A

B. Deploying an appropriate in-line CASB (Cloud Access Security Broker) solution

Here’s why:

CASB solutions are specifically designed to provide visibility, control, and security for cloud services and applications. They can monitor and manage usage of cloud services, enforce security policies, and detect and mitigate risks associated with shadow IT, such as unauthorized cloud applications and services.

An in-line CASB solution integrates directly with cloud services and applications, allowing real-time visibility and control over data and activities. This helps prevent unauthorized access, ensures compliance with security policies, and protects sensitive data stored and processed in the cloud.

Upgrading to a next-generation firewall (option A) focuses on securing network traffic but may not provide sufficient visibility and control over cloud services and applications.

User training on software policies (option C) is important for educating employees on acceptable use policies and security practices, but it may not effectively prevent or mitigate risks introduced by shadow IT services.

Configuring double key encryption in SaaS platforms (option D) enhances data security but does not directly address the visibility, control, and management of shadow IT services.

Therefore, B. Deploying an appropriate in-line CASB solution is the best option to help secure the organization against the growing risks introduced by shadow IT services in a cloud-first environment.

45
Q

An enterprise has hired an outside security firm to conduct penetration testing on its network and applications. The firm has been given all the developer’s documentation about the internal architecture. Which of the following best represents the type of testing that will occur?

A. Bug bounty
B. White-box
C. Black-box
D. Gray-box

A

B. White-box

Here’s why:

White-box testing (option B) involves testing a system with full knowledge of its internal workings, architecture, and code. In this scenario, since the security firm has been provided with all the developer's documentation about the internal architecture, they will have comprehensive insight into how the network and applications are structured, how they interact, and potentially even access to source code.

White-box testing allows the security firm to perform thorough assessments, including code reviews, architectural reviews, and in-depth analysis of potential vulnerabilities based on the known internal structure of the system.

Black-box testing (option C) involves testing a system with no prior knowledge of its internal workings or architecture. The tester approaches the system as an outsider would, without access to internal documentation or source code.

Gray-box testing (option D) is a hybrid approach that combines elements of both white-box and black-box testing, where the tester has partial knowledge of the system, such as some internal documentation or limited access to code.

Bug bounty (option A) refers to a program where external researchers or hackers are invited to find and report vulnerabilities in a system, often without access to internal documentation but sometimes with access to the application itself.

Therefore, B. White-box best represents the type of testing that will occur when the security firm has been provided with all the developer’s documentation about the internal architecture of the enterprise’s network and applications.

46
Q

Which of the following are the most likely vectors for the unauthorized or unintentional inclusion of vulnerable code in a software company’s final software releases? (Choose two).

A. Certificate mismatch
B. Use of penetration-testing utilities
C. Weak passwords
D. Included third-party libraries
E. Vendors/supply chain
F. Outdated anti-malware software

A

D. Included third-party libraries

E. Vendors/supply chain
Explanation:

D. Included third-party libraries:
    Third-party libraries are often used to save development time and add functionality. However, they can also introduce vulnerabilities if not properly vetted or if they contain security flaws. These libraries might be included in the final software release without a thorough security review.

E. Vendors/supply chain:
    Vulnerabilities can be introduced through the supply chain, such as through compromised software components or development tools provided by third-party vendors. This vector highlights the risks associated with dependencies on external suppliers and the potential for supply chain attacks.

Less likely vectors:

A. Certificate mismatch:
    While certificate mismatches can lead to security issues, they are not a common vector for including vulnerable code in software releases. They are more related to secure communications and authentication issues.

B. Use of penetration-testing utilities:
    Penetration-testing utilities are typically used to identify vulnerabilities, not introduce them. While misuse of these tools could potentially cause issues, they are not a primary vector for vulnerable code inclusion.

C. Weak passwords:
    Weak passwords are a security concern but are more related to unauthorized access rather than the inclusion of vulnerable code in software releases.

F. Outdated anti-malware software:
    While outdated anti-malware software can lead to inadequate protection against malware, it does not directly lead to the inclusion of vulnerable code in software releases.

Conclusion:

The use of third-party libraries and supply chain dependencies are the most likely vectors for the unauthorized or unintentional inclusion of vulnerable code in a software company’s final software releases.
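One common control against both vectors is to pin and verify cryptographic hashes of third-party artifacts at review time, so a tampered or substituted dependency fails the build. A minimal sketch of that check, with a hypothetical artifact name and payload:

```python
# Supply-chain control sketch: verify a third-party artifact against a
# SHA-256 hash pinned when the dependency was reviewed. The artifact name
# and contents here are hypothetical placeholders.
import hashlib

trusted = b"trusted contents"  # stand-in for the reviewed artifact bytes
PINNED_HASHES = {
    "somelib-1.2.3.tar.gz": hashlib.sha256(trusted).hexdigest(),
}

def verify(name, data):
    """Compare an artifact's SHA-256 against the hash pinned at review time."""
    return PINNED_HASHES.get(name) == hashlib.sha256(data).hexdigest()

print(verify("somelib-1.2.3.tar.gz", trusted))               # True
print(verify("somelib-1.2.3.tar.gz", b"tampered contents"))  # False
```

Package managers offer the same idea natively (for example, hash-pinned lockfiles), which is generally preferable to rolling your own verification.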

47
Q

A hosting provider needs to prove that its security controls have been in place over the last six months and have sufficiently protected customer data. Which of the following would provide the best proof that the hosting provider has met the requirements?

A. NIST CSF
B. SOC 2 Type 2 report
C. CIS Top 20 compliance reports
D. Vulnerability report

A

B. SOC 2 Type 2 report

Here’s why:

SOC 2 Type 2 report is specifically designed to provide assurance about the effectiveness of controls relevant to security, availability, processing integrity, confidentiality, and privacy. It evaluates the service provider's systems and the suitability of the design and operating effectiveness of these controls over a period of time, typically six months or more.

The report is conducted by an independent third-party auditor who assesses and tests the controls based on the Trust Services Criteria (TSC) established by the American Institute of CPAs (AICPA).

SOC 2 Type 2 reports are highly regarded in the industry and provide detailed findings and assurances that the hosting provider's security controls have been implemented and operated effectively to protect customer data.

NIST CSF (option A) provides a framework for organizations to manage and improve their cybersecurity posture but does not provide the same level of independent verification and assurance as a SOC 2 Type 2 report.

CIS Top 20 compliance reports (option C) are based on recommended security controls to mitigate common cyber threats but do not provide comprehensive assessment and assurance of security controls over a specific period like a SOC 2 Type 2 report.

Vulnerability report (option D) may provide insights into specific vulnerabilities identified within the environment but does not offer comprehensive assurance about the overall effectiveness of security controls and protection of customer data over time.

Therefore, B. SOC 2 Type 2 report would provide the best proof that the hosting provider has maintained security controls and sufficiently protected customer data over the last six months.

48
Q

Which of the following are common VoIP-associated vulnerabilities? (Choose two).

A. SPIM
B. Vishing
C. VLAN hopping
D. Phishing
E. DHCP snooping
F. Tailgating

A

(Brain dump: A. SPIM, B. Vishing )
(Community AB 55%, BC 34%)
SPIM (Spam over Internet Messaging) poses a threat to VoIP systems by consuming bandwidth, diverting resources, and potentially enabling denial-of-service conditions. An influx of SPIM messages can degrade the quality of VoIP calls, overload servers, and serve as a platform for social engineering attacks, jeopardizing the security of VoIP users. To mitigate these risks, organizations should implement spam filters, intrusion detection systems, and regular software updates, while also educating users to recognize and avoid threats associated with SPIM.

Chat GPT
A. SPIM and C. VLAN hopping.

Here’s why:

SPIM (Spam over Internet Messaging) (option A) is the unsolicited sending of messages over VoIP systems, similar to email spam but affecting VoIP communications.

VLAN hopping (option C) is a technique where an attacker gains unauthorized access to network traffic between separate VLANs (Virtual Local Area Networks). It can potentially compromise VoIP traffic if VLANs are improperly configured, allowing attackers to intercept or manipulate VoIP data.

The other options do not typically relate directly to VoIP-associated vulnerabilities:

Vishing (option B) is a social engineering technique that involves phone calls to deceive individuals into providing sensitive information, but it is not specific to VoIP.

Phishing (option D) is another form of social engineering where attackers attempt to trick individuals into divulging sensitive information, often through fraudulent emails or websites, rather than VoIP systems.

DHCP snooping (option E) is a security feature that mitigates DHCP-related attacks by monitoring DHCP messages, which are not directly related to VoIP vulnerabilities.

Tailgating (option F) refers to physical security breaches where unauthorized individuals gain access to restricted areas by following authorized personnel, which does not directly involve VoIP vulnerabilities.

Therefore, A. SPIM and C. VLAN hopping are the common VoIP-associated vulnerabilities among the options provided.

49
Q

A security administrator received an alert for a user account with the following log activity:

5/23/22 10:07.16.301 AM 197.42.32.110 Shell_Client US

5/23/22 10:07.16.301 AM 197.42.32.110 Shell_Client US

5/23/22 10:07.16.301 AM 197.42.32.110 Shell_Client US

5/23/22 11:15.16.301 AM 210.167.76.13 Sign In Shell_Client AT

Which of the following best describes the trigger for the alert the administrator received?

A. Number of failed log-in attempts
B. Geolocation
C. Impossible travel time
D. Time-based log-in attempt

A

C. Impossible travel time
Explanation:

The log activity shows that a user account had log activity from two different IP addresses with a very short time difference:

At 10:07 AM, the user account logged in from IP address 197.42.32.110 (located in the US) multiple times.
Then, at 11:15 AM, there is a log-in attempt from IP address 210.167.76.13 (located in AT, likely Austria).

Given the short time interval between these log-in activities (less than an hour), it suggests that it would be impossible for the user to physically travel between the US and Austria within that timeframe. This discrepancy triggers the alert for impossible travel time, indicating a potential security concern such as account compromise or unauthorized access.

A. Number of failed log-in attempts is not correct because there is no indication of failed log-in attempts in the provided log.

B. Geolocation is related to the locations (US and AT) but doesn’t directly address the issue of the short time interval between log-ins from geographically distant locations.

D. Time-based log-in attempt is too general and doesn’t specifically address the issue of log-ins from different locations within an implausibly short timeframe.

Therefore, C. Impossible travel time is the best description of the trigger for the alert based on the provided log activity.

50
Q

Which of the following is a reason why a forensic specialist would create a plan to preserve data after an incident and prioritize the sequence for performing forensic analysis?

A. Order of volatility
B. Preservation of event logs
C. Chain of custody
D. Compliance with legal hold

A

(Community A 100%)
Chat GPT
A. Order of volatility

Here’s why:

Order of volatility refers to the principle in digital forensics that states volatile data should be collected and preserved first because it is more likely to change or be lost if not captured immediately. Volatile data includes information stored in memory (RAM), network connections, and system processes.

By creating a plan to preserve data based on the order of volatility, the forensic specialist ensures that critical evidence is captured before it is overwritten or lost. This helps maintain the integrity and completeness of the forensic investigation.

Let’s briefly cover the other options for clarity:

Preservation of event logs (option B) is important for maintaining a record of events that occurred on systems and networks, but it does not address the sequence or prioritization of data preservation and analysis in the same way as the order of volatility.

Chain of custody (option C) refers to the documentation and processes used to establish the chronological history of who has had control of digital evidence from the time of collection to its presentation in court. It is essential for maintaining the integrity and admissibility of evidence but is not directly related to the sequence for performing forensic analysis.

Compliance with legal hold (option D) refers to the requirement to preserve evidence related to litigation or investigations, ensuring that data is not altered or destroyed. While important for legal compliance, it does not dictate the sequence for performing forensic analysis based on the volatility of data.

Therefore, A. Order of volatility is the reason why a forensic specialist would create a plan to preserve data after an incident and prioritize the sequence for performing forensic analysis.
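The prioritization described above can be sketched as a simple sort of evidence sources by a volatility ranking, most volatile first. The ranking below follows the commonly cited guidance (e.g., RFC 3227); the source labels are illustrative:

```python
# Order-of-volatility sketch: plan collection so the most volatile
# evidence sources are captured first.

VOLATILITY_RANK = {
    "cpu registers and cache": 0,
    "memory (RAM)": 1,
    "network connections": 2,
    "running processes": 3,
    "disk": 4,
    "backups and archival media": 5,
}

def collection_order(sources):
    """Return evidence sources sorted most-volatile first."""
    return sorted(sources, key=lambda s: VOLATILITY_RANK[s])

plan = collection_order(["disk", "memory (RAM)", "network connections"])
print(plan)  # RAM first, disk last
```

The point is not the specific labels but the principle: a preservation plan sequences collection so that data most likely to be lost is captured before more durable sources.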

(Brain dump: B. Preservation of event logs)