Security Measure Up Final Flashcards

Pass the First Time

1
Q

What are valid examples of multifactor authentication (MFA) requirements? Choose two.
A. Access token and smart card
B. Retina scan and password
C. Smart card and PIN
D. Retina Scan and voice analysis
E. Password and PIN

A

B. Retina scan and password
C. Smart card and PIN
A smart card and PIN require something you have and something you know. A retina scan and password are something you are and something you know.
A password and PIN do not qualify because both are something you know.
A retina scan and voice analysis do not qualify because both are something you are.
An access token and smart card do not qualify because both are something you have.
MFA is sometimes defined as including both authentication factors and attributes. Authentication attributes include:
Somewhere you are
Something you can do
Something you exhibit
Someone you know
Depending on how you define authentication attributes, they can overlap with authentication factors.
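The factor-category logic above can be sketched in a few lines of Python. This is an illustrative model only; the factor table and `is_mfa` helper are invented for the example and are not part of any real authentication system:

```python
# Hypothetical model: MFA requires items from at least two distinct
# factor categories (know / have / are).
FACTOR = {
    "password": "something you know",
    "pin": "something you know",
    "smart card": "something you have",
    "access token": "something you have",
    "retina scan": "something you are",
    "voice analysis": "something you are",
}

def is_mfa(*items):
    """True only if the items span at least two distinct factor categories."""
    return len({FACTOR[item] for item in items}) >= 2

print(is_mfa("smart card", "pin"))        # have + know -> True
print(is_mfa("password", "pin"))          # know + know -> False
print(is_mfa("access token", "smart card"))  # have + have -> False
```

Two of anything from the same row of the table never qualifies, which is exactly why choices A and E fail.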

2
Q

Malware has infected a server in a company. The security analyst makes a digital copy of the hard drive to analyze and places the original drive in a secure cabinet. Which aspect of incident response does this illustrate?
A. Loss control
B. Chain of custody
C. Incident isolation
D. Damage control

A

B. Chain of custody
Chain of custody refers to the process of ensuring that there is documentation describing the seizure, custody, control, and analysis of evidence. By removing the drive, making a digital copy for analysis, and storing the original in a secure cabinet, you are helping establish the chain of custody. Documenting each step is also an important part of maintaining the chain of custody.
This scenario does not illustrate damage or loss control. Damage and loss control refer to taking steps to limit the amount of damage or loss that is caused by an incident.
This does not illustrate incident isolation. Incident isolation is the process of limiting the exposure of other computers, services, or segments. One example of incident isolation is to quarantine an affected computer by removing it from the network.
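The evidence-handling steps above can be illustrated with a short Python sketch. This is a simplified model, not a forensic tool; the `custody_entry` helper and file paths are hypothetical, and real workflows rely on write blockers and validated imaging software:

```python
# Sketch: hash an evidence image and record one documented custody step.
import hashlib
from datetime import datetime, timezone

def sha256_of(path, chunk_size=1 << 20):
    """Hash the evidence file in chunks so large images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def custody_entry(path, handler, action):
    """One documented step in the chain of custody (hypothetical record format)."""
    return {
        "artifact": path,
        "sha256": sha256_of(path),
        "handler": handler,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Hashing the image at each transfer lets later handlers prove the copy they analyzed is identical to the one originally seized.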

3
Q

To protect sensitive PHI, an organization plans to substitute random characters for original data, while maintaining the data’s format. Which of the following technologies or methods should they use?
A. Tokenization
B. Encryption
C. Masking
D. Hashing

A

The organization should use tokenization. Tokenization is designed to protect Protected Health Information (PHI) and other sensitive information by replacing the original data with data in the same format. Most tokenization methods use random character replacement and store the original-to-tokenized data mapping in an encrypted database or file. If the tokenized data is compromised, it is of little use to an attacker.
Masking permanently replaces the original data. The new data may be in the same format as the original data, but this is not a requirement. For example, a Social Security number may be masked with symbols: ***.
Encryption uses a reversible algorithm, unlike tokenization, which is meant to be random. Encrypted output would not retain the same data format.
Hashing algorithms produce fixed-length, irreversible output. Hashes are often used to verify data integrity.
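As a rough illustration of how tokenization preserves format, here is a minimal Python sketch. The `tokenize`/`detokenize` helpers and in-memory vault are invented for the example; a production system keeps the token vault in an encrypted, access-controlled store:

```python
# Illustrative format-preserving tokenization: each letter or digit is
# replaced by a random one of the same kind, so the shape survives.
import random
import string

_vault = {}  # token -> original value (would be encrypted in practice)

def tokenize(value):
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(random.choice(string.digits))
        elif ch.isalpha():
            out.append(random.choice(string.ascii_letters))
        else:
            out.append(ch)  # keep separators so the format survives
    token = "".join(out)
    _vault[token] = value
    return token

def detokenize(token):
    """Only the vault holder can recover the original value."""
    return _vault[token]

ssn = "123-45-6789"
tok = tokenize(ssn)  # still looks like NNN-NN-NNNN, reveals nothing
assert detokenize(tok) == ssn
```

An attacker who steals only the tokens gains nothing, since the mapping back to the original data lives in the vault.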

4
Q

An attacker posing as a janitor is able to access a storage area where sensitive printed documents are kept. Which method should the organization use to implement a preventive physical control?
A. Install a locked fence that limits access to the storage area.
B. Install surveillance cameras throughout the storage area.
C. Define a policy that forbids unauthorized access to the storage area.
D. Install alarms on all doors leading to the storage area.

A

A. Install a locked fence that limits access to the storage area.
The organization should install a locked fence that limits access to the storage area. Security controls fall into three families or categories: managerial, operational, or technical. A control’s function defines what the control does and includes detective, corrective, and preventive functions, among others. A physical preventive control is a physical component, such as a lock, a wall, or a fence, that prevents access to a secure location.
The organization should not install surveillance cameras throughout the storage area. Cameras are physical detective controls.
The organization should not define a policy that forbids unauthorized access to the storage area. Such a policy is an administrative preventive control.
The organization should not install alarms on all doors leading to the storage area. Alarms are detective physical controls.

5
Q

Following a breach, an organization implements awareness training to help users identify the risks associated with removable media. Which of the following attacks will this training help mitigate?
A. Baiting
B. Steganography
C. Injection
D. Side loading

A

A. Baiting
This training will help mitigate baiting attacks. In a baiting attack, an attacker leaves a malware-infected removable storage device in a conspicuous location. The premise of a baiting attack is that someone will find the device and be curious enough to attach it to their computer. Baiting can also use digital files, such as free music files, to accomplish the same goal. The file or storage device serves as the bait. Users should be trained to be cautious when using removable media, especially media found from an unknown source.
This training will not help to mitigate side loading. Side loading occurs when an app is installed on a mobile device from an unofficial source. For example, on an Android device, an app may be installed from a malicious website instead of the official Google Play Store. It is possible that side loading utilizes removable media, but side loading is not a risk created by using such media.
This training will not help mitigate injection attacks. Injection attacks involve injecting potentially malicious code into a query or program. For example, a Structured Query Language (SQL) injection attack attempts to inject malicious commands in an SQL statement before it is sent to a database service. Injection attacks do not involve removable media.
This training will not help to mitigate steganography attacks. Steganography can be used to hide data in an image by manipulating the pixels in such a way that the message itself is not readily detectable. However, steganography is not limited to image files alone and can be used on any type of digital file, including video and audio files. The primary risk generated by this threat vector is that the base file is considered to be safe for sharing and may bypass security controls that attempt to detect data exfiltration. Steganography is not a risk specifically associated with removable media.

6
Q

An organization wants to minimize the risk of vulnerabilities created by accidental misconfigurations on servers and other networking nodes. Which of the following technologies should the organization use to automate configuration of newly deployed devices?
A. Unified threat management (UTM)
B. Infrastructure as code (IaC)
C. Secure Access Service Edge (SASE)
D. Supervisory Control and Data Acquisition (SCADA)

A

The organization should use Infrastructure as code (IaC) to automate configuration of newly deployed devices. IaC is used to store the configuration of devices such as servers or routers in a centralized database. These configuration templates can be customized with variables for node-specific details such as node names and IP addresses. When a new node, such as a server, is deployed, the template ensures that the server is configured properly. This helps avoid configuration mistakes or oversights that can lead to vulnerabilities on devices. For example, IaC could be used to ensure that every Windows server is deployed with the host firewall enabled and configured for common network services.
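The template idea can be sketched in Python. The baseline settings and `render_config` helper below are hypothetical; real IaC tools such as Terraform or Ansible use their own declarative formats, but the principle is the same:

```python
# Hypothetical IaC sketch: a shared hardened baseline plus per-node
# variables yields a complete, consistent configuration every time.
BASELINE = {
    "host_firewall": "enabled",
    "allowed_services": ["https", "rdp"],
    "ntp_server": "time.example.com",  # example value, not a real host
}

def render_config(node_name, ip_address):
    """Merge node-specific details into the shared baseline template."""
    config = dict(BASELINE)
    config["node_name"] = node_name
    config["ip_address"] = ip_address
    return config

# Every deployment inherits the same baseline, avoiding one-off mistakes.
web01 = render_config("web01", "10.0.0.11")
web02 = render_config("web02", "10.0.0.12")
assert web01["host_firewall"] == web02["host_firewall"] == "enabled"
```

Because only the variables differ between nodes, a misconfiguration can no longer slip in through a manual build step.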
The organization should not use Supervisory Control and Data Acquisition (SCADA). SCADA provides monitoring and controls for industrial systems such as manufacturing equipment or energy distribution systems.
The organization should not use Secure Access Service Edge (SASE). SASE is used to provide secure, distributed network services via the cloud. SASE can include firewall, Cloud Access Security Broker (CASB), zero trust, and other components.
The organization should not use unified threat management (UTM). UTM is designed to combine multiple security functions, such as intrusion prevention and antimalware in a single device. UTM functionality is most commonly associated with Next-Generation Firewalls (NGFWs).

7
Q

A company is required to complete a SOC 2 Type 2 audit as part of external compliance reporting. How does this differ from a SOC 2 Type 1 audit?
A. A Type 2 audit does not inspect physical controls
B. A Type 1 audit is considered a point-in-time audit
C. A Type 2 audit covers a particular time frame
D. A Type 2 audit is focused on financial records

A

A Type 2 audit covers a particular time frame. The American Institute of Certified Public Accountants (AICPA) developed the Service Organization Control 2 (SOC 2) framework to evaluate an organization’s information security program. There are two primary types of SOC 2 audits - Type 1 and Type 2 - and each has a unique set of requirements. Among other differences, a Type 2 audit covers a time frame, usually 12 months. The purpose of a Type 2 audit is to determine whether an organization implements and maintains secure operations consistently over time.
A Type 2 audit is not considered a point-in-time audit. This describes a Type 1 audit.
A Type 2 audit does not differ from Type 1 by focusing on financial records. Neither SOC 2 audit type focuses on financial records; controls over financial reporting are the subject of SOC 1 audits.
A Type 2 audit may inspect physical controls. This is true of both Type 1 and Type 2 audits. Neither audit type excludes physical controls such as locked doors or badge readers.

8
Q

A company’s recovery plan states that it will take, on average, three hours to restore services to an operational level after a catastrophic failure. What is this value known as?
A. MTBF
B. RTO
C. RPO
D. MTTR

A

The average time needed to restore data is known as the mean time to restore or mean time to recovery (MTTR or MTR). When disaster recovery services are delivered by an outside provider, the MTR is often specified in the service contract. This does not guarantee recovery within three hours in every situation; it is just the average value. Acronyms in a service contract should be clearly defined. MTTR can also stand for mean time to repair. However, this repair time would not necessarily include the time needed to restore data.
The situation does not indicate the recovery time objective (RTO). RTO is the specification of the maximum time it should take to get back to operational status. There are ways to reduce the RTO, such as having hot sites with equipment ready and data loaded. However, the shorter the RTO, the more expensive the support is.
The situation does not indicate the recovery point objective (RPO). The RPO refers to the maximum acceptable amount of data loss after recovery. For example, if your organization can accept losing the last hour before the failure, you have an RPO of one hour. Reducing RPO requires more frequent backups and often the use of redundant data storage. The shorter the RPO, the more expensive it is to support.
The situation does not indicate the mean time between failures (MTBF). The MTBF specifies how much time should pass, on average, between failures. You would use this in your disaster planning to determine frequency of occurrence.
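The arithmetic behind these metrics is straightforward; the sample values below are invented purely for illustration:

```python
# MTTR and MTBF are measured averages; RTO and RPO are agreed targets.
restore_hours = [2.5, 3.0, 3.5, 3.0]   # time to restore after each failure
uptime_hours = [700, 650, 750]         # operating time between failures

mttr = sum(restore_hours) / len(restore_hours)  # mean time to recovery
mtbf = sum(uptime_hours) / len(uptime_hours)    # mean time between failures

print(f"MTTR: {mttr} hours")  # 3.0 - matches the three-hour average in the question
print(f"MTBF: {mtbf} hours")  # 700.0

# By contrast, RTO = 4 would mean operations MUST be restored within
# 4 hours, and RPO = 1 would mean at most 1 hour of data may be lost.
```

The distinction to remember for the exam: MTTR/MTBF describe what actually happens on average, while RTO/RPO state what the business can tolerate.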

9
Q

Data custodian

A

Entity responsible for technical control of data including availability, security, scalability, technical standards, and backup and restore.

10
Q

Data Owner

A

Entity who collects or creates the data and is legally responsible and accountable for the data and its protection.

11
Q

Data controller

A

Entity responsible for protecting the rights and privacy of the data’s subject and controlling the procedures and purpose of data use.

12
Q

Data processor

A

Entity that works with the data under the direction of a responsible party but does not control the data or its use.

13
Q

An organization enters a contract with a third-party. Which of the following should occur NEXT as part of third-party risk management (TPRM)?
A. Continuous monitoring
B. Risk assessment
C. Due diligence
D. Data inventory and retention

A

A. Continuous monitoring
Continuous monitoring should occur next as part of third-party risk management (TPRM). TPRM involves applying the same processes and procedures that an organization would use internally as part of a sound information security program to the vendors that the organization does business with. After the contract is signed, continuous monitoring helps the organization detect changes in the vendor’s security posture that could impact risk.
Due diligence should not occur next. In the context provided by this question, due diligence involves researching, investigating, analyzing, and verifying that a third-party vendor meets an organization’s cybersecurity standards. This process should be completed before entering a contract with the vendor.
Risk assessment should not occur next. A risk assessment is used to identify and evaluate risks in an organization. In TRPM, this should be completed prior to entering a contract with a vendor.
Data inventory and retention should not occur next. This task usually occurs as part of privacy management and is often required for compliance with regulations such as General Data Protection Regulation (GDPR).

14
Q

What is the primary purpose of attestation?
A. To identify vulnerabilities or other weaknesses
B. To identify, document, and quantify risks
C. To simulate a cyber-attack against an organization
D. To demonstrate compliance with regulations

A

The primary purpose of attestation is to demonstrate compliance with regulations. Attestation involves an auditor or assessor attesting that an organization meets certain cybersecurity guidelines. The resulting report can be used to show that an organization’s information security practices have been independently reviewed and found in compliance with a standard or regulation. The American Institute of Certified Public Accountants (AICPA) developed the widely known Service Organization Control 2 (SOC 2) framework to be used to evaluate an organization’s information security program.
The purpose of attestation is not to identify vulnerabilities or other weaknesses. This describes vulnerability scanning or threat hunting. Threat hunting involves proactively searching for indicators of compromise (IoC) or indicators of attack (IoA) in a network or system.
The purpose of attestation is not to identify, document, and quantify risks. This describes risk management. Once a risk is identified, it can be tracked using a risk register.
The purpose of attestation is not to simulate a cyber-attack against an organization. This describes penetration testing, which is an important part of cybersecurity program. Unlike vulnerability scanning, which primarily looks for available services on a target system, penetration testing attempts to mimic a real attack by exploiting vulnerabilities.

15
Q

An organization plans to deploy a centrally managed wireless network that will require a PKI. The organization needs to ensure that user onboarding is as seamless and error-free as possible. What should the organization do first?
A. Obtain a certificate from a public CA
B. Generate a CSR
C. Install and configure a CA
D. Obtain a self-signed certificate

A

The organization should generate a Certificate Signing Request (CSR) first. CSRs are generated by applications, users, or services and are submitted to a publicly trusted Certificate Authority (CA) for validation. The CSR identifies the certificate owner and is used by the CA to generate an X.509 certificate. Certificates generated by public CAs are typically inherently trusted by client systems and browsers. Any certificates issued by one of these authorities or their subsidiaries are automatically trusted.
Once a CSR is generated, it can be submitted to a public CA for validation. The CA will then issue a Secure Sockets Layer (SSL) certificate to the client. Notably, although SSL is still commonly used when describing secure internet communications, it has been replaced by the more secure Transport Layer Security (TLS). The terms are often used interchangeably.
A self-signed certificate is generated by the certificate holder or a related entity, and its authenticity and validity is not independently verified. Since they are not inherently trusted, most client systems and browsers will display an alert to a user indicating that the certificate is not from a trusted source.
An X.509 certificate can be generated by private or public CAs. A public CA is a trusted entity that uses its own methods for validating a certificate requestor prior to issuing the certificate. This allows an entity to present a certificate from a trusted third party as a form of authentication. If the organization installs and configures their own CA, the certificates issued by this CA will not be inherently trusted.

16
Q

An organization determines that its production code is susceptible to attack. What should the organization implement to mitigate the risk of compromised code integrity?
A. Obfuscation
B. Elasticity
C. Version control
D. Normalization

A

The organization should implement version control. Version control systems store master code files in repositories, or repos, which may be local or remote. In a centralized version control system (CVCS), a developer checks out a code file, which retrieves a working copy from the central code repo and locks the master copy. If code is accidentally or maliciously changed, a saved version of the code can be easily recovered.
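The recovery property described above can be shown with a toy Python sketch. `TinyRepo` is invented for this example and is far simpler than a real system such as Git, but it shows how keeping hashed versions lets tampering be detected and reverted:

```python
# Toy version store: every commit is kept with its hash, so a tampered
# working copy can be detected and restored from the last good version.
import hashlib

class TinyRepo:
    def __init__(self):
        self.history = []  # list of (sha256, content) commits

    def commit(self, content):
        digest = hashlib.sha256(content.encode()).hexdigest()
        self.history.append((digest, content))
        return digest

    def is_tampered(self, working_copy):
        """Compare the working copy's hash against the last commit."""
        digest = hashlib.sha256(working_copy.encode()).hexdigest()
        return digest != self.history[-1][0]

    def restore(self):
        """Recover the last committed version."""
        return self.history[-1][1]

repo = TinyRepo()
repo.commit("print('hello')")
bad_copy = "print('hello'); steal_credentials()"  # malicious change
assert repo.is_tampered(bad_copy)
assert repo.restore() == "print('hello')"
```

The hash comparison is what makes the integrity check cheap: the repo never needs to diff file contents to know something changed.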
In the context of secure coding, normalization ensures that all data input is in a known and expected format. This can protect an application from buffer overflow and other similar attacks.
Code obfuscation is meant to make code harder to reverse engineer. This makes it more difficult for an attacker to find weaknesses in an application logic or processes.
An elastic application can scale up or down based on workload. This feature has become popular with the advent of cloud-based application hosting, which supports easy scaling of compute resources.

17
Q

Which of the following statements correctly describes data sanitization?
A. Storage devices holding data must be physically destroyed
B. All unnecessary permissions assigned to the data must be removed
C. Data must be permanently deleted from storage devices
D. Data located on storage media or devices must be obfuscated

A

C. Data must be permanently deleted from storage devices
In data sanitization, data must be permanently deleted from storage devices. Data sanitization uses several methods to ensure that the data on a device is destroyed and cannot be recovered. One method involves physically destroying the device. However, data sanitization can also be completed without destroying the device. For example, the data on the device could be irreversibly encrypted. Another method of sanitization involves using software to overwrite the data until it cannot be recovered even with advanced forensics tools. These methods are sometimes referred to as logical sanitization.
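A single-pass overwrite of one file can be sketched in Python as below. This is illustrative only: standards such as NIST SP 800-88 define the full sanitization requirements, and solid-state drives generally need device-level sanitize commands rather than file overwrites:

```python
# Sketch of logical sanitization: overwrite a file's bytes with random
# data before removing it, so the content is not trivially recoverable.
import os

def overwrite_and_delete(path, passes=3):
    """Overwrite a file with random bytes several times, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)
```

Note the contrast with an ordinary delete, which typically only removes the directory entry and leaves the data on disk.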
In data sanitization, storage devices holding data do not need to be physically destroyed. There are many methods used to physically destroy storage devices, including shredding. However, while sanitized devices may be destroyed, this is not a requirement.
In data sanitization, data located on storage media or devices does not need to be obfuscated. Hackers and security professionals sometimes use obfuscation to make malicious scripts or other code difficult for people to read or understand. For example, a hacker might rename variables or create unnecessary code structures to make reverse engineering a malicious script difficult.
In data sanitization, unnecessary permissions are not removed. Permissions audits should be performed regularly to avoid permissions creep, which occurs as users change roles within an organization while retaining permissions they no longer need.

18
Q

Which statement describes a social engineering attack?
A. An attacker scans users’ personal social media accounts for useful information.
B. An attacker defaces a company’s website in support of an environmental cause.
C. An attacker enters false DNS entries to try and hijack users’ social media accounts.
D. An attacker impersonates a utility worker and gains access to a secure data center.

A

An attacker impersonating a utility worker to gain access to a secure data center is an example of a social engineering attack. With impersonation, an attacker pretends to be an employee, vendor, or other trusted entity in order to trick users into providing access to data, a secure location, or other resource. This type of attack is considered social engineering because it relies on trust and other social mechanisms to deceive or defraud a target victim.
An attacker scanning users’ personal social media accounts for useful information is not an example of a social engineering attack. This type of activity, known as reconnaissance, typically precedes an attack on an organization. While an attacker could later use this information as part of a social engineering attack, it is not a requirement.
An attacker entering false Domain Name System (DNS) entries to try and hijack users’ social media accounts is not an example of a social engineering attack. This describes DNS poisoning. The purpose of this attack is to redirect legitimate user requests to malicious sites that are often clones of valid sites. For example, an attacker may use this method to direct requests to a fake banking site where users will enter their logon credentials.
An attacker defacing a company’s website in support of an environmental cause is not an example of a social engineering attack. This activity describes hacktivism, where an attacker is trying to promote a cause or political agenda.

19
Q

A company’s systems engineer is devising an incident management plan.
What should be the primary goal of the incident management plan for a DoS attack on the company’s ecommerce servers?
A. Implement DPI on the firewall.
B. Identify the vulnerabilities that the attacker exploited.
C. Discover the identity of the attacker.
D. Restore normal operations as quickly as possible.

A

The primary goal of incident management is to restore normal operations as quickly as possible. Often this is accomplished by replacing the compromised server or servers with new devices.
Performing research to discover the identity of the attacker could be one of the goals included in the incident response plan, but it is not the primary goal of incident management.
Identifying the vulnerabilities the attacker exploited is an important part of the incident response plan, but this part of the plan will be performed after normal operations are restored.
Although during the course of researching the attack the engineer may discover that deep packet inspection (DPI) is necessary on the firewalls, this is not the primary goal of incident management.

20
Q

The company CSO has ordered that all emails sent or received by senior management personnel be preserved. Managers should not be able to delete emails. If changes are made to an email, both the original and modified versions should be preserved. Managers should still have access to their email accounts.
Security personnel are tasked with ensuring this. What should the security personnel use?
A. Legal hold
B. Principle of least privilege
C. Forensic hashing
D. Chain of custody

A

To carry out the chief security officer’s (CSO’s) request, the security personnel should place managers’ email accounts on legal hold. Legal precedent in the United States and many other countries requires that relevant information be preserved when there is a reasonable anticipation of legal action.
Most email systems support placing accounts on legal hold. The way it is implemented can vary by the specific email system. Users may be prevented from deleting emails, or deleted emails may be placed on hold and remain available. Similarly, users may either be prevented from modifying emails, or both the original and modified versions of any emails are maintained.
The security personnel should not use hashing to protect the emails. Hashing is used to preserve the integrity of data by generating a value based on the data content. It would let personnel know when data has changed but does not protect the original data or provide a way to retrieve the original content. It also does not prevent deletion.
The security personnel would not use chain of custody to protect the emails. Chain of custody is used to document any activity relating to seized artifacts, and records the sequence of custody, control, transfer, analysis, and disposition of any artifact that might be used as evidence.
The security personnel should not apply the principle of least privilege as a way to protect the emails. Protections put in place via rights assignments would limit the managers’ access to their email accounts, which conflicts with the requirement that managers retain access.

21
Q

What should an organization do to identify open service ports on its core servers? Server impact must be minimized.
A. Perform threat hunting.
B. Perform a penetration test.
C. Perform protocol analysis.
D. Perform a vulnerability scan.

A

The organization should perform a vulnerability scan. A vulnerability scanner is used to identify open service ports, potential misconfigurations, and vulnerabilities on a target system. Most vulnerability scanners can be configured to perform non-intrusive scans and can send simple requests to each potential listening port. Each open port represents a network endpoint backed by a service or application that is listening for client requests. Identifying open ports can help the organization determine which services or applications should be enabled or disabled, as attackers can use each port to attempt to access the server by exploiting vulnerabilities or misconfigurations.
The organization should not perform protocol analysis. Protocol analysis involves capturing and analyzing network packets using a protocol analyzer. In this question, protocol analysis could possibly be used to identify open ports. However, if the port in question is not sending or receiving traffic, this approach will not meet the stated requirement.
The organization should not perform a penetration test. Penetration testing is used to emulate a hacker attacking an application, system, or network. This includes using the same tools, tactics, and procedures (TTPs) an attacker would use, which could potentially cause performance or stability issues on the target servers.
The organization should not perform threat hunting. Threat hunting involves proactively searching for indicators of compromise (IoC) or indicators of attack (IoA) in a network or system. This process may or may not involve identifying open service ports on target systems.
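A non-intrusive check for open ports can be sketched with a simple TCP connect test in Python. The `open_ports` helper and example host are hypothetical, and such scans should only be run against systems you are authorized to test:

```python
# Minimal TCP connect scan sketch: a port that accepts a connection has
# a service listening behind it.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
    return found

# Example (hypothetical internal host):
# open_ports("10.0.0.5", [22, 80, 443])
```

Dedicated vulnerability scanners add service fingerprinting and vulnerability checks on top of this basic connect test.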

22
Q

A company is deploying a PKI. They want to use a hardware device separate from their Windows servers to manage and maintain cryptographic keys.
What should the company use?
A. TACACS+
B. HSM
C. TPM
D. DLP

A

You should use a hardware security module (HSM). An HSM is a hardware device that functions as a cryptographic service provider (CSP). A CSP can help improve key generation and management by providing secure key generation and secure onboard storage, whether or not the key was initially generated by the CSP. When using an HSM, secure key backup is typically designed into the device. When setting up a certificate authority (CA) that uses an HSM to store certificates, you must install and configure the HSM before the CA.
You would not use DLP. DLP refers to data loss prevention, an umbrella term that refers to protecting data.
You would not use a trusted platform module (TPM). A TPM is a hardware component that provides cryptographic functions. It works with the computer’s BIOS and encryption software to provide high-level encryption support. This does not meet the solution requirements because it does not provide the key management needed as part of the solution.
You should not use Terminal Access Controller Access-Control System Plus (TACACS+). TACACS+ is an authentication, authorization, and accounting (AAA) protocol and does not provide key management.

23
Q

Which of the following tasks is MOST likely performed by a third-party as part of compliance monitoring for an organization?
A. Data inventory
B. Continuous monitoring
C. Cyber attestation
D. Due care

A

Cyber attestation is most likely performed by a third-party as part of compliance monitoring for an organization. Attestation involves an auditor or assessor attesting that an organization meets certain cybersecurity guidelines. In the context of compliance monitoring, attestation is used to show that an organization’s information security practices have been independently reviewed and found in compliance with a standard or regulation. For example, the American Institute of Certified Public Accountants (AICPA) developed the widely known Service Organization Control 2 (SOC 2) framework to be used to evaluate an organization’s information security program.
Due care is not performed by a third party as part of compliance monitoring. Due care involves taking the reasonable steps a prudent organization would take to keep its operations secure. In a cybersecurity context, due care focuses on internal actions, such as risk identification and mitigation.
Continuous monitoring is not performed by a third-party as part of compliance monitoring. Continuous monitoring is typically part of third-party risk management (TPRM) and is designed to ensure vendors and contractors maintain secure operations over time.
Data inventory is not performed by a third-party as part of compliance monitoring. Data inventory is typically done as part of data classification or data privacy processes. For example, data inventory might be performed as part of compliance with regulations like General Data Protection Regulation (GDPR).

24
Q

A company’s website suffers a cross-site scripting (XSS) attack. To mitigate this risk, a security administrator has been asked to deploy a system that is designed to detect and block these types of attacks. Which of the following should the administrator consider?
A. Web Application Firewall (WAF)
B. Server-based Intrusion Detection System (IDS)
C. Secure access service edge (SASE)
D. Layer 4 firewall

A

The security administrator should consider a Web Application Firewall (WAF). A cross-site scripting (XSS) attack attempts to exploit web elements and is considered a layer 7, or application layer, attack. Layer 7 is the top layer of the Open Systems Interconnection (OSI) model. Mitigating this type of attack requires a device that can scan layer 7 requests, such as those that would be sent to a web server or other web-based service. A WAF fills this role and can defend against other layer 7 attacks, such as cross-site request forgery (CSRF) and Structured Query Language (SQL) injection.
The administrator should not consider a layer 4 firewall. Layer 4 firewalls can identify and filter traffic based on source and destination IP addresses (layer 3) and port numbers (layer 4). This scenario requires a layer 7 firewall.
The administrator should not consider Secure access service edge (SASE). SASE is used to provide secure, distributed network services via the cloud. SASE can include firewall, Cloud Access Security Broker (CASB), zero trust, and other components.
The administrator should not consider a server-based Intrusion Detection System (IDS). Typically, an IDS sends an alert or generates a log entry when malicious behavior is detected. However, an IDS does not block malicious attacks.
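As an illustration, the kind of layer 7 pattern matching a WAF performs can be sketched in a few lines. This is a deliberately simplified example (the patterns and function name are hypothetical); production WAFs use extensive, vendor-maintained rule sets.

```python
import re

# Hypothetical, simplified illustration of layer 7 inspection;
# a real WAF uses far more sophisticated parsing and rule sets.
XSS_PATTERNS = [
    re.compile(r"<\s*script", re.IGNORECASE),
    re.compile(r"on\w+\s*=", re.IGNORECASE),   # inline event handlers
    re.compile(r"javascript\s*:", re.IGNORECASE),
]

def inspect_request(body: str) -> bool:
    """Return True if the request body looks like an XSS attempt."""
    return any(p.search(body) for p in XSS_PATTERNS)
```

A request matching any pattern would be blocked before reaching the web application.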

25
Q

Which of the following describes a risk organizations should consider prior to migrating an on-premises application to a serverless architecture?
A. Latency
B. Multitenancy
C. Reliability
D. Scalability

A

Multitenancy is a risk organizations should consider prior to migrating an on-premises application to a serverless architecture. Multitenancy in serverless computing refers to the fact that customers do not receive dedicated hardware for running their applications. This allows cloud vendors to maximize efficiencies and provide competitive pricing for customers. However, this means that errors or vulnerabilities in code created by other customers could impact any processes running in the shared environment. Additionally, an organization’s sensitive data could be accidentally exposed to other customers.
Latency is not a risk organizations should consider prior to migrating an on-premises application to a serverless architecture. As applications running in a serverless architecture run in the cloud, processing can be located closer to users. For example, a company located in Japan can choose to run serverless compute components closer to customers located in Germany, which can reduce network latency for those customers.
Scalability is not a risk organizations should consider prior to migrating an on-premises application to a serverless architecture. Although not entirely accurate, many cloud providers such as Google and Amazon advertise serverless computing as infinitely scalable. This is because these providers deploy massive datacenters globally and provide more compute capability than any single organization is likely to need.
Reliability is not a risk organizations should consider prior to migrating an on-premises application to a serverless architecture. Similar to the scalability benefits, cloud vendors offer high levels of reliability with their serverless compute platforms.

26
Q

What is a PRIMARY security concern when using containerization technologies such as Docker?
A. Infected images in public repositories
B. Container OS updates and patching
C. Managing container device drivers
D. Configuring XDR in each container

A

Infected images in public repositories are a security concern when using containerization technologies.
Containerization is a virtualization technology that bundles all the components needed for an app in a single, portable unit. This includes all the binaries, settings, and libraries the app needs to run. The benefit is highly portable apps that can run on a variety of endpoints or systems. Containers are often created and shared publicly in container repositories, which means they could be preloaded with malware or other malicious components. For this reason, container images should be created from scratch or only obtained from trusted sources.
Container OS updates and patching are not a security concern when using containerization technologies.
Containers do not run a full OS of their own and instead rely on the host OS for core functionality.
Managing container device drivers is not a security concern when using containerization technologies.
Containers do not run device drivers. These are managed by the underlying host OS.
Configuring extended detection and response (XDR) in each container is not a security concern when using containerization technologies. XDR and other antimalware applications run on the host.
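One practical safeguard is verifying an image archive's digest against a value published by a trusted source before running it. A minimal sketch (the image bytes and digest here are placeholders):

```python
import hashlib

def verify_image(data: bytes, expected_sha256: str) -> bool:
    """Compare the image archive's SHA-256 digest to a trusted value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

Container tooling supports this natively; for example, images can be pulled by digest rather than by mutable tag.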

27
Q

Refer to the messages below:
LOG 1
Aug 12 17:36:34.303: Sig: 3051 Subsig:1 Sev: 4
TCP Connection Window Size DoS [1.1.100.11: 19223 -> 172.16.1.10:80]
LOG 2
Aug 12 11:13:44 Inbound TCP connection denied
from 1.1.1.1/21 to 10.10.10.1/51172 flags SYN ACK on interface outside
A consultant is asked to analyze logs from a couple of network devices.
Which devices MOST likely generated these messages?

A. LOG 1 - IPS, LOG 2 - Firewall
B. LOG 1 - DLP, LOG 2 - AP
C. LOG 1 - Firewall, LOG 2 - IPS
D. LOG 1 - Firewall, LOG 2 - DLP
E. LOG 1 - AP, LOG 2 - DLP
F. LOG 1 - DLP, LOG 2 - Firewall

A

The first log has been generated by an Intrusion Prevention System (IPS). You can see the signature number that has triggered (3051/1) - a Denial of Service (DoS) attack. An IPS is designed to analyze network traffic, find anomalies, and drop malicious traffic if required.
The second log has been generated by a firewall. You can see a TCP connection that has been dropped. In this case, it was due to asymmetric routing. A firewall is designed to secure a network by blocking unauthorized access.
Data Loss Prevention system (DLP) is designed to verify network traffic and the way company files are accessed and transferred. You would use DLP to protect sensitive information and detect potential data breach incidents. Here is an example of a log message from a DLP system:
Message = ID: S918, Policy Violated: POLICY CREDIT_CARD_NUM, Count: 11
Access Points (APs) allow wireless connections. Here is a sample log from an AP:
Station e8b1.f11a.14a3 Associated KEY_MGMT[WPAv2 PSK]
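A consultant triaging mixed logs might classify lines with simple pattern matching. The sketch below assumes formats like the samples above; real log formats vary by vendor and version:

```python
import re

# Simplified patterns modeled on the sample messages above (an assumption;
# actual device log formats differ by vendor and firmware version).
IPS_SIG = re.compile(r"Sig:\s*(\d+)\s+Subsig:\s*(\d+)\s+Sev:\s*(\d+)")
FW_DENY = re.compile(r"Inbound TCP connection denied")

def classify(line: str) -> str:
    """Guess which device type produced a log line."""
    if IPS_SIG.search(line):
        return "IPS"
    if FW_DENY.search(line):
        return "Firewall"
    return "Unknown"
```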

28
Q

An organization recently deployed a biometric authentication system. Which of the following should the organization use as its primary tuning metric?
A. False acceptance rate
B. Crossover error rate
C. False rejection rate
D. True positive rate

A

The organization should make the crossover error rate their primary tuning metric. Biometric system tuning seeks to find a balance between the false acceptance rate (FAR) and the false rejection rate (FRR). The point where these two rates meet is known as the crossover error rate (CER) or equal error rate (EER). In a well-tuned biometric system, the CER will be as low as possible.
A false acceptance event occurs when an unauthorized user is granted access, and the FAR measures the rate of such occurrences. A high FAR means that many unauthorized users may be able to access the system.
The FRR measures the frequency of authorized users not being granted access by the system. A high FRR can lead legitimate users to find ways around the faulty system.
True positives occur in antivirus and Intrusion Detection System (IDS) systems when the system correctly identifies a threat.
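To make the tuning idea concrete, the sketch below finds the threshold setting where FAR and FRR are closest. The rate values are invented for illustration; real rates come from testing the biometric system itself:

```python
# Hypothetical FAR/FRR values at increasing sensitivity thresholds.
far = [0.20, 0.12, 0.08, 0.05, 0.02]   # false acceptance rate
frr = [0.01, 0.03, 0.08, 0.15, 0.25]   # false rejection rate

def crossover_index(far, frr):
    """Return the threshold index where |FAR - FRR| is smallest (the CER point)."""
    diffs = [abs(a, ) if False else abs(a - r) for a, r in zip(far, frr)]
    return diffs.index(min(diffs))

best = crossover_index(far, frr)
```

With these sample values the CER point is threshold index 2, where both rates equal 0.08.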

29
Q

An organization wants to use source code inspections to identify vulnerabilities in custom-built applications.
Which method should the organization choose?
A. Vulnerability scanning
B. Dynamic Application Security Testing (DAST)
C. Extended detection and response (XDR)
D. Static Application Security Testing (SAST)

A

The organization should use Static Application Security Testing (SAST). SAST is a method of testing that analyzes an application's source code without actually executing it. This process is often deployed early in the development process and can help identify bugs or other security vulnerabilities before development is complete. SAST can use both automated code scanning tools and manual reviews.
The organization should not use Dynamic Application Security Testing (DAST). DAST is used to test applications that are running for bugs or vulnerabilities. This method does not include source code inspections.
The organization should not use vulnerability scanning. A vulnerability scanner is used to identify open service ports, potential misconfigurations, and vulnerabilities on a target system. This method does not include source code inspections and instead focuses on running applications, services, or network nodes.
The organization should not use extended detection and response (XDR). XDR enhances traditional malware protection by correlating threats across an organization's network. This provides a holistic view of the organization's threat landscape.
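A toy version of what SAST automation does, pattern checks over source text with no execution, can be sketched as follows. Real SAST tools parse syntax trees and trace data flow; these two patterns are only illustrative:

```python
import re

# Toy rules; production SAST tools analyze syntax trees and data flow.
RISKY = {
    "eval-call": re.compile(r"\beval\s*\("),
    "hardcoded-password": re.compile(r"password\s*=\s*['\"]", re.IGNORECASE),
}

def scan_source(source: str) -> list:
    """Return the names of risky patterns found, without executing the code."""
    return [name for name, pat in RISKY.items() if pat.search(source)]
```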

30
Q

An organization plans to contract with a provider for a disaster recovery site that will host server hardware.
When the primary data center fails, data will be restored, and the secondary site will be activated. Costs must be minimized. Which type of disaster recovery site should the organization deploy?
A. Cold site
B. Mobile
C. Hot site
D. Warm site

A

The organization should deploy a warm site. A warm site includes power, networking, and server hardware.
In the event of a disaster, the servers must be powered on and operating systems installed or updated. Data from the most recent primary site backups can then be restored. A warm site does not typically host all the same hardware as the primary site, and often provides just enough processing capability for the organization to operate while the primary site is restored.
A hot site mirrors the primary site and includes all the hardware, software, and connectivity required to support full operations. Data is mirrored from the primary to the hot site on a frequent schedule, if not in real time.
A mobile site can be compared to a warm site. The provider supplies a trailer with power, networking, and hardware, and systems must be configured and data restored. However, due to the mobility requirements, mobile sites do not minimize costs.
A cold site is a facility with power, but typically does not host any server hardware. During a failover, hardware must be installed, network connectivity provisioned, and data restored. Cold sites are the least expensive recovery option but require the longest time to spin up.

31
Q

What is a security benefit of migrating an intranet application to the cloud?
Choose the correct answer
A. Increased control of resources
B. Increased scalability under load
C. Reduced connectivity reliance
D. Availability of multitenancy

A

Increased scalability under load is a security benefit of migrating an intranet application to the cloud.
Availability is a core tenet of cybersecurity and the cloud can enhance availability through massive scalability. This is useful in scenarios where application use varies over time. For example, an e-commerce retailer may require massive resources during a busy holiday shopping season while only requiring minimal resources at other times. The cloud makes this not only possible but highly cost-effective.
Availability of multitenancy is not a security benefit of migrating an intranet application to the cloud.
Multitenancy refers to the shared nature of cloud resources, where multiple customers could be running applications or storing data on common hardware. This actually increases data risks versus storing data on-premises on non-shared hardware.
Reduced connectivity reliance is not a security benefit of migrating an intranet application to the cloud. The opposite is likely true as users will need to rely on internet connectivity to get to the cloud-based application.
Increased control of resources is not a security benefit of migrating an intranet application to the cloud.
Depending on the cloud model implemented, the cloud service provider (CSP) will maintain control over some elements, such as physical access to servers.

32
Q

A developer is preparing to deploy an e-commerce website. The website uses dynamically generated web pages based on user input. This is a requirement for the application running on the website.
The site must be designed to prevent cross-site scripting attacks. What should the developer do?
A. Use only inline JavaScript.
B. Implement user input validation.
C. Implement URL filtering
D. Use encrypted cookies.

A

The developer should implement user input validation. Cross-site scripting (XSS) uses code passed through user input to attack online applications. The script passed usually includes HTML tags and script code. Tags commonly used in an XSS attack include <SCRIPT>, <OBJECT>, <APPLET>, <EMBED>, and <FORM>. Input validation lets you check for tags and other content that identifies an attempted XSS attack, letting you block the input. In some cases, you might have the user's connection broken at the same time. XSS attacks can use a number of different technologies, including JavaScript, VBScript, HTML, Perl, C++, ActiveX, and Flash.
Using encrypted cookies will not mitigate the risk of an XSS attack. Using encrypted cookies will mitigate the risk that an authentication cookie might be used to hijack a user’s session.
Using only inline JavaScript will not increase security in any way.
Implementing URL filtering will not prevent XSS attacks. URL filtering is used to prevent clients from accessing a specific website.
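A minimal sketch of two complementary defenses, allow-list validation on input and encoding on output, assuming a simple display-name field (the pattern is an illustration, not a universal rule):

```python
import html
import re

# Allow-list: accept only characters expected in a display name (an assumption
# about this field; adjust the pattern to each input's legitimate format).
NAME_PATTERN = re.compile(r"^[A-Za-z0-9 .'-]{1,50}$")

def validate_name(value: str) -> bool:
    """Reject any input containing characters outside the allow list."""
    return bool(NAME_PATTERN.fullmatch(value))

def render_safe(value: str) -> str:
    """Encode user input before echoing it back into a page."""
    return html.escape(value)
```

Validation blocks the input up front; encoding ensures anything that does get echoed cannot execute as script.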

33
Q

To enhance availability, an organization has configured authentication and storage servers that provide redundancy for on-premises servers. However, the organization must ensure that all data is encrypted between the data center and the private cloud network.
What should the organization do to meet this requirement?
A. Deploy NGFW appliances in the data center and cloud and share X.509 certificates.
B. Configure an IPsec tunnel between the data center and cloud gateway routers.
C. Deploy a NAT gateway and only permit inbound connections from the cloud network.
D. Configure IPsec in transport mode between routers in each location.

A

The organization should configure an Internet Protocol Security (IPsec) tunnel between the data center and cloud gateway routers. IPsec is a Layer 3 protocol that can be used to enforce data confidentiality and data integrity for Internet Protocol (IP) packets. IPsec can be configured in one of two modes: tunnel mode or transport mode. Tunnel mode is used to create a secure tunnel between two trusted networks.
A Network Address Translation (NAT) gateway typically sits between a public network, such as the internet, and private networks and allows privately addressed nodes to access and be accessed by publicly addressed nodes. While a NAT gateway could help protect the organization's servers, it will not encrypt data sent between the cloud and data center networks.
A Next-Generation Firewall (NGFW) is used to provide advanced intrusion detection and prevention capabilities for traffic traversing between network zones. NGFWs are commonly used to protect user sessions and to prevent attacks from the Internet.
IPsec transport mode is used to connect two endpoints, such as a client and a server. Transport mode is not used to send data between trusted networks.

34
Q

An organization plans to deploy a secure access service edge (SASE) architecture. Which of the following technologies is MOST likely to be included in this deployment?
A. Supervisory Control and Data Acquisition (SCADA)
B. Simple Network Management Protocol (SNMP)
C. Wi-Fi Protected Access 3 (WPA3)
D. Cloud Access Security Broker (CASB)

A

Cloud Access Security Broker (CASB) is the most likely technology to be deployed in a secure access service edge (SASE) architecture. SASE is used to provide secure, distributed network services via the cloud.
SASE can include technologies such as firewalls, zero trust, and CASB services. A CASB is designed to provide policy-based protection for cloud-based resources. This is particularly important as organizations deploy more and more cloud resources and employees access cloud-based services. CASB is often considered a core component of a sound SASE architecture.
Supervisory Control and Data Acquisition (SCADA) is not likely to be deployed in a SASE architecture.
SCADA provides monitoring and controls for industrial systems such as manufacturing equipment or energy distribution systems.
Wi-Fi Protected Access 3 (WPA3) is not likely to be deployed in a SASE architecture. WPA3 is one of the newest wireless security specifications and includes technologies and protocols that are designed to address the latest wireless attacks and vulnerabilities. WPA3 is not inherently part of SASE architecture.
Simple Network Management Protocol (SNMP) is not likely to be deployed in a SASE architecture. SNMP is used to provide management and monitoring capabilities for network-enabled devices. SNMP architectures often include a management station that queries endpoints using the SNMP protocol. Endpoints can also send notifications or traps to the management station.

35
Q

Which type of cybersecurity assessment or audit is MOST likely to require an independent third party?
A. Vulnerability assessment
B. Penetration testing
C. Risk assessment
D. Compliance audit

A

Compliance audits are most likely to require an independent third party. Compliance audits are used to evaluate an organization’s compliance with government or industry-imposed regulations or frameworks.
Depending on the standard or regulation in question, these audits are usually performed by independent third parties to ensure an unbiased result. The auditing organization is typically staffed and trained to perform specific types of compliance audits and can provide attestation of compliance on behalf of an organization.
Vulnerability assessments are not likely to require an independent third party. Vulnerability assessments use tools like vulnerability scanners to identify vulnerabilities on an organization’s assets. Once discovered, these vulnerabilities should be ranked based on severity and their remediation should be tracked as part of a vulnerability management program.
Penetration testing is not likely to require an independent third party. Penetration testing is used to emulate a hacker attacking an application, system, or network. This includes using the same tools, tactics, and procedures (TTPs) an attacker would use, which could potentially cause performance or stability issues on the target servers.
Risk assessment is not likely to require an independent third party. A risk assessment is used to identify and evaluate risks in an organization. Any risks identified as part of an assessment should be documented and tracked in a risk register.

36
Q

A social media provider is the frequent target of attacks that crash its web servers. As a result, users are unable to access their accounts and the provider is losing advertising revenue. The provider wants to improve availability for users. Which action should the provider take?
A. Place the web server in the DMZ and configure restrictive ACLs.
B. Deploy a NIDS between the network firewall and the web server.
C. Deploy a NAT gateway and configure port forwarding rules.
D. Deploy a web server farm and configure active/active load balancing.

A

The provider should deploy a web server farm and configure active/active load balancing. In active/active load balancing, a hardware or software load balancer distributes traffic across two or more nodes. In this scenario, the provider could build a web farm with all web servers hosting the same content. A load balancer could then be configured to distribute requests using the round-robin method. If any single server fails or is otherwise busy, the remaining servers can service requests.
The provider should not place the web server in the demilitarized zone (DMZ) and configure restrictive Access Control Lists (ACLs). A DMZ is used to host Internet-accessible servers on a protected network that is separate from the production Local Area Network (LAN). This approach will not necessarily enhance availability.
The provider should not deploy a Network Address Translation (NAT) gateway and configure port forwarding rules. NAT is often used to enhance network privacy by hiding a network behind one or more public Internet Protocol (IP) addresses. This approach will not necessarily enhance availability.
The provider should not deploy a Network-based Intrusion Detection System (NIDS) between the network firewall and the web server. A NIDS monitors and analyzes traffic and reports intrusion attempts.
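The round-robin idea can be sketched in a few lines (the server names are hypothetical; a production deployment would use a hardware or software load balancer rather than application code):

```python
import itertools

# Hypothetical web farm nodes behind an active/active load balancer.
servers = ["web1", "web2", "web3"]
rotation = itertools.cycle(servers)

def next_server():
    """Pick the next node in round-robin order."""
    return next(rotation)

# Five incoming requests cycle evenly across the farm.
assigned = [next_server() for _ in range(5)]
```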

37
Q

After performing a firmware update on a router, an administrator notices a dramatic increase in dropped packets. What should the administrator do NEXT?
A. Initiate an incident response playbook.
B. Check the trigger criteria in the backout plan.
C. Extend the planned maintenance window.
D. Record the finding in the impact analysis log.

A

The administrator should check the trigger criteria in the backout plan. A backout plan is used as part of change management to provide step-by-step instructions for reverting a change. Among other details, the backout plan should define the criteria that, when met, activate the backout plan steps. For example, a backout plan may stipulate that if the number of dropped packets exceeds 500, the router firmware should be reverted to the previous version. Backout plans are also known as rollback plans.
The administrator should not record the finding in the impact analysis log. Impact analysis is a process that is used to evaluate the impact an anticipated event will have on a system or environment. In this scenario, impact analysis would have been performed during firmware update testing.
The administrator should not initiate an incident response playbook. Incident response playbooks are used to provide instructions for dealing with cyberattacks or breaches. For example, a ransomware incident response playbook would provide step-by-step instructions for identifying and managing a malware outbreak on a computer.
The administrator should not extend the planned maintenance window. Maintenance windows are used in change management to identify a timeframe during which maintenance will be performed on a system or service. During this time, the system or service may be periodically unavailable while maintenance activities, such as the installation of updates, are performed.
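The trigger check itself is simple to express. This sketch uses the 500-dropped-packet threshold from the example above; actual criteria belong in the organization's documented backout plan:

```python
# Threshold taken from the example above; real criteria come from the
# documented backout plan, not from code constants.
DROPPED_PACKET_THRESHOLD = 500

def backout_triggered(dropped_packets: int) -> bool:
    """Return True if the change should be rolled back."""
    return dropped_packets > DROPPED_PACKET_THRESHOLD
```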

38
Q

A server has failed four times in the past year.
Which measurement is used to determine the amount of time the server was operational?
A. ALE
B. MTTF
C. ARO
D. MTBF

A

Mean Time Between Failures (MTBF) is the measurement used to determine the amount of time that a repairable system was operational.
Mean Time To Failure (MTTF) is the measurement used to determine the expected operational time before a non-repairable system fails.
Annualized Loss Expectancy (ALE) is a measurement that indicates the cost of loss due to a risk over the course of a year. It is calculated by multiplying Single Loss Expectancy (SLE) by Annualized Rate of Occurrence (ARO).
SLE is the amount of loss expected for a single occurrence of risk and ARO is the number of occurrences of a risk anticipated over the year.
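The arithmetic behind these metrics is straightforward. The MTBF figures below follow the four-failures-in-a-year scenario; the SLE and ARO values are invented for illustration:

```python
# MTBF: total operational time divided by the number of failures.
# Using the scenario above: four failures over one year of operation.
hours_per_year = 365 * 24          # 8760
failures = 4
mtbf_hours = hours_per_year / failures

# ALE = SLE x ARO, with illustrative values (assumptions, not from the card).
sle = 10_000    # loss per occurrence, in dollars
aro = 0.5       # expected occurrences per year
ale = sle * aro
```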

39
Q

A security administrator discovers an attack that uses PowerShell to make unauthorized registry changes.
What should the administrator do to prevent this attack on sensitive systems?
A. Disable access to the CLI.
B. Whitelist allowed applications.
C. Configure each system’s firewall.
D. Install a HIDS on sensitive systems.

A

The administrator should whitelist allowed applications. An application whitelist can be used to prevent unauthorized software from running on a user’s devices. An application whitelist is a list of all applications that can run on a system. Whitelist rules can identify specific executables or folders that are trusted by the system. Any application not on the list will not be allowed to install or run. In this scenario, PowerShell would not be whitelisted.
The administrator should not install a host-based intrusion detection system (HIDS). An IDS is designed to detect, record, and alert on malicious behavior. Even if the HIDS detects registry changes, it cannot prevent
them.
The administrator should not configure the system’s firewall. Host-based firewalls are used to reduce a workstation’s attack surface by limiting inbound and outbound network connectivity. A firewall will not block PowerShell scripts or registry changes on the local system.
The administrator should not disable access to the command-line interface (CLI). On a Windows-based system, the CLI is also known as the command prompt. PowerShell does not rely on the command prompt to run.
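The core of a hash-based allow list can be sketched as follows. The allowed binary and its hash are placeholders; real enforcement happens in the OS (for example, AppLocker or Windows Defender Application Control), not in application code:

```python
import hashlib

# Hypothetical allow list of approved executable hashes.
ALLOWED_HASHES = {
    hashlib.sha256(b"trusted-app-v1").hexdigest(),
}

def is_allowed(executable_bytes: bytes) -> bool:
    """Permit execution only if the binary's hash is on the allow list."""
    return hashlib.sha256(executable_bytes).hexdigest() in ALLOWED_HASHES
```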

40
Q

Which of the following statements correctly describes the primary objective of typosquatting?
A. To deceive victims by registering domain names with slight misspellings of popular sites
B. To deceive victims by creating spoofed social media profiles of popular individuals
C. To deceive victims by intercepting, eavesdropping on, and manipulating network communications
D. To deceive victims by manipulating search engine rankings to get a malicious site listed as a top result

A

Typosquatting involves deceiving victims by registering domain names with slight misspellings of popular sites. The goal is to direct users to legitimate-looking sites if they misspell the Uniform Resource Locator (URL) for these websites. For example, an attacker may register the domain name googel.com and create a cloned version of google.com. When an unsuspecting user incorrectly types the URL for google.com, they are led to the malicious site that may contain malware or other deceptive methods to collect user information. Typosquatting is also known as URL hijacking.
Typosquatting does not involve deceiving victims by creating spoofed social media profiles of popular individuals. This is a form of social engineering that attempts to play on a victim’s trust to lure them into taking some undesirable action. For example, an attacker could use this method to solicit donations for a bogus humanitarian cause.
Typosquatting does not involve deceiving victims by manipulating search engine rankings to get a malicious site listed as a top result. This is sometimes referred to as black hat search engine optimization (SEO).
Typosquatting does not involve deceiving victims by intercepting, eavesdropping on, and manipulating network communications. This describes an on-path attack, which occurs when the attacker is “on the path” between the victim and a website or service the victim is communicating with. On-path attacks are also known as man-in-the-middle (MiTM) attacks.
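Defenders sometimes flag candidate typosquats by measuring how close a registered domain is to a protected name. A minimal sketch using a string-similarity ratio (the 0.8 threshold is an arbitrary illustration):

```python
import difflib

def looks_like_typosquat(domain: str, legitimate: str,
                         threshold: float = 0.8) -> bool:
    """Flag domains that are close to, but not equal to, a legitimate name."""
    if domain == legitimate:
        return False
    ratio = difflib.SequenceMatcher(None, domain, legitimate).ratio()
    return ratio >= threshold
```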

41
Q

Which method can be used to implement a managerial control for an educational institution that stores sensitive information about students?
A. Implement MFA on all servers holding sensitive information.
B. Require users who access sensitive information remotely to use a VPN
C. Implement full-disk encryption on servers holding sensitive information.
D. Perform a risk assessment for servers holding sensitive information.

A

Performing a risk assessment for servers holding sensitive information is a managerial control. Managerial (administrative) controls address how risk is identified and managed through policies, oversight, and planning. Implementing multi-factor authentication (MFA), requiring a VPN for remote access, and using full-disk encryption are technical controls because they are enforced by technology rather than by policy or process.

42
Q

Your organization has developed a fault-tolerant design to help ensure business continuity in case of a disaster. The disaster recovery site has mission-critical hardware already installed and connectivity already established. Data backups of critical data are on hand, but they may be up to a week old.
This is an example of which of the following?
A. Hot site
B. Cold site
C. Off-site storage site
D. Warm site

A

This is an example of a warm site. Typically, the site will have current application versions and may have the most recent backups it has received already applied on the computers. The site is designed so that it can be brought online relatively quickly. It is generally seen as a cost- and time-effective compromise between a cold site and a hot site.
This is not a cold site. A cold site typically has hardware, but the hardware is not set up. Also, a cold site will typically not have any data on hand.
This is not a hot site. A hot site is a fully ready-to-run site with current (or near-current) data. It is the most expensive solution to maintain, but it may be necessary in situations where it is critical to minimize downtime.
This is not an off-site storage site. In an off-site storage site, you would have data, but no (or insufficient) hardware.

43
Q

Which of the following attacks is MOST likely to lead to a cryptographic vulnerability?
A. Directory traversal
B. Downgrade
C. Spraying
D. Credential replay

A

A downgrade attack is the most likely to lead to a cryptographic vulnerability. In a downgrade attack, an attacker attempts to trick a target system into using an outdated or less-secure version of a security protocol. Some systems run outdated and unsecure protocols to be backward compatible with older clients.
For example, a downgrade attack may try to get a web server to use an unsecure version of Secure Sockets Layer (SSL). If the downgrade attack is successful, the attacker can use modern tools to crack the unsecure protocol and then eavesdrop on communications.
A password spraying attack is not the most likely to lead to a cryptographic vulnerability. Password spraying involves attempting to authenticate with multiple accounts using common passwords, such as Password123.
A credential replay attack is not the most likely to lead to a cryptographic vulnerability. Credential replay involves capturing the passwords or tokens used during the authentication process for a valid user and then replaying those credentials at a later time.
A directory traversal attack is not the most likely to lead to a cryptographic vulnerability. Directory traversal involves exploiting web server vulnerabilities to gain access to files and directories outside of the directory where the website files are stored, or the web root.
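On the defensive side, clients and servers can refuse old protocol versions outright so a downgrade attempt fails instead of silently succeeding. In Python's standard ssl module, for example:

```python
import ssl

# Refuse to negotiate anything older than TLS 1.2, so a downgrade attempt
# to SSL 3.0 or TLS 1.0 is rejected during the handshake.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
```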

44
Q

A network for a small project group is being deployed. Each group member should be responsible for securing access to his or her own computer’s resources.
What access control model should be used?
A. MAC
B. Rule-based access control
C. Role-based access control
D. DAC

A

You should use discretionary access control (DAC) in this situation. In the DAC model, users have control over access to their own data or local computer resources. This is the access control model needed. This model is used, for example, to manage security on client computers in a peer-to-peer network environment.
You should not use mandatory access control (MAC). In the MAC model, a hierarchical access model is used, with all access permissions set by administrators. Resource objects, such as data files, are assigned security labels that assign a classification and category to each object. Classification and category information is also assigned to each user account, and access is determined by comparing the user and object security properties.
You should not use rule-based access control. In rule-based access control, access is defined by policies (or rules) established by an administrator. Users cannot change access settings set by administrators.
Access is tracked through an access control list (ACL) associated with each object. Because access is based on user account or group membership but does not further classify objects or users, rule-based access control is considered less strict than MAC.
You should not use role-based access control. Role-based access control assigns access permissions based on a user’s job function in an organization. This is different than a group-based access control model because, while a user can be assigned membership in multiple groups, a user can only be assigned to one role within the organization.
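POSIX file permissions are a familiar DAC implementation: the file's owner decides who may access it. A small sketch (assumes a POSIX system):

```python
import os
import stat
import tempfile

# Under DAC, the resource owner sets access rights. Here the owner of a
# temporary file restricts it to owner read/write only (mode 0o600).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
mode = stat.S_IMODE(os.stat(path).st_mode)
os.remove(path)
```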

45
Q

A security administrator performs a vulnerability scan for a network and discovers an extensive list of vulnerabilities for several Windows-based file servers. What should the administrator do FIRST to mitigate the risks created by these vulnerabilities?
A. Install missing patches.
B. Remove any unnecessary software.
C. Install HIDS software.
D. Create application deny lists.

A

The security administrator should remove any unnecessary software first. One of the basic tenets of operating system (OS) hardening is removing unnecessary software and services. Not only do such items impact performance on the server, but each service is a potential attack vector that increases the risk of a breach. Hackers often scan systems for forgotten or neglected services that are actively listening for client requests. As this is a preventive measure, it should be considered before any other mitigation methods.
The administrator should not install Host-Based Intrusion Detection System (HIDS) software first. HIDS is typically software installed on a system that monitors for suspicious or malicious behavior. If concerning behavior is detected, the HIDS can send an alert to a user informing them of the detection.
The administrator should not create application deny lists first. Application allow and deny lists are used to control which applications can or cannot run on a device. This increases security by preventing unauthorized applications from being used.
The administrator should not install missing patches first. Sound patch management involves ensuring that the latest stable patches or updates have been installed on a system. However, cybersecurity best practices focus on eliminating risk, where possible. While patching unnecessary software might reduce risk, removing it would eliminate the risk that a vulnerability in the software might be exploited.
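The "remove unnecessary software" step above amounts to comparing what is present against an approved baseline. A hedged Python sketch of that comparison (the service names and baseline are invented examples, not a real server inventory):

```python
# Hardening sketch: flag services that are not on an approved baseline.
# Both sets below are hypothetical illustrations.

approved = {"winlogon", "lanmanserver", "dnscache"}          # required services
running = {"winlogon", "lanmanserver", "dnscache",
           "telnet", "ftpsvc"}                               # found on the server

# Anything running but not approved is an unnecessary attack surface.
unnecessary = sorted(running - approved)
print(unnecessary)  # ['ftpsvc', 'telnet'] -> candidates for removal
```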

46
Q

An office manager receives a notification from a freight operator indicating that they must place a critical delivery in the organization’s data center. What should the office manager do NEXT?
A. Report anomalous behavior to the security manager.
B. Require the driver to use the access control vestibule.
C. Consult the risk register for instructions on next steps.
D. Escalate the request to the manager of the red team.

A

The office manager should report anomalous behavior to the security manager. Anomalous behavior is behavior that is unexpected or out of the ordinary. In this scenario, the office manager should recognize that both the notification itself and the requested access are suspicious and may constitute an impersonation attack. By reporting the behavior, the security manager can ensure that the activity is assessed and can determine if a potential security incident has occurred. The most important element in this scenario is the awareness training that the office manager has received, allowing them to recognize this risky behavior.
The office manager should not require the driver to use the access control vestibule. An access control vestibule is used in a facility to allow admission to only one user at a time by using locking doors on each end of the room. This is useful for mitigating tailgating attacks, which is not the type of attack potentially occurring in this scenario.
The office manager should not escalate the request to the manager of the red team. Red teaming is a type of offensive penetration test that is used to simulate real-world attack tactics, techniques, and procedures (TTPs).
Although the red team manager could identify this activity as suspicious, reporting to the security manager is the better choice. Additionally, in many organizations the activities of the red team are kept secret so as to more fully emulate a real attacker.
The office manager should not consult the risk register for instructions on next steps. A risk register is used to keep track of vulnerabilities and risks as part of risk management. Each risk should include a risk score, which is the combination of the impact if the risk is realized and how likely it is for the risk to be realized. It is not likely the risk register will provide much guidance for the office manager.

47
Q

Which of the following tasks is MOST likely performed by a third party as part of compliance monitoring for an organization?
A. Data inventory
B. Cyber attestation
C. Continuous monitoring
D. Due care

A

Cyber attestation is most likely performed by a third party as part of compliance monitoring for an organization. Attestation involves an auditor or assessor attesting that an organization meets certain cybersecurity guidelines. In the context of compliance monitoring, attestation is used to show that an organization’s information security practices have been independently reviewed and found in compliance with a standard or regulation. For example, the American Institute of Certified Public Accountants (AICPA) developed the widely known Service Organization Control 2 (SOC 2) framework to be used to evaluate an organization’s information security program.
Due care is not performed by a third party as part of compliance monitoring. Due care involves implementing and maintaining processes that keep an organization’s operations in peak operational performance. In a cybersecurity context, due care focuses on internal actions, such as risk identification and mitigation.
Continuous monitoring is not performed by a third party as part of compliance monitoring. Continuous monitoring is typically part of third-party risk management (TPRM) and is designed to ensure vendors and contractors maintain secure operations over time.
Data inventory is not performed by a third party as part of compliance monitoring. Data inventory is typically done as part of data classification or data privacy processes. For example, data inventory might be performed as part of compliance with regulations like the General Data Protection Regulation (GDPR).

48
Q

An organization recently deployed an office-wide wireless network using 100 APs. However, the wireless administrator has found managing authentication for each of the APs cumbersome. To remedy this, the organization has deployed a wireless LAN controller and a server running Microsoft Active Directory. How should the wireless network be configured so that users are centrally authenticated using their individual accounts?
A. Configure the WLAN controller to use 802.1x and specify a RADIUS server.
B. Enable key-based authentication on each of the APs and distribute keys to users.
C. Configure WPA2-PSK authentication on the controller and provision the APs.
D. Configure MAC filtering on the WLAN controller and define trusted addresses.

A

The organization should configure the wireless LAN controller (WLC) to use 802.1x and specify a Remote Authentication Dial-In User Service (RADIUS) server. When the WLC is configured to use 802.1x, an authentication server that can process client authentication requests must be defined. In most environments, this is done by configuring an external RADIUS server, which in turn submits client authentication requests to a directory service such as Microsoft Active Directory.
The organization should not enable key-based authentication on each of the access points (APs) and distribute keys to users. This approach does not facilitate centralized authentication with individual user accounts.
The organization should not configure Media Access Control (MAC) filtering on the WLC and define trusted addresses. MAC filtering is used to allow or deny access to network nodes based on MAC addresses. MAC filtering is not a centralized authentication method.
The organization should not configure Wi-Fi Protected Access 2 - Pre-shared Key (WPA2-PSK) authentication on the controller and provision the APs. In this approach, a common password, or key, is created and shared, thus the name pre-shared key. Every user would use this same key to authenticate with the wireless network. This approach does not facilitate centralized authentication with individual user accounts.

49
Q

Which statement BEST describes risk threshold?
A. The risk threshold determines which risks will be accepted and which will not be accepted.
B. The risk threshold broadly defines the amount of risk an organization has chosen to accept.
C. The risk threshold indicates the level or degree of risk that the organization has accepted.
D. The risk threshold defines the risks that remain after mitigating controls are implemented.

A

The risk threshold determines which risks will be accepted and which will not be accepted. As part of a risk assessment, an organization should record and score each identified risk in a risk register. This can be done using simple formulas like multiplying the likelihood of a risk being realized by the impact of that risk to create a risk score. For example, an organization might determine that on a scale of 1 to 10, the likelihood of a ransomware attack is 8/10 and the impact is 9/10, resulting in a risk score of 72/100. This value is arbitrary and only useful when compared to other risks. The organization may decide, based on its risk appetite or risk tolerance, that the risk threshold should be 75, and any risks below this threshold can be accepted, while any risks with scores above this threshold must be mitigated.
The risk threshold does not define the risks that remain after mitigating controls are implemented. This describes residual risk. In many cases, residual risks represent those that have been accepted because the cost of mitigation is not justified. For example, if an organization must spend $1M on a Web Application Firewall (WAF) to mitigate a low priority or low impact risk, the organization may choose to accept the risk.
The risk threshold does not define the amount of risk an organization has chosen to accept. This describes risk appetite, which is typically broadly applied to an organization. For example, an aggressive startup may have an appetite for extreme risk as it attempts to take over an emerging market.
The risk threshold does not indicate the level or degree of risk that the organization has accepted. This describes risk tolerance, which is closely aligned to risk appetite, but is typically more objective. Risk tolerance usually defines a range of risk the organization is willing to tolerate.
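The scoring arithmetic in the example above can be sketched in a few lines of Python (the register entries and the threshold value of 75 are the illustrative numbers from the explanation, not real data):

```python
# Risk scoring sketch: score = likelihood x impact, compared against a
# risk threshold. Register entries and threshold are illustrative.

RISK_THRESHOLD = 75

risk_register = {
    "ransomware attack": {"likelihood": 8, "impact": 9},   # score 72
    "data center flood": {"likelihood": 9, "impact": 10},  # score 90
}

for name, risk in risk_register.items():
    score = risk["likelihood"] * risk["impact"]
    decision = "accept" if score < RISK_THRESHOLD else "mitigate"
    print(f"{name}: score {score}/100 -> {decision}")
```

Here the ransomware risk scores 72 and falls below the threshold, so it can be accepted; the flood risk scores 90 and must be mitigated.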

50
Q

An application environment needs to be kept as secure as possible and requires the strictest access control model.
What access control model should be used?
A. DAC
B. Role-based access control
C. Rule-based access control
D. MAC

A

You should use the mandatory access control (MAC) model. This is considered the strictest access control model. In the MAC model, a hierarchical access model is used, with all access permissions set by administrators. Resource objects, such as data files, are assigned security labels that assign a classification and category to each object. Classification and category information is also assigned to each user account, and access is determined by comparing the user and object security properties.
You should not use rule-based access control. In rule-based access control, access is defined by policies (or rules) established by an administrator. Users cannot change access settings set by administrators.
Access is tracked through an access control list (ACL) associated with each object. Because access is based on user account or group membership but does not further classify objects or users, rule-based access control is considered less strict than MAC.
You should not use role-based access control. Role-based access control assigns access permissions based on a user’s job function in an organization. This is different than a group-based access control model because, while a user can be assigned membership in multiple groups, a user can only be assigned to one role within the organization.
You should not use discretionary access control (DAC). In the DAC model, users have control over access to their own data or local computer resources. This model is used, for example, to manage security on client computers in a peer-to-peer network environment. This is generally considered the least restrictive, but also the least secure, access model.
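As a sketch of the MAC model described above, access is decided by comparing subject and object security labels rather than by an owner's choice. A minimal Python illustration (the labels, levels, and the simple "no read up" rule are hypothetical simplifications):

```python
# MAC sketch: compare a subject's clearance against an object's
# classification. Labels and levels are illustrative.

LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance: str, object_classification: str) -> bool:
    # Simplified "no read up": the subject's clearance must be at least
    # as high as the object's classification.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

print(can_read("secret", "confidential"))  # True
print(can_read("confidential", "secret"))  # False
```

Unlike the DAC example, no user can change these labels; administrators assign them, and the comparison alone determines access.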

51
Q

A company’s recovery plan states that it will take, on average, three hours to restore services to an operational level after a catastrophic failure.
What is this value known as?
A. MTTR
B. MTBF
C. RPO
D. RTO

A

The average time needed to restore data is known as the mean time to restore or mean time to recovery (MTTR or MTR). When disaster recovery services are delivered by an outside provider, the MTR is often specified in the service contract. This does not guarantee recovery within three hours in every situation; it is just the average value. Acronyms in a service contract should be clearly defined. MTTR can also be used to stand for mean time to repair. However, this repair time would not necessarily include the time needed to restore data.
The situation does not indicate the recovery time objective (RTO). RTO is the specification of the maximum time it should take to get back to operational status. There are ways to reduce the RTO, such as having hot sites with equipment ready and data loaded. However, the shorter the RTO, the more expensive the support is.
The situation does not indicate the recovery point objective (RPO). The RPO refers to the maximum acceptable amount of data loss after recovery. For example, if your organization can accept losing the last hour before the failure, you have an RPO of one hour. Reducing RPO requires more frequent backups and often the use of redundant data storage. The shorter the RPO, the more expensive it is to support.
The situation does not indicate the mean time between failures (MTBF). The MTBF specifies how much time should pass, on average, between failures. You would use this in your disaster planning to determine frequency of occurrence.
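MTTR as described above is simply an average over observed restore times. A small Python sketch (the durations are invented examples):

```python
# MTTR sketch: mean time to restore is the average of observed restore
# durations. The values below are illustrative, not real incident data.

restore_times_hours = [2.5, 3.0, 3.5]  # past catastrophic failures

mttr = sum(restore_times_hours) / len(restore_times_hours)
print(f"MTTR: {mttr:.1f} hours")  # MTTR: 3.0 hours
```

This also shows why an individual incident can exceed the contractual three hours: the figure is a mean, not a maximum.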

52
Q

A company is designing and developing an automated authentication system based on biometric attributes.
One of the goals is to keep the authentication process as transparent and unobtrusive to employees as possible.
The company installed closed-circuit television (CCTV) cameras throughout its corporate campus. Images are fed through an artificial intelligence (AI) analysis system for employee identification. Human operators provide feedback to assist with machine learning and improve accuracy.
Which biometric attributes are BEST suited to this application? (Select two.)
Choose the correct answers
Facial
Retina
Voice
Gait
Fingerprint
Vein

A

Facial (facial recognition) and gait are the two attributes best suited to this application. Both are attributes that can be easily and clearly captured by the CCTV cameras. Facial and gait analysis and recognition are established biometric factors. Relatively high false positive and false negative rates should be expected during initial development, with accuracy improving through machine learning.
The system should not use retina, fingerprint, or vein. These all require a closer scan than facial or gait, and they would not be easily captured through CCTV.
The system should not use voice. One reason is that it would require an employee to be talking while moving through the campus, which is not guaranteed. There are also concerns about recording quality and its impact on accuracy.

53
Q

A security analyst has completed forensics on a compromised server. The analyst suspects the server was breached using a buffer overflow attack. What is the BEST indicator of this attack?
Choose the correct answer
A. Corrupted system files
B. User logon errors
C. System crashes
D. Directory traversal events

A

System crashes are the best indicator of buffer overflow attacks. A buffer overflow occurs when an attacker sends a memory buffer more information than the buffer is designed to handle. This can lead to risks ranging from application crashes to full system exploitation. For example, an attacker could use a buffer overflow attack to get a system to run a malicious command such as locating a password file. As buffer overflow attacks often involve overwriting used buffers, system or application crashes are common.
Directory traversal events are not the best indicator of a buffer overflow attack. Directory traversal involves exploiting web server vulnerabilities to gain access to files and directories outside of the directory where the website files are stored, also known as the web root.
User logon errors are not the best indicator of a buffer overflow attack. As it relates to cybersecurity, logon events may indicate password guessing attacks.
Corrupted system files are not the best indicator of a buffer overflow attack. In this question, a buffer overflow would likely cause the system to crash, which in turn might cause system files to be corrupted.
However, a buffer overflow is not likely to corrupt system files directly.
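Python cannot demonstrate real memory corruption, but the missing bounds check behind a C-style overflow can be sketched: a fixed-size buffer, an oversized input, and the length check that a vulnerable program omits (the function and buffer sizes here are illustrative):

```python
# Buffer overflow sketch: a fixed-size buffer and an oversized input.
# Python prevents actual memory corruption; this simulates the bounds
# check whose absence makes C-style overflows (and crashes) possible.

def copy_into_buffer(buf: bytearray, data: bytes) -> None:
    if len(data) > len(buf):  # the check a vulnerable program omits
        raise ValueError("input larger than buffer: would overflow")
    buf[:len(data)] = data

buf = bytearray(16)
copy_into_buffer(buf, b"hello")       # fits within the 16-byte buffer
try:
    copy_into_buffer(buf, b"A" * 64)  # attacker-sized input is rejected
except ValueError as e:
    print(e)
```

In a language without this check, the 64-byte input would overwrite adjacent memory, which is why crashes are the characteristic symptom described above.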

54
Q

An engineering team has deployed PKI within their organization. To meet legal reporting requirements, they need to implement a way to provide decryption keys to a third party on an as-needed basis.
What should they do?
A. Implement a key escrow arrangement.
B. Identify a recovery agent.
C. Deploy an additional CA.
D. Use certificate registration.

A

The team needs to implement a key escrow arrangement. In this type of arrangement, the decryption keys are stored in a centralized location, or held in escrow in case they are needed. Keys held in escrow can be released to a third party on an as-needed basis.
You should not use a recovery agent in this scenario. A recovery agent is used with the Windows Encrypting File System (EFS) to make it possible to recover encrypted data.
You should not deploy an additional Certificate Authority (CA). A CA issues certificates for use in encryption, but adding a CA does nothing to make certificates available to a third party.
You should not use certificate registration. Certificate registration is part of the process for requesting a new certificate. During registration, the certificate request is registered with a CA in preparation for issuing a certificate.
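A key escrow arrangement as described above can be sketched as a central store that releases keys only to authorized requesters (the key IDs, key material, and authorization list below are all hypothetical):

```python
# Key escrow sketch: decryption keys held centrally and released to a
# third party only after an authorization check. All names are invented.

escrow = {"mail-server-2024": b"\x01\x02\x03\x04"}  # key ID -> key material
authorized_requesters = {"law-enforcement-liaison"}

def release_key(requester: str, key_id: str) -> bytes:
    # Keys leave escrow only for requesters on the authorized list.
    if requester not in authorized_requesters:
        raise PermissionError(f"{requester} is not authorized for escrow release")
    return escrow[key_id]

key = release_key("law-enforcement-liaison", "mail-server-2024")
print(key.hex())  # 01020304
```

A production escrow would also log every release for audit purposes; the point of the sketch is only the as-needed, gated release of stored keys.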

55
Q
A