Kaplan Sec+ test Flashcards

1
Q

Arrange the steps in the risk response process in the appropriate order.

A

The following activities are performed in the risk response process (arranged in
order of priority):
1. Establishment of risk appetite and risk tolerance – this comes first because management must determine how much risk the organization can accept and tolerate without jeopardizing the achievement of its business objectives.
2. Risk identification – this is done to determine all the risks that are applicable to the organization.
3. Risk analysis – once the risks have been identified, assessment is performed for the risk impact and
likelihood.
4. Risk response selection and documentation – the risk response is selected based on the established risk
appetite and risk tolerance.
5. Risk response prioritization – prioritization is based on the risk environment and cost-benefit analysis.
6. Development of risk action plan – this is created in order to manage the risk responses.
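As a rough illustration of steps 3 through 5 (the scoring scale, threshold, and risk names below are invented for this sketch, not part of any standard), the analysis and response-selection logic might look like:

```python
# Hypothetical sketch: scoring risks and selecting a response against a
# stated risk appetite. Scales and thresholds are illustrative only.

RISK_APPETITE = 12  # step 1: management-defined acceptable risk score

def analyze(impact: int, likelihood: int) -> int:
    """Step 3: risk analysis - combine impact and likelihood (1-5 scales)."""
    return impact * likelihood

def select_response(score: int) -> str:
    """Step 4: choose a response based on the established appetite."""
    return "accept" if score <= RISK_APPETITE else "mitigate"

# Step 2 output: identified risks with assessed impact and likelihood
risks = {"data breach": (5, 4), "printer outage": (2, 2)}

# Step 5: prioritize responses by score, highest first
plan = sorted(
    ((name, analyze(i, l), select_response(analyze(i, l)))
     for name, (i, l) in risks.items()),
    key=lambda r: r[1], reverse=True)
```

The highest-scoring risks surface first, which is the prioritization input that step 6's action plan would consume.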

2
Q

Recently, while reviewing log data, you discover that a hacker has used a design flaw in an application to
obtain unauthorized access to the application. Which type of attack has occurred?

buffer overflow
maintenance hook
privilege escalation
backdoor

A

An escalation of privileges attack occurs when an attacker has used a design flaw in an application to obtain
unauthorized access to the application. Privilege escalation includes incidents where a user logs in with valid
credentials and then takes over the privileges of another user, or where a user logs in with a standard account
and uses a system flaw to obtain administrative privileges.
There are two types of privilege escalation: vertical and horizontal. With vertical privilege escalation, the
attacker obtains higher privileges by performing operations that allow the attacker to run unauthorized code.
With horizontal privilege escalation, the attacker keeps the same level of permissions but uses a
different user account to do so.
A backdoor is a term for lines of code that are inserted into an application to allow developers to enter the
application and bypass the security mechanisms. Backdoors are also referred to as maintenance hooks.
A buffer overflow occurs when an application accepts more input than the allocated buffer can hold. It can
be used to perform a denial-of-service (DoS) attack or a distributed denial-of-service (DDoS) attack.
For the Security+ exam, you also need to understand the following application issues:
Race condition – typically targets timing, mainly the delay between time of check (TOC) and time of use
(TOU). To eliminate race conditions, application developers should write code that acquires exclusive locks on resources in a defined sequence and releases them in reverse order.
Insecure direct object references – occurs when a developer exposes a reference to an internal object,
such as a file, directory, database record, or key, as a URL or form parameter without implementing the
appropriate security control. An attacker can manipulate direct object references to access other objects
without authorization. Implementing an access control check helps to protect against these attacks.
8/18/24, 12:08 PM Learning Management System
https://www.kaplanlearn.com/education/qbank/view/97706779?testId=306548879 1/2
Cross-site request forgery (CSRF) – occurs when a malicious site causes a user's browser to execute unauthorized commands
on a web site that trusts that user. It is also referred to as a one-click attack or session riding.
Implementing anti-forgery tokens protects against this attack.
Improper error and exception handling – occurs when developers do not design appropriate error or
exception messages in an application. The most common problem arising from this issue is a fail-open
security check, which occurs when access is granted (instead of denied) by default. Other issues include
system crashes and resource consumption. Error handling mechanisms should be properly designed,
implemented, and logged for future reference and troubleshooting.
Improper storage of sensitive data – occurs when sensitive data is not properly secured when it is stored.
Sensitive data should be encrypted and protected with the appropriate access control list. Also, when
sensitive data is in memory, it should be locked.
Secure cookie storage and transmission – Cookies store a user’s web site data, often including
confidential data, such as usernames, passwords, and financial information. A secure cookie has the
secure attribute enabled and is only used via HTTPS, ensuring that the cookie is always encrypted during
transmission.
Memory leaks – occur when an application does not release memory after it is finished working with it.
Reviewing coding and designing best practices helps to prevent memory leaks.
Integer overflows – occur when an operation attempts to store an integer that is too large for the register
or variable. The best solution is to use a safe integer class that has been built to avoid these problems.
Geo-tagging – occurs when media, such as photos or videos, are tagged with geographical information.
Turning off the geo-tagging feature on your device protects against releasing this type of information. It is
also possible to remove geo-tagging information from media before using it in an application or web site.
Data remnants – occur when an application is removed but data remnants, including registry entries, are
left behind. Specialty tools and apps are available to ensure that applications have been completely
removed from a device.
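To make the race-condition (TOC/TOU) item above concrete, here is a minimal sketch of the vulnerable check-then-use pattern alongside a safer alternative (the file paths are hypothetical):

```python
import os

def vulnerable_read(path: str) -> str:
    # Time of check: the file is verified here...
    if os.path.exists(path):
        # ...but between the check and the open, an attacker could swap
        # the file (e.g., replace it with a symlink) - the TOC/TOU gap.
        with open(path) as f:
            return f.read()
    return ""

def safer_read(path: str) -> str:
    # Safer: skip the separate existence check and handle failure at time
    # of use, closing the window between check and use.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""
```

The safer version does not eliminate every race, but it removes the classic exists-then-open window that attackers exploit.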

3
Q

In cloud architecture models, which considerations are crucial for understanding the security implications of
different deployment models and ensuring a comprehensive security posture? (Choose three.)

Responsibility matrix
Public-private cloud configurations
Third-party vendors
Data classification

A

In cloud services, a responsibility matrix identifies tasks and management areas, and assigns responsibility for
those items to the client organization or the cloud provider. When an incident occurs, or there is a question
about who should be accountable, both the client and the cloud provider can refer to the responsibility matrix.
The graphic below shows a simplified responsibility matrix for a Platform as a Service agreement.
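A matrix like the one described can be sketched as a simple mapping (the task names and assignments below are illustrative only; real agreements vary by provider and contract):

```python
# Illustrative PaaS responsibility matrix; real agreements vary by provider.
paas_matrix = {
    "physical datacenter security":   "provider",
    "hypervisor and host patching":   "provider",
    "runtime and middleware":         "provider",
    "application code":               "client",
    "data classification":            "client",
    "identity and access management": "client",  # often shared in practice
}

def responsible_party(task: str) -> str:
    """Look up who owns a task when an incident raises the question."""
    return paas_matrix.get(task, "unassigned - escalate to the agreement")
```

When an incident occurs, a lookup like this settles the accountability question quickly, which is exactly the role the responsibility matrix plays in the agreement.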
There are several security implications with hybrid cloud deployments. Responsibilities shift and are divided
between the client organization's security personnel and their counterparts at the cloud provider. Data
protection is typically a greater concern in hybrid environments due to the decentralization of data. In addition,
there is the issue that data may be intercepted as it is moving from the client datacenter to the cloud.
Many cloud services involve third-party vendors providing additional functionalities, such as security tools,
identity services, or compliance solutions. Evaluating the security practices and capabilities of these third-party
vendors is crucial. The security of the overall cloud architecture is influenced by the reliability and security
measures implemented by these external partners.
Data classification focuses on categorizing data based on its sensitivity, importance, and confidentiality level
but does not directly address the responsibilities, considerations, and third-party involvement in various cloud
architecture models.

4
Q

Your organization has recently undergone a hacker attack. You have been tasked with preserving the data
evidence. You must follow the appropriate eDiscovery process. You are currently engaged in the Preservation
and Collection process. Which of the following guidelines should you follow? (Choose all that apply.)
The chain of custody should be preserved from the data acquisition phase to the presentation phase.
The data acquisition should be from a live system to include volatile data when possible.
Hashing of acquired data should occur only when the data is acquired and when the data is modified.
The data acquisition should include both bit-stream imaging and logical backups.

A

When following the eDiscovery process guidelines, you should keep the following points in mind regarding the
Preservation and Collection process:
The data acquisition phase should be from a live system to include volatile data when possible.
The data acquisition should include both bit-stream imaging and logical backups.
The chain of custody should be preserved from the data acquisition phase to the presentation phase.
While it is true that the hashing of acquired data should occur when the data is acquired and when the data is
modified, these are not the only situations that require hashing. Hashing should also be performed when a
custody transfer of the data occurs.
Other points to keep in mind during the Preservation and Collection process include the following:
A consistent process and policy should be documented and followed at all times.
Forensic toolkits should be used.
The data should not be altered in any manner, within reason.
Logs, both paper and electronic, must be maintained.
At least two copies of collected data should be maintained.
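The hashing guideline above can be sketched with Python's hashlib: record a digest at acquisition, then re-hash at each custody transfer to prove the evidence is unchanged (this is a sketch, not a forensic toolkit):

```python
import hashlib

def evidence_hash(data: bytes) -> str:
    """SHA-256 digest recorded when the evidence is acquired."""
    return hashlib.sha256(data).hexdigest()

def verify_at_transfer(data: bytes, recorded: str) -> bool:
    """Re-hash at a custody transfer; any alteration breaks the match."""
    return evidence_hash(data) == recorded

# Hypothetical acquired evidence and its baseline digest
acquired = b"disk image contents"
baseline = evidence_hash(acquired)
```

A matching digest at every transfer, logged alongside the chain-of-custody record, is what makes the collection legally defensible.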
The eDiscovery process is similar to the Forensic Discovery process. However, the eDiscovery process is
usually slower.
The stages of Forensic Discovery include the following:
Verification – Confirm that an incident has occurred.
System Description – Collect detailed descriptions of the systems in scope.
Evidence Acquisition – Acquire the relevant data in scope, minimizing data loss, in a manner that is legally
defensible. This is primarily concerned with the minimization of data loss, the recording of detailed notes,
the analysis of collected data, and reporting findings.
Data Analysis – This includes media analysis, string/byte search, timeline analysis, and data recovery.
Results Reporting – Provide evidence to prove or disprove statements of facts.
The stages of eDiscovery include the following:
Identification – Verify the triggering event that has occurred. Find and assign potential sources of data,
subject matter experts, and other required resources.
Preservation and Collection – Acquire the relevant data in scope, minimizing data loss, in a manner that is
legally defensible. This is primarily concerned with the minimization of data loss, the recording of detailed
notes, the analysis of collected data, and reporting findings.
Processing, Review, and Analysis – Process and analyze the data while ensuring that data loss is
minimized.
Production – Prepare and produce electronically stored information (ESI) in a format that has already been
agreed to by the parties.
Presentation – Provide evidence to prove or disprove statements of facts.
When preparing an eDiscovery policy for your organization, you need to consider the following facets:
Electronic inventory and asset control – You must ensure that all assets involved in the eDiscovery
process are inventoried and controlled. Unauthorized users must not have access to any assets needed in
eDiscovery.
Data retention policies – Data must be retained as long as required. Organizations should categorize data
and then decide the amount of time that each type of data is to be retained. Data retention policies are the
most important policies in the eDiscovery process. They also include systematic review, retention, and
destruction of business documents.
Data recovery and storage – Data must be securely stored to ensure maximum protection. In addition,
data recovery policies must be established to ensure that data is not altered in any way during the
recovery. Data recovery and storage is the process of salvaging data from damaged, failed, corrupted, or
inaccessible storage when it cannot be accessed normally.
Data ownership – Data owners are responsible for classifying data. These data classifications are then
assigned data retention policies and data recovery and storage policies.
Data handling – A data handling policy should be established to ensure that the chain of custody protects
the integrity of the data.
A data breach is a specific type of security incident that results in organizational data being stolen. Sensitive or
confidential information must be protected against unauthorized copying, transferring, or viewing.

5
Q

Which of the following is not a cryptographic attack?
Spraying
Birthday
Downgrade
Collision

A

A spraying attack is not a cryptographic attack, but rather a type of brute-force password attack. A spraying
attack has a couple of different forms. It may use a common or default password and test it against multiple
accounts. "P@$$w0rd" is often assumed to be a secure password, and in a larger organization you are likely
to find an account that uses it. Another form uses a variation of a company's slogan against a user list.
A downgrade attack is a cryptographic attack that causes the system to use less-stringent security controls.
When these less-stringent (downgraded) security controls, typically insecure protocols, are activated, the
attacker takes advantage of those less-than-secure settings. An example would be an attack that disables
HTTPS port 443. In order for web traffic to go through, HTTP port 80 is enabled. HTTP is a less secure protocol
than HTTPS, and the attacker exploits HTTP.
A collision attack is a cryptographic attack that attempts to find two different inputs that produce the same
hash value, often by brute-forcing many candidate inputs.
A birthday attack is a type of cryptographic attack. A birthday attack is named after the birthday paradox:
the mathematical probability that two people in a group share the same birthday.
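The birthday paradox behind these attacks can be checked numerically: with a space of N possible outputs and n random samples, the collision probability is approximately 1 - exp(-n(n-1)/2N):

```python
import math

def collision_probability(n: int, space: int) -> float:
    """Approximate chance that n random values collide within a space of
    `space` possible outputs (the birthday bound)."""
    return 1 - math.exp(-n * (n - 1) / (2 * space))

# 23 people, 365 birthdays: just over a 50% chance of a shared birthday.
p23 = collision_probability(23, 365)
```

The same formula explains why hash collisions are found after roughly the square root of the output space, far sooner than exhausting every value.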

6
Q

Match the zero trust concepts to the planes to which they belong.
Zero trust concept
Adaptive identity
Threat scope reduction
Implicit trust zones
Subject/System
Policy enforcement point
Policy-driven access control
Policy administrator
Policy engine

A

The control plane manages users and devices in a network with the following tools:

Adaptive identity
Threat scope reduction
Policy-driven access control
Policy administrator
Policy engine

The data plane manages the movement of data in a network with the following tools:

Implicit trust zones
Subject/system
Policy enforcement point

Adaptive identity can use additional information to validate a user’s identity. If a user supplies the proper login
credentials, adaptive identity can then use such things as time of day, login location, and the device
configuration, to decide whether or not to grant access. As an example, if the user supplies the correct
username and password, but is trying to log on from somewhere outside the corporate network, additional
information such as an OTP or multifactor authentication might be requested.
Threat scope reduction deals with reducing the attack surface. Threat scope reduction combines least
privilege policies with network segmentation based on identity rather than on the network’s logical topology. By
limiting the actions that an individual can perform once they are admitted into the network, you reduce the
amount of damage they can inflict.
Policy-driven access control looks at access policies using a policy engine. The access decisions are made by
the policy engine, while policy enforcement is managed by the policy administrator (in the control plane) and
the policy enforcement points (in the data plane).
The policy administrator decides to open or close the communication path from the requestor to the resource,
based on the decision made by the policy engine. The policy engine is responsible for granting or denying
access based primarily on policy, but other factors can be taken into consideration.
Implicit trust zones are associated with the zero-trust data plane. The implicit trust zone is the limited group of
systems and resources that the user can interact with once the user has been validated. This can also be
viewed as the scope of interaction.
In a zero-trust implementation, subjects are the users who are requesting access, and systems are the
devices used by that user.
Policy enforcement points are part of the data plane, not the control plane. The policy enforcement point is
responsible for establishing and terminating connections based on the decisions made by the policy engine
and policy administrator. However, it is the policy enforcement point that enacts that decision in the network.
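The division of labor described above can be sketched as follows (the signals and rules are invented for illustration; a real deployment would draw on many more adaptive-identity inputs):

```python
# Hypothetical sketch of a zero-trust decision flow: the policy engine
# decides, and the policy enforcement point (data plane) enacts it.

def policy_engine(request: dict) -> str:
    """Grant, deny, or request step-up auth based on adaptive signals."""
    if not request.get("valid_credentials"):
        return "deny"
    if request.get("location") != "corporate_network":
        # Correct password but unusual context: ask for another factor,
        # as the adaptive identity example in the text describes.
        return "allow" if request.get("mfa_passed") else "step_up"
    return "allow"

def enforcement_point(decision: str) -> str:
    """Data-plane PEP establishes or terminates the connection."""
    return "connection established" if decision == "allow" else "connection refused"
```

Note the separation: the engine only produces a decision; the enforcement point in the data plane is what actually opens or refuses the path.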

7
Q

Your organization has recently adopted SD-WAN to enhance its network connectivity. The IT team is tasked
with implementing security controls to safeguard the enterprise infrastructure. Given this scenario, which
actions would be most effective in securing the SD-WAN deployment? (Choose two.)
Selecting effective controls
Conducting regular vulnerability scans
Implementing multi-factor authentication (MFA)
Increasing bandwidth allocation

A

Conducting regular vulnerability scans and selecting effective controls would be the most effective.
Regular vulnerability scans are crucial in identifying potential weaknesses and security gaps within the SD-WAN
infrastructure. By scanning for vulnerabilities on a routine basis, the IT team can proactively address and
mitigate security risks, ensuring that the SD-WAN deployment remains resilient against potential threats.
Selecting effective controls specifically designed for SD-WAN is essential for securing the infrastructure. SD-WAN
controls may include encryption protocols, access controls, and traffic monitoring mechanisms. These
controls help in enforcing security policies, protecting data in transit, and preventing unauthorized access to
the SD-WAN environment.
Increasing bandwidth allocation is related to network performance rather than security. Security controls, such
as encryption and access controls, are more relevant for securing SD-WAN.
Multi-Factor Authentication (MFA) is a valuable security measure. MFA is typically associated with user
authentication rather than securing the underlying infrastructure. While MFA may be part of an overall security
strategy, it is not as directly applicable to securing SD-WAN as the controls designed specifically for this
technology.

8
Q

Your organization has decided to implement an encryption algorithm to protect data. One IT staff member
suggests that the organization use IDEA. Which strength encryption key is used in this encryption algorithm?
256-bit
64-bit
128-bit
56-bit

A

128
International Data Encryption Algorithm (IDEA) uses a 128-bit encryption key to encrypt 64-bit blocks of data.
Data Encryption Standard (DES) uses a 56-bit key to encrypt 64-bit blocks of data. Some private key
encryption standards support 256-bit encryption keys.

9
Q

Which of the following is based on impersonating an executive in an organization, with the intent of convincing
an employee to do something they shouldn’t?
Brand impersonation
Typo-squatting
Business email compromise
Misinformation

A

BEC
Business email compromise is an attack that exploits the name and/or position of a high-ranking executive
within the organization. The attacker will impersonate the executive in an email to the victim, typically an
employee in the organization, asking them to perform tasks. One of the most common examples asks an
employee to purchase dozens of gift cards.
Misinformation and disinformation are types of influence campaigns designed to swing public opinion in a
certain direction. Misinformation is the spread of incorrect information, usually because the initial facts were
incorrect or misunderstood, without the intent to deceive. Disinformation is crafting and spreading deliberately
inaccurate or false information with the intent to deceive.
Brand impersonation, also known as brand spoofing, is a type of phishing attack. Attackers will use a
legitimate company’s assets, such as logos, banners, and images, to make it appear as though the email is
coming from the legitimate company. The attackers may also create a fake website that mimics the legitimate
website. As an example, the attacker creates a fake website that looks like a legitimate banking website. The
attacker then sends an email to users asking them to log in to their account by clicking on the link to the
fraudulent website. When the user enters their login credentials, the attacker can steal their information and
use it for fraudulent purposes.
A typo-squatting attack, also known as URL hijacking, relies on mistakes made by users when they input Web
addresses. Another type of URL hijacking involves replacing the source behind a link in a search engine index
and redirecting to a false URL. The attacker hopes that the recipient opens the email, recognizes the company
brand, and follows the instructions. The goal is to get a payment from the recipient, capture logon credentials
or other sensitive information like account information, or even to click a button that installs an executable
containing malware.

10
Q

Which of the following is an independent third party which provides validation services to assure that a digital
certificate is genuine?
Root of trust
Certificate authority
OCSP
Certificate signing request

A

Certificate authorities (CA) are independent third parties who provide validation services to assure that a digital
certificate is genuine. Certificate authorities can also create and manage certificates. Some of the major CA
organizations include Amazon Web Services, GoDaddy, and GlobalSign.
The certificate signing request (CSR) is a step in the certificate generation process. Once the CA has validated
your identity, the next step is for you to provide them with your public key through a certificate signing request.
The root of trust, as it pertains to certificates, assumes that if the CA is trusted, all certificates issued by the CA
are also trusted.
Online Certificate Status Protocol (OCSP) is a real-time protocol for checking certificate status. OCSP is replacing
the Certificate Revocation List (CRL), which can take 24-48 hours to propagate.

11
Q

You are a cyber security consultant in your company. You are educating developers regarding the use of
webhooks when developing applications.
Which of the following scenarios would not be a suitable use for webhooks?
Notifying the customer support team when customers raise a payment dispute
Automatically forwarding customer payments from an e-commerce platform to the accounting department
Deleting or updating data on other systems or databases
Sending an email to a developer to request a fix for a non-urgent issue

A

Webhooks would not be suitable for deleting or updating data on other systems or databases. An API is the
interface of the application that permits other programs or applications to request, input, delete, or update data
in the application. A webhook uses an HTTP POST message to communicate from one application's API to
another application's API. The communication is triggered in response to a user-defined event that occurs in
the webhook's application.

12
Q

Which of the following options could be affected during the course of the change management process and
should be considered in the impact analysis? (Choose as many as apply.)
Allow lists/deny lists
Dependencies
Service restart
Restricted activities
Stakeholder interests

A

Allow lists/deny lists, restricted activities, dependencies, and service restarts are all technical implications of
the change management process. A technical implication means that changes made to one portion of a
network, system, policy, or procedure could unintentionally cascade to other parts of the organization’s
security footprint if all elements are not considered. For that reason, the proposed changes should follow the
change management system. An impact analysis should identify all technical implications of a proposed
change.

13
Q

You are a cybersecurity advisor for your organization. In a recent audit conducted by an external party, it was
found that your organization lacks a process to track and manage assets and their relation to one another.
To remediate the finding, you have been asked to suggest a solution. What should you suggest?
Implement a change management process
Implement a release management process.
Maintain an Excel file for all the IT assets and resources.
Implement a configuration management process

A

You should implement a configuration management process to remediate the finding. Configuration
management is an IT Service Management (ITSM) process used to track and manage assets and to maintain
the relationship between the IT assets and resources. Configuration management identifies and tracks
configuration items (CIs) and documents their capabilities and dependencies on other assets. There are many
commercial software tools available in the market to implement configuration management. These tools are
based on the Information Technology Infrastructure Library (ITIL) framework, which is considered the industry
standard for ITSM processes.
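The CI-and-dependency tracking described above can be sketched as a small graph (the CI names are invented for illustration; real CMDB tools model far richer attributes):

```python
# Toy CMDB sketch: configuration items (CIs) and their dependencies.
cis = {
    "web-app":    {"depends_on": ["app-server"]},
    "app-server": {"depends_on": ["database", "vm-host"]},
    "database":   {"depends_on": ["vm-host"]},
    "vm-host":    {"depends_on": []},
}

def impacted_by(ci: str) -> set:
    """CIs that directly or indirectly depend on `ci` - i.e., what a
    change to this asset could affect."""
    impacted = set()
    for name, item in cis.items():
        if ci in item["depends_on"]:
            impacted.add(name)
            impacted |= impacted_by(name)
    return impacted
```

This is exactly the relationship a static Excel inventory cannot maintain: the graph answers "what breaks if this asset changes?", which is the capability the audit finding called for.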
Implementing the change management process will not remediate the finding in the given scenario. Change
management is an ITSM process and ensures all changes to the IT environment are handled in a controlled
manner. Any change is documented and tracked through the change management process. All documented
changes are reviewed and approved by the change control board.
Maintaining an Excel file for all the IT assets and resources will not remediate the finding. Maintaining the
Excel file will ensure you are keeping the manual inventory of all the IT assets and resources, but a static
Excel file will not track configuration items (CIs) and document their capabilities and dependencies on other
assets.
Implementing a release management process will not remediate the finding in the given scenario. Release
management is an ITSM process for managing, planning, scheduling, and controlling the software developed
through different stages.

14
Q
A

The agreement names and their characteristics should be matched as follows:
Business partners agreement (BPA) – Defines the general terms, pricing, deliverables, and responsibilities
for future transactions or projects between a vendor and a client
Non-disclosure agreement (NDA) – Establishes confidentiality obligations between parties, preventing
them from disclosing confidential information shared during the course of a business relationship.
Master service agreement (MSA) – Outlines the terms and conditions for the provision of services between
a vendor and a client, including scope of work, responsibilities, and service levels.
Memorandum of understanding (MOU) – Formalizes the mutual understanding and intentions regarding a
specific project, initiative, or partnership between two or more parties.
Service level agreement (SLA) – Specifies the level of service expected and the metrics by which
performance will be measured between a service provider and a client.
Memorandum of agreement (MOA) – Records the agreement on key terms, objectives, roles, and
responsibilities between parties, serving as a preliminary step in negotiations or partnerships.
Work order (WO) – Authorizes a vendor to perform specific work or services for a client, detailing the
scope of work, timelines, costs, and terms.
Statement of work (SOW) – Defines the scope, objectives, deliverables, and requirements for a project or
engagement between a client and a vendor.

15
Q

Audits are regarded as a tool for current-state risk assessments mainly because:

They include listings of controls and the respective control owners.
They perform rigorous testing of current controls in place and rely strongly on evidence provided by process owners.
They provide recommendations for process improvements.
They identify IT and business risk scenarios and establish suitable risk responses for each.

A

Audits can be used by the risk practitioner as a tool to assess the current state of risks because they perform
rigorous testing of controls and rely strongly on evidence provided by the process owners. Audits provide
information on the design and operating effectiveness of the controls in place.
The controls register, not the audit process, includes a listing of controls that are in place in an organization
and the respective control owners.
Audits may, in some cases, provide recommendations for process improvements. However, this is not
mandatory in audits, hence it is not the main reason why they are regarded as a current-state risk assessment
tool.
Identification of risk scenarios and establishment of risk responses are management’s responsibility and are
independent of audits.

16
Q

Your company decides to implement a RAID-5 array on several file servers. Which feature is provided by this
deployment?
High availability
Distributed allocation
Scalability
Elasticity

A

A RAID-5 array provides high availability. Redundant Array of Independent Disks (RAID) combines multiple
hard drives for redundancy, performance, and fault tolerance. There are several levels of RAID varying in
configuration based on need.
RAID 5 includes 3 to 32 drives. A portion of each drive is reserved for parity information, which is distributed
across the array and stores the information needed to rebuild a failed drive. In the event of a drive failure, the
missing data is reconstructed from the parity information on the remaining drives, while the system remains operational.
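The parity mechanism can be illustrated with XOR: parity is the XOR of the data blocks, and a failed drive's block is rebuilt by XOR-ing the survivors with the parity (a toy sketch, not a real RAID implementation):

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three hypothetical data blocks striped across three drives
drive1, drive2, drive3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(drive1, drive2, drive3)  # parity written across the array

# Drive 2 fails: rebuild its contents from the survivors plus parity.
rebuilt = xor_blocks(drive1, drive3, parity)
```

Because XOR is its own inverse, any single missing block can be recovered from the others, which is why the array stays operational through one drive failure.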
RAID 0 combines the drives to appear as a single drive. This is a great performance feature, but if one drive
fails, all data is lost.
RAID 1 is mirroring, writing to two drives simultaneously. If drive 1 fails, drive 2 keeps writing.
Elasticity is a cloud computing feature that allows the provider to add or delete (scale) resources as they are
needed. If scaling can be accomplished easily, the system has high elasticity. Scalability is the ability of a
system to grow (add resources) or shrink (remove resources). Scalability is a major factor when choosing a
system or provider.
Distributive allocation is also known as load balancing. With distributive allocation, excessive traffic and file
requests on one system can be diverted to other systems that are not as busy.
Other strategies include redundancy, fault tolerance, and high availability.
Redundancy occurs when you have systems in place ready to come online when a system fails.
Fault tolerance allows a system to remain online if a component fails. Additional NICs, multiple power supplies,
extra cooling fans, and RAID storage systems are examples of fault tolerance. High availability is the
incorporation of multiple resiliency mechanisms to minimize the amount of system down time. The standard for
high availability is to have the system up 99.999% of the time. That equates to a little over 5 minutes of down
time per year.
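The "a little over 5 minutes" figure follows directly from the arithmetic; a quick sketch for the common "nines" availability targets:

```python
# Back-of-the-envelope downtime budgets for common availability targets.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

for label, availability in [("three nines (99.9%)", 0.999),
                            ("four nines (99.99%)", 0.9999),
                            ("five nines (99.999%)", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {downtime:.2f} minutes of downtime per year")
# five nines works out to roughly 5.26 minutes per year
```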

17
Q

Which aspect of effective security governance defines the framework for overseeing and modifying the rules
and obligations concerning the management of technology assets?
Roles and responsibilities for systems and data
Monitoring and revision
Procedures
Types of governance structures

A

Roles and responsibilities for systems and data involve defining and assigning tasks, duties, and obligations
for managing and protecting technology assets. This includes roles such as system administrators, data
custodians, security officers, and compliance officers, each responsible for specific aspects of security
governance. Clarifying roles and responsibilities helps ensure accountability, coordination, and effective
execution of security tasks and activities.
Monitoring and revision refer to the ongoing process of tracking, assessing, and updating security measures
and policies. It involves regularly reviewing security controls, policies, and procedures to ensure they remain
effective and aligned with changing threats, technologies, and regulatory requirements. Monitoring and
revision also include analyzing security incidents, vulnerabilities, and compliance status to identify areas for
improvement and implementing necessary changes.
Types of governance structures encompass various models and frameworks for organizing and managing
security governance. This includes centralized, decentralized, and hybrid governance structures, as well as
frameworks such as COBIT (Control Objectives for Information and Related Technologies), ITIL (Information
Technology Infrastructure Library), and ISO/IEC 27001. These structures define how security responsibilities
are distributed, decision-making processes, and accountability mechanisms to ensure effective security
governance.
Procedures refer to documented instructions and steps for carrying out security-related tasks, processes, and
activities. This includes procedures for incident response, access control, change management, risk
assessment, and security awareness training. Procedures provide a structured approach for implementing
security controls, ensuring consistency, repeatability, and compliance with security policies and standards.

18
Q

As part of the incident response team, you have been called in to help with an attack on your company’s web
server. You are currently working to identify the root cause of the attack. During which step of incident
response does root cause analysis occur?
Containment
Lessons Learned
Identification
Recovery
Eradication
Preparation

A

You should perform root cause analysis during the Lessons Learned step (also referred to as review and
close). This is the final step in incident response.
There are six steps in incident response:
1. Preparation – Ensure that the organization is ready for an incident by documenting and adopting formal
incident response procedures.
2. Identification – Analyze events to identify an incident or data breach. If the first responder is not the person
responsible for detecting the incident, the person who detects the incident should notify the first responder.
This step is also often referred to as detection.
3. Containment – Stop the incident as it occurs and preserve all evidence. Notify personnel of the incident.
Escalate the incident if necessary. Containing the incident involves isolating the system or device by either
quarantine or device removal. This step also involves ensuring that data loss is minimized by using the
appropriate data and loss control procedures.
4. Eradication – Fix the system or device that is affected by the incident. Formal recovery/reconstitution
procedures should be documented and followed during this step of incident response. This step is also
referred to as remediation.
5. Recovery – Ensure that the system or device is repaired. Return the system or device to production. This
step is also referred to as resolution.
6. Lessons Learned – Perform a root cause analysis, and document any lessons learned. Report the incident
resolution to the appropriate personnel. This step may also be referred to as review and close.
During the preparation step of incident response, you may identify incidents that you can prevent or mitigate.
Taking the appropriate prevention or mitigation steps is vital to ensure that your organization will not waste
valuable time and resources on the incident later.
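The six steps and their common aliases listed above can be captured in a small lookup table (illustrative only; the names follow the list in this answer):

```python
# The six incident response steps in order, paired with the alias
# each step is also known by (None where no alias is given).
INCIDENT_RESPONSE_STEPS = [
    ("Preparation", None),
    ("Identification", "detection"),
    ("Containment", None),
    ("Eradication", "remediation"),
    ("Recovery", "resolution"),
    ("Lessons Learned", "review and close"),
]

# Root cause analysis happens in the final step.
final_step, alias = INCIDENT_RESPONSE_STEPS[-1]
print(final_step, "-", alias)  # Lessons Learned - review and close
```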

19
Q

Your company needs to enhance email security to prevent spoofing. What should you implement?
Gateway filter
SPF
DNS filtering
DKIM
DMARC

A

Domain-based Message Authentication, Reporting, and Conformance (DMARC) is the correct solution to
prevent email spoofing and enhance email security. DMARC is an email authentication protocol that helps
prevent email spoofing and phishing attacks by allowing senders to specify policies for email authentication
and enforcement. DMARC enables organizations to specify how they want email servers to handle messages
that fail authentication checks, thereby providing an additional layer of protection against fraudulent emails.
Domain Name System (DNS) filtering involves blocking access to malicious websites and filtering out
malicious content by inspecting DNS queries and responses. While DNS filtering is an effective security
measure for blocking access to known malicious domains and preventing users from accessing harmful
content, it does not directly address email spoofing.
Sender Policy Framework (SPF) is an email authentication mechanism that allows domain owners to specify
which mail servers are authorized to send emails on behalf of their domain. By publishing SPF records in
DNS, organizations can prevent email spoofing and unauthorized use of their domain in phishing attacks,
thereby enhancing email security.
DomainKeys Identified Mail (DKIM) is an email authentication technique that allows senders to digitally sign
outgoing messages using cryptographic keys, enabling recipients to verify the message’s authenticity and
integrity. By signing outgoing emails with DKIM signatures, organizations can ensure that messages have not
been tampered with during transit and protect against email spoofing and phishing attacks.
Gateway filters inspect inbound and outbound email traffic at the network gateway to detect and block
malicious content, including phishing emails. While gateway filters are a crucial component of email security,
they do not specifically address email spoofing.
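As an illustration, a published DMARC policy is just a DNS TXT record of semicolon-separated key=value pairs. This hypothetical sketch parses one such record; the example domain and address are made up, and real-world validation (per RFC 7489) involves more than this:

```python
# Parse a DMARC-style DNS TXT record into a dict of its tags.
def parse_dmarc(record):
    return dict(kv.strip().split("=", 1)
                for kv in record.split(";") if "=" in kv)

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com; pct=100"
policy = parse_dmarc(record)

# The "p" tag tells receiving servers how to handle mail that fails
# SPF/DKIM alignment checks: none, quarantine, or reject.
print(policy["p"])  # quarantine
```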

20
Q

Which of the following architectural considerations would be an issue that relates to backups?
Ease of recovery
Inability to patch
Compute
Power

A

Ease of recovery is an important design consideration that relates to backups. You may have built an
architecture that has rock-solid and reliable backups. However, what does it take to recover the data from the
backups? If the procedure is too cumbersome, you might as well not even have a backup.
Inability to patch is a consideration, particularly when a server needs high availability. You might have to take a
server offline to complete patch installation. If it’s a critical server that cannot go offline, you might not be able
to patch. There is also a concern with embedded systems, which vendors often do not patch. Even worse,
when there is a need for a patch for an embedded system, the vendors rarely get one out in a timely manner.
Power is a consideration. Power costs must be included in the design. Virtualized and cloud servers consume
less power than a physical server. A robust physical server with multiple virtual machines might allow you to
maximize computing power with little to no increase in power requirements. If you are considering embedded
devices in your design, keep in mind that these devices often have batteries that need to be replaced. They
may also draw power from a host device.
Compute (or computational) power is also a consideration. In the initial architectural design, you need to
ensure that the number of cores you specify will serve your needs. Adding additional cores to a physical server
is rather difficult, so you may find yourself using cloud services.

21
Q

What is a physical barrier that acts as the first line of defense against an intruder?
a bollard
a lock
a fence
an access control vestibule
a turnstile

A

Fencing acts as the first line of defense against casual trespassers and potential intruders, but fencing should
be complemented with other physical security controls, such as guards and locks, to maintain the security of
the facility. A fence height of 3 to 4 feet protects against casual trespassers, while a height of 6 to 7 feet is
considered ideal for preventing intruders from climbing over the fence. For critical areas, the fence should be
at least 8 feet high and topped with three strands of barbed wire. In addition to being a barrier to trespassers,
a fence can also control crowds.

22
Q

Which is the best way to ensure risk levels remain within acceptable limits of the organization’s risk appetite?
Business impact analysis
Vulnerability assessments
Continuous monitoring
Threat modeling

A

Risk management is an ongoing, cyclical process that recognizes the dynamic nature of risk and the need for
continuous monitoring and assessment. It is the best way to ensure risk levels are within acceptable limits of
the organization’s risk appetite.
A business impact analysis is best used to quantify likelihood and impact of risk scenarios, not to assess risk
levels as compared to risk appetite.
Vulnerability assessments are designed to identify vulnerabilities, without regard to risk appetite.
Threat modeling is used to examine the nature of threats and potential threat scenarios, without regard to risk
appetite.
CompTIA lists four types of risk assessment in the Security+ objectives: ad hoc, recurring, one-time, and
continuous.

23
Q

Which two options are threat vectors used against vulnerable software? (Choose two.)
Client-based
Unsupported systems and applications
Default credentials
Agentless

A

Client-based attacks and agentless attacks are used against vulnerable software.
Client-based attacks exploit vulnerabilities within software running on a computer or mobile device. An
example could be a vulnerability within a web browser that allows an attacker to install malware on the
computer.
Agentless attacks use web applications and services to acquire information from a computer or mobile device.
The acquisition can occur without the need to install any software on the device.
Unsupported systems and applications are other examples of common threat vectors. Unsupported items may
no longer be receiving security updates and patches. As an example, a business may still be using an
outdated version of Windows such as Windows 7. Windows 7 is no longer supported by Microsoft, so
attackers can exploit known vulnerabilities in the operating system to gain access to the user’s computer.
Other examples of unsupported systems and applications-based threat vectors include using outdated web
browsers, plugins, and software that are no longer supported by their respective vendors.
Default credentials are a common threat vector, one that primarily targets hardware devices like routers and
wireless access points. Someone will configure a device, such as a new router, and forget to change the
default credentials used for setup. These defaults rarely vary within a brand, and lists of default credentials
for many devices are published online, such as at https://www.softwaretestinghelp.com/default-router-username-and-password-list/.
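As a sketch, auditing devices for unchanged defaults is a simple set lookup against a known-defaults list. The DEFAULTS entries below are illustrative sample data, not taken from any real vendor list:

```python
# Hypothetical sample of vendor default credential pairs.
DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
}

def uses_default_credentials(username, password):
    """Return True if the username/password pair matches a known default."""
    return (username, password) in DEFAULTS

print(uses_default_credentials("admin", "admin"))    # True - change these!
print(uses_default_credentials("admin", "S3cure!"))  # False
```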

24
Q
A