Pocket Prep 14 Flashcards

1
Q

An engineer has been asked by her supervisor to determine how fast each system must be back up and running after a disaster has occurred to meet BCDR objectives. What has this engineer been asked to determine?

A. Recovery Service Level (RSL)
B. Recovery Time Objective (RTO)
C. Maximum Tolerable Outage (MTO)
D. Maximum Tolerable Downtime (MTD)

A

D. Maximum Tolerable Downtime (MTD)

Explanation:
The MTD measures how quickly each system must be brought back up and running after a disaster occurs to meet Business Continuity and Disaster Recovery (BCDR) objectives. It effectively defines the maximum time a system can be non-operational.

The RTO is the amount of time allowed for taking the actions required to bring a system back to an operational state.

RSL is the level of functionality the recovered system provides. In BC/DR planning, it is not normal for a system to recover to 100% functionality after a failure; the plan is to fail over to another site (physical or virtual). When that happens, the question that must be addressed is what level of functionality must be present for the business to tolerate the situation.

If the functionality is less than 100%, that will only be tolerable for a certain amount of time. That time window is defined as the MTO. By the end of the MTO, the system needs to be returned to normal status.
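The relationship between these metrics can be sketched as a simple feasibility check (the system names and hour values below are hypothetical):

```python
# Illustrative check: each system's planned recovery time (RTO) must fit
# within the maximum tolerable downtime (MTD). All values are hypothetical.
def recovery_plan_ok(rto_hours: float, mtd_hours: float) -> bool:
    """A recovery plan is viable only if the RTO does not exceed the MTD."""
    return rto_hours <= mtd_hours

systems = {
    "payroll":   {"rto_hours": 4,  "mtd_hours": 8},
    "web_store": {"rto_hours": 12, "mtd_hours": 6},  # RTO exceeds MTD: plan fails
}

for name, t in systems.items():
    ok = recovery_plan_ok(t["rto_hours"], t["mtd_hours"])
    print(f"{name}: {'OK' if ok else 'RTO exceeds MTD'}")
```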

2
Q

An organization’s cloud infrastructure is scattered over multiple organizations’ data centers. Which of the following is the MOST likely cloud model in use?

A. Private Cloud
B. Hybrid Cloud
C. Public Cloud
D. Community Cloud

A

B. Hybrid Cloud

Explanation:
The physical environment where cloud resources are hosted depends on the cloud model in use:

Public Cloud: Public cloud infrastructure will be hosted by the CSP within their own data centers.
Private Cloud: Private clouds are usually hosted by an organization within its own data center. However, third-party CSPs can also offer virtual private cloud (VPC) services.
Community Cloud: In a community cloud, one member of the community hosts the cloud infrastructure in their data center. Third-party CSPs can also host community clouds in an isolated part of their environment.

Hybrid and multi-cloud environments will likely have infrastructure hosted by different organizations. A hybrid cloud combines public and private cloud environments, and a multi-cloud infrastructure uses multiple cloud providers’ services.

3
Q

Which of the following is MOST closely related to an organization’s efforts to ensure features like confidentiality and non-repudiation?

A. Cryptographic Key Establishment and Management
B. Security Function Isolation
C. Separation of System and User Functionality
D. Boundary Protection

A

A. Cryptographic Key Establishment and Management

Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations, defines 51 security controls in the System and Communications Protection family. Among these are:

Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
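As a small illustration of the "securely generated" requirement, here is a sketch using Python's `secrets` module (the function name and the key-size check are illustrative, not from any standard):

```python
import secrets

# Hypothetical sketch of secure key generation: key material must come from
# a cryptographically secure source, never random.random() or a hardcoded value.
def generate_aes_key(bits: int = 256) -> bytes:
    """Generate a symmetric key using the operating system's CSPRNG."""
    if bits not in (128, 192, 256):
        raise ValueError("AES keys must be 128, 192, or 256 bits")
    return secrets.token_bytes(bits // 8)

key = generate_aes_key()
print(len(key))  # 32 (bytes, i.e., AES-256)
```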

4
Q

Managing the automation of CI/CD pipelines to support Agile and DevOps practices falls under which of the following?

A. Release Management
B. Deployment Management
C. Configuration Management
D. Change Management

A

B. Deployment Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider has fewer resources than the total its users could consume, relying on the fact that they will not all use all of their resources at once. Often, capacity guarantees are mandated in SLAs.
5
Q

In which cloud service model does the CSP’s responsibility extend to securing operating systems, database management systems (DBMSs), and similar components made available to the cloud customer?

A. IaaS
B. SaaS
C. PaaS
D. All service models

A

C. PaaS

Explanation:
Compute resources include the components that offer memory, CPU, disk, networking, and other services to the customer. In all cases, the cloud service provider (CSP) is responsible for the physical infrastructure providing these services.

However, at the software level, responsibility depends on the cloud service model in use, including:

Infrastructure as a Service (IaaS): In an IaaS environment, the CSP provides and manages the physical components, virtualization software, and networking infrastructure. The customer is responsible for configuring and securing their VMs and the software installed in them.
Platform as a Service (PaaS): In a PaaS environment, the CSP’s responsibility extends to offering and securing the operating systems, database management systems (DBMSs), and other services made available to a customer’s applications. The customer is responsible for properly configuring and using these services and the security of any software that they install or use.
Software as a Service (SaaS): In a SaaS environment, the CSP is responsible for everything except the custom settings made available to the cloud customer. For example, if a cloud storage drive can be set to be publicly accessible, that is the customer’s responsibility, not the CSP’s.
6
Q

Gaby has been working for a large pharmaceutical company for many years now. She is transitioning into the role of cloud administrator. In her role, she will set up and configure many of the elements needed for the Research and Development (R&D) department. She has just finished building the Virtual Machines (VMs) that some of the researchers need as well as the applications and the appropriate Identity and Access Management (IAM) needs.

What has she been configuring?

A. Network
B. Database
C. Compute
D. Storage

A

C. Compute

Explanation:
The fundamental pieces that must be built within the cloud for it to work are compute, storage, and network.

Compute refers to the Virtual Machines (VMs), containers, applications, and anything needed to be able to process the user’s data. Software as a Service fits into this category.

Storage is where the data sits. When data is at rest, it is in storage. So, this includes block storage, object storage, file storage, databases, big data, and so on.

The answer option "database" falls under the storage category.

Network is the ability to move the data (by data we mean data, voice, and video). This includes virtual routers, switches, IP addresses, Virtual Private Networks (VPN), load balancers, Domain Name Service (DNS), and so on.

7
Q

Vaeda is working with her company as the lead cloud information security specialist. In her job, she has been working with the developers as they are building new capabilities into their software portfolio. They are creating software that will be able to analyze large amounts of data for their customers. The customer will have to help train the systems’ algorithms through reinforcement.

What are they building?

A. Machine learning with Reinforcement Learning
B. General Artificial Intelligence
C. Natural Language Processing
D. Narrow Artificial Intelligence

A

A. Machine learning with Reinforcement Learning

Explanation:
Machine learning involves training algorithms that are able to learn from data and make predictions. Reinforcement learning is one of its three subfields; the other two are supervised learning and unsupervised learning.

Narrow Artificial Intelligence (AI) has the ability to perform specific tasks within a limited domain. Siri and Alexa are good examples.

General AI represents machines that possess the ability to understand, learn, and apply knowledge across various domains similar to human intelligence.

Natural Language Processing (NLP) involves the interaction between the computer and human language. This allows machines to understand, interpret language, and generate human language responses. Chatbots and translation tools are examples.

8
Q

An information security professional working with the Development/Operations (DevOps) teams is helping them identify the threat modeling approach that most aligns with their needs. They are looking for a threat modeling technique that prioritizes the vulnerabilities in the software they are currently building. Which of the following is an OWASP-recommended model that can be used to perform this task?

A. Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of privilege (STRIDE)
B. Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)
C. Process for Attack Simulation and Threat Analysis (PASTA)
D. Architecture, Threats, Attack Surfaces and Mitigations (ATASM)

A

B. Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)

Explanation:
DREAD is a threat modeling technique that helps prioritize vulnerabilities based on the five criteria that make up its name. Each of the five criteria is scored, usually on a 1-10 scale, and the sum is divided by 5 to produce an average score used for prioritization.

STRIDE is a threat modeling framework used to identify and categorize potential threats or attacks. It provides a set of six categories that make up its name.

ATASM is a threat modeling approach that breaks a system into its logical and functional components so that every surface that could potentially be attacked is identified.

PASTA is also a threat modeling approach that takes business objectives into consideration to align them with the technical requirements.
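The DREAD calculation described above can be sketched in a few lines (the ratings below are hypothetical examples, not scores for any real vulnerability):

```python
# Sketch of DREAD scoring: each of the five criteria is rated (commonly 1-10)
# and the mean of the five ratings is used to rank the threat.
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    ratings = (damage, reproducibility, exploitability,
               affected_users, discoverability)
    if not all(1 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD rating must be between 1 and 10")
    return sum(ratings) / 5

# Hypothetical ratings for one vulnerability:
print(dread_score(8, 6, 7, 9, 5))  # 7.0
```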

9
Q

Giada is working with developers on a new application that will be used in a Platform as a Service (PaaS) deployment. The software will be handling and processing credit cards when their customers purchase access to their Software as a Service (SaaS). The SaaS will allow the customers a way to build and test software code for their own applications. It is imperative that the software protects the customers’ source code.

What can the developers add to their software to ensure the integrity of the credit card number values so that customers can be charged for continued use of the SaaS?

A. Tokenization
B. Obfuscation
C. Hashing
D. Data Loss Prevention (DLP)

A

C. Hashing

Explanation:
Hashing feeds data into a one-way algorithm that generates a unique value called a hash or message digest. Generating another hash of the same file in the future will produce the exact same value only if the data has not been modified; this ensures data integrity. If the data has been altered, the new hash will differ from the original.
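A minimal sketch of this integrity property, using SHA-256 from Python's standard library (the sample values are made up; note that in practice raw card numbers have low entropy, so a keyed hash such as HMAC would be used to resist brute-forcing):

```python
import hashlib

# Integrity via hashing: identical input always yields an identical digest;
# any modification to the data changes the digest.
original = b"4111 1111 1111 1111"
digest = hashlib.sha256(original).hexdigest()

# Re-hashing unmodified data reproduces the digest exactly:
assert hashlib.sha256(b"4111 1111 1111 1111").hexdigest() == digest

# A single-character change produces a completely different digest:
tampered = hashlib.sha256(b"4111 1111 1111 1112").hexdigest()
assert tampered != digest
print("integrity verified")
```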

Tokenization could be used to store the credit card numbers so that there is no known or stored card number, but this does not satisfy the needs of the question. The question asks for a way to ensure the value of the card number. A token number would also need to be verified as the correct number. Hashing can be used there as well.

DLP can be used to make sure that the card numbers are not sent insecurely in an email, for example. Again, this is not protecting the values.

Obfuscation means “to confuse.” Encryption is a type of obfuscation. However, it is not as specific to protecting the values as hashing is, so hashing is the better answer.

10
Q

An attacker sent commands through an application’s input and data fields. By doing this, the attacker was able to get the application to execute the code they sent as part of its normal processing. The attacker was able to use this technique to get the application to expose sensitive data that they should not have access to.

What type of attack was used?

A. Cross-site scripting
B. Identification & authentication failures
C. Denial of service
D. Injection

A

D. Injection

Explanation:
An injection attack occurs when an attacker sends (injects) malicious code or commands to an application’s input or data fields. The goal of the attacker is to get the application to execute the code as part of its normal processing. The best way to prevent injection attacks is by ensuring that all data and input fields include proper input validation and sanitization.
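The standard defense, a parameterized query that keeps user input out of the SQL string, can be sketched with SQLite from Python's standard library (the table and data are hypothetical):

```python
import sqlite3

# Sketch of input handling that defeats SQL injection: the user-supplied
# value is passed as a bound parameter, never concatenated into the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "alice' OR '1'='1"  # classic injection payload

# A vulnerable string-concatenation query would return every row.
# The parameterized query treats the payload as a literal value instead:
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # [] -- the payload matches no user
```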

Identification & authentication failures is in position 7 on the OWASP Top 10 2021. Formerly called broken authentication, it was in position 2 in 2017. This category covers inadequate protection of the authentication process; when authentication is not implemented properly, an attacker can gain access to existing accounts.

Cross-Site Scripting (XSS) has now (2021) been combined with injection from the 2017 OWASP top 10 list. XSS allows attackers to execute scripts within a user’s browser. Input validation can be used to reduce the chance of this happening.

Denial of service is an attack that prevents legitimate users from being able to do their jobs.

11
Q

Which of the following is NOT a threat for which the CSP bears some responsibility?

A. Denial of Service
B. Improper Disposal
C. Theft or Media Loss
D. Unauthorized Provisioning

A

D. Unauthorized Provisioning

Explanation:
Data storage in the cloud faces various potential threats, including:

Unauthorized Access: Cloud customers should implement access controls to prevent unauthorized users from accessing data. Also, a cloud service provider (CSP) should implement controls to prevent data leakage in multitenant environments.
Unauthorized Provisioning: The ease of setting up cloud data storage may lead to shadow IT, where cloud resources are provisioned outside of the oversight of the IT department. This can incur additional costs to the organization and creates security and compliance challenges since the security team can’t secure data that they don’t know exists.
Regulatory Non-Compliance: Various regulations mandate security controls and other requirements for certain types of data. A failure to comply with these requirements — by failing to protect data or allowing it to flow outside of jurisdictional boundaries — could result in fines, legal action, or a suspension of the business’s ability to operate.
Jurisdictional Issues: Different jurisdictions have different laws and regulations regarding data security, usage, and transfer. Many CSPs have locations around the world, which can violate these laws if data is improperly protected or stored in an unauthorized location.
Denial of Service: Cloud environments are publicly accessible and largely accessible via the Internet. This creates the risk of Denial of Service attacks if the CSP does not have adequate protections in place.
Data Corruption or Destruction: Data stored in the cloud can be corrupted or destroyed by accident, malicious intent, or natural disasters.
Theft or Media Loss: CSPs are responsible for the physical security of their data centers. If these security controls fail, an attacker may be able to steal the physical media storing an organization’s data.
Malware: Ransomware and other malware increasingly target cloud environments as well as local storage. Access controls, secure backups, and anti-malware solutions are essential to protecting cloud data against theft or corruption.
Improper Disposal: The CSP is responsible for ensuring that physical media is disposed of correctly at the end of life. Cloud customers can also protect their data by using encryption to make the data stored on a drive unreadable.
12
Q

A cloud provider wants to assure potential cloud customers that their environment is secure. What is one way for the cloud provider to achieve this without needing to provide audit access to every potential customer?

A. Undergo a Service Organization Control (SOC) 2 Type II audit
B. Undergo a Service Organization Control (SOC) 3 audit
C. Undergo a Payment Card Industry Data Security Standard (PCI DSS) audit
D. Undergo a Service Organization Control (SOC) 2 Type I audit

A

A. Undergo a Service Organization Control (SOC) 2 Type II audit

Explanation:

A Service Organization Control 2 (SOC 2) audit reports on various organizational controls related to security, availability, processing integrity, and confidentiality or privacy. A cloud provider may choose to have a SOC 2 audit performed and make the report available to the public. This allows potential customers to have a sense of confidence that the environment is secure without needing to do an audit of their own.

A type II audit is over a span of time, which shows the effectiveness of the controls. This is more informative than a Type I audit, which shows that the design of the controls at a moment in time is good.

A SOC 3 report is a result of a SOC 2 audit. The information in the SOC 2 report may be more than the company wants to release. So there is a SOC 3, which shows that the SOC 2 was performed and that the customer can have confidence in the provider without divulging sensitive information. (The short version is that a SOC 3 is for public release.)

A PCI DSS audit is a good thing to do, but this would be done because of the handling of payment cards of some type. It has a much smaller reach as an audit. A SOC 2 would give the customer much more information and confidence in the provider.

13
Q

The information security manager, Rohan, is working with the network and application teams to determine the best data protection methods to use for a new application that is being developed. Their concerns are that the integrity and confidentiality of the data must be protected.

Of the following, which is the BEST combination of technologies to meet their concerns?

A. A Representational State Transfer (REST) API and the Rivest-Shamir-Adleman algorithm
B. Transport Layer Security (TLS) using Rivest-Shamir-Adleman (RSA) and Message Digest 5
C. A Representational State Transfer (REST) API with the Secure Hash Algorithm (SHA-256)
D. Transport Layer Security (TLS) using Advanced Encryption Standard (AES) and Message Digest 5

A

D. Transport Layer Security (TLS) using Advanced Encryption Standard (AES) and Message Digest 5

Explanation:
Protecting the data in transit with TLS using AES provides for the protection of the confidentiality of the data. AES is a symmetric algorithm, which is a common choice for use with TLS. If MD5 is used, then the integrity of the data can be determined.
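A rough sketch of the chosen combination: a TLS context for confidentiality in transit plus a message digest for an integrity check (MD5 appears because the card names it; SHA-2 family hashes are preferred in new designs):

```python
import hashlib
import ssl

# Confidentiality in transit: a TLS context, which will negotiate a
# symmetric cipher such as AES during the handshake.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

# Integrity: the receiver recomputes the digest after transfer;
# a match means the data was not altered. Payload is hypothetical.
payload = b"quarterly-report.csv contents"
digest = hashlib.md5(payload).hexdigest()  # MD5 per the card; SHA-2 preferred today

assert hashlib.md5(payload).hexdigest() == digest
print("digest:", digest)
```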

TLS with RSA does not address confidentiality as clearly as TLS with AES. RSA is an asymmetric algorithm and is not commonly used to encrypt bulk data for confidentiality; it is commonly used to exchange the symmetric key. That makes the option almost correct, but the answer that names AES is better.

REST does not encrypt data. TLS can be added to REST, but the answer does not specify that. So REST with SHA256 does not protect the confidentiality.

REST with RSA does not work either because RSA is not a logical addition. It could be used in TLS to exchange the symmetric key. However, there is nothing in that answer to protect the integrity of the data.

14
Q

The Cloud Service Provider (CSP) will not permit your business to conduct an independent examination of cloud service controls and has indicated that this role must be performed by an independent third party and the results provided to your organization. What type of activity is this?

A. Auditability
B. Resiliency
C. Governance
D. Regulatory

A

A. Auditability

Explanation:
CSPs rarely permit a Cloud Service Customer (CSC) to audit their service controls. Instead, the CSC engages a third party to conduct an independent examination of cloud service controls and to offer an opinion on how well they function relative to their purpose. Service Organization Controls (SOC) 2 assessments are examples of this type of assessment.

An audit could be done to determine if a CSP is in compliance with a regulation, but there is no indication of a regulation or law in the question, so regulatory is not the best answer here.

Governance is the oversight provided by the Board of Directors (BoD) and the Chief Executive Officer (CEO). (The CCSP exam covers corporate governance, security governance, and data governance.)

Resiliency is a concern for businesses today and is addressed by redundancy throughout the network, the cloud, and the processes of the business.

15
Q

Adela has been working with her new company for just a week now. As the information security manager, she has been analysing the Platform as a Service (PaaS) serverless cloud provider they are using. She has found that the security settings available on the current service are insufficient for the data they have to protect. She is aware of another cloud provider that does have a suitable service for them to use. However, the function that the system is running is specifically written for the current provider.

What is the term used to describe this type of scenario?

A. Vendor lock-in
B. Vendor lock-out
C. Provider exit
D. Guest hopping

A

A. Vendor lock-in

Explanation:
Cloud customers should avoid vendor lock-in. Vendor lock-in occurs when an organization is unable to easily move from one cloud provider to another without doing a lot of work. The function in this case would have to be redesigned, and there might be an additional element of having to reenter the data if they moved to another provider.

Provider exit is when a cloud provider decides to shut down one of their offerings, or perhaps they will sell it off to another provider.

Vendor lock-out can occur when a cloud provider files for bankruptcy and the courts take control of its assets. When that happens, the systems may be shut down, leaving the customer locked out.

Guest hopping is an attack where an attacker jumps from one guest machine to another, possibly on another tenant.

16
Q

An organization purchases their accounting program through the cloud. The accounting program is hosted entirely by the cloud provider on cloud hosted servers. The cloud customer is not responsible for maintaining any of the items needed to access the accounting program; they are simply able to access the program from anywhere that they have an internet connection.

What type of cloud service is being described here?

A. Platform as a Service (PaaS)
B. Database as a Service (DBaaS)
C. Infrastructure as a Service (IaaS)
D. Software as a Service (SaaS)

A

D. Software as a Service (SaaS)

Explanation:
Software as a Service (SaaS) is a type of cloud service in which the cloud provider maintains and manages everything on the back-end (including the infrastructure, platform, and server OS), and the cloud customer can simply access the software without needing to do any maintenance on it.

PaaS allows the customer to access a server-based or serverless environment onto which they can load their software.

IaaS allows the customer to bring their own operating systems, including those for routers, switches, servers, firewalls, intrusion detection systems, and so on. This allows a customer to build a virtual data center without having to worry about buying and maintaining the physical equipment.

DBaaS allows the customer to have a database without having to maintain the hardware or the underlying operating system and database software.

17
Q

Biometrics and passwords are part of which stage of IAM?

A. Accountability
B. Authentication
C. Identification
D. Authorization

A

B. Authentication

Explanation:
Identity and Access Management (IAM) services have four main practices, including:

Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
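The four practices above can be illustrated with a toy login flow (all names, the in-memory directory, and the permission strings are hypothetical; real systems use salted password hashing such as bcrypt/argon2 and proper audit infrastructure):

```python
import hashlib
import hmac

# Hypothetical in-memory "directory" and permission store.
USERS = {"gaby": hashlib.sha256(b"correct horse").hexdigest()}
PERMISSIONS = {"gaby": {"read:reports"}}
AUDIT_LOG = []

def login(username, password, resource):
    # Identification: who claims to be acting?
    known = username in USERS
    # Authentication: prove the claimed identity (constant-time compare).
    authed = known and hmac.compare_digest(
        USERS[username], hashlib.sha256(password.encode()).hexdigest())
    # Authorization: is the requested resource within the user's permissions?
    allowed = authed and resource in PERMISSIONS.get(username, set())
    # Accountability: record the attempt either way.
    AUDIT_LOG.append((username, resource, allowed))
    return allowed

print(login("gaby", "correct horse", "read:reports"))  # True
print(login("gaby", "wrong password", "read:reports"))  # False
```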
18
Q

Bridgit is working for a manufacturing corporation that must protect the personal information of their employees and of their customers. She is looking for a document to provide guidance on how they can and should protect that information. Which of the following standards was developed by a joint privacy task force consisting of the American Institute of Certified Public Accountants and the Canadian Institute of Chartered Accountants?

A. Privacy Management Framework (PMF)
B. Sarbanes Oxley (SOX)
C. General Data Protection Regulation (GDPR)
D. ISO/IEC 27018

A

A. Privacy Management Framework (PMF)

Explanation:
The Privacy Management Framework, formerly the GAPP (Generally Accepted Privacy Principles), is a privacy standard developed by the American Institute of Certified Public Accountants and the Canadian Institute of Chartered Accountants. PMF contains the main privacy principles and is focused on managing and preventing threats to privacy.

The GDPR is the European Union’s (EU) privacy regulation, which applies across the member states.

ISO/IEC 27018 is an international standard that provides guidance to cloud providers acting as data processors. A data processor processes data on behalf of the data controller rather than as its employee. The EU GDPR defines processing to include holding or storing data.

SOX is a U.S. regulation that requires publicly traded companies to protect the integrity of their financial statements.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

You are working with the Information Technology (IT) manager to establish a secure storage technology within their Infrastructure as a Service (IaaS) environment that they are building within their new virtual datacenter. They are worried that the data needs to be accessible within a short amount of time so that the users can perform the tasks that they need to do for their jobs. The users regularly access large amounts of data for analysis.

What technology should they use to enable access to the data?

A. Software Defined Network (SDN)
B. Hardware Security Manager (HSM)
C. Fibre Channel
D. Content Distribution Network (CDN)

A

C. Fibre Channel

Explanation:
iSCSI, Fibre Channel, and Fibre Channel over Ethernet (FCoE) are storage area network technologies that create dedicated networks for data storage and retrieval.

SDN is a technology that enables more efficient switch and router-based networks. It is not for storage of data.

CDN is a technology that does enable data access in a more efficient manner, but it does not match the scenario of the question. CDN is useful if there is a piece of data such as a video, movie, etc. that is being accessed by people within a certain area. The content can be temporarily cached on servers closer to the users who need it.

An HSM is used to store cryptographic keys. That is a good addition for securing the data, but the question asks for a storage technology first, which can then be secured. An HSM does not secure the data itself; it secures the keys that are used to protect the data.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

An application uses application-specific access control, and users must authenticate with their own credentials to gain their allowed level of access to the application. A bad actor accessed corporate data after having stolen credentials. According to the STRIDE threat model, what type of threat is this?

A. Spoofing identity
B. Insufficient due diligence
C. Broken authentication
D. Tampering with data

A

A. Spoofing identity

Explanation:
The STRIDE threat model has six threat categories: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, and Elevation of privileges (STRIDE). A bad actor logging in as a user is known as identity spoofing. Ensuring that credentials are protected in transmission and when stored by any system is critical. Using Multi-Factor Authentication (MFA) is also essential to prevent this. If you have any internet-accessible account (your bank, Amazon, etc.), you should enable MFA. The same advice is true in the cloud.

Broken authentication is the entry on the OWASP top 10 list that includes identity spoofing (and more). The question is about STRIDE.

Insufficient due diligence is a cloud problem (and elsewhere) when corporations do not think carefully before putting their systems and data into the cloud and ensuring all the right controls are in place.

Tampering with data could occur once the bad actor is logged in as a user, but the question does not go that far. It is not necessary for someone to log in to tamper with data.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

Tatum has been working with the cloud data architect and the cloud architect to plan the access control model that they will use. They are looking for something that will allow them to grant access based on characteristics such as job titles, department, location, time of day, and so on.

What would you recommend they use?

A. Attribute-Based Access Control (ABAC)
B. Access Control Lists (ACLs)
C. Content Dependent Access Control (CDAC)
D. Role-Based Access Control (RBAC)

A

A. Attribute-Based Access Control (ABAC)

Explanation:
Attribute-Based Access Control (ABAC) is an access control model that grants or denies access to resources based on various attributes associated with the subjects, objects, and environmental conditions. It offers a flexible and dynamic approach to access control, allowing access decisions to be made based on a wide range of attributes rather than relying solely on user roles or permissions. Attributes can include job titles, department, location, time of day, and more.

Role-Based Access Control (RBAC) is an access control model widely used in cloud computing environments to manage and enforce access permissions based on user roles. RBAC assigns roles to users or entities and associates permissions with those roles, allowing for simplified and efficient access management.

Access Control Lists (ACLs) are a mechanism used in computer systems and networks to define and enforce permissions or access rights for users or entities to access resources or perform specific actions. ACLs are typically associated with files, directories, network devices, or other system resources.

Content Dependent Access Control (CDAC) is an access control mechanism that grants or denies access to resources based on the content of the information being accessed. Unlike traditional access control models that rely primarily on user identity or resource attributes, CDAC takes the actual content of the information into account when making access decisions.
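The attribute-driven decision described above can be sketched in code. This is a minimal illustration, not any particular product's policy engine; the attribute names (department, job_title, time_of_day) are assumptions made for the example.

```python
# Minimal ABAC sketch: grant access only when the subject's attributes,
# the resource's attributes, and the environment satisfy the policy.
# All attribute names here are hypothetical, chosen for illustration.
from datetime import time

def abac_decision(subject: dict, resource: dict, environment: dict) -> bool:
    """Allow managers in the resource's owning department during business hours."""
    return (
        subject.get("department") == resource.get("owning_department")
        and subject.get("job_title") in {"Manager", "Director"}
        and time(9, 0) <= environment["time_of_day"] <= time(17, 0)
    )

subject = {"department": "Sales", "job_title": "Manager"}
resource = {"owning_department": "Sales", "type": "report"}
environment = {"time_of_day": time(14, 30)}

print(abac_decision(subject, resource, environment))                       # True
print(abac_decision({**subject, "department": "HR"}, resource, environment))  # False
```

Note how adding a new condition (say, a location check) only extends the policy expression; unlike RBAC, no new roles have to be defined.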

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

A cloud administrator has just implemented a new hypervisor that is completely dependent on the host operating system for all operations. What type of hypervisor has this administrator implemented?

A. Full-service hypervisor
B. Type 2 hypervisor
C. Type 1 hypervisor
D. Bare metal hypervisor

A

B. Type 2 hypervisor

Explanation:
Type 2 hypervisors depend on, and run on top of, the host operating system rather than being tied directly into the hardware the way Type 1 hypervisors are.

Bare metal hypervisors are another name used for Type 1 hypervisors. Full-service hypervisors are not an actual type of hypervisor.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

What networking practice is based on hierarchical, distributed tables, in which a change to the relationship between a domain and a specific IP address is registered at the top of the hierarchical system and filters down to all lower systems?

A. Virtual Private Network (VPN)
B. Dynamic Host Configuration Protocol (DHCP)
C. Software Defined Network (SDN)
D. Domain Name Service (DNS)

A

D. Domain Name Service (DNS)

Explanation:
The Domain Name Service (DNS) is a hierarchical system that translates domain names into IP addresses. When a user wants to communicate with another machine, the user’s machine queries a DNS server to get the correct address.

DHCP is used to assign IP addresses to computers when they join a Local Area Network (LAN).

SDN is a technology that changes how switches know how to forward frames by adding a controller, which makes the decisions for the switches.

A VPN is an encrypted tunnel. The term is used in many places, but it means that the connection across the network is encrypted using TLS, SSH, or IPSec.
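The lookup described above can be observed through the operating system's stub resolver, which queries the DNS hierarchy (or a cache) on the program's behalf. A minimal sketch using only the standard library:

```python
# Ask the OS resolver to translate a hostname into IPv4 addresses.
# The resolver consults its configured DNS server, which either answers
# from cache or walks the hierarchy (root -> TLD -> authoritative).
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IPv4 addresses registered for a hostname."""
    entries = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the IP address string.
    return sorted({entry[4][0] for entry in entries})

print(resolve("localhost"))  # typically ['127.0.0.1']
```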

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

Which of the following features of a SIEM helps with identifying potential cybersecurity incidents?

A. Investigative Monitoring
B. Automated Monitoring
C. Log Centralization
D. Data Integrity

A

B. Automated Monitoring

Explanation:
Security information and event management (SIEM) solutions are useful tools for log analysis. Some of the key features that they provide include:

Log Centralization and Aggregation: Combining logs in a single location makes them more accessible and provides additional context by drawing information from multiple log sources.
Data Integrity: The SIEM is on its own system, making it more difficult for attackers to access and tamper with SIEM log files (which should be write-only).
Normalization: The SIEM can ensure that all data is in a consistent format, converting things like dates that can use multiple formats.
Automated Monitoring or Correlation: SIEMs can analyze the data provided to them to identify anomalies or trends that could be indicative of a cybersecurity incident.
Alerting: Based on their correlation and analysis, SIEMs can alert security personnel of potential security incidents, system failures, and other events of interest.
Investigative Monitoring: SIEMs support active investigations by enabling investigators to query log files or correlate events across multiple sources.
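Automated monitoring/correlation is the feature the question targets. A toy version of such a rule is sketched below; the five-failure threshold and ten-minute window are illustrative values, not taken from any particular SIEM product.

```python
# Toy correlation rule of the kind a SIEM automates: flag any user with
# five or more failed logins inside a ten-minute sliding window.
from collections import defaultdict
from datetime import datetime, timedelta

def failed_login_alerts(events, threshold=5, window=timedelta(minutes=10)):
    """events: time-ordered iterable of (timestamp, user, outcome) tuples."""
    recent = defaultdict(list)   # user -> timestamps of recent failures
    alerts = set()
    for ts, user, outcome in events:
        if outcome != "FAILURE":
            continue
        # Keep only failures still inside the window, then add this one.
        recent[user] = [t for t in recent[user] if ts - t <= window] + [ts]
        if len(recent[user]) >= threshold:
            alerts.add(user)
    return alerts

base = datetime(2024, 1, 1, 12, 0)
events = [(base + timedelta(minutes=i), "mallory", "FAILURE") for i in range(5)]
events += [(base, "alice", "SUCCESS")]
print(failed_login_alerts(sorted(events)))  # {'mallory'}
```

A real SIEM applies many such rules across normalized logs from all sources and raises alerts when they fire.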
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

Padma has deployed a switch technology that is different from what they have been using for a very long time. This new technology removes the decision-making process from the switch and moves it to a controller, leaving the switch with the job of forwarding frames.

What technology has been deployed?

A. internet Small Computer System Interface (iSCSI)
B. Fibre Channel
C. Virtual Local Area Network (VLAN)
D. Software Defined Networking (SDN)

A

D. Software Defined Networking (SDN)

Explanation:
Within a Software Defined Network (SDN), decisions regarding where traffic is filtered or sent to and the actual forwarding of traffic are completely separate from each other.

A Virtual Local Area Network (VLAN) is used to logically segment or extend a local area network beyond physical/geographical limitations. It does not remove decision-making from the switch.

Fibre Channel (FC) and internet Small Computer System Interface (iSCSI) are technologies used in Storage Area Networks (SANs) so that devices can communicate with the connected switch using a protocol more efficient than Ethernet (IEEE 802.3).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

Which of the following is an umbrella agreement that might cover MULTIPLE projects that two companies collaborate on?

A. MSA
B. SLA
C. NDA
D. SOW

A

A. MSA

Explanation:
Two organizations working together may have various agreements and contracts in place to manage their risks. Some of the common types include:

Master Service Agreement (MSA): An MSA is an over-arching contract for all the work performed between the two organizations.
Statement of Work (SOW): Each new project under the MSA is defined using an SOW.
Service Level Agreement (SLA): An SLA defines the conditions of service that the vendor guarantees to the customer. If the vendor fails to meet these terms, they may be forced to pay some penalty.
Non-Disclosure Agreement (NDA): An NDA is designed to protect confidential information that one or both parties share with the other.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

Mia works for a network hardware and software development company. She is currently working on setting up the team that will be testing one of their new products. This particular piece of software is the Operating System (OS) of a network router.

When conducting functional testing, which is NOT an important consideration?

A. Testing must be sufficient to have reasonable assurance there are no bugs
B. Testing must be designed to exercise all requirements
C. Testing must be realistic for all environments
D. Testing must use limited information about the application

A

D. Testing must use limited information about the application

Explanation:

Testing that uses limited information about the application is called grey box testing, and it occurs after functional testing and deployment.

Functional testing is performed on an entire system, and the following are important considerations: testing must be realistic, must exercise all requirements, and must be sufficient to provide reasonable assurance that there are no bugs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

In a serverless Platform as a Service (PaaS) environment where a customer has set up Function as a Service (FaaS) to be able to run a particular function to analyze some sales data, who allocates and maintains the data storage system?

A. Cloud auditor
B. Cloud Service Provider (CSP)
C. Cloud service broker
D. Cloud Customer (CC)

A

B. Cloud Service Provider (CSP)

Explanation:
The customer will not see the server or its operating system on the virtual machine that is running their function. They would not see the cloud storage setup either. Both would be handled by the CSP.

The cloud auditor performs audits on the cloud environment to see if they are compliant with a law, regulation, or industry standard like ISO 27001.

A cloud service broker negotiates the contracts and setup between the customer and the provider.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

Which of the following types of testing verifies that individual functions or modules work as intended?

A. Regression Testing
B. Integration Testing
C. Usability Testing
D. Unit Testing

A

D. Unit Testing

Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:

Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users’ needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven’t introduced bugs or broken functionality.

Non-functional testing tests the quality of the software and verifies that it provides necessary functionality not explicitly listed in requirements. Load and stress testing or verifying that sensitive data is properly secured and encrypted are examples of non-functional testing.
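As a concrete illustration of unit testing, here is a test case for a single hypothetical function (the discount calculator is invented for the example):

```python
# A unit test exercises one component (here, one function) in isolation.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Each test pins down one behavior of apply_discount."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Integration testing would then verify that this function interacts correctly with, say, the checkout module that calls it.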

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

An engineer is adding validation processes to an application that will check that session tokens are being submitted by the valid and original obtainer of the token. What OWASP Top 10 vulnerability is this engineer mitigating by doing so?

A. Vulnerable and outdated components
B. Identification and authentication failures
C. Insecure design
D. Injection

A

B. Identification and authentication failures

Explanation:
The OWASP Top 10 is an up-to-date list of the most critical web application vulnerabilities and risks. Identification and authentication failures, formerly known as broken authentication, refers to the ability for an attacker to hijack a session token and use it to gain unauthorized access to an application. This risk can be mitigated by adding proper validation processes to ensure that session tokens are being submitted by the valid and original obtainer of the token.

Insecure design is exactly what it says. We need to move left and build security into our software. This includes performing threat modeling and using reference architectures.

Vulnerable and outdated components speaks to our current problem of using objects, functions, libraries, APIs, and the like from repositories such as GitHub and GitLab. Much of this code is abandoned or not kept up to date.

Injection includes SQL and command injection. This is when a bad actor types in inappropriate commands from the user’s interface. Input validation would help to minimize this further. It had been in position one on the Top 10 for well over a decade and has finally fallen to position three on the list.
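One way to implement the validation the engineer is adding is to bind each session token to attributes of the client that originally obtained it and re-check that binding on every request. The sketch below uses an HMAC for the binding; the fingerprint recipe and all names are illustrative assumptions, not a prescribed design.

```python
# Bind a session token to the client that obtained it, so a stolen token
# fails validation when replayed from a different client.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)   # kept server-side only

def issue_token(session_id: str, client_fingerprint: str) -> str:
    """Token = session id + MAC over (session id, client fingerprint)."""
    mac = hmac.new(SERVER_KEY, f"{session_id}|{client_fingerprint}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{session_id}.{mac}"

def validate_token(token: str, client_fingerprint: str) -> bool:
    """Recompute the MAC for the presenting client and compare in constant time."""
    session_id, _, mac = token.partition(".")
    expected = hmac.new(SERVER_KEY, f"{session_id}|{client_fingerprint}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

tok = issue_token("sess-42", "10.0.0.5|Mozilla/5.0")
print(validate_token(tok, "10.0.0.5|Mozilla/5.0"))   # True: original client
print(validate_token(tok, "198.51.100.7|curl/8.0"))  # False: replayed token
```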

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

An organization implemented a Data Rights Management (DRM) program. The cloud security specialist has been tasked with the responsibility of ensuring an in-depth report on the usage and access history that can be generated for all files. Which of the following BEST represents this functionality?

A. Rights revocation
B. Continuous auditing
C. Replication restrictions
D. Persistent protection

A

B. Continuous auditing

Explanation:
Continuous auditing ensures that you can provide an in-depth report on usage and access history for all files that are protected by data rights management.

Rights revocation is to take away the user’s access to the Data Rights Management (DRM) protected data.

Persistent protection is one of the features of DRM, meaning that the protection travels with the file. It is agnostic of the location of the data. For example, you can open a book you purchased on Amazon through Kindle for Mac, Windows, Mobile, or even an actual Kindle. But you should not have a copy of the book as a PDF that you can give to someone else.

Replication restrictions prevent the ability to take the content and make any copies you want and share with whomever you want. It can also restrict your ability to move it to new devices after so many previous moves.

NIST SP 800-137 Information Security Continuous Monitoring would be a good read for this info.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

Of the following, which uses known weaknesses and flaws to verify that systems are properly hardened and then produces a report for management regarding discovered weaknesses?

A. Penetration testing
B. Runtime Application Security Protection (RASP)
C. Static Application Security Test (SAST)
D. Vulnerability scanning

A

D. Vulnerability scanning

Explanation:
Vulnerability scanning is often run by organizations against their own systems. It uses known attacks and methodologies to ensure that systems are hardened against known vulnerabilities and threats.

Runtime Application Self-Protection (RASP) is a security mechanism that allows an application to protect itself by responding and reacting to ongoing events and threats.

During the penetration tests, the tester will try to break into systems using the same techniques that an actual attacker would use.

Static Application Security Testing (SAST) is a test in which the tester has special knowledge of and access to the source code to manually review it for vulnerabilities and weaknesses.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

Lily is a data science specialist working for a huge retailer. Lily and her team have been analyzing the use of data within the business. Through the analysis, they have been working on determining when they need to move products from stores in one location to another to match the current customer needs. When they have been analyzing the data, they pull data in from relational data from transactional systems, operational databases, and business applications.

What type of storage is this?

A. Data mart
B. Big data
C. Data mining
D. Data warehouse

A

D. Data warehouse

Explanation:
A data warehouse is a type of data storage that is a result of pulling in relational data from transactional systems, operational databases, and business applications.

A data mart is a small specialized collection of data.

Data mining is digging through the data (mining) looking for valuable information.

Big data technologies help to solve storage problems when the data a corporation has is not being managed well in databases anymore. It is characterized by volume, variety, velocity, veracity, and variability.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

Maalik works for an international trade company that does a lot of business between the United States, China, and South Korea. He knows that there is a privacy framework that should drive their decisions and handling of personal information. He knows that people have a legitimate expectation of privacy. What principle does this meet?

A. Asia-Pacific Economic Cooperation Privacy Framework, preventing harm
B. General Data Protection Regulation, data minimization
C. Asia-Pacific Economic Cooperation Privacy Framework, Integrity of personal information
D. General Data Protection Regulation, purpose limitation

A

A. Asia-Pacific Economic Cooperation Privacy Framework, preventing harm

Explanation:
The Asia-Pacific Economic Cooperation (APEC) Privacy Framework has a few principles. The principle of preventing harm says that organizations should take measures to prevent harm to individuals resulting from the misuse of their personal information.

The principle of Integrity of personal information says that organizations should ensure the accuracy and completeness of personal information and take measures to protect it against unauthorized access, alteration, or destruction.

The General Data Protection Regulation (GDPR) does not apply here. GDPR protects natural persons in Europe. They must be in Europe for GDPR to apply. If there is a German citizen in the US, China, or Korea, they are not protected under the GDPR. They are protected under the laws of the country they are currently in.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

Which of the following is officially known as the “Financial Modernization Act of 1999”?

A. The Gramm-Leach-Bliley Act
B. Asian-Pacific Economic Cooperation
C. The Sarbanes-Oxley Act
D. General Data Protection Regulation

A

A. The Gramm-Leach-Bliley Act

Explanation:
Although it is officially named the Financial Modernization Act of 1999, it is most commonly known as and referred to as the Gramm-Leach-Bliley Act, or GLBA. This name pays tribute to the lead sponsors and authors of the act. GLBA is focused on protecting Personally Identifiable Information (PII) as it relates to financial institutions. Publicly traded financial institutions must also comply with SOX.

The Sarbanes-Oxley Act (SOX) requires publicly traded financial institutions to protect their financial statements. They must tell the truth about their financial status as publicly traded companies. It is a direct result of the Enron failure.

General Data Protection Regulation (GDPR) is a European Union (EU) regulation. It requires member states to have a compliant law. That law must have the same or greater level of protection of personal data for natural persons in the EU. It protects people if they are in the EU. If a German citizen is in the US at some point in time and a company collects their personal data in the US, they are not protected under GDPR.

Asia-Pacific Economic Cooperation (APEC) is headquartered in Singapore and has 21 economies participating. The purpose of APEC is to promote free trade around the Pacific Rim; its privacy framework protects personal data in support of that goal. At least, that is the piece we are concerned with here in CCSP.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

Steve is working for a consulting firm. He is being asked for advice regarding the protection of data for a customer who is moving data into a Software as a Service implementation that enables their business to create and store documents of all types in the cloud. He is focused on the first phase of the data lifecycle, where encryption can be added to a created document that is now at rest.

Which phase would that be?

A. Use
B. Create
C. Destroy
D. Store

A

D. Store

Explanation:
While security controls are implemented in the create phase in the form of Transport Layer Security (TLS), this only protects data in transit and not data at rest. The store phase is the first phase in which security controls are implemented to protect data at rest.

The data lifecycle is Create, Store, Use, Share, Archive, Destroy.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

A cloud service provider is building a new data center to provide options for companies that are looking for private cloud services. They are working to determine the size of datacenter that they want to build. The Uptime Institute created the Data Center Site Infrastructure Tier Standard Topology. With this standard, they created a few levels of data centers. The cloud provider has a goal of reaching tier three.

How is that characterized in general?

A. Concurrently Maintainable
B. Redundant Capacity Components
C. Fault Tolerance
D. Basic Capacity

A

A. Concurrently Maintainable

Explanation:
The Uptime Institute publishes one of the most widely used standards on data center tiers and topologies. The standard is based on four tiers, which include:

Tier I: Basic Capacity
Tier II: Redundant Capacity Components
Tier III: Concurrently Maintainable
Tier IV: Fault Tolerance
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

Zoe has been negotiating with a public cloud company regarding the Infrastructure as a Service (IaaS) deployment that they are working on. They are planning to move their data center into the cloud. This will eliminate the need for the company to maintain physical equipment. One of the concerns that Zoe has is that they must be in compliance with the Family Educational Rights and Privacy Act (FERPA). To ensure compliance, they must be able to review the data center, the configuration, the security controls, and so on that are in place.

What is this need known as that should be stated and agreed to in the contract?

A. Auditability
B. Resiliency
C. Interoperability
D. Measured service

A

A. Auditability

Explanation:
Auditability is the process of gathering and making available the evidence necessary to demonstrate the operation and use of the cloud. It’s important to remember that a Cloud Service Provider (CSP) rarely allows a customer to perform an audit on their controls. However, a CSP usually does supply third-party attestations under NDA.

Interoperability is the ability of two different systems to exchange information and use it, for example, a file created on a Mac that is readable on a Windows machine.

Measured service is the ability of cloud service providers to bill the customer for the resources that they use.

Resiliency is the ability of a system to withstand failures yet still be able to function.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

If software developers and the supporting team were to ask the following four questions, what would they be doing?

What are we working on?
What could go wrong?
What are we going to do about it?
Did we do a good job?

A. Performing threat modeling
B. Determining Maximum Tolerable Downtime (MTD)
C. Evaluating the Recovery Point Objective (RPO)
D. Performing a quantitative risk assessment

A

A. Performing threat modeling

Explanation:
The four questions are the basic idea behind threat modeling. Threat modeling allows the team to identify, communicate, and understand threats and mitigations. There are several techniques, such as STRIDE, PASTA, TRIKE, and OCTAVE.

STRIDE is one of the most prominent models used for threat modeling. It includes the following six categories:

Spoofing identity
Tampering with data
Repudiation
Information disclosure
Denial of service
Elevation of privileges

A quantitative risk assessment is when the Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annualized Loss Expectancy (ALE) are calculated based on the threat to a specific asset.
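The quantitative calculation follows the standard formulas SLE = Asset Value × Exposure Factor and ALE = SLE × ARO. A worked example with illustrative numbers:

```python
# Standard quantitative risk formulas with made-up example figures.
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = Asset Value x Exposure Factor (fraction of value lost per event)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x Annual Rate of Occurrence."""
    return sle * aro

# Hypothetical asset worth $100,000; an incident destroys 40% of its value
# and is expected once every two years (ARO = 0.5).
sle = single_loss_expectancy(asset_value=100_000, exposure_factor=0.4)
ale = annualized_loss_expectancy(sle, aro=0.5)
print(sle, ale)  # 40000.0 20000.0
```

The ALE is then compared against the annual cost of a proposed control to decide whether the control is worth buying.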

Determining the MTD is a step in Business Continuity/Disaster Recovery (BC/DR) planning. It answers the question of how long an asset can be unavailable before it is a significant problem for the business.

Evaluating the RPO is also a part of Business Continuity/Disaster Recovery (BC/DR) planning. The RPO is the value that represents how much data can be lost before it, too, is a problem for the business.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

Rachel is in need of a method of confirming the authenticity of data coming into the application that is being created by the software developers. As the information security professional, she is able to recommend several techniques that can be built into the software. If they achieve their goals, they will be able to hold their users accountable for their actions.

What will they have achieved?

A. Nonrepudiation
B. Encryption
C. Hashing
D. Public key

A

A. Nonrepudiation

Explanation:
Nonrepudiation is the ability to confirm the origin or authenticity of data to a high degree of certainty. Nonrepudiation is typically done through methods such as hashing and digital signatures.

A hash, or hashing, verifies that the bits are correct: all the ones are still ones and all the zeros are still zeros. It does not provide authenticity; there is no way to confirm where the data came from or who created it from the hash alone. A digital signature must be added by encrypting the hash with a private key.

A digital signature is verified by decrypting it with a public key. This is not the correct answer to the question because the question is about what they will achieve. They will not achieve a public key.

Encryption is altering data to an unreadable format. This does not achieve nonrepudiation unless we specifically talk about asymmetric public and private keys.
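The distinction between integrity and authenticity can be shown directly with the standard library's SHA-256 (the messages below are invented for the example):

```python
# Hashing alone provides integrity, not authenticity: anyone who alters
# the data can simply recompute the hash over the altered copy.
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of the data."""
    return hashlib.sha256(data).hexdigest()

original = b"Transfer $100 to account 12345"
tampered = b"Transfer $999 to account 66666"

# Integrity: any changed bit changes the hash, so tampering is detectable
# if the recipient already has the correct hash.
print(digest(original) == digest(tampered))  # False

# But the hash says nothing about *who* produced the data: an attacker
# can ship the tampered message with a freshly computed, matching hash.
print(digest(tampered) == digest(tampered))  # True

# Nonrepudiation requires encrypting the hash with the sender's private
# key (a digital signature), verified with the matching public key --
# asymmetric signing is provided by third-party packages, not hashlib.
```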

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

Which of the following involves tracking known issues and having documented solutions or workarounds?

A. Availability Management
B. Continuity Management
C. Service Level Management
D. Problem Management

A

D. Problem Management

Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:

Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider holds fewer resources than the total its customers could demand, relying on the fact that they will not all use their maximum at once. Often, capacity guarantees are mandated in SLAs.
42
Q

The organization must guarantee that cloud-based systems and communications with cloud-based systems are properly secured. Which of the following is an organization’s responsibility regardless of the cloud model used?

A. Identity, Authentication, Authorization, and Accountability (IAAA)
B. Security Governance, Risk, and Compliance (GRC)
C. Physical and environmental security
D. Operating System configuration and status

A

B. Security Governance, Risk, and Compliance (GRC)

Explanation:
There is always a shared responsibility model between the Cloud Customer (CC) and the Cloud Service Provider (CSP). Always check how your providers divide responsibility, because the division is not the same across CSPs. For this exam, a good frame of reference is the one from (ISC)2.

A corporation’s GRC is always its own responsibility. That is true for both the CC and the CSP. Shared responsibility models address the customer’s responsibilities by separating them from the provider’s.

Physical and environmental security is always the responsibility of the CSP. It is possible for a business to be its own CSP in a private cloud, in which case it would be responsible for physical and environmental security. To arrive at the correct answer (GRC), there are two considerations to keep in mind:

Most questions in this exam are from a customer's point of view when using a public cloud provider.
The customer is always responsible for their own GRC and only sometimes for physical security. The "always" is the correct answer to choose.

The same (as the physical security description above) is true for the operating system configuration. Sometimes the customer is responsible for this [e.g., Infrastructure as a Service (IaaS)]. However, they are always responsible for GRC.

The customer is responsible for setting up the accounts for their users in Software as a Service (SaaS). They are not always responsible for the accountability because that is the creation of the logs, which is a CSP responsibility. So, again, the customer is always responsible for GRC, and sometimes for IAAA.

43
Q

Information must be protected when it is stored in a cloud. A large insurance company is moving their databases to the cloud. There is a technical element in the cloud that will help to protect this data that is not present in traditional data centers. The data itself is segmented and stored on different servers. What is this called?

A. Data dispersion
B. Server clustering
C. Redundant Array of Independent Disks
D. Cryptographic erasure

A

A. Data dispersion

Explanation:
Cloud data dispersion, also known as data dispersal or data dispersal algorithms, is a technique used to enhance data security and reliability in cloud storage environments. It involves splitting data into multiple fragments and distributing them across different storage nodes or cloud providers. This dispersion of data fragments introduces an additional layer of protection against unauthorized access, data breaches, and service interruptions.

Data dispersion algorithms break down the original data into smaller fragments or chunks. These fragments are typically created using various mathematical techniques, such as erasure coding, secret sharing, or data fragmentation algorithms.
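The fragment-and-parity idea can be sketched with a toy example. This is an illustration only, not how any particular cloud provider implements dispersion: real systems use erasure codes such as Reed-Solomon, and all function names here are hypothetical.

```python
from functools import reduce

def disperse(data: bytes, k: int = 3) -> list[bytes]:
    """Split data into k equal fragments plus one XOR parity fragment."""
    data += b"\x00" * ((-len(data)) % k)  # pad so the data divides evenly
    size = len(data) // k
    fragments = [data[i * size:(i + 1) * size] for i in range(k)]
    # Parity fragment: byte-wise XOR of all data fragments.
    parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*fragments))
    return fragments + [parity]  # each piece would be stored on a different node

def recover(fragments: list[bytes], lost: int) -> bytes:
    """Rebuild any one missing fragment by XOR-ing all of the survivors."""
    survivors = [f for i, f in enumerate(fragments) if i != lost]
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*survivors))
```

With this scheme, losing any single node leaves the data recoverable, and an attacker who steals one node's fragment holds only a slice of the original data.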

Redundant Array of Independent Disks (RAID) arrays are storage configurations that involve combining multiple physical disk drives into a single logical unit. RAID arrays provide various benefits, such as improved performance, data redundancy, and increased storage capacity. This does not match the question because a RAID array is within a single server.

A server cluster, also known as a cluster server or a server farm, is a group of interconnected servers that work together to provide enhanced performance, high availability, and scalability for hosting applications, services, or websites. Server clustering is a common technique used in data centers and cloud computing environments to distribute workloads and improve overall system reliability. This does distribute the workload, but the data is not segmented and stored on different servers.

Cryptographic erasure, also known as secure erasure or cryptographic shredding, is a technique used to securely and irreversibly erase sensitive or confidential data stored on storage devices. It involves applying cryptographic algorithms to overwrite the data with random or pseudo-random values, rendering it unreadable and unrecoverable by unauthorized parties. We are storing data in the question, not deleting it.

44
Q

Gheorghe is working with the cloud operations department after a variety of strange behaviors have been seen in their Infrastructure as a Service (IaaS) environment. They are now looking for a tool or toolset that can help them identify fraudulent, illegal, or other undesirable behavior within their client-server datasets.

What tool or toolset can provide assistance with this?

A. eXtensible Markup Language (XML) firewall
B. Database Activity Monitor (DAM)
C. Application Programming Interface (API) gateway
D. Web Application Firewall (WAF)

A

B. Database Activity Monitor (DAM)

Explanation:
Gartner defines DAMs as “a suite of tools that can be used to support the ability to identify and report on fraudulent, illegal or other undesirable behavior.” These tools, which include Oracle’s Enterprise Manager, evolved from monitoring user traffic in databases and remain useful for understanding what is happening with user traffic into and out of databases.

A WAF is a layer 7 firewall that monitors web applications, HTML, and HTTP traffic.

An API gateway is also a layer 7 device. However, this one monitors APIs, including SOAP and Representational State Transfer (REST).

XML firewalls also operate at layer 7, but they monitor XML traffic only. APIs can use both XML and JavaScript Object Notation (JSON).

45
Q

Mandatory MFA and increased monitoring might be part of an organization’s security strategy for managing which type of access in the cloud?

A. Physical Access
B. Service Access
C. User Access
D. Privilege Access

A

D. Privilege Access

Explanation:
Key components of an identity and access management (IAM) policy in the cloud include:

User Access: User access refers to managing the access and permissions that individual users have within a cloud environment. This can use the cloud provider’s IAM system or a federated system that uses the customer’s IAM system to manage access to cloud services, systems, and other resources.
Privilege Access: Privileged accounts have more access and control in the cloud, potentially including management of cloud security controls. These can be controlled in the same way as user accounts but should also include stronger access security controls, such as mandatory multi-factor authentication (MFA) and greater monitoring.
Service Access: Service accounts are used by applications that need access to various resources. Cloud environments commonly rely heavily on microservices and APIs, making managing service access essential in the cloud.

Physical access to cloud servers is the responsibility of the cloud service provider, not the customer.

46
Q

A consulting firm is working with their customer, a bank, to build and configure their virtual datacenter in a public cloud provider’s Infrastructure as a Service (IaaS). They are concerned about the availability of their systems. The employees of the bank are working to understand the terms that the consultant is using. They are talking about high availability versus fault tolerance.

What is the MAIN difference between high availability and fault tolerance?

A. There is no difference between high availability and fault tolerance
B. Fault tolerance involves the use of shared resources and pooled resources to minimize downtime
C. Fault tolerance offers greater uptime than high availability
D. Fault tolerance is used to resolve software failures, while high availability is used to address hardware failures

A

C. Fault tolerance offers greater uptime than high availability

Explanation:
High availability makes use of shared and pooled resources to maintain a high level of availability and minimize downtime. High availability is common with products such as firewalls (hardware- or software-based). For example, two firewalls can be deployed side by side, both able to process traffic and in constant communication; if one fails, the other takes over all the traffic.

The term high availability can also be used to describe the level of reliability of a system(s) and is described by nines, such as 99.9999% or six nines in this example.

Fault tolerance is different in that it tries to ensure that the system can work even in the presence of faults/failures, so it offers greater uptime than high availability.
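The "nines" terminology translates directly into allowed downtime per year. A quick sketch of the arithmetic (the function name is illustrative, not from any standard):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(availability_pct: float) -> float:
    """Minutes per year a system at this availability level may be down."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# 99.9%    ("three nines") -> about 525.6 minutes (~8.8 hours) per year
# 99.999%  ("five nines")  -> about 5.3 minutes per year
# 99.9999% ("six nines")   -> about 0.5 minutes (~32 seconds) per year
```

Comparing those figures makes it clear why six nines is considered an extremely demanding availability target.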

47
Q

The Pandemic Eleven is a list of eleven risks that are associated specifically with which type of technology?

A. All web applications
B. All network devices
C. Cloud-based applications and systems
D. Traditional data centers

A

C. Cloud-based applications and systems

Explanation:
The Pandemic Eleven is similar to the OWASP Top 10. However, unlike the OWASP Top 10, which lays out vulnerabilities and risks associated with all web applications (regardless of whether they are hosted in the cloud or in a traditional data center), the Pandemic Eleven lays out risks and vulnerabilities that are specific to cloud-based applications and systems.

48
Q

A corporation is worried that after terminating an employee, the employee may leak sensitive intellectual property to a competitor. What security controls can be used to minimize the chance of this happening?

A. Corporate structure, Non-Disclosure Agreement (NDA), exit interviews
B. Background checks, professional ethics agreements, exit interviews
C. Background checks, Non-Disclosure Agreement (NDA), exit interviews
D. Background checks, Non-Compete Agreement (NCA), annual reviews

A

C. Background checks, Non-Disclosure Agreement (NDA), exit interviews

Explanation:
A malicious insider is an individual who has been granted appropriate access to complete their job but then uses that access for unauthorized purposes. The best choices for controls to prevent malicious insiders from selling secrets would be background checks, NDAs, and exit interviews.

An NDA is probably a better choice than an NCA. The NDA says that corporate secrets must remain secret. The NCA says that you cannot work for a competitor. Secrets could be leaked there, but the NDA is specific to keeping the secrets secret.

The corporate structure is not likely to prevent a malicious insider. A poorly structured environment could increase the likelihood of someone taking malicious actions because they are mad or aggravated.

Exit interviews are a good time to remind the now former employee that they signed the NDA and must keep the corporation’s secrets.

Professional ethics agreements are good but less direct about selling secrets than an NDA.

Annual reviews are useful to discover aggravated employees, hopefully before malicious actions are taken.

49
Q

In which of the following types of tests might participants perform limited, non-disruptive actions such as spinning up backup VMs?

A. Tabletop Exercise
B. Full Test
C. Simulation
D. Parallel Test

A

C. Simulation

Explanation:
Business continuity/disaster recovery plan (BCP/DRP) testing can be performed in various ways. Some of the main types of tests include:

Tabletop Exercises: In a tabletop exercise, the participants talk through a provided scenario. They say what they would do in a situation but take no real actions.
Simulation/Dry Run: A simulation involves working and talking through a scenario like a tabletop exercise. However, the participants may take limited, non-disruptive actions, such as spinning up backup cloud resources that would be used during a real incident.
Parallel Test: In a parallel test, the full BC/DR process is carried out alongside production systems. In a parallel test, the BCP/DRP steps are actually performed.
Full Test: In a full test, primary systems are taken down as they would be in the simulated event. This test ensures that the BCP/DRP systems and processes are capable of maintaining and restoring operations.
50
Q

Alix has been working with the Business Continuity and Disaster Recovery (BC/DR) teams for a couple of years now. Working for a financial institution brings specific challenges with it. There is a need to ensure that the systems continue to operate in a variety of different disaster level scenarios.

When developing a Business Continuity Plan (BCP) or a Disaster Recovery Plan (DRP), which of the following can be done to identify which systems are the most important?

A. Remediation recommendations
B. Vulnerability assessment
C. Business Impact Analysis (BIA)
D. Penetration testing

A

C. Business Impact Analysis (BIA)

Explanation:
Business Impact Analysis (BIA) is a process that organizations use to identify and assess the potential impact of disruptions or incidents on their business operations. BIA is a crucial component of business continuity planning and helps organizations prioritize their recovery efforts and allocate resources effectively. The first step of a BIA is to identify the critical business functions.

Vulnerability assessment is a systematic process of identifying and evaluating vulnerabilities within a system, network, or application. It involves the use of various tools, techniques, and methodologies to discover and analyze potential weaknesses that could be exploited by attackers.

Penetration testing, also known as ethical hacking or pen testing, is a proactive security assessment technique that involves simulating real-world attacks on a system, network, or application. The objective of penetration testing is to identify vulnerabilities and weaknesses in the target environment that could be exploited by malicious actors.

Remediation recommendations is one of the key steps in a vulnerability assessment. Along with identifying vulnerabilities, a vulnerability assessment should provide recommendations for remediation. This may involve suggesting patches, configuration changes, security best practices, or other mitigation measures to reduce the risk associated with each vulnerability.

51
Q

It’s extremely difficult, if not impossible, to find a location for a data center that is not at risk of being hit by some type of natural disaster. Which of the following can be used to help mitigate the threats of natural disasters?

A. Rapid elasticity
B. Reinforced walls
C. Encryption
D. Multitenancy

A

B. Reinforced walls

Explanation:
Reinforced walls may help to mitigate the risk of certain types of natural disasters. Florida's building code, for example, requires buildings to be constructed to withstand the winds of a category 5 hurricane. Properly constructed walls could also withstand a wildfire for a certain amount of time, depending on the scale of the fire, so they are of possible assistance there as well.

Encryption is no help against natural disasters. If a drive ends up in the wrong hands after a disaster, it is good to have the data encrypted, but that benefit applies after the natural disaster, so it does not address the question.

Multitenancy is of no assistance. It is commonly listed as a problem with clouds because one company's virtual machines or data sit on the same physical devices as other companies'.

Rapid elasticity is a core characteristic of a cloud, meaning that systems expand and contract to meet customer needs. This answer is also not helpful against a natural disaster.

52
Q

For which of the following data security strategies is key management the MOST important?

A. Hashing
B. Encryption
C. Tokenization
D. Masking

A

B. Encryption

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 is the US government standard for hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
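Three of the strategies above can be sketched with the Python standard library. This is a minimal illustration; in particular, the `vault` dictionary stands in for the secured token-mapping table described above and is an assumption of this sketch, not a real product.

```python
import hashlib
import secrets

# Hashing: one-way; the same input always yields the same digest,
# so any change to the data changes the digest (integrity check).
def integrity_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Masking: replace all but the last four digits of a card number
# with non-sensitive characters.
def mask_pan(pan: str) -> str:
    return "*" * (len(pan) - 4) + pan[-4:]

# Tokenization: substitute a random token and record the mapping in a
# (notionally secured) lookup table so the original can be retrieved.
vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    return vault[token]
```

Note the key difference: hashing is irreversible, masking discards most of the value, and tokenization is reversible only through the protected mapping table.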
53
Q

Which of the following concepts in IAM is MOST relevant if an organization has a close partner that they share access to data, systems, and software with?

A. Federated Identity
B. Single Sign-On
C. Multi-Factor Authentication
D. Identity Providers

A

A. Federated Identity

Explanation:
Identity and Access Management (IAM) is critical to application security. Some important concepts in IAM include:

Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.
54
Q

Which OWASP Top 10 vulnerability is defined as the capacity of unauthenticated users to see unlawful and sensitive data, perform unauthorized functions, and modify access rights?

A. Injection
B. Identification and authentication failures
C. Broken access control
D. Cryptographic failure

A

C. Broken access control

Explanation:
Broken access control vulnerabilities may enable unauthenticated users to view unlawful and sensitive data, perform unauthorized functions, and modify access privileges. It is imperative that applications perform checks when each function is accessed to ensure the user is properly authorized to access it.

Identification and authentication failures include susceptibility to brute force attacks and credential stuffing or permitting weak passwords and using weak recovery procedures, to name a few problems.

Injection attacks include SQL injection and command injection.

Cryptographic failure was formerly known as sensitive data exposure. This is when there is no encryption or weak encryption algorithms. Or when weak keys are used, default keys are used, or old keys are reused, to name a few of the problems.

55
Q

Data classification and labeling should occur during which phase of the cloud data lifecycle?

A. Store
B. Use
C. Create
D. Share

A

C. Create

Explanation:
The cloud data lifecycle has six phases, including:

Create: Data is created or generated. Data classification, labeling, and marking should occur in this phase.
Store: Data is placed in cloud storage. Data should be encrypted in transit and at rest using encryption and access controls.
Use: The data is retrieved from storage to be processed or used. Mapping and securing data flows becomes relevant in this stage.
Share: Access to the data is shared with other users. This sharing should be managed by access controls and should include restrictions on sharing based on legal and jurisdictional requirements. For example, the GDPR limits the sharing of EU citizens’ data.
Archive: Data no longer in active use is placed in long-term storage. Policies for data archiving should include considerations about legal data retention and deletion requirements and the rotation of encryption keys used to protect long-lived sensitive data.
Destroy: Data is permanently deleted. This should be accomplished using secure methods such as cryptographic erasure/crypto shredding.
56
Q

Paul is working for a cloud provider and is responsible for a particular aspect of that data center. He works to ensure that the required system resources needed to deliver performance exist within the data center. He also must ensure that this is done in a cost-effective manner. Which management practice covers these aspects of managing data centers?

A. Release management
B. Availability management
C. Capacity management
D. Configuration management

A

C. Capacity management

Explanation:
Capacity management is a critical component of any IT system’s overall operation. If a system is under-provisioned, services and performance will suffer, perhaps resulting in business or reputation damage. Capacity management focuses on the system resources required to meet the demand.

Availability management is managing the environment to ensure the customers receive the agreed amount of availability.

Configuration management tracks Configuration Items (CIs) and other resources, including their parameters, that work together to deliver a product or service.

Release management is about making new or changed services and features available for use.

57
Q

Winston has been hired by a company that has made the decision to move their production systems into the cloud. As a cloud information security professional, he is uniquely prepared to guide them through the risk management process to ensure that their move to an Infrastructure as a Service (IaaS) environment is successful.

What is the FIRST stage of the risk management process?

A. Framing risk
B. Monitor risk
C. Respond to risk
D. Assess risk

A

A. Framing risk

Explanation:
Framing risk refers to determining which risks, and at what levels, need to be evaluated. This establishes the context that describes the environment in which risk-based decisions are made.

The purpose of the risk framing component is to produce a risk management strategy that addresses how organizations intend to assess risk, respond to risk, and monitor risk.

The risk management strategy establishes a foundation for managing risk and delineates the boundaries for risk-based decisions within organizations. Regarding risk treatment and the risk management process, the first stage is framing risk.

NIST SP 800-30 is a good reference for this topic if you are looking to dig a little deeper.

58
Q

What is the cloud service model in which the customer has the MOST control over their infrastructure stack and in which personnel threats, external threats, and lack of required expertise are potential threats?

A. Platform as a Service
B. Software as a Service
C. Function as a Service
D. Infrastructure as a Service

A

D. Infrastructure as a Service

Explanation:
In the Infrastructure as a Service (IaaS) model, the cloud customer controls most of their infrastructure stack. However, some potential risks in an IaaS model include:

Personnel Threats: The provider’s employees have access to the physical infrastructure hosting customers’ environments. Negligent or malicious employees could cause harm to the customer.
External Threats: Malware, denial of service (DoS), and other attacks can impact an organization’s systems regardless of the cloud model.
Lack of Required Expertise: With IaaS, an organization is remotely managing systems in an environment defined by the provider. Without sufficient expertise in system management or knowledge of the provider’s environment and relevant security settings, the organization may have misconfigurations or other issues that place it at risk.

Other service offerings such as Platform, Software, and Function as a Service inherit all of these threats and add others.

59
Q

An engineer entered a data center and noticed that the humidity level was 20 percent relative humidity. What risk could this pose to systems?

A. Systems may overheat and burn internal components
B. Condensation may form, causing water damage
C. Electrostatic discharge causing damage to the equipment
D. There is no risk because 20% relative humidity is the ideal humidity level

A

C. Electrostatic discharge causing damage to the equipment

Explanation:
The American Society of Heating, Refrigeration, and Air Conditioning Engineers (ASHRAE) recommends that data centers have a moisture level of 40-60 percent relative humidity.

Having the humidity level too high could cause condensation to form and damage systems. Having the humidity level too low could cause an excess of electrostatic discharge, which may cause damage to systems.

60
Q

An organization is using VMware ESXi. Which of the following is this an example of?

A. Type 2 hypervisor
B. Application based
C. Type 1 hypervisor
D. Software based

A

C. Type 1 hypervisor

Explanation:
A type 1 hypervisor, also known as a bare-metal hypervisor, runs directly on the machine’s physical hardware, unlike type 2 hypervisors, which are software-based. VMware ESXi is an example of a type 1 hypervisor.

Vendor-specific questions like this will not be on the test. This question is here in case you are unfamiliar with the two types of hypervisors. VMware ESXi is a type 1 and VMware Workstation is a type 2; reading about these two (or any other vendor's products) can help hypervisors make more sense.

A type 1 hypervisor is essentially the operating system for the server it is loaded on. It is often called a bare-metal hypervisor because it is the OS that you load onto the physical server.

A type 2 hypervisor is software-based. It runs on top of a full operating system, such as macOS or Windows, and is most likely found on users’ desktop computers, not the servers in the data center. It could, however, be a nested hypervisor loaded onto a virtual server in a cloud environment.

61
Q

Which of the following determines what is considered a “critical” system when developing plans for disruptive incidents?

A. COOP
B. DRP
C. BIA
D. BCP

A

C. BIA

Explanation:
A business continuity plan (BCP) sustains operations during a disruptive event, such as a natural disaster or network outage. It can also be called a continuity of operations plan (COOP).

A disaster recovery plan (DRP) works to restore the organization to normal operations after such an event has occurred.

The decision of what needs to be included in a business continuity plan is determined by a business impact assessment (BIA), which determines what is necessary for the business to function vs. “nice to have.”

62
Q

Bai is working on moving the company’s critical infrastructure to a public cloud provider. Knowing that she has to ensure that the company is in compliance with the requirements of the European Union’s (EU) General Data Protection Regulation (GDPR) country specific laws since the cloud provider is the data processor, at what point should she begin discussions with the cloud provider about this specific protection?

A. At the moment of reversing their cloud status
B. Configuration of the Platform as a Service (PaaS) Windows servers
C. Establishment of Service Level Agreements (SLA)
D. Data Processing Agreement (DPA) negotiation

A

D. Data Processing Agreement (DPA) negotiation

Explanation:
Under the GDPR, a data controller must have a written contract, commonly called a Data Processing Agreement (DPA), in place with any data processor before personal data is processed on its behalf. Since the cloud provider will act as the data processor, Bai should begin discussions about these specific protections during DPA negotiation, before any personal data is moved to the cloud.

63
Q

Which of the following types of clouds has the GREATEST potential for cost savings due to shared resources?

A. Public Cloud
B. Hybrid Cloud
C. Private Cloud
D. Community Cloud

A

A. Public Cloud

Explanation:
Cloud services are available under a few different deployment models, including:

Private Cloud: In private clouds, the cloud customer builds their own cloud in-house or has a provider do so for them. Private clouds have dedicated servers, making them more secure but also more expensive.
Public Cloud: Public clouds are multi-tenant environments where multiple cloud customers share the same infrastructure managed by a third-party provider.
Hybrid Cloud: Hybrid cloud deployments mix both public and private cloud infrastructure. This allows data and applications to be hosted on the cloud that makes the most sense for them.
Multi-Cloud: Multi-cloud environments use cloud services from multiple different cloud providers. This enables customers to take advantage of price differences or optimizations offered by different providers.
Community Cloud: A community cloud is essentially a private cloud used by a group of related organizations rather than a single organization. It could be operated by that group or a third party, such as FedRAMP-compliant cloud environments operated by cloud service providers.
64
Q

Kai is an information security researcher who has been analyzing issues that could occur for his business when they move to the cloud. They plan to migrate to an Infrastructure as a Service (IaaS) deployment within the next six months. He found that MITRE has a Common Weakness Enumeration (CWE) entry, CWE-352, which is a Cross-Site Request Forgery (CSRF) issue.

What type of threat is this?

A. Vulnerable and outdated components
B. Broken access control
C. Security misconfiguration
D. Server-side request forgery

A

B. Broken access control

Explanation:
According to OWASP, broken access control occurs when access control fails to enforce policy, allowing users to act outside of their intended permissions. Failures typically lead to unauthorized information disclosure, modification, or destruction of data, or to performing a business function outside the user’s limits. Cross-Site Request Forgery (CSRF) is when a client can be tricked into making an unintentional request to the web server, and the web server does not authenticate the request. This is an access control issue.

Server-side request forgery is a type of vulnerability in web applications that allows an attacker to send crafted requests from the server to other internal or external resources accessible to the server. The attacker typically manipulates the input parameters or URLs used by the server to make unintended requests to sensitive resources.

Vulnerable and outdated components is an issue that we have when we do not know and track all the software and components that we use. When we do not know and track, we can end up with unpatched and unprotected software and data.

Security misconfiguration is when we leave default accounts unchanged in a system or do not remove unnecessary ports, services, pages, accounts and privileges, or other features like these.

It is critical to know these threats for the test and to be able to recognize a threat in a scenario or know what can be done to fix a scenario/threat (such as a second authentication for CSRF). For example, if you want to pay a bill or transfer money out of your bank account, it should reauthenticate you to ensure that it is you authentically making that request.
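
The reauthentication idea above pairs with the more common CSRF defense, an anti-CSRF (synchronizer) token. A minimal sketch in Python (standard library only; names are illustrative) of how a server can issue a per-session token and verify it on state-changing requests:

```python
import hmac
import secrets

# Sketch of server-side CSRF protection: the server issues a random
# per-session token, embeds it in its own forms, and rejects any
# state-changing request that doesn't echo the token back.

def issue_csrf_token() -> str:
    """Generate an unpredictable token to embed in a form for this session."""
    return secrets.token_urlsafe(32)

def is_request_authentic(session_token: str, submitted_token: str) -> bool:
    """Constant-time comparison so the check itself doesn't leak the token."""
    return hmac.compare_digest(session_token, submitted_token)

token = issue_csrf_token()
assert is_request_authentic(token, token)          # legitimate form submission
assert not is_request_authentic(token, "forged")   # cross-site forged request
```

A forged cross-site request can carry the victim's cookies automatically, but it cannot read the token embedded in the server's own page, which is why the check works.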

65
Q

Where is the Basic Input/Output System (BIOS) stored?

A. Firmware
B. Motherboard
C. Disk
D. Memory

A

A. Firmware

Explanation:
The BIOS is a form of firmware. It is typically stored in read-only memory. The BIOS is crucial for secure booting processes, as it verifies the hardware and firmware configurations of a system before allowing the operating system or applications to execute.

A computer disk, also known as a Hard Disk Drive (HDD) or a hard disk, is a non-volatile storage device used to store and retrieve digital data on a computer. It is a primary storage component that provides long-term storage for operating systems, software applications, and user files.

Memory in a computer refers to the electronic components that store data and instructions temporarily for immediate access by the Central Processing Unit (CPU). It plays a critical role in the overall performance and functionality of a computer system. Random Access Memory (RAM) is the primary and volatile memory in a computer system. It provides temporary storage for data and instructions that are actively being used by the CPU. RAM allows for fast and random access to data, which enables efficient multitasking and quick retrieval of information. The size of RAM determines the system’s ability to handle multiple tasks simultaneously and affects overall performance.

A motherboard is the primary circuit board that serves as the central hub for connecting and integrating various hardware components in a computer system. It provides the foundation for communication and coordination between different components, allowing them to work together harmoniously.

66
Q

Potential risks to a system are typically quantified based on which of the following?

A. Number and likelihood
B. Magnitude and impact
C. Likelihood and impact
D. Scope and impact

A

C. Likelihood and impact

Explanation:
After risks to a system have been identified, the next step in risk management is analysis. Risks are typically quantified based on:

Likelihood: Some risks are more likely to occur than others. For example, a short blip in network connectivity is much more likely than an extended outage of a critical service.
Impact: Some risks have a greater impact on users than others. For example, a network blip has a low impact, while an extended outage by an ISP has a significant impact.

A risk assessment of a CSP’s services should be based on the organization’s policies, SLAs, audit reports, and similar information. This will provide insight into the potential risks that the vendor has addressed and the controls that they have in place for doing so.
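
The likelihood-and-impact analysis above is often reduced to a simple scoring matrix. An illustrative sketch (the 1-5 scales and the scores are made up for the example):

```python
# Quantify risks on simple 1-5 likelihood and impact scales,
# so risk score = likelihood x impact, then rank for treatment.

risks = {
    "brief network blip":      {"likelihood": 5, "impact": 1},
    "extended ISP outage":     {"likelihood": 2, "impact": 5},
    "critical service outage": {"likelihood": 1, "impact": 5},
}

def risk_score(r: dict) -> int:
    return r["likelihood"] * r["impact"]

# Highest-scoring risks get treated first
for name, r in sorted(risks.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{name}: {risk_score(r)}")
```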

67
Q

A corporation has discovered that a product it sells, which remotely manages virtual desktops of all kinds for its customers, has allowed a bad actor to run arbitrary commands. This allowed the attackers to leverage the Remote Monitoring and Management (RMM) software to deploy ransomware. The feature of the RMM software that enabled this was a zero-touch update capability.

What threat was exploited here?

A. Misconfiguration and inadequate change control
B. Insecure Interfaces and Application Programming Interfaces (API)
C. Lack of cloud security strategy and security architecture
D. Accidental cloud data disclosure

A

C. Lack of cloud security strategy and security architecture

Explanation:
This is what happened to a company called Kaseya. You can find more in the Pandemic Eleven document from the Cloud Security Alliance. Here is what they basically have to say:

On July 2, 2021, Kaseya received reports from customers suggesting unusual behavior and malware executing on endpoints managed by Kaseya. Attackers could exploit zero-day vulnerabilities in the Virtual System Administrator (VSA) product to bypass authentication and run arbitrary command execution. This allowed the attackers to leverage the standard VSA product functionality to deploy ransomware to the endpoints of Managed Service Provider (MSP) clients (i.e., clients of clients). This failure affected many customers due to a strategy of automated zero-touch updates to software deployed in different environments and a SaaS model of critical software change management; vendors and consumers can reconsider this strategy to limit similar attacks in the future.

Insecure interfaces and Application Programming Interfaces (API) is not the threat here. The problem was a zero-day flaw within the software itself, not the API it uses to communicate with other pieces of software.

The threat exploited was not misconfiguration and inadequate change control because it was a flaw at the code level, not a setting that was improperly configured.

This was not an accidental cloud data disclosure. This would be a misconfiguration on a cloud storage system that leaves the data exposed in some way.

68
Q

A forensic investigator must complete the task of identifying, collecting, and securing electronic data and records so that they can be used in a criminal court hearing. What task is this forensic investigator completing?

A. Investigation
B. Chain of custody
C. eDiscovery
D. Non-repudiation

A

C. eDiscovery

Explanation:
eDiscovery is the process of searching for and collecting electronic data of any kind (emails, digital images, documents, etc.) so that the data can be used in either civil legal proceedings or criminal legal proceedings.

Chain of custody is the record of how the evidence was handled; who handled it; what they were doing with it; where it was or where it was stored; when it was collected/inspected/handled; and why it was handled, inspected, and analyzed.

Investigation is the process of analyzing the data and pursuing the understanding of the evidence to comprehend who, what, where, why, when, and how something happened.

Non-repudiation is when the evidence removes the ability for someone to deny that they did something like creating or sending an email, signing a contract, or creating data.

69
Q

An organization’s backup frequency is MOST closely related to which of the following?

A. MTD
B. RPO
C. RSL
D. RTO

A

B. RPO

Explanation:
A business continuity and disaster recovery (BC/DR) plan uses various business requirements and metrics, including:

Recovery Time Objective (RTO): The RTO is the amount of time that an organization is willing to have a particular system be down. This should be less than the maximum tolerable downtime (MTD), which is the maximum amount of time that a system can be down before causing significant harm to the business.
Recovery Point Objective (RPO): The RPO measures the maximum amount of data that the company is willing to lose due to an event. Typically, this is based on the age of the last backup when the system is restored to normal operations.
Recovery Service Level (RSL): The RSL measures the percentage of compute resources needed to keep production environments running while shutting down development, testing, etc.
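
The RPO/backup relationship in the list above can be stated as a simple constraint: in the worst case a failure happens just before the next backup, so potential data loss approaches the backup interval, and the interval must therefore not exceed the RPO. A small illustrative sketch:

```python
# Sketch of the RPO / backup-frequency constraint: worst-case data
# loss is bounded by the backup interval, so to meet an RPO the
# interval between backups must be no longer than the RPO.

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """True if worst-case data loss stays within the RPO."""
    return backup_interval_hours <= rpo_hours

assert meets_rpo(backup_interval_hours=4, rpo_hours=6)       # frequent enough
assert not meets_rpo(backup_interval_hours=24, rpo_hours=6)  # daily backups miss a 6-hour RPO
```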

70
Q

Erwina has discovered that there is a shared responsibility model from her cloud provider. Her company is using an Infrastructure as a Service (IaaS) deployment on a public cloud provider. Who would be responsible for the security of the servers in the data center?

A. Cloud broker
B. Cloud Service Provider (CSP)
C. Cloud Service Customer (CSC)
D. Cloud Access Security Broker (CASB)

A

B. Cloud Service Provider (CSP)

Explanation:
The Cloud Service Provider (CSP) is solely responsible for the security of the physical servers in the data center. Responsibility for the infrastructure as a whole is shared between the provider and the customer according to that cloud provider’s shared responsibility model.

The Cloud Service Customer (CSC) is responsible for the Operating Systems (OS) that they bring to their IaaS deployment, including those for virtual servers, routers, switches, firewalls, and so on, and for everything above that layer.

From NIST 500-292:

“As cloud computing evolves, the integration of cloud services can be too complex for cloud consumers to manage. A cloud consumer may request cloud services from a cloud broker, instead of contacting a cloud provider directly. A cloud broker is an entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers. In general, a cloud broker can provide services in three categories [9]:

Service Intermediation: A cloud broker enhances a given service by improving some specific capability and providing value-added services to cloud consumers. The improvement can be managing access to cloud services, identity management, performance reporting, enhanced security, etc. 
Service Aggregation: A cloud broker combines and integrates multiple services into one or more new services. The broker provides data integration and ensures the secure data movement between the cloud consumer and multiple cloud providers. 
Service Arbitrage: Service arbitrage is similar to service aggregation except that the services being aggregated are not fixed. Service arbitrage means a broker has the flexibility to choose services from multiple agencies. The cloud broker, for example, can use a credit-scoring service to measure and select an agency with the best score."

From the CSA Guidance 4.0 a CASB:

“CASB: Cloud Access and Security Brokers (also known as Cloud Security Gateways) discover internal use of cloud services using various mechanisms such as network monitoring, integrating with an existing network gateway or monitoring tool, or even by monitoring DNS queries. After discovering which services your users are connecting to, most of these products then offer monitoring of activity on approved services through API connections (when available) or inline interception (man-in-the-middle monitoring). Many support DLP and other security alerting and even offer controls to better manage the use of sensitive data in cloud services (SaaS, PaaS, and IaaS).”

71
Q

The European Union’s (EU) General Data Protection Regulation (GDPR) has strict requirements for where the personal data of natural persons collected within the EU can be stored. In which of the following jurisdictions outside of the EU can the data be stored?

A. Taiwan
B. Brazil
C. Russia
D. Isle of Man

A

D. Isle of Man

Explanation:
As of this writing (May 2023), the jurisdictions outside the EU that the European Commission has approved (through adequacy decisions) for storing Europeans’ personal data are:

Andorra
Argentina
Canada (only commercial organizations)
Faroe Islands
Guernsey
Israel
Isle of Man
Jersey
New Zealand
Switzerland
Uruguay
Japan
The United Kingdom
South Korea
72
Q

Cloud service providers will have clear requirements for items such as uptime, customer service response time, and availability. Where would these requirements MOST LIKELY be outlined for the client?

A. Service Level Agreement (SLA)
B. Data Processing Agreement (DPA)
C. Privacy Level Agreement (PLA)
D. Business Associate Agreement (BAA)

A

A. Service Level Agreement (SLA)

Explanation:
Requirements such as uptime, customer service response time, and availability should be outlined in a Service Level Agreement (SLA). When a provider doesn’t meet their SLA requirements, it could lead to termination of the contract or financial benefits to the cloud customer.

PLAs, BAAs, and DPAs are all fundamentally the same: agreements, like an SLA, but covering the privacy requirements for protecting the personal data or Personally Identifiable Information (PII) that will be stored and processed on the cloud provider’s equipment. DPA is the term used in the European Union under the General Data Protection Regulation (GDPR), HIPAA in the USA requires a BAA, and PLA is a generic term used elsewhere.

73
Q

Alrik has a presentation to the Board of Directors to explain his proposed business case to move their physical data centers into the cloud. The plan is to create virtual data centers in an Infrastructure as a Service (IaaS) environment on a public cloud provider. One of the key focuses of the business case is the responsibilities that they will retain when they move away from the capital expenditure of the hardware that creates a data center.

Which responsibility below is critical to maintaining their control over their applications and data?

A. The responsibility for the Operating Systems (OS) lies with the customer
B. The responsibility for Operating Systems lies with the cloud provider
C. The responsibility for paying for their usage is the customer’s
D. The responsibility for data organization lies with the customer

A

A. The responsibility for the Operating Systems (OS) lies with the customer

Explanation:
In an IaaS environment, the customer can bring their own operating systems for virtual machines such as routers, switches, firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), and servers. Since they own the OSs on those servers, they have control over the applications and the data within them. The cloud provider does not own those server OSs and therefore is not responsible for them.

The customer does have to pay for their usage, but that is not the topic of the question, which is about control over the applications and data.

The responsibility for data organization does lie with the customer, but that does not address control over the applications. The OS controls the application, and the application controls the data.

74
Q

The management team of a large retailer is working with Dasha and the information security team to prepare for major incidents. They are aware that it is possible for their online e-commerce systems to be offline at the wrong time of the year, so they are planning for a variety of incidents. To prepare properly, it is necessary to classify the incidents that could occur.

What are the critical elements that determine the classification?

A. If they are anticipating malware, it is necessary to consider the financial impact on their business, which is determined by the time of the day and year. This is further impacted by the system(s) that are affected.
B. If they are anticipating that a power outage could occur that would have an impact, it is necessary to take into consideration the time of the day it could occur.
C. If they are anticipating that a bad actor could gain access to their system, they must take into consideration the legal impact it could have.
D. If they are anticipating that an event could have an impact on their business, it is critical to consider what time of the day the event could occur.

A

A. If they are anticipating malware, it is necessary to consider the financial impact on their business, which is determined by the time of the day and year. This is further impacted by the system(s) that are affected.

Explanation:
Correct answer: If they are anticipating malware, it is necessary to consider the financial impact on their business, which is determined by the time of the day and year. This is further impacted by the system(s) that are affected.

There are many factors that can impact how bad an incident could be for a business. The impact of the event and the urgency it needs to be treated with are critical aspects. The correct answer identifies those two elements. The impact is financial and the urgency is the day of the year and the time that certain systems would be offline.

The answer “If they are anticipating that a bad actor could gain access to their system, they must take into consideration the legal impact it could have” identifies the type of incident (the bad actor gaining access) and the impact (legal), but it does not identify any level of urgency.

The answer “If they are anticipating that an event could have an impact on their business, it is critical to consider what time of the day the event could occur” identifies the urgency through the time of day, but it does not identify the impact, as neither the affected systems nor the type of impact is considered.

The answer “If they are anticipating that a power outage could occur that would have an impact, it is necessary to take into consideration the time of the day it could occur” identifies the incident type and time of day, but it does not get to the systems or the impact it would have.

75
Q

Colette, an information security professional, is in charge of the team that has the responsibility of picking the next location for a new data center for a large public cloud provider. There are many concerns that must be taken into consideration, such as the location possibly being in the normal path of large hurricanes.

As they research new locations, what are some of the other concerns they need to address?

A. Water supply, access control procedures, skilled Information Technology (IT) personnel, and the regulatory and legal frameworks in that location
B. Generators, high-speed internet connectivity, skilled Information Technology (IT) personnel, and the regulatory and legal frameworks in that location
C. Water supply, high-speed internet connectivity, availability of enough fiber optic cable, and the regulatory and legal frameworks in that location
D. Water supply, high-speed internet connectivity, skilled Information Technology (IT) personnel, and the regulatory and legal frameworks in that location

A

D. Water supply, high-speed internet connectivity, skilled Information Technology (IT) personnel, and the regulatory and legal frameworks in that location

Explanation:
Correct answer: Water supply, high-speed internet connectivity, skilled Information Technology (IT) personnel, and the regulatory and legal frameworks in that location

Water supply, high-speed internet connectivity, skilled Information Technology (IT) personnel, and the regulatory and legal frameworks in that location are some of the additional considerations when picking a site. There are more, such as weather patterns/threats, the local crime rate, the potential of the site being expanded as the data center grows, the impact on the local community, and so on.

Having enough fiber optic cable is not location specific. It is possible to have more shipped to that area if necessary (and likely).

Regarding access control procedures, if the company is first selecting and then building a site, they will construct the access control that they require as they build the site.

Generators are necessary for data centers but not critical in selecting sites. This is similar to having enough fiber optic cable—more will come in from elsewhere.

76
Q

An information security professional has been asked to review a piece of completed software to ensure that there are no defects and that the code is free of bugs. What phase of the software development lifecycle is currently being described?

A. Development
B. Maintenance
C. Testing
D. Analysis

A

C. Testing

Explanation:
During the testing phase of the SDLC, the completed code is reviewed for problems. It’s checked to ensure that it is functioning and operating as expected. This includes having quality assurance check the software for defects and bugs. During testing, the code is also checked using security scans to ensure that it is secure.

The development phase should include testing. As soon as there are lines of code, they can be analyzed with Static Application Security Testing. In the question, though, it says “completed software,” implying this phase is over.

Analysis is not a commonly used name for this phase, but it would be close. If testing was not here, it could have been the right answer.

Maintenance means the software is in production, and testing will occur (or should occur) before patches or changes are deployed. But again, the key to the question is “completed software.” It leaves room for the possibility that it has not been deployed yet, so we are in the testing phase.

77
Q

The CCSP specifies that ALL of the following attributes should be included in event logs EXCEPT…

A. IP Address
B. MAC Address
C. Geolocation
D. User Identity

A

B. MAC Address

Explanation:
An event is anything that happens on an IT system, and most IT systems are configured to record these events in various log files. When implementing logging and event monitoring, event logs should include the following attributes to identify the user:

User Identity: A username, user ID, globally unique identifier (GUID), process ID, or other value that uniquely identifies the user, application, etc. that performed an action on a system.
IP Address: The IP address of a system can help to identify the system associated with an event, especially if the address is a unique, internal one. With public-facing addresses, many systems may share the same address.
Geolocation: Geolocation information can be useful to capture in event logs because it helps to identify anomalous events. For example, a company that doesn’t allow remote work should have few (if any) attempts to access corporate resources from locations outside the country or region.

The CCSP doesn’t identify the MAC address as an important attribute to include in event logs.
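
As a sketch, a structured event-log entry carrying the attributes above might look like the following (the field names and values are illustrative, not from any particular standard):

```python
import json
from datetime import datetime, timezone

# Hypothetical structured log entry with the attributes the CCSP calls
# out: user identity, IP address, and geolocation, plus the action taken.

event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user_identity": "svc-backup-7f3a",          # username, GUID, or process ID
    "source_ip": "10.20.30.40",                  # internal address identifies the host
    "geolocation": {"country": "US", "region": "OR"},
    "action": "login.success",
}

print(json.dumps(event))
```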

78
Q

Some regulations like HIPAA and GDPR relax their restrictions on how data can be used if it has undergone which of the following?

A. Anonymization
B. Hashing
C. Encryption
D. Masking

A

A. Anonymization

Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:

Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 is a US government standard for hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
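
Three of the techniques above can be illustrated with short standard-library sketches (a real system would use vetted libraries and a hardened, separately secured token vault):

```python
import hashlib
import secrets

def mask_card(pan: str) -> str:
    """Masking: keep only the last four digits visible."""
    return "*" * (len(pan) - 4) + pan[-4:]

def hash_record(data: bytes) -> str:
    """Hashing: one-way SHA-256 digest, usable for integrity checks."""
    return hashlib.sha256(data).hexdigest()

token_vault: dict[str, str] = {}  # stands in for the secured mapping table

def tokenize(value: str) -> str:
    """Tokenization: replace the value with a random token; only the
    vault can map the token back to the original data."""
    token = secrets.token_hex(8)
    token_vault[token] = value
    return token

masked = mask_card("4111111111111111")
assert masked.endswith("1111") and masked.startswith("*")
t = tokenize("4111111111111111")
assert token_vault[t] == "4111111111111111"
```

Note that masking and tokenization are reversible only via the original data or the vault, while hashing is deliberately not reversible at all.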
79
Q

Because an organization is using a Cloud Service Provider (CSP) for their application that they are developing, they have determined that they need to encrypt data from the server-based application to the client on the user’s machine. What technology would you recommend?

A. Secure Shell (SSH)
B. Internet Protocol Security (IPSec)
C. Sandbox
D. Transport Layer Security (TLS)

A

D. Transport Layer Security (TLS)

Explanation:
TLS is the most common choice for encrypting client-to-server connections. It is possible to use SSH or IPSec, but they are not the most common.

SSH is most commonly used by administrators and operators to configure, manage, and monitor virtual devices in the cloud.

IPSec is most commonly used for site-to-site connections, for example, from an edge router at the on-prem data center to a router in the cloud.

Sandboxing is used to isolate something, for example, a code object, an application, a virtual device, or a virtual network.
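
A minimal sketch of a TLS client using Python’s standard library; the host name is a placeholder, and no connection is actually made here:

```python
import socket
import ssl

# create_default_context() enables certificate verification and
# hostname checking by default, which is what you want for a
# client-to-server connection like the one in the question.
context = ssl.create_default_context()

def fetch_tls_version(host: str, port: int = 443) -> str:
    """Open a TLS-wrapped socket and report the negotiated protocol."""
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls_sock:
            return tls_sock.version()  # e.g. "TLSv1.3"
```

Calling `fetch_tls_version("example.com")` against a live host would return the negotiated TLS version string.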

80
Q

Which regulation would be used to build a risk-based policy for cost-effective security for government agencies?

A. Federal Information Security Management Act (FISMA)
B. Gramm-Leach-Bliley Act (GLBA)
C. Health Insurance Portability and Accountability Act (HIPAA)
D. Protected Health Information (PHI)

A

A. Federal Information Security Management Act (FISMA)

Explanation:
US government agencies must build risk-based policies for cost-effective security. Government agencies are not immune to bad actors attacking them. In the past, the security within government agencies was not very good, so this regulation demands that they do better.

GLBA requires financial institutions to protect the privacy and security of their customers' personal financial data. HIPAA requires that Protected Health Information (PHI) be protected; PHI itself is a category of data, not a regulation.

81
Q

A corporation has experienced a Denial of Service (DoS) attack. The corporation was able to contact the bad actors, and they were told to pay the bad actors in bitcoin or the attack would continue. When they did not pay, the bad actor took things to the next level and performed a brute force attack on their cloud account password. Since the corporation did not set up multifactor authentication, the bad actor was successful. The bad actor was then able to access their cloud account and delete everything. This included all the company’s virtual machines in their Infrastructure as a Service (IaaS) environment. The company ceased to exist immediately.

What was utilized when the bad actor was deleting all the corporation’s cloud assets?

A. Secure Shell (SSH)
B. Management plane
C. Hypervisor
D. Hyper Text Transfer Protocol Secure (HTTPS)

A

B. Management plane

Explanation:
The management plane is used to log in and control virtual machines and build virtual machines as well as configure virtual routers, switches, and firewalls. This is what the bad actor was using when they connected to the company’s cloud account to delete all the assets.

The hypervisor is the thin operating system on a server that allows virtual machines to be constructed, among so many other things.

HTTPS is an encrypted web (HTTP) connection. Transport Layer Security (TLS) is used to establish and manage the encryption for the web session. It is possible to use HTTPS to connect to a cloud environment and manage it. It is possible that this was used, but since the bad actor deleted all the company’s assets (their virtual machines), it is the actual management connection that is critical to this question.

The same is true with SSH. It is the encryption commonly used to protect the administrator’s connection. The connection itself is what is critical to the question, not the encryption techniques (SSH, TLS, HTTPS).

82
Q

Which of the following disaster events can multivendor pathway connectivity help to mitigate?

A. Server room flooding
B. Fallen tree taking down network cables
C. Localized power outage
D. HVAC system failure

A

B. Fallen tree taking down network cables

Explanation:
Data centers require network connectivity, and many network outages are isolated to a single ISP. Data centers often have dual-provider, dual-entry network connectivity, where network cables from multiple service providers enter the building from different locations to minimize the chance that an outage or accident will take network connectivity down completely.
In this scenario, a fallen tree may damage the cables of one ISP. However, a different ISP should have a different entry point, and their services may remain online.

83
Q

Which of the following solutions can be difficult to secure because certain security solutions can’t be deployed without an underlying OS?

A. Container
B. Hypervisor
C. Ephemeral Computing
D. Serverless

A

D. Serverless

Explanation:
Some important security considerations related to virtualization include:

Hypervisor Security: The primary virtualization security concern is isolation, or ensuring that different VMs can’t affect each other or read each other’s data. VM escape attacks occur when a malicious VM exploits a vulnerability in the hypervisor or virtualization platform to break this isolation.
Container Security: Containers are self-contained packages that include an application and all of the dependencies that it needs to run. Containers improve portability but have security concerns around poor access control and container misconfigurations.
Ephemeral Computing: Ephemeral computing is a major benefit of virtualization, where resources can be spun up and destroyed at need. This enables greater agility and reduces the risk that sensitive data or resources will be vulnerable to attack when not in use. However, these systems can be difficult to monitor and secure since they only exist briefly when they are needed, so their security depends on correctly configuring them.
Serverless Technology: Serverless applications are deployed in environments managed by the cloud service provider. Outsourcing server management can make serverless systems more secure, but it also means that organizations can’t deploy traditional security solutions that require an underlying OS to operate.
84
Q

A banking corporation is planning to move to a cloud Infrastructure as a Service (IaaS). The lead information security manager, Ava, has been working with her team to evaluate potential cloud providers. They want to see if the controls that the cloud provider has in place are appropriate. Which report should they request?

A. SOC 2 Type 2
B. SOC 2 Type 1
C. SOC 3
D. SOC 1 Type 1

A

B. SOC 2 Type 1

Explanation:
SOC 2 reports focus on the security controls within an organization based on the five trust principles of security, availability, processing integrity, confidentiality, and privacy. Type 1 reports show the appropriateness of controls at a specific moment in time. Type 2 shows the effectiveness of controls over time.

A SOC 1 Type 1 report is focused on the impact security controls have on the customer’s financial statements. The Type 1/Type 2 distinction is the same as for SOC 2 above.

A SOC 3 is a summary report of the SOC 2 that is designed to be publicly disseminated.

85
Q

Which of the following risk treatment strategies involves implementing controls to manage the likelihood or impact of a potential risk?

A. Mitigation
B. Avoidance
C. Transference
D. Acceptance

A

A. Mitigation

Explanation:
Risk treatment refers to the ways that an organization manages potential risks. There are a few different risk treatment strategies, including:

Avoidance: The organization chooses not to engage in risky activity. This creates potential opportunity costs for the organization.
Mitigation: The organization puts controls in place that reduce or eliminate the likelihood or impact of the risk. Any risk that is left over after the security controls are in place is called residual risk.
Transference: The organization transfers the risk to a third party. Insurance is a prime example of risk transference.
Acceptance: The organization takes no action to manage the risk. Risk acceptance depends on the organization’s risk appetite or the amount of risk that it is willing to accept.
86
Q

Odart is the information security professional working with the development and operations team that is planning for the deployment of a specific application and the Application Programming Interface (API) needed to make it function in their Platform as a Service (PaaS) deployment. As they are building, they get to the point of building the secrets manager.

What is its purpose?

A. Securing sensitive information such as API keys
B. Managing access control policies for the API
C. Logging and monitoring the API usage
D. Optimizing resource utilization for the API

A

A. Securing sensitive information such as API keys

Explanation:
The secrets manager is used to manage sensitive pieces of information such as passwords and API keys. Passwords and API keys should not be hard-coded into the software source code or the API; if they are, it is much more likely that they will be compromised. Uber suffered a large breach because they had passwords in their source code. Access to the secrets manager should be strictly controlled. Ideally, the application first authenticates to a bootstrap secrets server, which issues a short-lived secret granting access to the real secrets server. Only then does the application obtain the secret it needs (a password or API key) to log in to the target system. Sounds convoluted? In a way, yes, but it is like trying to get to a secured room in a building: you need your badge to get into the building, then you need a PIN to get into the secured room, except the PIN for the room is changed regularly and you need to speak to the guard for today’s PIN.

The secrets manager does not optimize resources. Something like Dynamic Optimization (DO) can do that.

The secrets manager does not log API usage. It is a good idea to make sure the logging is turned on, and the logs are sent to the syslog server (or equivalent).

The secrets manager does not manage access control policies for the API. That needs to be done by the cloud administrators and operators.

All access to the secrets manager does need to be logged. Policies do need to be created, and somewhere someone needs to optimize resource use, but not here.
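The "fetch at runtime instead of hard-coding" idea can be sketched in a few lines of Python. Everything here is a hypothetical stand-in (the in-memory store, the `fetch_secret` function, and the `SECRETS_BOOTSTRAP_TOKEN` variable are illustrative, not a specific vendor's API):

```python
import os

# Hypothetical in-memory stand-in for a secrets manager service.
# A real deployment would call a vendor API over TLS with short-lived
# credentials; the names below are assumptions for illustration only.
_SECRET_STORE = {"billing-api-key": "s3cr3t-example"}

def fetch_secret(name: str) -> str:
    """Retrieve a secret at runtime instead of hard-coding it in source."""
    # The "secret to get a secret": a bootstrap credential that grants
    # access to the real secrets store, mirroring the badge-then-PIN analogy.
    token = os.environ.get("SECRETS_BOOTSTRAP_TOKEN")
    if token is None:
        raise PermissionError("no bootstrap credential presented")
    return _SECRET_STORE[name]

# BAD:  api_key = "s3cr3t-example"              # ends up in source control
# GOOD: api_key = fetch_secret("billing-api-key")  # resolved only at runtime
```

The point of the sketch is the call pattern, not the storage: the secret never appears in the codebase, and access to it requires a separate, rotating credential.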

87
Q

Which of the following data retention considerations primarily applies to archived data rather than data in active use?

A. Retention Requirements
B. Retention Periods
C. Regulatory Requirements
D. Data Classifications

A

B. Retention Periods

Explanation:
Data retention policies define how long an organization stores particular types of data. Some of the key considerations for data retention policies include:

Retention Periods: Defines how long data should be stored. This usually refers to archived data rather than data in active use.
Regulatory Requirements: Various regulations have rules regarding data retention. These may mandate a maximum period for which data may be retained or a minimum period for which it must be saved. Typically, the first applies to personal data, while the second applies to business and financial data or security records.
Data Classification: The classification level of data may impact its retention period or the means by which the data should be stored and secured.
Retention Requirements: In some cases, specific requirements may exist for how data should be stored. For example, sensitive data should be encrypted at rest. Data retention may also be impacted by legal holds.
Archiving and Retrieval Procedures and Mechanisms: Different types of data may have different requirements for storage and retrieval. For example, data used as backups as part of a BC/DR policy may need to be more readily accessible than long-term records.
Monitoring, Maintenance, and Enforcement: Data retention policies should have rules regarding when and how the policies will be reviewed, updated, audited, and enforced.
88
Q

An organization has just moved from a traditional data center environment to a cloud IaaS environment. Prior to moving to this new environment, the security team did not do a risk assessment or ensure the security of the new cloud provider.

What type of threat is being described here?

A. Insufficient identity, credential, access, and key management
B. Unsecure third-party resources
C. Misconfiguration and inadequate change control
D. Insufficient due diligence

A

D. Insufficient due diligence

Explanation:
When a security team or organization doesn’t perform proper due diligence (such as performing risk assessments and ensuring that the new cloud provider has the proper security procedures in place), it creates threats and problems that could have been addressed before moving to the new environment. Through active due diligence, such as training and maintaining proper procedures, many common threats and risks can be avoided.

The category insufficient identity, credential, access, and key management is the top threat according to the Cloud Security Alliance in their Pandemic Eleven report, which is a highly recommended read. It describes a lack of care around each of these items: securing access to systems, files, applications, and buildings is necessary; using good Identity and Access Management (IAM) tools is critical; and ensuring that the principle of least privilege is followed is important, among other similar concerns. This answer option is not about a lack of a risk assessment.

The answer option misconfiguration and inadequate change control does not work as the correct answer because it refers to the actual settings within the servers, routers, firewalls, etc.

Unsecure third-party resources might actually be what they would have found, and put more controls around, if they had done the risk assessment.

89
Q

Which of the following cloud software security considerations is MORE relevant if a company is using microservices?

A. API Security
B. Third-Party Software
C. Open-Source Software
D. Supply Chain Security

A

A. API Security

Explanation:
Some important considerations for secure software development in the cloud include:

API Security: In the cloud, the use of microservices and APIs is common. API security best practices include identifying all APIs, performing regular vulnerability scanning, and implementing access controls to manage access to the APIs.
Supply Chain Security: An attacker may be able to access an organization’s systems via access provided to a partner or vendor, or a failure of a provider’s systems may place an organization’s security at risk. Companies should assess their vendors’ security and ability to provide services via SOC 2 reports and ISO 27001 certifications.
Third-Party Software: Third-party software may contain vulnerabilities or malicious functionality introduced by an attacker. Also, the use of third-party software is often managed via licensing, with whose terms an organization must comply. Visibility into the use of third-party software is essential for security and legal compliance.
Open Source Software: Most software uses third-party and open-source libraries and components, which can include malicious functionality or vulnerabilities. Developers should use software composition analysis (SCA) tools to build a software bill of materials (SBOM) to identify any potential vulnerabilities in components used by their applications.
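The access-control best practice for APIs mentioned above can be sketched with a minimal key check. The header names, client IDs, and `VALID_KEYS` table are invented for illustration; a real deployment would use a gateway or IAM service rather than a hand-rolled check:

```python
import hmac

# Hypothetical lookup of client IDs to their issued API keys.
VALID_KEYS = {"team-a": "k-123"}

def authorize(headers: dict) -> bool:
    """Allow a request only if it presents a valid API key for its client ID."""
    presented = headers.get("X-API-Key", "")
    expected = VALID_KEYS.get(headers.get("X-Client-Id", ""), "")
    # compare_digest does a constant-time comparison, avoiding timing
    # side channels that a plain == comparison could leak.
    return bool(expected) and hmac.compare_digest(presented, expected)
```

Pairing this with an inventory of all exposed endpoints and regular vulnerability scanning covers the three practices the bullet lists: identify, scan, and control access.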
90
Q

Giovanni is working with the legal department on a cloud contract with a Managed Service Provider (MSP). They are working on the language of the Service Level Agreement (SLA) for their performance concerns. They require an uptime of 99.9995%. What performance concern are they addressing?

A. Bandwidth
B. Availability
C. Central Processing Unit (CPU)
D. Memory

A

B. Availability

Explanation:
With an uptime requirement of 99.9995%, they are addressing availability. That availability, or lack thereof, could be from bandwidth issues, CPU issues, memory issues, or others.

So the trick (not meant to be tricky) with this question is that availability is the better answer because it includes the other three. This is how (ISC)2 does “all of the above” type of questions.
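It can help to see what an uptime figure like 99.9995% actually allows. A quick back-of-the-envelope conversion (ignoring leap years) shows why such SLA numbers are demanding:

```python
def allowed_downtime_minutes_per_year(uptime_pct: float) -> float:
    """Convert an SLA uptime percentage into allowed downtime per year."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes (non-leap year)
    return minutes_per_year * (1 - uptime_pct / 100)

# 99.9995% uptime leaves only about 2.6 minutes of downtime per year,
# while a more common 99.9% ("three nines") allows roughly 526 minutes.
print(allowed_downtime_minutes_per_year(99.9995))
print(allowed_downtime_minutes_per_year(99.9))
```

Whatever the underlying cause of an outage (bandwidth, CPU, memory), it is this total annual figure that the SLA holds the provider to.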

91
Q

As the information security manager, you are working with the Information Technology (IT) manager to aid them in stepping up the management of the IT environment to provide a more effectively managed environment. When IT is managed well, reliability is much higher. This is a fundamental part of information security. To aid the IT manager, you have been tasked with updating the IT best practices for your organization, which includes updating the service strategy to include cloud practices.

Which framework would be perfect here?

A. COBIT (formerly Control OBjectives for Information Technology)
B. National Institute of Standards and Technology (NIST) CyberSecurity Framework (CSF)
C. ITIL (formerly Information Technology Infrastructure Library)
D. International Standards Organization/ International Electrotechnical Commission (ISO/IEC) 27001

A

C. ITIL (formerly Information Technology Infrastructure Library)

Explanation:
Your organization is most likely using IT Infrastructure Library (ITIL) because it is an IT best practices framework. Its five core subjects are Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement.

NIST CSF provides cybersecurity guidance, not IT best practices. This is advice for cybersecurity. It consists of standards, guidelines, and best practices to manage cybersecurity risk.

ISO/IEC 27001 provides requirements for an Information Security Management System (ISMS). This encompasses IT, but it is a much bigger topic: it is about managing information security throughout the entire business. So it is not specific to the topic of IT best practices.

COBIT is a business framework for enterprise governance of IT. This is similar to ISO/IEC 27001. It is a much bigger topic than IT best practices. It is governance of IT.

92
Q

A multinational bank has discovered that a sensitive database in their Infrastructure as a Service (IaaS) deployment has been compromised. They have contacted law enforcement for assistance because they believe that it is an external bad actor who is responsible for this breach. Law enforcement needs to pursue, recover, and analyze the digital forensic information to determine who the bad actor is to be able to locate, arrest, and prosecute them for their crime.

Which of the following standards establishes internationally recognized standards for eDiscovery that applies to this scenario?

A. ISO/IEC 27002
B. ISO/IEC 27001
C. ISO/IEC 27018
D. ISO/IEC 27050

A

D. ISO/IEC 27050

Explanation:
ISO/IEC 27050 provides internationally accepted standards related to eDiscovery processes and best practices.

ISO/IEC 27001 is used to create and audit Information Security Management Systems (ISMS).

ISO/IEC 27002 contains descriptions of information security controls that can be used when an ISMS is created.

ISO/IEC 27018 specifies how personal data should be protected by cloud providers acting as data processors. Data processor is a term defined by the GDPR for a company that handles and processes personal data on behalf of another company. It excludes employees of that company.

93
Q

Rebekah has been working with software developers on mechanisms that they can implement to protect data at different times. There is a need to use data from a customer database in another piece of software. However, it is necessary to ensure that all personally identifiable elements are removed first.

The process of removing all identifiable characteristics from data is known as which of the following?

A. Masking
B. Anonymization
C. Obfuscation
D. De-identification

A

B. Anonymization

Explanation:
Anonymization is the removal of all personally identifiable pieces of information, both direct and indirect.

Data de-identification is the removal of all direct identifiers. It leaves the indirect ones in place.

Masking is to cover or hide information. This is commonly seen when a user types in their password, yet all that is seen on the screen are stars or dots.

Obfuscation is to confuse. Encryption is one form of obfuscation, but it can be done with other techniques. For example, instead of transmitting data in base 16, it could be sent in base 64.
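The distinction between the three data-hiding techniques is easiest to see side by side. The record and field names below are invented toy data, and which fields count as direct vs. indirect identifiers is a simplifying assumption:

```python
# Toy record; field names are illustrative, not from any real schema.
record = {"name": "A. Rossi", "zip": "10115", "birth_year": 1984, "balance": 1200}

def mask(value: str) -> str:
    """Masking: cover the characters but keep the field (like password dots)."""
    return "*" * len(value)

def de_identify(rec: dict) -> dict:
    """De-identification: strip direct identifiers only; indirect ones
    (zip, birth_year) remain and could still allow re-identification."""
    return {k: v for k, v in rec.items() if k != "name"}

def anonymize(rec: dict) -> dict:
    """Anonymization: remove both direct AND indirect identifiers."""
    return {k: v for k, v in rec.items()
            if k not in {"name", "zip", "birth_year"}}

print(mask("hunter2"))      # prints *******
print(de_identify(record))  # name gone; zip and birth_year kept
print(anonymize(record))    # only non-identifying data remains
```

This is why anonymization, not de-identification, is the right answer when all identifiable characteristics must be removed: de-identification would still hand the downstream software the indirect identifiers.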

94
Q

The speed with which incidents are acknowledged and triaged, the required schedule for notifying customers of planned downtime, monthly uptime percentage, the operating hours of support resources, and the timeframe for reporting service performance are all examples of communication items that should be included in what, between a Cloud Service Provider (CSP) and the Cloud Service Customer (CSC)?

A. Service Level Agreements (SLA)
B. Privacy Level Agreement (PLA)
C. Business Associate Agreement (BAA)
D. Master Services Agreement (MSA)

A

A. Service Level Agreements (SLA)

Explanation:
Levels of communication service from the CSP should be defined and agreed upon by both parties. This is why it is vital for customers to be involved in setting SLA terms. SLAs may be adjusted to provide further flexibility, albeit at significant expense.

MSAs define the basic relationship between the two parties, the CSC and the CSP in this example. It defines what each party is responsible for in the relationship.

PLAs are a generic form of the BAA found in HIPAA and the Data Processing Agreement (DPA) under the EU GDPR. A PLA defines and describes the personal data that will be stored within the cloud and how it needs to be protected.

95
Q

Which of the following determines what is considered a “critical” system when developing plans for disruptive incidents?

A. BCP
B. COOP
C. DRP
D. BIA

A

D. BIA

Explanation:
A business continuity plan (BCP) sustains operations during a disruptive event, such as a natural disaster or network outage. It can also be called a continuity of operations plan (COOP).

A disaster recovery plan (DRP) works to restore the organization to normal operations after such an event has occurred.

The decision of what needs to be included in a business continuity plan is determined by a business impact analysis (BIA), which determines what is necessary for the business to function vs. what is merely “nice to have.”

96
Q

Which of the following areas is always entirely the CSP’s responsibility regardless of the cloud service model used?

A. Virtual networking
B. Databases
C. Infrastructure
D. Storage

A

C. Infrastructure

Explanation:
The Cloud Service Provider (CSP) is always responsible for managing the infrastructure. The infrastructure includes the servers, routers, switches, firewalls, and so on that make up a data center.

The consumer is always responsible for their Governance, Risk management, and Compliance (GRC) and their data.

The operating systems, virtual networking, and storage responsibilities change according to the cloud service model.

97
Q

Danila is working to ensure that the information her corporation puts on the public cloud provider is protected properly. There are laws and regulations that the corporation must be in compliance with. She is looking for a set of best practices that will guide her decisions. The International Standards Organization/International Electrotechnical Commission (ISO/IEC) 27018 standard is focused on five key principles of personal data protection. Her concern is that she has seen large providers hit with big fines for not handling cookies properly.

Which principle is directly related to cookies?

A. Audits
B. Transparency
C. Communication
D. Consent

A

D. Consent

Explanation:
ISO/IEC 27018 is an international standard for security and privacy in cloud computing. The five key principles of ISO/IEC 27018 are communication, consent, control, transparency, and independent and yearly audits.

Consent is critical with cookies. When the customer does not have the right to select their preferences for cookies and data collection, they can be fined.

The corporation needs to be transparent with their privacy practices. That includes being clear about what data they are collecting, how long they will store it, and what they will do with it.

The corporation needs to be in communication with their customers and possibly the regulators. They need to ensure that the customer is aware of any changes that the company is making regarding the handling of data. They need to be in communication with regulators and possibly the customers if any issues or breaches are found, and so on.

Auditing the systems, policies, and handling of data should be done on a regular basis—perhaps at least on a yearly basis.

98
Q

In which cloud service model is the customer responsible for securing VMs and the software installed in them, while the cloud provider is responsible for the physical components, virtualization software, and network infrastructure?

A. PaaS
B. IaaS
C. SaaS
D. All service models

A

B. IaaS

Explanation:
Compute resources include the components that offer memory, CPU, disk, networking, and other services to the customer. In all cases, the cloud service provider (CSP) is responsible for the physical infrastructure providing these services.

However, at the software level, responsibility depends on the cloud service model in use, including:

Infrastructure as a Service (IaaS): In an IaaS environment, the CSP provides and manages the physical components, virtualization software, and networking infrastructure. The customer is responsible for configuring and securing their VMs and the software installed in them.
Platform as a Service (PaaS): In a PaaS environment, the CSP’s responsibility extends to offering and securing the operating systems, database management systems (DBMSs), and other services made available to a customer’s applications. The customer is responsible for properly configuring and using these services and the security of any software that they install or use.
Software as a Service (SaaS): In a SaaS environment, the CSP is responsible for everything except the custom settings made available to the cloud customer. For example, if a cloud storage drive can be set to be publicly accessible, that is the customer’s responsibility, not the CSP’s.
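The split described in the three bullets can be summarized as a small lookup table. The grouping below is a simplified sketch of the shared responsibility model, not any specific CSP's published matrix:

```python
# Simplified shared-responsibility split by service model.
# Real CSP responsibility matrices are considerably more granular.
RESPONSIBILITY = {
    "IaaS": {"customer": ["VMs", "OS", "applications", "data"],
             "csp":      ["physical hardware", "virtualization", "network"]},
    "PaaS": {"customer": ["applications", "configuration", "data"],
             "csp":      ["physical hardware", "virtualization", "network",
                          "OS", "DBMS"]},
    "SaaS": {"customer": ["data", "tenant settings"],
             "csp":      ["everything else"]},
}
```

Reading the table confirms why IaaS is the answer: it is the only model where the OS and the software on the VMs sit on the customer's side of the line.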
99
Q

Which of the following organizations publishes the CWE Top 25 describing the 25 most dangerous weaknesses and errors in software each year?

A. NIST
B. OWASP
C. CSA
D. SANS

A

D. SANS

Explanation:
Several organizations provide resources designed to teach about the most common vulnerabilities in different environments. Some examples include:

Cloud Security Alliance Top Threats to Cloud Computing: These lists name the most common threats, such as the Egregious 11. According to the CSA, the top cloud security threats include data breaches; misconfiguration and inadequate change control; lack of cloud security architecture and strategy; and insufficient identity, credential, access, and key management.
OWASP Top 10: The Open Web Application Security Project (OWASP) maintains multiple top 10 lists, but its web application list is the most famous and is updated every few years. The top four threats in the 2021 list were broken access control, cryptographic failures, injection, and insecure design.
SANS CWE Top 25: SANS maintains a Common Weakness Enumeration (CWE) that describes all common security errors. Its Top 25 list highlights the most dangerous and impactful weaknesses each year. In 2021, the top four were out-of-bounds write, improper neutralization of input during web page generation (cross-site scripting), out-of-bounds read, and improper input validation.

NIST doesn’t publish regular lists of top vulnerabilities.

100
Q

Which of the following is an example of proper data sanitization in an Infrastructure as a Service (IaaS) cloud environment when a contract for service is being terminated?

A. The customer corporation copies all their files to their on-premises data center and then has the cloud drives removed and shredded
B. The cloud provider moves the customer’s files to a new set of hard drives for transport to the on-premises data center
C. The customer corporation copies their files to drives within a new cloud provider and encrypts their old environment and destroys the key
D. The customer corporation creates a Platform as a Service (PaaS) replacement within the same cloud provider and moves their files to the new servers

A

C. The customer corporation copies their files to drives within a new cloud provider and encrypts their old environment and destroys the key

Explanation:
Data sanitization is the process of destroying data so that it cannot later be retrieved from the drives. In a public cloud IaaS environment, it is not possible for the cloud customer to take traditional actions like shredding the hard drive. It would be good to ensure that the contract lists that as an action the provider will take once a drive is removed from service. However, because of data dispersion, the customer’s data will not sit on just a few drives unless this is a private cloud, and there is no evidence of that in the question or the answers. So crypto-shredding is the best option: encrypt the data and then destroy the key. A couple of rounds of this would be even more beneficial.

Since this is not a private cloud, it is not reasonable to think that the provider would shred the drives when the customer leaves.

Simply moving the files to new drives to change services (to PaaS) or to move the files on external drives for transport to the data center will not sanitize the files that were in the IaaS environment. Action must be taken to permanently remove the files from the previous locations.
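The crypto-shredding idea can be sketched with a toy cipher. This example uses a SHA-256-based keystream purely for illustration; it is not a vetted cipher, and real crypto-shredding would use something like AES-GCM through a standard crypto library:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 of key + counter. Illustration only —
    do NOT use home-grown ciphers in production."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR stream cipher: calling it again with the same key decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)
ciphertext = encrypt(key, b"customer records")
# Crypto-shredding: destroy every copy of `key`. The ciphertext left
# behind on the provider's dispersed drives is now unrecoverable,
# regardless of how many physical drives the data was spread across.
key = None
```

The appeal in a multi-tenant cloud is exactly this: the customer never needs physical access to the drives; destroying the key alone renders every scattered copy of the ciphertext useless.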

101
Q
A