Pocket Prep 12 Flashcards
In which cloud service model does the CSP’s responsibility extend to securing operating systems, database management systems (DBMSs), and similar components made available to the cloud customer?
A. PaaS
B. IaaS
C. All service models
D. SaaS
A. PaaS
Explanation:
Compute resources include the components that offer memory, CPU, disk, networking, and other services to the customer. In all cases, the cloud service provider (CSP) is responsible for the physical infrastructure providing these services.
However, at the software level, responsibility depends on the cloud service model in use, including:
Infrastructure as a Service (IaaS): In an IaaS environment, the CSP provides and manages the physical components, virtualization software, and networking infrastructure. The customer is responsible for configuring and securing their VMs and the software installed in them.
Platform as a Service (PaaS): In a PaaS environment, the CSP’s responsibility extends to offering and securing the operating systems, database management systems (DBMSs), and other services made available to a customer’s applications. The customer is responsible for properly configuring and using these services and for the security of any software that they install or use.
Software as a Service (SaaS): In a SaaS environment, the CSP is responsible for everything except the custom settings made available to the cloud customer. For example, if a cloud storage drive can be set to be publicly accessible, that setting is the customer’s responsibility, not the CSP’s.
Which framework, developed by the International Data Center Authority (IDCA), covers all aspects of data center design, including cabling, location, connectivity, and security?
A. Infinity Paradigm
B. HITRUST
C. OCTAVE
D. Risk Management Framework
A. Infinity Paradigm
Explanation:
The International Data Center Authority (IDCA) is responsible for developing the Infinity Paradigm, a framework intended to be used for operations and data center design. The Infinity Paradigm covers many aspects of data center design, including location, cabling, security, connectivity, and much more.
Risk Management Framework (RMF) is defined by NIST as “a process that integrates security, privacy, and cyber supply chain risk management activities into the system development life cycle. The risk-based approach to control selection and specification considers effectiveness, efficiency, and constraints due to applicable laws, directives, Executive Orders, policies, standards, or regulations.”
The Health Information Trust Alliance (HITRUST) is a non-profit organization. They are best known for developing the HITRUST Common Security Framework (CSF), in collaboration with healthcare, technology, and information security organizations around the world. It aligns standards from ISO, NIST, PCI, and regulations like HIPAA.
The Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a software threat modeling technique by Carnegie Mellon University that was developed for the US Department of Defense (DoD).
Ardal is the information security manager working for a manufacturing company that specializes in molded silicon kitchen products. They are moving their customer data and product information into a Platform as a Service (PaaS) public cloud environment. Ardal and his team have been analyzing the risks associated with this move so that they can ensure the most appropriate security controls are in place.
Which of the following is TRUE regarding the transfer of risk?
A. Transfer of risk is often the cheapest option for responding to risk
B. Risk is never truly transferred. Transference simply shares the risk with another company.
C. Risk transfer should always be the first avenue that an organization takes to respond to risk
D. Risk transfer can only be done when the organization has exhausted all other risk responses
B. Risk is never truly transferred. Transference simply shares the risk with another company.
Explanation:
Risk transference is better described as risk sharing, although "transfer" is the word in common use. Placing data on a cloud provider’s infrastructure does not remove the risk for the customer, nor does it hand the risk to the provider. The customer is always responsible for their data.
Risk transfer/sharing simply means that the cloud provider also has a responsibility to care for the data. The critical word in that sentence is also. Under GDPR, the cloud provider is required to care for the data, and a Data Processing Agreement (DPA) should be created to inform the provider of their responsibilities. A DPA is, more generically, a Privacy Level Agreement (PLA).
Risk transfer can be done at any time and is not necessarily the cheapest of the options.
Risk transfer is not the first avenue for risk management. There are four options. This is just one of them. The other three are risk reduction/mitigation, risk avoidance, and risk acceptance.
Your organization is in the process of migrating to the cloud. Mid-migration you come across details in an agreement that may leave you non-compliant with a particular law. Who would be the BEST contact to discuss your cloud-environment compliance with legal jurisdictions?
A. Stakeholder
B. Consultant
C. Regulator
D. Partner
C. Regulator
Explanation:
As a CCSP, you are responsible for ensuring that your organization’s cloud environment adheres to all applicable regulatory requirements. By staying current on regulatory communications surrounding cloud computing and maintaining contact with approved advisors and, most crucially, regulators, you should be able to assure compliance with legal jurisdictions.
A partner is a generic term that can be used to refer to many different companies. For example, an auditor can be considered a partner.
A stakeholder is someone who has an interest or concern in some part of the business.
A consultant could assist with just about anything. It all depends on what their skills are. It is plausible that a consultant could help with legal issues. However, regulators definitely understand the laws, so that makes for the best answer.
Rafferty just configured the server-based Platform as a Service (PaaS) that they are using for their company, a government contractor. The server will be used to perform computations related to customer actions on their e-commerce website. He is concerned that they may not have enough CPU and memory allocated to them when they need it.
What should he do?
A. Ensure the limits will not cause any problems with the service
B. Make sure that the server has available share space
C. Ensure a reservation is made at the minimum level needed
D. Set a limit to make sure that the service will work correctly
C. Ensure a reservation is made at the minimum level needed
Explanation:
A minimum resource that is granted to a cloud customer within a cloud environment is known as a reservation. With a reservation, the cloud customer should always have, at the minimum, the amount of resources needed to power and operate any of their services.
On the flip side, limits are the opposite of reservations. A limit is the maximum utilization of memory or processing allowed for a cloud customer. Setting limits is a good idea to control costs, especially on a new service.
The share space is what is available for any customer to utilize. The cloud works on a first come, first served approach.
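The reservation/limit/share model described above can be pictured as a simple allocation policy: each tenant is guaranteed its reservation, capped at its limit, and any surplus demand competes for the shared space first come, first served. The sketch below is a hypothetical toy model, not any provider's actual scheduler; all names and numbers are illustrative.

```python
# Toy model of reservations, limits, and shared capacity.
# The allocation policy and all figures are simplified illustrations.

def allocate(total_capacity, tenants):
    """tenants: list of (name, reservation, limit, requested) tuples.
    Each tenant is guaranteed up to its reservation; demand beyond the
    reservation is served first come, first served from the remaining
    shared space, and no tenant may exceed its limit."""
    allocations = {}
    # Reservations are carved out first: guaranteed minimums.
    remaining = total_capacity - sum(r for _, r, _, _ in tenants)
    for name, reservation, limit, requested in tenants:
        granted = min(requested, reservation)
        # Extra demand beyond the reservation competes for shared space.
        extra = min(requested, limit) - granted
        extra = min(extra, max(remaining, 0))
        remaining -= extra
        allocations[name] = granted + extra
    return allocations

# 100 CPU units total; tenant A asks for more than its reservation
# and is topped up from the shared space, within its limit.
print(allocate(100, [
    ("A", 20, 60, 50),   # 20 reserved + 30 from shared space
    ("B", 30, 80, 30),   # exactly its reservation
]))
```

Note how a reservation behaves like Rafferty's desired guarantee: tenant B receives its full 30 units even though tenant A asked first.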
An organization has decided that the best course of action to handle a specific risk is to obtain an insurance policy. The insurance policy will cover any financial costs of a successful risk exploit. Which type of risk response is this an example of?
A. Risk transference
B. Risk mitigation
C. Risk avoidance
D. Risk acceptance
A. Risk transference
Explanation:
When an organization obtains an insurance policy to cover the financial burden of a successful risk exploit, this is known as risk transference or risk sharing. It’s important to note that with risk transference, only the financial losses would be covered by the policy, but it would not do anything to cover the loss of reputation the organization might face.
Risk avoidance is when a decision is made to not engage in, or to stop engaging in, risky behavior.
Risk mitigation or risk reduction is when controls are put in place to reduce the chance of a threat being realized or to minimize the impact of it once it does happen.
Risk acceptance always needs to be done because no matter how much of the other three options are done, risk cannot be eliminated. Who accepts the risk, though, is something that a business needs to carefully consider.
A malicious actor created a free trial account for a cloud service using a fake identity. Once the free trial cloud environment was up and running, they used it as a launch pad for several cloud-based attacks. Because they used a fake identity to set up the free trial, it would be difficult (if not impossible) for the attacks to be traced back to them.
What type of cloud-based threat is being described here?
A. Denial-of-service
B. Shared technology issues
C. Abuse or nefarious use of cloud services
D. Advanced persistent threats
C. Abuse or nefarious use of cloud services
Explanation:
Abuse or nefarious use of cloud services is listed as one of the top twelve threats to cloud environments by the Cloud Security Alliance. Abuse or nefarious use of cloud services occurs when an attacker is able to launch attacks from a cloud environment either by gaining access to a poorly secured cloud or using a free trial of cloud service. Often, when using a free trial, the attacker will configure everything using a fake identity so attacks can’t be traced back to them.
A Denial-of-Service (DoS) attack is when the bad actor causes a system to max out or fill up so that a user is not able to do any work.
Shared technology is the core nature of clouds, especially public clouds. If the cloud provider does not take care to ensure that each tenant is properly isolated, or does not take care of the operating systems, many problems become possible. If the hypervisors, Microsoft servers, Linux servers, or any other software is not patched or configured properly, data could leak between tenants, among other issues.
Advanced Persistent Threats (APTs) are when very skilled and aggressive bad actors, probably operating on behalf of a government, create software that will slowly cause problems for another country or business. The word "advanced" speaks to the skill of the bad actors. The word "persistent" speaks to malicious software being in place over a long period of time to cause a great number of problems. If you are unfamiliar with APTs, do a little research into Stuxnet.
If software developers and the supporting team were to ask the following four questions, what would they be doing?
What are we working on? What could go wrong? What are we going to do about it? Did we do a good job?
A. Evaluating the Recovery Point Objective (RPO)
B. Determining Maximum Tolerable Downtime (MTD)
C. Performing threat modeling
D. Performing a quantitative risk assessment
C. Performing threat modeling
Explanation:
The four questions are the basic idea behind threat modeling. Threat modeling allows the team to identify, communicate, and understand threats and mitigations. There are several techniques, such as STRIDE, PASTA, TRIKE, and OCTAVE.
STRIDE is one of the most prominent models used for threat modeling. Tampering with data is included in the STRIDE model. DREAD is another model, but it does not include tampering with data as a category. TOGAF and REST are not threat models. STRIDE includes the following six categories:
Spoofing identity
Tampering with data
Repudiation
Information disclosure
Denial of service
Elevation of privilege
A quantitative risk assessment is when the Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annualized Loss Expectancy (ALE) are calculated based on the threat to a specific asset.
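These quantities relate through two standard formulas: SLE = asset value × exposure factor, and ALE = SLE × ARO. A minimal sketch with hypothetical dollar figures:

```python
# Minimal sketch of a quantitative risk calculation.
# The formulas are the standard ones (SLE = asset value x exposure factor,
# ALE = SLE x ARO); the figures below are hypothetical examples.

def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: the expected loss from a single occurrence of the threat."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE: the expected loss per year, given the Annual Rate of Occurrence."""
    return sle * aro

# Example: a $200,000 asset, 25% of which is damaged per incident,
# with an incident expected once every two years (ARO = 0.5).
sle = single_loss_expectancy(200_000, 0.25)
ale = annualized_loss_expectancy(sle, 0.5)
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $50,000, ALE = $25,000
```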
Determining the MTD is a step in Business Continuity/Disaster Recovery/Continuity planning. It answers the question of how long an asset can be unavailable before it is a significant problem for the business.
Evaluating the RPO is also a part of Business Continuity/Disaster Recovery/Continuity planning. The RPO is the value that represents how much data can be lost before it too is a problem for the business.
Brocky has been working with a project team analyzing the risks that could occur as this project progresses. The analysis that their team has been performing used descriptive information rather than financial numbers. Which type of assessment have they been performing?
A. Quantitative assessment
B. Fault tree analysis
C. Qualitative assessment
D. Root cause analysis
C. Qualitative assessment
Explanation:
There are two main assessment types that can be done for assessing risk: qualitative assessments and quantitative assessments. While quantitative assessments are data driven, focusing on items such as Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annualized Loss Expectancy (ALE), qualitative assessments are descriptive in nature and not data driven.
Fault tree analysis is actually a combination of quantitative and qualitative assessments. The question is looking for an assessment that is not financial, which is the qualitative type alone, so fault tree analysis is more than what the question is asking about.
Root cause analysis is what is done in problem management from ITIL. Root cause analysis analyzes why some bad event has happened so that the root cause can be found and fixed so that it does not happen again.
Hillary is working to ensure that her company receives the services it requires from its cloud service provider. They have a contract with Service Level Agreements (SLAs) for their bandwidth and uptime. What is Hillary doing?
A. Change management
B. Information Technology Service Management (ITSM)
C. Business Continuity Planning (BCP)
D. ITIL (formerly Information Technology Infrastructure Library)
B. Information Technology Service Management (ITSM)
Explanation:
ITSM is effectively ISO 20000-1 and is based on ITIL. Managing the services from the cloud provider matches ITSM slightly better than ITIL, but ITIL was included as an answer option for discussion purposes. ITSM is a comprehensive approach to designing, delivering, managing, and improving IT services within an organization. It focuses on aligning IT services with the needs of the business and ensuring that the IT services provided are efficient, reliable, and of high quality. ITSM involves a set of practices, processes, and policies that guide the entire service lifecycle, from service strategy and design to service transition, operation, and continual service improvement.
Key characteristics of ITSM include:
Customer-centric: ITSM emphasizes understanding and meeting the needs of customers and end-users. It aims to improve customer satisfaction and overall service experience.
Process-oriented: ITSM adopts a process-driven approach, defining workflows and procedures to ensure consistent and repeatable service delivery.
Focus on continual improvement: ITSM encourages regular evaluation and optimization of IT services and processes to increase efficiency and effectiveness.
ITIL involves managing data centers more specifically, so it matches the work of the cloud provider slightly better.
Key characteristics of ITIL include:
Service lifecycle approach: ITIL is structured around the service lifecycle, consisting of five core stages: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement.
Process framework: ITIL defines a range of processes that cover various aspects of IT service management, including incident management, problem management, change management, service level management, and more.
Widely adopted standard: ITIL has become a de facto standard for ITSM and is widely adopted by organizations globally.
Change management is a structured and organized approach to managing and implementing organizational changes. It involves planning, coordinating, communicating, and monitoring modifications to various aspects of the organization, such as processes, systems, technology, culture, or organizational structure.
BCP is about planning for when there are failures, not the basic management of a cloud vendor.
Gherorghe is working with the cloud operations department after a variety of strange behaviors have been seen in their Infrastructure as a Service (IaaS) environment. They are now looking for a tool or toolset that can help them identify fraudulent, illegal, or other undesirable behavior within their client-server datasets.
What tool or toolset can provide assistance with this?
A. Database Activity Monitor (DAM)
B. Application Programming Interface (API) gateway
C. Web Application Firewall (WAF)
D. eXtensible Markup Language (XML) firewall
A. Database Activity Monitor (DAM)
Explanation:
Gartner defines DAMs as “a suite of tools that can be used to support the ability to identify and report on fraudulent, illegal or other undesirable behavior.” These tools, which include Oracle’s Enterprise Manager, evolved from monitoring user traffic in databases. They serve many purposes where it is necessary to know what is going on with user traffic in and out of databases.
A WAF is a layer 7 firewall that monitors web applications, HTML, and HTTP traffic.
An API gateway is also a layer 7 device. However, this one monitors APIs, which would include SOAP and REpresentational State Transfer (REST).
XML firewalls also exist at layer 7, but they monitor XML traffic only. API traffic would include both XML and JavaScript Object Notation (JSON).
Biometrics and passwords are part of which stage of IAM?
A. Authentication
B. Accountability
C. Authorization
D. Identification
A. Authentication
Explanation:
Identity and Access Management (IAM) services have four main practices, including:
Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
Antonia has recently been hired by a cancer treatment facility. One of the first training programs that she is required to go through at the office is related to the protection of individually identifiable health information. Which law is this related to and which country does it apply to?
A. Health Insurance Portability and Accountability Act (HIPAA), USA
B. Health Insurance Portability and Accountability Act (HIPAA), Canada
C. Gramm-Leach-Bliley Act (GLBA), USA
D. General Data Protection Regulation (GDPR), Germany
A. Health Insurance Portability and Accountability Act (HIPAA), USA
Explanation:
The Health Insurance Portability and Accountability Act (HIPAA) is concerned with the security controls and confidentiality of Protected Health Information (PHI). It’s vital that anyone working in any healthcare facility be aware of HIPAA regulations.
The Gramm-Leach-Bliley Act, officially named the Financial Modernization Act of 1999, focuses on PII as it pertains to financial institutions, such as banks.
GDPR is an EU specific regulation that encompasses all organizations in all different industries.
The Privacy Act of 1988 is an Australian law that requires the protection of personal data.
Oya and her risk assessment team are working on preparing to perform their annual assessment of the risks that their cloud data center could experience. What is the correct order of risk management steps?
A. Prepare, categorize, select, implement, assess, authorize, monitor
B. Authorize, prepare, assess, categorize, select, implement, monitor
C. Assess, authorize, prepare, categorize, select, implement, monitor
D. Prepare, assess, categorize, select, implement, authorize, monitor
A. Prepare, categorize, select, implement, assess, authorize, monitor
Explanation:
The National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) lists the correct order of the risk management steps as the following: prepare, categorize, select, implement, assess, authorize, and monitor. The prepare phase is where Oya and her team are. They are getting into the process of analyzing the risks for the cloud data center. Then they will categorize the risks and threats based on the impact they could have on the organization. The select phase is when controls are selected to reduce the likelihood or impact of the threats.
If there are new controls or simply new settings that need to be configured, this is done in the implement phase. When the assess phase is active, the team is looking to see if the controls are in place and working properly. The authorize phase is when senior management is informed of all that can be found, determined, chosen, and analyzed, and they authorize their business to have their production environments configured in this new way. Lastly, there is ongoing monitoring that is performed.
The Business Continuity/Disaster Recovery (BC/DR) team has been working for months to update their corporate DR plan. The PRIMARY goal of a DR test is to ensure which of the following?
A. All production systems are brought back online
B. Management is satisfied with the BC/DR plan
C. Recovery Time Objective (RTO) goals are met
D. Each administrator knows all the steps of the plan
C. Recovery Time Objective (RTO) goals are met
Explanation:
With any Business Continuity and Disaster Recovery (BCDR) test, the main goal and purpose is to ensure that Recovery Time Objective (RTO) and Recovery Point Objective (RPO) goals are met. When planning the test, staff should consider how to properly follow the objectives and decisions made as part of RPO and RTO analysis.
It is unlikely that all production systems will be brought back online in the event of a disaster. If the plan is simply switching the cloud setup from one region to another, all systems could be brought online. However, nothing in the question says that all production systems are in the cloud or what type of disaster this even is. So, in traditional BC/DR planning, it is not expected that all production systems will be brought back online in the alternate configuration.
Management does need to be satisfied with the plans that are built, but the question is about the goal of the test. The test needs to show that the plan will work. That should make management happy. The immediate answer to the question is to match the RTO goals.
Administrators do not need to know every step of the plan. All administrators need to know is what they need to know, which would likely not be all the steps.
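The pass/fail logic of a DR test against its stated objectives can be summarized in a few lines. This is an illustrative sketch with hypothetical target values, not a substitute for a full test plan:

```python
# Hypothetical check of a DR test result against RTO/RPO targets.
# All times are in minutes and all values are illustrative.

def dr_test_passes(rto, rpo, measured_recovery_time, measured_data_loss):
    """A DR test meets its goals when recovery finished within the RTO
    and the amount of data lost is within the RPO."""
    return measured_recovery_time <= rto and measured_data_loss <= rpo

# Targets: RTO of 4 hours (240 min), RPO of 1 hour (60 min).
# Test result: recovery in 3 hours, 45 minutes of data lost.
print(dr_test_passes(240, 60, 180, 45))  # True
```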
Which of the Trust Services principles must be included in a Service Organization Controls (SOC) 2 audit?
A. Privacy
B. Security
C. Availability
D. Confidentiality
B. Security
Explanation:
The Trust Services Criteria from the American Institute of Certified Public Accountants (AICPA) for the Service Organization Controls (SOC) 2 audit report are made up of five key principles: availability, confidentiality, processing integrity, privacy, and security. Security is always required as part of a SOC 2 audit; the other four principles are optional.
A bad actor working for an enemy state has created malware that has the purpose of stealing data from the other country regarding their military and its products and capabilities. The bad actor has planted malware on the enemy’s systems and has left it, undetected, for eight months. What is the name of this type of attack?
A. Insecure Application Programming Interface (API)
B. Human error
C. Advanced persistent threat (APT)
D. Malicious insider
C. Advanced persistent threat (APT)
Explanation:
Many types of malware and malicious programs are loud and aim to disrupt a system or network. Advanced Persistent Threats (APTs) are the opposite. APTs are attacks that attempt to steal data and stay hidden in the system or network for as long as possible. The longer the APT can stay in the system, the more data it is able to collect. The advanced part of APT is in reference to the skill level of the bad actor.
A malicious insider would be performing bad actions from within the business, without the business’s knowledge. The bad actor here is an outsider operating on behalf of an enemy state, not from inside the victim organization.
Human error is a problem for a business, but it is an accident. Creating malware is not accidental; it is intentional and malicious.
An insecure API is not an attack. It is a vulnerability. There is some weakness in the coding or implementation that leaves it vulnerable.
Cloud security is a difficult task, made all the more difficult by laws and regulations imposing restrictions on cross-border data transfers. The actual hardware in the cloud can be located anywhere, so it is critical to understand where your data resides. Which of the following statements is true regarding who is responsible for the data?
A. The cloud service provider (CSP) retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
B. The cloud administrator retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
C. The cloud service customer retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
D. Both the cloud service provider (CSP) and the cloud service customer (CSC) retain responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
C. The cloud service customer retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
Explanation:
Regardless of whether cloud or non-cloud services are utilized, the data controller [the Cloud Service Customer (CSC)] is ultimately responsible for the data’s security. Cloud security encompasses more than data protection; it also encompasses applications and infrastructure.
According to the European Union (EU) General Data Protection Regulation (GDPR) requirements, the cloud provider is responsible for the data in its care. The answer that says “both” is not correct because the correct answer contains the word ultimate: ultimately, the cloud customer is always responsible for their data.
This question also does not mention GDPR, so it is difficult to determine if there is a legal responsibility for the data while it is in the cloud provider’s care, as we do not actually know where on the planet the question is referring to.
It is necessary within a business to control data at all stages of the lifecycle. Erika is working at a corporation to set up, deploy, and monitor a Data Loss Prevention (DLP) solution. Which component of DLP is involved in the process of applying corporate policy regarding storage of data?
A. Enforcement
B. Identification
C. Discovery
D. Monitoring
A. Enforcement
Explanation:
DLP is made up of three major components. They include discovery, monitoring, and enforcement. Enforcement is the final stage of DLP implementation. It is the enforcement component that applies policies and then takes actions, such as deleting data.
Identification is the first piece of IAAA and is the statement of who you claim to be, such as a user ID.
The CSA SecaaS Category 2 document is a good read on the topic of DLP and the cloud and is highly recommended.
Blythe has been working for a Fortune 500 healthcare company for many years now. They are beginning to transition from their on-prem data center to a cloud-based solution. She and her team are working to put together information to present to the Board of Directors (BoD) regarding what they can expect from a move to the cloud.
Which of the following statements is most likely true when moving from an on-prem data center to Infrastructure as a Service (IaaS)?
A. Moving to the cloud will have a predictable OpEx. However, the security in the cloud is higher.
B. A traditional data center will have lower costs on the Operational Expenditures (OpEx) side and higher Capital Expenditures (CapEx)
C. A traditional data center has a more secure operating environment than a cloud environment
D. The pricing for cloud computing will be less predictable than that of a traditional data center
D. The pricing for cloud computing will be less predictable than that of a traditional data center
Explanation:
A traditional on-prem data center has a higher CapEx, but OpEx is not lower; it is likely the same or higher. The operating environment could be more secure in either setting: the security of cloud-based IaaS depends on two factors, the security of the cloud provider’s data center and the configurations within the IaaS, so it could be more secure in the cloud. The OpEx in the cloud may eventually become predictable, but especially when first moving to the cloud, it is not as predictable as some may prefer.
With each of those thoughts in mind, the best answer is that the pricing for cloud computing will be less predictable than that of a traditional data center.
Bai is working on moving the company’s critical infrastructure to a public cloud provider. Knowing that she has to ensure that the company is in compliance with the requirements of the European Union’s (EU) General Data Protection Regulation (GDPR) country specific laws since the cloud provider is the data processor, at what point should she begin discussions with the cloud provider about this specific protection?
A. Configuration of the Platform as a Service (PaaS) windows servers
B. Establishment of Service Level Agreements (SLA)
C. At the moment of reversing their cloud status
D. Data Processing Agreement (DPA) negotiation
D. Data Processing Agreement (DPA) negotiation
Explanation:
Under the EU’s GDPR requirements for each country, there is a requirement for a cloud customer to inform the cloud provider that they will be storing personal data (a.k.a. Personally Identifiable Information—PII) on their servers. This is stated in the DPA, which is more generically called a Privacy Level Agreement (PLA). The cloud provider is a processor because they will be storing or holding the data. It is not necessary for the provider to ever use that data to be considered a processor. So, the first point for discussion with the cloud provider regarding the four answer options listed is the DPA negotiation.
The SLAs are part of contract negotiation, but the DPA is specific to the storage of personal data in the cloud, which is the topic of the question. The configuration of the servers and the removal of data from the cloud provider’s environment (reversibility) would involve concerns about personal data. The DPA negotiation is a better answer because the question asks at what point should Bai “begin discussions” with the cloud provider.
Haile is a cloud operator who has been reviewing the Indicators of Compromise (IoC) from the company’s Security Information and Event Manager (SIEM). The SIEM reviews the log outputs to find these possible compromises. Where should detailed logging be in place within the cloud?
A. Only access to the hypervisor and the management plane
B. Wherever the client accesses the management plane only
C. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane
D. Only specific levels of the virtualization structure
C. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane
Explanation:
Logging is imperative for a cloud environment. Role-based access should be implemented, and logging should be done at each and every level of the virtualization infrastructure as well as wherever the client accesses the management plane (such as a web portal).
The SIEM cannot analyze the logs to find the possible compromise points unless logging is enabled, and the logs are delivered to that central point. This is necessary in case there is a compromise, which could happen anywhere within the cloud.
Which of the following is MOST relevant to an organization’s network of applications and APIs in the cloud?
A. Service Access
B. User Access
C. Privilege Access
D. Physical Access
A. Service Access
Explanation:
Key components of an identity and access management (IAM) policy in the cloud include:
User Access: User access refers to managing the access and permissions that individual users have within a cloud environment. This can use the cloud provider’s IAM system or a federated system that uses the customer’s IAM system to manage access to cloud services, systems, and other resources.
Privilege Access: Privileged accounts have more access and control in the cloud, potentially including management of cloud security controls. These can be controlled in the same way as user accounts but should also include stronger access security controls, such as mandatory multi-factor authentication (MFA) and greater monitoring.
Service Access: Service accounts are used by applications that need access to various resources. Cloud environments commonly rely heavily on microservices and APIs, making managing service access essential in the cloud.
Physical access to cloud servers is the responsibility of the cloud service provider, not the customer.
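As a concrete illustration of service access, a service account used by an application is typically granted only the permissions it needs. The sketch below is a minimal AWS-style IAM policy granting read-only access to objects in a single bucket; the bucket name is an illustrative assumption, not part of the question:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-app-bucket/*"
    }
  ]
}
```

Scoping the `Action` and `Resource` fields this narrowly is what least-privilege service access looks like in practice.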
Winta is using a program to create a spreadsheet after having collected information regarding the sales cycle that the business has just completed. What phase of the cloud data lifecycle is occurring?
A. Archive
B. Store
C. Create
D. Share
C. Create
Explanation:
Generating a new spreadsheet is the create phase of the data lifecycle. Create is the generation of new data/voice/video in any manner. The Cloud Security Alliance (CSA) also indicates that the create phase is when data is modified. Not everyone agrees with that last point, but this exam is a joint venture between the CSA and (ISC)2, so it is good to know that this is what the CSA Security Guidance 4.0 document says.
As soon as the data is created, it needs to be moved to persistent storage (hard disk drive or solid state drive).
If that spreadsheet is moved into long-term storage for future reference, then, if needed, that would be the archive phase.
Sending the spreadsheet to the boss for their review (or to anyone else) would be the share phase.
Which of the following types of SOC reports provides high-level information about an organization’s controls intended for public dissemination?
A. SOC 2 Type II
B. SOC 1
C. SOC 3
D. SOC 2 Type I
C. SOC 3
Explanation:
Service Organization Control (SOC) reports are generated by the American Institute of CPAs (AICPA). The three types of SOC reports are:
SOC 1: SOC 1 reports focus on financial controls and are used to assess an organization’s financial stability.
SOC 2: SOC 2 reports assess an organization's controls in different areas, including Security, Availability, Processing Integrity, Confidentiality, or Privacy. Only the Security area is mandatory in a SOC 2 report.
SOC 3: SOC 3 reports provide a high-level summary of the controls that are tested in a SOC 2 report but lack the same detail. SOC 3 reports are intended for general dissemination.
SOC 2 reports can also be classified as Type I or Type II. A Type I report is based on an analysis of an organization’s control designs but does not test the controls themselves. A Type II report is more comprehensive, as it tests the effectiveness and sustainability of the controls through a more extended audit.
Rufus is working for a growing manufacturing business. Over the years, they have been upgrading their manufacturing equipment to product versions that include internet connectivity for maintenance and management information. This has increased the volume of log data that needs to be filtered, making it challenging for his organization to perform log reviews efficiently and effectively.
What can his organization implement to help solve this issue?
A. Secure Shell (SSH)
B. System Logging protocol (syslog) server
C. Security Information and Event Manager (SIEM)
D. Data Loss Prevention (DLP)
C. Security Information and Event Manager (SIEM)
Explanation:
An organization’s logs are valuable only if the organization makes use of them to identify activity that is unauthorized or compromising. Due to the volume of log data generated by systems, the organization can implement a Security Information and Event Management (SIEM) system to overcome these challenges. The SIEM system provides the following:
Log centralization and aggregation
Data integrity
Normalization
Automated or continuous monitoring
Alerting
Investigative monitoring
A syslog server is a centralized logging system that collects, stores, and manages log messages generated by various devices and applications within a network. It provides a way to consolidate and analyze logs from different sources, allowing administrators to monitor system activity, troubleshoot issues, and maintain security. However, it does not help to correlate the logs as the SIEM does.
SSH is a networking protocol that encrypts transmissions. It works at layer 5 of the OSI model. It is commonly used to transmit logs to the syslog server. It is not helpful when analyzing logs. It only secures the transmission of the logs.
DLP tools are used to monitor and manage the transmission or storage of data to ensure that it is done properly. DLP addresses the concern that users will unintentionally cause a data breach or leak.
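The normalization and correlation that distinguish a SIEM from a plain syslog server can be sketched in a few lines of Python. The two log formats, field names, and alert threshold below are illustrative assumptions, not any real SIEM's API:

```python
import re
from collections import Counter

# Two hypothetical log formats from different sources (illustrative only).
PATTERNS = [
    re.compile(r"(?P<host>\S+) sshd.*Failed password for (?P<user>\S+)"),
    re.compile(r"LoginFailure host=(?P<host>\S+) user=(?P<user>\S+)"),
]

def normalize(line: str):
    """Map a raw log line into a common event schema, or None if unmatched."""
    for pattern in PATTERNS:
        match = pattern.search(line)
        if match:
            return {"event": "failed_login", **match.groupdict()}
    return None

def correlate(lines, threshold=3):
    """Alert on any user with `threshold` or more failed logins across sources."""
    counts = Counter(event["user"] for event in map(normalize, lines) if event)
    return [user for user, count in counts.items() if count >= threshold]
```

A real SIEM layers integrity protection, continuous monitoring, and alerting on top of this centralize-normalize-correlate core.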
When developing a business continuity and disaster recovery (BC/DR) plan, what step should be completed after the scope has been defined?
A. Test the plan
B. Recovery strategies
C. Embed in the user community
D. Business Impact Assessment (BIA)
D. Business Impact Assessment (BIA)
Explanation:
After defining the scope, the next step of developing a BC/DR plan is to perform a business impact assessment. This stage determines what should be included in the plan and looks at items such as the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). It will be necessary during this stage to identify critical systems within the environment.
Based on the knowledge found during the BIA, it is then necessary to develop the recovery strategy. For example, when a failure occurs, will the business fail over to a different region within that cloud provider, or will it fail over to a different cloud provider?
The solution must be tested to ensure that it will work when needed.
At the end of the BC/DR planning process, the plan is embedded in the user community so that everyone who needs to know about it is aware.
Freeya has been assisting cloud data architects with planning how they will securely store data in their Platform as a Service implementation. They know that leaving a key with the encrypted data is not advised: if someone has the key, they can read the data. They are exploring options in the cloud to protect those keys without excessive cost.
What is the most efficient and cost-effective way of storing a key for data that is not exceedingly sensitive?
A. Utilize client-side encryption and decryption with the key stored in the virtual machine
B. Utilize server-side encryption and decryption with the key stored in the virtual machine
C. Utilize a cloud Key Management Service (KMS) to encrypt the data encryption key
D. Utilize a cloud Hardware Security Module (HSM) to encrypt and decrypt the data
C. Utilize a cloud Key Management Service (KMS) to encrypt the data encryption key
Explanation:
It is essential that we understand the options for how and where to encrypt data in the cloud. There are many solutions that vary per cloud service provider.
Using a KMS is a great solution today for most companies, and the cost is fairly low, if not free. With a KMS, the customer generates a Data Encryption Key (DEK) that is used to encrypt the actual data. The DEK is then encrypted with a Customer Master Key (CMK), and the CMK is stored in the KMS. The plaintext data is only available on the server or the customer side.
An HSM is a much more expensive option. The encryption and decryption of the data actually occurs within the HSM.
Client-side encryption and decryption is not so bad, but the key should never be stored in the VM. When the image is decrypted to be spun up, the key will be accessible to anyone who can see that image.
The same can be said about server-side encryption and decryption. It is not entirely wrong, but the key should never be stored within the image.
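The DEK/CMK envelope flow described above can be sketched as follows. The cipher here is a toy XOR keystream used purely to show the envelope structure; a real implementation would use an authenticated cipher such as AES-GCM and the cloud provider's KMS API:

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream derived from SHA-256 in counter mode.
    # Illustration only -- NOT a secure cipher; use AES-GCM in practice.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def envelope_encrypt(cmk: bytes, plaintext: bytes):
    dek = secrets.token_bytes(32)            # per-object Data Encryption Key
    ciphertext = toy_cipher(dek, plaintext)  # data encrypted with the DEK
    wrapped_dek = toy_cipher(cmk, dek)       # DEK wrapped with the KMS-held CMK
    return wrapped_dek, ciphertext           # only the wrapped DEK is stored

def envelope_decrypt(cmk: bytes, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek = toy_cipher(cmk, wrapped_dek)       # unwrap the DEK first
    return toy_cipher(dek, ciphertext)       # then decrypt the data
```

The point of the structure is that the plaintext DEK never sits next to the encrypted data; only the wrapped copy does, and unwrapping it requires the CMK held in the KMS.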
Your company is looking for a way to ensure that their most critical servers are online when needed. They are exploring the options that their Platform as a Service (PaaS) cloud provider can offer them. The one that they are most interested in has the highest level of availability possible. After a cost-benefit analysis based on their threat assessment, they think that this will be the best option. The cloud provider describes the option as a grouping of resources with a coordinating software agent that facilitates communication, resource sharing, and routing of tasks.
What term matches this option?
A. Server cluster
B. Server redundancy
C. Storage controller
D. Security group
A. Server cluster
Explanation:
Server clusters are a collection of resources linked together by a software agent that enables communication, resource sharing, and task routing. Server clusters are considered active-active since they include at least two servers (and any other needed resources) that are both active at the same time.
Server redundancy is usually considered active-passive. Only one server is active at a time. The second waits for a failure to occur; then, it will take over.
Storage controllers are used for storage area networks. It is possible that the servers in the question are storage servers, but more likely they contain the applications that the users and/or the customers require. Therefore, server clustering is the correct answer.
Security groups are effectively virtual firewalls that filter inbound and outbound traffic to groups of cloud resources.
Which of the following cloud service models has the FEWEST potential external risks and threats that the customer must consider?
A. Software as a Service
B. Platform as a Service
C. Function as a Service
D. Infrastructure as a Service
D. Infrastructure as a Service
Explanation:
In an Infrastructure as a Service (IaaS) environment, the customer has the greatest control over its infrastructure stack. This means that it needs to rely less on the service provider than in other service models and, therefore, has fewer potential external security risks and threats.
Bao is able to connect to his home’s thermostat using the internet on his phone and adjust the temperature remotely. This is an example of which type of technology?
A. Blockchain
B. Internet of Things (IoT)
C. Machine learning (ML)
D. Artificial Intelligence (AI)
B. Internet of Things (IoT)
Explanation:
The Internet of Things (IoT) refers to the use of non-traditional computing devices (such as lamps, thermostats, and other home appliances) accessing the internet. Although some do consider laptops, smart phones, and computers to be part of IoT, for the exam, these items are unlikely to be considered part of the IoT.
Machine Learning (ML) is computers being able to process data to determine answers. The answers could confirm (or not) a hypothesis. Or it is possible for ML to come to a conclusion without any preconceived hypothesis. ML is a component of AI.
AI is having computers process data the same way a human brain could. Arguably, we are not at AI yet. Some say we are at narrow AI. It is still an evolving technology for sure.
Blockchain creates a ledger of transactions that are permanent and verifiable. A common use today is cryptocurrency.
Which of the following characteristics of cloud computing enables a cloud provider to operate cost-effectively by distributing costs across multiple cloud customers?
A. On-Demand Self-Service
B. Rapid Elasticity and Scalability
C. Metered Service
D. Resource Pooling
D. Resource Pooling
Explanation:
The six common characteristics of cloud computing include:
Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure.
Which of the following terms is MOST related to the chain of custody?
A. Confidentiality
B. Non-repudiation
C. Availability
D. Integrity
B. Non-repudiation
Explanation:
Non-repudiation refers to a person’s inability to deny that they took a particular action. Chain of custody helps to enforce non-repudiation because it demonstrates that the evidence has not been tampered with in a way that could enable someone to deny their actions.
Confidentiality, integrity, and availability are the “CIA triad” that describes the main goals of security.
The organization has deployed a federated single sign-on (SSO) system that is configured to generate tokens for users and send them to the service provider. Which BEST describes this organization’s role?
A. Domain Registrar
B. Certificate Authority (CA)
C. Identity Provider (IdP)
D. Service Provider (SP)
C. Identity Provider (IdP)
Explanation:
The organization would act as the identity provider, while the relying party would act as the service provider. The identity provider is the organization that generates tokens for users because it has the ability to authenticate the users. In this scenario, the organization is authenticating their own employees.
The SP is the organization that provides the service that the users will use, for example, Salesforce.
The CA is used to verify X.509 certificates. Encryption should be used within the SSO system, but the question doesn’t mention anything encryption-related.
A domain registrar is the business through which corporations register a domain name, for example, PocketPrep.com.
Which regulation would be used to build a risk-based policy for cost-effective security for government agencies?
A. Gramm-Leach-Bliley Act (GLBA)
B. Health Insurance Portability and Accountability Act (HIPAA)
C. Federal Information Security Management Act (FISMA)
D. Protected Health Information (PHI)
C. Federal Information Security Management Act (FISMA)
Explanation:
US government agencies must build risk-based policies for cost-effective security. Government agencies are not immune to bad actors attacking them. In the past, the security within government agencies was not very good, so this regulation demands that they do better.
GLBA requires financial institutions to protect the privacy and security of their customers’ personal financial information. HIPAA requires that Protected Health Information (PHI) be protected.
Who should have access to the management plane in a cloud environment?
A. A highly vetted and limited set of administrators
B. Security Operation Center (SOC) personnel
C. A single, highly vetted administrator
D. Software developers deploying virtual machines
A. A highly vetted and limited set of administrators
Explanation:
If compromised, the management plane would provide full control of the cloud environment to an attacker. Due to this, only a highly vetted and limited set of administrators should have access to the management plane. However, you will want more than a single administrator. If the single administrator leaves or is no longer able to perform management duties, the ability of the business to manage their cloud environment would be compromised.
Software developers deploying virtual machines may need access, but they would be in the highly vetted group of administrators if that is the case. The same would be true for SOC personnel. They need to be vetted and trusted.
Complete the following sentence with the MOST accurate statement: Cloud environments . . .
A. consist of far fewer systems and servers
B. are generally operated out of one physical location
C. are built of components that are completely different from those used in a traditional environment
D. take the level of concern away from the cloud customer and place it onto the cloud provider
D. take the level of concern away from the cloud customer and place it onto the cloud provider
Explanation:
While it may seem that a cloud infrastructure is completely different from that of a traditional data center, all the components that exist in a traditional data center are still needed in the cloud. The main difference is that within a cloud environment, the responsibility and level of concern are moved away from the cloud customer to the cloud provider. Not all concerns move, but this answer is the best of the four statements. The cloud provider is responsible for the physical data center and its security, and depending on the level of service that the customer buys, it may also be responsible for the virtual servers and applications.
One way to look at the CCSP exam is that it is a data center exam. The language is sometimes changed to the newer cloud language.
Ben is part of an incident response (IR) team that has found that a bad actor has compromised a database full of personal information regarding their customers. The team must now conduct a thorough forensic investigation to figure out exactly what was compromised, how, and hopefully by whom.
Which of the following can provide information regarding the runtime state of a running virtual machine?
A. Technical readiness
B. Digital forensics
C. Virtual Machine Introspection (VMI)
D. Hashing and digital signatures
C. Virtual Machine Introspection (VMI)
Explanation:
VMI is a technique that allows information regarding the runtime state of a running virtual machine to be monitored. It tracks events, such as interrupts and memory writes, which allows the contents of memory to be collected from a running virtual machine.
Hashing and digital signatures can be used to provide evidence that the digital evidence has not been changed or modified.
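The hashing idea can be shown in a few lines of Python: record a SHA-256 digest when the evidence is collected, then recompute it later to demonstrate the copy is unchanged (the sample bytes below are stand-ins for a real disk image):

```python
import hashlib

def evidence_digest(data: bytes) -> str:
    """SHA-256 fingerprint recorded at collection time."""
    return hashlib.sha256(data).hexdigest()

def is_unmodified(data: bytes, recorded_digest: str) -> bool:
    """Recompute the digest and compare it to the chain-of-custody record."""
    return evidence_digest(data) == recorded_digest

disk_image = b"...raw evidence bytes..."   # stand-in for collected evidence
recorded = evidence_digest(disk_image)     # logged on the chain-of-custody form
```

Any single-bit change to the evidence yields a completely different digest, which is what makes the recorded hash useful in court.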
Digital forensics is the process of collecting digital forensic evidence and examining it. Digital forensic science includes the analysis of media, software, and networks.
Technical readiness would be getting ready to perform evidence collection and analysis when needed in the future.
Georgi is the data architect for a real estate corporation. He has been designing the data structure that they will use when they move into the cloud using Platform as a Service (PaaS). His team has come to the conclusion that they will use a relational database in the cloud. What is this type of data called?
A. Structured data
B. Semistructured data
C. Unstructured data
D. Unmapped data
A. Structured data
Explanation:
Structured data is data that has a known format and content type. One example of structured data is the data that is housed in relational databases. This data is housed in specific fields that have a known structure and potential values of data. Having the data organized in these fields makes it easy to search and analyze.
Data lakes and big data are considered unstructured data. Unstructured data is unpredictable and includes many different types of data (e.g., documents, spreadsheets, images, videos, etc.). Since each file is not predictable in size or format, it is considered unstructured.
Semistructured data falls between the two; an example is a relational database with a field that allows unstructured data to be entered and stored in it.
Unmapped data is not really a term, but it could be considered data that is not classified.
In cloud computing, the security of Domain Name System (DNS) is very important to prevent a bad actor from hijacking DNS and redirecting network traffic. To prevent misinformation from being passed throughout the DNS environment, DNS Security (DNSSec) protects the recursive resolver information using what?
A. Symmetric encryption
B. Advanced Encryption Standard
C. Digital signatures
D. Hashing algorithms
C. Digital signatures
Explanation:
DNSSEC is a protocol that works as a security addition to the standard DNS protocol. DNSSEC works by ensuring that all Fully Qualified Domain Name (FQDN) responses can be validated. The authoritative server for a zone uses a private key to digitally sign the DNS records it publishes. The signature allows a validating resolver to verify any DNS information that it receives, thus preventing a bad actor from redirecting network traffic.
Creation of digital signatures is done with asymmetric algorithms, not symmetric. Symmetric algorithms do not provide a way to prove authenticity because they function with a shared key. The Advanced Encryption Standard (AES) is a symmetric algorithm.
Hashing algorithms are not used to validate the source of information. They can be used to verify integrity, but hashing alone cannot verify that an FQDN maps to a specific Internet Protocol (IP) address.
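The sign-then-validate flow can be demonstrated with textbook RSA. The tiny primes below are for illustration only; real DNSSEC zones use full-size RSA or ECDSA keys managed through DNS software, not hand-rolled code:

```python
import hashlib

# Toy textbook-RSA keypair (tiny primes -- illustration only).
p, q = 1000003, 999983
n = p * q
e = 65537                            # public exponent, published like a DNSKEY
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, kept by the zone owner

def sign_record(record: bytes) -> int:
    """Zone owner signs a DNS record with the private key."""
    digest = int.from_bytes(hashlib.sha256(record).digest(), "big") % n
    return pow(digest, d, n)

def verify_record(record: bytes, signature: int) -> bool:
    """A validating resolver checks the signature with the public key."""
    digest = int.from_bytes(hashlib.sha256(record).digest(), "big") % n
    return pow(signature, e, n) == digest
```

Because only the zone owner holds the private exponent, a valid signature proves the record came from the zone and was not altered in transit, which is exactly the guarantee DNSSEC provides via RRSIG records.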
Carin is working at a real estate company as the information security manager. She was recently hired to begin to build a solid information security program. Up until now, the company has only had a few policies and procedures in place as well as desktop firewalls and a network Intrusion Detection System (IDS). She knows there is a lot of work to do to build a secure environment for the users, especially since they handle a lot of sensitive customer personal information. Today she is looking at how a data leak could occur within this business.
If they determine that the data is most likely to be leaked through their website when the bad actor is able to compromise a stored link that redirects the user to the bad actor’s site where they enter and share their credentials with the bad actor, what phase of the data lifecycle would this be?
A. Store
B. Use
C. Archive
D. Destroy
B. Use
Explanation:
Since the user is logging in through the bad actor’s site, this would be the use phase. The user is logging in to view the data. It is not being modified, nor is it being shared with someone else.
The data is stored on the website, or behind the website, but that is not what the user is doing. The user is accessing it now, so that is use.
Archival is when the data is intentionally moved into a long-term storage location. The data is not being moved in this question, only viewed.
Similarly, the data is not being destroyed. The bad actor may destroy it when they log in with the stolen credentials, but that is not the concern at the moment. That is in the future.