Pocket Prep 12 Flashcards
In which cloud service model does the CSP’s responsibility extend to securing operating systems, database management systems (DBMSs), and similar components made available to the cloud customer?
A. PaaS
B. IaaS
C. All service models
D. SaaS
A. PaaS
Explanation:
Compute resources include the components that offer memory, CPU, disk, networking, and other services to the customer. In all cases, the cloud service provider (CSP) is responsible for the physical infrastructure providing these services.
However, at the software level, responsibility depends on the cloud service model in use, including:
Infrastructure as a Service (IaaS): In an IaaS environment, the CSP provides and manages the physical components, virtualization software, and networking infrastructure. The customer is responsible for configuring and securing their VMs and the software installed in them.
Platform as a Service (PaaS): In a PaaS environment, the CSP’s responsibility extends to offering and securing the operating systems, database management systems (DBMSs), and other services made available to a customer’s applications. The customer is responsible for properly configuring and using these services and the security of any software that they install or use.
Software as a Service (SaaS): In a SaaS environment, the CSP is responsible for everything except the custom settings made available to the cloud customer. For example, if a cloud storage drive can be set to be publicly accessible, that is the customer’s responsibility, not the CSP’s.
Which framework, developed by the International Data Center Authority (IDCA), covers all aspects of data center design, including cabling, location, connectivity, and security?
A. Infinity Paradigm
B. HITRUST
C. OCTAVE
D. Risk Management Framework
A. Infinity Paradigm
Explanation:
The International Data Center Authority (IDCA) is responsible for developing the Infinity Paradigm, a framework intended to be used for operations and data center design. The Infinity Paradigm covers aspects of data center design including location, cabling, security, connectivity, and much more.
Risk Management Framework (RMF) is defined by NIST as “a process that integrates security, privacy, and cyber supply chain risk management activities into the system development life cycle. The risk-based approach to control selection and specification considers effectiveness, efficiency, and constraints due to applicable laws, directives, Executive Orders, policies, standards, or regulations.”
The Health Information Trust Alliance (HITRUST) is a non-profit organization. They are best known for developing the HITRUST Common Security Framework (CSF), in collaboration with healthcare, technology, and information security organizations around the world. It aligns standards from ISO, NIST, PCI, and regulations like HIPAA.
The Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a threat and risk assessment technique developed at Carnegie Mellon University for the US Department of Defense (DoD).
Ardal is the information security manager working for a manufacturing company that specializes in molded silicon kitchen products. They are moving their customer data and product information into a Platform as a Service (PaaS) public cloud environment. Ardal and his team have been analyzing the risks associated with this move so that they can ensure the most appropriate security controls are in place.
Which of the following is TRUE regarding the transfer of risk?
A. Transfer of risk is often the cheapest option for responding to risk
B. Risk is never truly transferred. Transference simply shares the risk with another company.
C. Risk transfer should always be the first avenue that an organization takes to respond to risk
D. Risk transfer can only be done when the organization has exhausted all other risk responses
B. Risk is never truly transferred. Transference simply shares the risk with another company.
Explanation:
Risk transference is better described as risk sharing, although transfer is the word in common use. When data is placed on a cloud provider’s infrastructure, the risk is not removed from the customer, nor is it handed to the provider. The customer is always responsible for their data.
Risk transfer/sharing simply means that the cloud provider also has a responsibility to care for the data; the critical word in that sentence is also. Under GDPR, the cloud provider is required to care for the data, and a Data Processing Agreement (DPA) should be created to inform the provider of their responsibilities. A DPA is, more generically, a Privacy Level Agreement (PLA).
Risk transfer can be done at any time and is not necessarily the cheapest of the options.
Risk transfer is not necessarily the first avenue for risk management; it is just one of four options. The other three are risk reduction/mitigation, risk avoidance, and risk acceptance.
Your organization is in the process of migrating to the cloud. Mid-migration you come across details in an agreement that may leave you non-compliant with a particular law. Who would be the BEST contact to discuss your cloud-environment compliance with legal jurisdictions?
A. Stakeholder
B. Consultant
C. Regulator
D. Partner
C. Regulator
Explanation:
As a CCSP, you are responsible for ensuring that your organization’s cloud environment adheres to all applicable regulatory requirements. By staying current on regulatory communications surrounding cloud computing and maintaining contact with approved advisors and, most crucially, regulators, you should be able to assure compliance with legal jurisdictions.
A partner is a generic term that can be used to refer to many different companies. For example, an auditor can be considered a partner.
A stakeholder is someone who has an interest in, or responsibility for, some part of the business.
A consultant could assist with just about anything, depending on their skills, so it is plausible that a consultant could help with legal issues. However, regulators definitely understand the laws they enforce, which makes regulator the best answer.
Rafferty just configured the server-based Platform as a Service (PaaS) that his company, a government contractor, is using. The server will be used to perform computations related to customer actions on the company’s e-commerce website. He is concerned that they may not have enough CPU and memory allocated when they need it.
What should he do?
A. Ensure the limits will not cause any problems with the service
B. Make sure that the server has available share space
C. Ensure a reservation is made at the minimum level needed
D. Set a limit to make sure that the service will work correctly
C. Ensure a reservation is made at the minimum level needed
Explanation:
A minimum resource that is granted to a cloud customer within a cloud environment is known as a reservation. With a reservation, the cloud customer should always have, at the minimum, the amount of resources needed to power and operate any of their services.
On the flip side, limits are the opposite of reservations. A limit is the maximum utilization of memory or processing allowed for a cloud customer. It is a good idea to set a limit to control costs, especially on a new service.
The share space is the capacity available for any customer to utilize; the cloud allocates it on a first-come, first-served basis.
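Below is a minimal, hypothetical Python sketch of how a reservation, a limit, and shared first-come, first-served capacity interact when a tenant requests compute. The function name and the numbers are invented for illustration and are not tied to any particular provider’s API.

```python
# Hypothetical model of reservation/limit/share behavior for one tenant on a shared host.

def allocate_cpu(requested: float, reservation: float, limit: float,
                 free_on_host: float) -> float:
    """Return the vCPUs actually granted to the tenant."""
    # The reservation is guaranteed: the tenant never receives less than this.
    guaranteed = reservation
    # Anything above the reservation competes for the shared (first-come,
    # first-served) pool, but the grant can never exceed the limit.
    extra_wanted = max(0.0, min(requested, limit) - guaranteed)
    extra_granted = min(extra_wanted, free_on_host)
    return guaranteed + extra_granted

# Example: 4 vCPUs reserved, capped at 16, with 6 vCPUs currently free on the host.
print(allocate_cpu(requested=12, reservation=4, limit=16, free_on_host=6))  # 10.0
```

In this sketch the tenant always receives its reserved 4 vCPUs, can never exceed its 16 vCPU limit, and anything in between depends on what is left in the shared pool.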
An organization has decided that the best course of action to handle a specific risk is to obtain an insurance policy. The insurance policy will cover any financial costs of a successful risk exploit. Which type of risk response is this an example of?
A. Risk transference
B. Risk mitigation
C. Risk avoidance
D. Risk acceptance
A. Risk transference
Explanation:
When an organization obtains an insurance policy to cover the financial burden of a successful risk exploit, this is known as risk transference or risk sharing. It’s important to note that with risk transference, only the financial losses would be covered by the policy, but it would not do anything to cover the loss of reputation the organization might face.
Risk avoidance is when a decision is made to not engage in, or to stop engaging in, risky behavior.
Risk mitigation or risk reduction is when controls are put in place to reduce the chance of a threat being realized or to minimize the impact of it once it does happen.
Some amount of risk acceptance is always necessary because, no matter how much of the other three options is done, risk cannot be eliminated. Who accepts the residual risk, though, is something that a business needs to consider carefully.
A malicious actor created a free trial account for a cloud service using a fake identity. Once the free trial cloud environment was up and running, they used it as a launch pad for several cloud-based attacks. Because they used a fake identity to set up the free trial, it would be difficult (if not impossible) for the attacks to be traced back to them.
What type of cloud-based threat is being described here?
A. Denial-of-service
B. Shared technology issues
C. Abuse or nefarious use of cloud services
D. Advanced persistent threats
C. Abuse or nefarious use of cloud services
Explanation:
Abuse or nefarious use of cloud services is listed as one of the top twelve threats to cloud environments by the Cloud Security Alliance. It occurs when an attacker is able to launch attacks from a cloud environment, either by gaining access to a poorly secured cloud or by using a free trial of a cloud service. Often, when using a free trial, the attacker will configure everything using a fake identity so the attacks can’t be traced back to them.
A Denial-of-Service (DoS) attack is when the bad actor causes a system’s resources to max out or fill up so that legitimate users are not able to do any work.
Shared technology is the core nature of clouds, especially public clouds. If the cloud provider does not take care to ensure that each tenant is properly isolated, or does not take care of the operating systems, many problems can result. If the hypervisors, Microsoft servers, Linux servers, or any of the other software are not patched or configured properly, data could leak between tenants or other issues could arise.
Advanced Persistent Threats (APTs) are when very skilled and aggressive bad actors, probably operating on behalf of a government, create software that will slowly cause problems for another country or business. The word “advanced” speaks to the skill of the bad actors. The word “persistent” speaks to the malicious software remaining in place over a long period of time to cause a great number of problems. If you are unfamiliar with APTs, do a little research into Stuxnet.
If software developers and the supporting team were to ask the following four questions, what would they be doing?
What are we working on? What could go wrong? What are we going to do about it? Did we do a good job?
A. Evaluating the Recovery Point Objective (RPO)
B. Determining Maximum Tolerable Downtime (MTD)
C. Performing threat modeling
D. Performing a quantitative risk assessment
C. Performing threat modeling
Explanation:
The four questions are the basic idea behind threat modeling. Threat modeling allows the team to identify, communicate, and understand threats and mitigations. There are several techniques, such as STRIDE, PASTA, TRIKE, and OCTAVE.
STRIDE is one of the most prominent models used for threat modeling. Tampering with data is included in the STRIDE model. DREAD is another model, but it does not include tampering with data as a category. TOGAF and REST are not threat models. STRIDE includes the following six categories:
Spoofing identity
Tampering with data
Repudiation
Information disclosure
Denial of service
Elevation of privileges
A quantitative risk assessment is when the Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annualized Loss Expectancy (ALE) are calculated based on the threat to a specific asset.
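As a worked illustration of those terms, the standard relationships are SLE = asset value × exposure factor and ALE = SLE × ARO. The figures in the sketch below are invented purely for the example.

```python
# Worked example of the standard quantitative risk formulas (illustrative numbers only).

asset_value = 200_000      # value of the asset, in dollars
exposure_factor = 0.25     # fraction of the asset value lost in a single incident
aro = 0.5                  # Annual Rate of Occurrence: one incident every two years

sle = asset_value * exposure_factor   # Single Loss Expectancy: $50,000
ale = sle * aro                       # Annualized Loss Expectancy: $25,000 per year

print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f} per year")
```

The ALE is what gets compared against the annual cost of a proposed control when deciding whether that control is worth implementing.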
Determining the MTD is a step in Business Continuity/Disaster Recovery/Continuity planning. It answers the question of how long an asset can be unavailable before it is a significant problem for the business.
Evaluating the RPO is also a part of Business Continuity/Disaster Recovery/Continuity planning. The RPO is the value that represents how much data can be lost before it too is a problem for the business.
Brocky has been working with a project team analyzing the risks that could occur as this project progresses. The analysis that their team has been performing used descriptive information rather than financial numbers. Which type of assessment have they been performing?
A. Quantitative assessment
B. Fault tree analysis
C. Qualitative assessment
D. Root cause analysis
C. Qualitative assessment
Explanation:
There are two main assessment types that can be done for assessing risk: qualitative assessments and quantitative assessments. While quantitative assessments are data driven, focusing on items such as Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annual Loss Expectancy (ALE), qualitative assessments are descriptive in nature and not data driven.
Fault tree analysis is actually a combination of quantitative and qualitative assessments. The question is looking for an assessment that is descriptive rather than financial, which is the qualitative type, so fault tree analysis covers more than what the question is asking about.
Root cause analysis is what is done in problem management from ITIL. Root cause analysis analyzes why some bad event has happened so that the root cause can be found and fixed so that it does not happen again.
Hillary is working to ensure that her company receives the services it requires from its cloud service provider. They have a contract with Service Level Agreements (SLAs) for their bandwidth and uptime. What is Hillary doing?
A. Change management
B. Information Technology Service Management (ITSM)
C. Business Continuity Planning (BCP)
D. ITIL (formerly Information Technology Infrastructure Library)
B. Information Technology Service Management (ITSM)
Explanation:
ITSM is effectively ISO 20000-1 and is based on ITIL. Managing the services from the cloud provider matches ITSM slightly better than ITIL, but ITIL was included as an answer option for discussion purposes. ITSM is a comprehensive approach to designing, delivering, managing, and improving IT services within an organization. It focuses on aligning IT services with the needs of the business and ensuring that the IT services provided are efficient, reliable, and of high quality. ITSM involves a set of practices, processes, and policies that guide the entire service lifecycle, from service strategy and design to service transition, operation, and continual service improvement.
Key characteristics of ITSM include:
Customer-centric: ITSM emphasizes understanding and meeting the needs of customers and end-users. It aims to improve customer satisfaction and overall service experience.
Process-oriented: ITSM adopts a process-driven approach, defining workflows and procedures to ensure consistent and repeatable service delivery.
Focus on continual improvement: ITSM encourages regular evaluation and optimization of IT services and processes to increase efficiency and effectiveness.
ITIL is more specifically about managing the IT services and data centers themselves, so it matches the work of the cloud provider slightly better than the work Hillary is doing.
Key characteristics of ITIL include:
Service lifecycle approach: ITIL is structured around the service lifecycle, consisting of five core stages: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement.
Process framework: ITIL defines a range of processes that cover various aspects of IT service management, including incident management, problem management, change management, service level management, and more.
Widely adopted standard: ITIL has become a de facto standard for ITSM and is widely adopted by organizations globally.
Change management is a structured and organized approach to managing and implementing organizational changes. It involves planning, coordinating, communicating, and monitoring modifications to various aspects of the organization, such as processes, systems, technology, culture, or organizational structure.
BCP is about planning for when there are failures, not the basic management of a cloud vendor.
Gherorghe is working with the cloud operations department after a variety of strange behaviors have been seen in their Infrastructure as a Service (IaaS) environment. They are now looking for a tool or toolset that can help them identify fraudulent, illegal, or other undesirable behavior within their client-server datasets.
What tool or toolset can provide assistance with this?
A. Database Activity Monitor (DAM)
B. Application Programming Interface (API) gateway
C. Web Application Firewall (WAF)
D. eXtensible Markup Language (XML) firewall
A. Database Activity Monitor (DAM)
Explanation:
Gartner defines DAMs as “a suite of tools that can be used to support the ability to identify and report on fraudulent, illegal or other undesirable behavior.” These tools include Oracle’s Enterprise Manager. DAMs evolved from tools that monitored user traffic in databases, and they are useful in many situations for understanding what is going on with user traffic into and out of databases.
A WAF is a layer 7 firewall that monitors web applications, HTML, and HTTP traffic.
An API gateway is also a layer 7 device; however, this one monitors APIs, which would include SOAP and REpresentational State Transfer (REST).
XML firewalls also exist at layer 7 but monitor XML traffic only. API traffic can include both XML and JavaScript Object Notation (JSON).
Biometrics and passwords are part of which stage of IAM?
A. Authentication
B. Accountability
C. Authorization
D. Identification
A. Authentication
Explanation:
Identity and Access Management (IAM) services have four main practices, including:
Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in (a minimal illustration follows this list).
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
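The following toy Python sketch shows where a password (or a biometric) sits in that sequence: it is the authentication step, after identification and before authorization and accountability. The username, password, and permission names are hypothetical, and no real IAM service is being called.

```python
# Toy illustration of the four IAM practices; not a real IAM implementation.
import hashlib
import logging

logging.basicConfig(level=logging.INFO)

USERS = {"asha": {"pw_hash": hashlib.sha256(b"s3cret!").hexdigest(),
                  "permissions": {"read:reports"}}}

def read_resource(username: str, password: str, resource: str) -> bool:
    user = USERS.get(username)                        # 1. Identification: who do you claim to be?
    if user is None:
        return False
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if supplied != user["pw_hash"]:                   # 2. Authentication: prove it (password here; MFA/biometrics in practice)
        return False
    allowed = f"read:{resource}" in user["permissions"]   # 3. Authorization: what may you do?
    logging.info("user=%s action=read resource=%s allowed=%s",
                 username, resource, allowed)         # 4. Accountability: record the action
    return allowed

print(read_resource("asha", "s3cret!", "reports"))  # True
```

A production system would store salted password hashes, enforce MFA at step 2, and ship the step-4 records to a central audit log rather than a local logger.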
Antonia has recently been hired by a cancer treatment facility. One of the first training programs that she is required to go through at the office is related to the protection of individually identifiable health information. Which law is this related to and which country does it apply to?
A. Health Insurance Portability and Accountability Act (HIPAA), USA
B. Health Insurance Portability and Accountability Act (HIPAA), Canada
C. Gramm-Leach-Bliley Act (GLBA), USA
D. General Data Protection Regulation (GDPR), Germany
A. Health Insurance Portability and Accountability Act (HIPAA), USA
Explanation:
The Health Insurance Portability and Accountability Act (HIPAA) is concerned with the security controls and confidentiality of Protected Health Information (PHI). It’s vital that anyone working in any healthcare facility be aware of HIPAA regulations.
The Gramm-Leach-Bliley Act, officially named the Financial Services Modernization Act of 1999, focuses on PII as it pertains to financial institutions, such as banks.
GDPR is an EU-wide regulation that applies to organizations across all industries.
The Privacy Act 1988 is an Australian law that requires the protection of personal data.
Oya and her risk assessment team are working on preparing to perform their annual assessment of the risks that their cloud data center could experience. What is the correct order of risk management steps?
A. Prepare, categorize, select, implement, assess, authorize, monitor
B. Authorize, prepare, assess, categorize, select, implement, monitor
C. Assess, authorize, prepare, categorize, select, implement, monitor
D. Prepare, assess, categorize, select, implement, authorize, monitor
A. Prepare, categorize, select, implement, assess, authorize, monitor
Explanation:
The National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) lists the correct order of the risk management steps as the following: prepare, categorize, select, implement, assess, authorize, and monitor. The prepare phase is where Oya and her team are. They are getting into the process of analyzing the risks for the cloud data center. Then they will categorize the risks and threats based on the impact they could have on the organization. The select phase is when controls are selected to reduce the likelihood or impact of the threats.
If there are new controls or simply new settings that need to be configured, this is done in the implement phase. In the assess phase, the team checks whether the controls are in place and working properly. The authorize phase is when senior management is informed of everything that has been found, determined, chosen, and analyzed, and they authorize the business to run its production environment in this new configuration. Lastly, ongoing monitoring is performed.
The Business Continuity/Disaster Recovery (BC/DR) team has been working for months to update their corporate DR plan. The PRIMARY goal of a DR test is to ensure which of the following?
A. All production systems are brought back online
B. Management is satisfied with the BC/DR plan
C. Recovery Time Objective (RTO) goals are met
D. Each administrator knows all the steps of the plan
C. Recovery Time Objective (RTO) goals are met
Explanation:
With any Business Continuity and Disaster Recovery (BCDR) test, the main goal and purpose is to ensure that Recovery Time Objective (RTO) and Recovery Point Objective (RPO) goals are met. When planning the test, staff should consider how to properly follow the objectives and decisions made as part of RPO and RTO analysis.
It is unlikely that all production systems will be brought back online in the event of a disaster. If the plan is just switching the cloud setup from one region to another, all systems could be brought online. However, nothing in the question says that all production systems are in the cloud or what type of disaster this is, and in traditional BC/DR planning it is not expected that all production systems will be brought back online in the alternate configuration.
Management does need to be satisfied with the plans that are built, but the question is about the goal of the test. The test needs to show that the plan will work, which should make management happy; the most direct answer to the question is meeting the RTO goals.
Administrators do not need to know every step of the plan; each administrator only needs to know the steps relevant to their role.
Which of the Trust Services principles must be included in a Service Organization Controls (SOC) 2 audit?
A. Privacy
B. Security
C. Availability
D. Confidentiality
B. Security
Explanation:
The Trust Services Criteria from the American Institute of Certified Public Accountants (AICPA) for the Service Organization Controls (SOC) 2 audit report are made up of five key principles: availability, confidentiality, processing integrity, privacy, and security. Security is always required as part of a SOC 2 audit; the other four principles are optional.
A bad actor working for an enemy state has created malware that has the purpose of stealing data from the other country regarding their military and its products and capabilities. The bad actor has planted malware on the enemy’s systems and has left it, undetected, for eight months. What is the name of this type of attack?
A. Insecure Application Programming Interface (API)
B. Human error
C. Advanced persistent threat (APT)
D. Malicious insider
C. Advanced persistent threat (APT)
Explanation:
Many types of malware and malicious programs are loud and aim to disrupt a system or network. Advanced Persistent Threats (APTs) are the opposite. APTs are attacks that attempt to steal data and stay hidden in the system or network for as long as possible. The longer the APT can stay in the system, the more data it is able to collect. The advanced part of APT is in reference to the skill level of the bad actor.
A malicious insider performs bad actions from within the business, without the business’s knowledge. Here, the bad actor is outside the victim organization and is probably operating with the knowledge of their own government.
Human error is a problem for a business, but it is an accident. Creating malware is not accidental; it is intentional and malicious.
An insecure API is not an attack. It is a vulnerability. There is some weakness in the coding or implementation that leaves it vulnerable.
Cloud security is a difficult task, made all the more difficult by laws and regulations imposing restrictions on cross-border data transfers. The actual hardware in the cloud can be located anywhere, so it is critical to understand where your data resides. Which of the following statements is true regarding who is responsible for the data?
A. The cloud service provider (CSP) retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
B. The cloud administrator retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
C. The cloud service customer retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
D. Both the cloud service provider (CSP) and the cloud service customer (CSC) retain responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
C. The cloud service customer retains ultimate responsibility for the data’s security regardless of whether cloud or non-cloud services are employed
Explanation:
Regardless of whether cloud or non-cloud services are utilized, the data controller [the Cloud Service Customer (CSC)] is ultimately responsible for the data’s security. Cloud security encompasses more than data protection; it also encompasses applications and infrastructure.
According to the European Union (EU) General Data Protection Regulation (GDPR) requirements, the cloud provider is responsible for the data in its care. The reason the answer that says “both” is not correct is that the correct answer turns on the word ultimate: ultimately, the cloud customer is always responsible for their data.
This question also does not mention GDPR, so it is difficult to determine whether there is a legal responsibility for the data while it is in the cloud provider’s care, as we do not actually know where in the world the scenario takes place.
It is necessary within a business to control data at all stages of the lifecycle. Erika is working at a corporation to setup, deploy, and monitor a Data Loss Prevention (DLP) solution. Which component of DLP is involved in the process of applying corporate policy regarding storage of data?
A. Enforcement
B. Identification
C. Discovery
D. Monitoring
A. Enforcement
Explanation:
DLP is made up of three major components. They include discovery, monitoring, and enforcement. Enforcement is the final stage of DLP implementation. It is the enforcement component that applies policies and then takes actions, such as deleting data.
Identification is the first piece of IAAA (identification, authentication, authorization, and accountability) and is the statement of who you claim to be, such as a user ID.
The CSA SecaaS Category 2 document is a good read on the topic of DLP and the cloud and is highly recommended.
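To make the discovery/monitoring/enforcement split concrete, here is a minimal hypothetical Python sketch. The pattern and the redaction action are invented examples of a policy, not a description of any particular DLP product.

```python
# Minimal illustration of a DLP enforcement action: redact matches of a policy pattern.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # example policy: no US SSNs in outbound data

def enforce(document: str) -> str:
    """Monitoring detects a policy match; enforcement redacts it before the data leaves."""
    if SSN_PATTERN.search(document):                    # detection/monitoring
        return SSN_PATTERN.sub("[REDACTED]", document)  # enforcement action (could also block or quarantine)
    return document

print(enforce("Customer 123-45-6789 requested a refund."))
# Customer [REDACTED] requested a refund.
```

Real DLP tools support many more actions at the enforcement stage, such as blocking the transfer, quarantining the file, or alerting an administrator.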
Blythe has been working for a Fortune 500 healthcare company for many years now. They are beginning to transition from their on-prem data center to a cloud-based solution. She and her team are working to put together information to present to the Board of Directors (BoD) regarding what they can expect from a move to the cloud.
Which of the following statements is most likely true when moving from an on-prem data center to Infrastructure as a Service (IaaS)?
A. Moving to the cloud will have a predictable OpEx. However, the security in the cloud is higher.
B. A traditional data center will have lower costs on the Operational Expenditures (OpEx) side and higher Capital Expenditures (CapEx)
C. A traditional data center has a more secure operating environment than a cloud environment
D. The pricing for cloud computing will be less predictable than that of a traditional data center
D. The pricing for cloud computing will be less predictable than that of a traditional data center
Explanation:
A traditional on-prem data center has a higher CapEx, but OpEx is not lower. It is likely the same or higher. The operating environment could be more secure in either environment. The security of the cloud-based IaaS depends on two factors: the security of the cloud provider’s data center and the configurations within the IaaS. It could be more secure in the cloud. The OpEx in the cloud may eventually be predictable, but especially when moving to the cloud, it is not as predictable as some may prefer.
With each of those thoughts in mind, the best answer is that the pricing in the cloud is less predictable than that of an on-prem data center.
Bai is working on moving the company’s critical infrastructure to a public cloud provider. Knowing that she has to ensure that the company is in compliance with the requirements of the European Union’s (EU) General Data Protection Regulation (GDPR) and its country-specific laws, since the cloud provider is the data processor, at what point should she begin discussions with the cloud provider about this specific protection?
A. Configuration of the Platform as a Service (PaaS) windows servers
B. Establishment of Service Level Agreements (SLA)
C. At the moment of reversing their cloud status
D. Data Processing Agreement (DPA) negotiation
D. Data Processing Agreement (DPA) negotiation
Explanation:
Under the EU’s GDPR and its country-specific requirements, a cloud customer is required to inform the cloud provider that they will be storing personal data (a.k.a. Personally Identifiable Information, PII) on the provider’s servers. This is stated in the DPA, which is more generically called a Privacy Level Agreement (PLA). The cloud provider is a processor because they will be storing or holding the data; the provider never needs to actually use that data to be considered a processor. So, of the four answer options listed, the first point for discussion with the cloud provider is the DPA negotiation.
The SLAs are part of contract negotiation, but the DPA is specific to the storage of personal data in the cloud, which is the topic of the question. The configuration of the servers and the removal of data from the cloud provider’s environment (reversibility) would involve concerns about personal data. The DPA negotiation is a better answer because the question asks at what point should Bai “begin discussions” with the cloud provider.
Haile is a cloud operator who has been reviewing the Indicators of Compromise (IoC) from the company’s Security Information and Event Management (SIEM) system. The SIEM reviews the log outputs to find these possible compromises. Where should detailed logging be in place within the cloud?
A. Only access to the hypervisor and the management plane
B. Wherever the client accesses the management plane only
C. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane
D. Only specific levels of the virtualization structure
C. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane
Explanation:
Logging is imperative for a cloud environment. Role-based access should be implemented, and logging should be done at each and every level of the virtualization infrastructure as well as wherever the client accesses the management plane (such as a web portal).
The SIEM cannot analyze the logs to find the possible compromise points unless logging is enabled and the logs are delivered to that central point. This is necessary in case there is a compromise, which could happen anywhere within the cloud.
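As a small illustration of what detailed, centrally collectable logging can look like, the Python sketch below emits structured JSON events that a SIEM could ingest. The field names and layer labels are invented for the example.

```python
# Illustrative structured logging; field names are hypothetical, not a SIEM schema.
import json
import datetime

def log_event(layer: str, actor: str, action: str, outcome: str) -> str:
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "layer": layer,        # e.g. hypervisor, guest OS, or management plane
        "actor": actor,
        "action": action,
        "outcome": outcome,
    }
    return json.dumps(event)   # in practice this record would be shipped to the SIEM

print(log_event("management-plane", "ops-admin", "portal_login", "success"))
print(log_event("hypervisor", "svc-backup", "snapshot_create", "denied"))
```

Emitting the same structured fields from every layer is what lets the SIEM correlate events across the virtualization stack and the management plane.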
Which of the following is MOST relevant to an organization’s network of applications and APIs in the cloud?
A. Service Access
B. User Access
C. Privilege Access
D. Physical Access
A. Service Access
Explanation:
Key components of an identity and access management (IAM) policy in the cloud include:
User Access: User access refers to managing the access and permissions that individual users have within a cloud environment. This can use the cloud provider’s IAM system or a federated system that uses the customer’s IAM system to manage access to cloud services, systems, and other resources.
Privilege Access: Privileged accounts have more access and control in the cloud, potentially including management of cloud security controls. These can be controlled in the same way as user accounts but should also include stronger access security controls, such as mandatory multi-factor authentication (MFA) and greater monitoring.
Service Access: Service accounts are used by applications that need access to various resources. Cloud environments commonly rely heavily on microservices and APIs, making managing service access essential in the cloud.
Physical access to cloud servers is the responsibility of the cloud service provider, not the customer.
Winta is using a program to create a spreadsheet after having collected information regarding the sales cycle that the business has just completed. What phase of the cloud data lifecycle is occurring?
A. Archive
B. Store
C. Create
D. Share
C. Create
Explanation:
Generating a new spreadsheet is the create phase of the data lifecycle. Create is the generation of new data/voice/video in any manner. The Cloud Security Alliance (CSA) also indicates that the create phase is when data is modified. Not everyone agrees with that last sentence, but this is an exam that is a joint venture between the CSA and (ISC)2, so it is good to know that is what they say in the guidance 4.0 document.
As soon as the data is created, it needs to be moved to persistent storage (hard disk drive or solid-state drive), which is the store phase.
If that spreadsheet is moved into long-term storage for future reference, then, if needed, that would be the archive phase.
Sending the spreadsheet to the boss for their review (or to anyone else) would be the share phase.