Pocket Prep 16 Flashcards
Which of the following storage types acts like a physical hard drive connected to a VM?
A. Raw
B. Ephemeral
C. Long-Term
D. Volume
D. Volume
Explanation:
Cloud-based infrastructure can use a few different forms of data storage, including:
Ephemeral: Ephemeral storage mimics RAM on a computer. It is intended for short-term storage that will be deleted when an instance is deleted.
Long-Term: Long-term storage solutions like Amazon Glacier, Azure Archive Storage, and Google Coldline and Archive are designed for long-term data storage. Often, these provide durable, resilient storage with integrity protections.
Raw: Raw storage provides direct access to the underlying storage of the server rather than a storage service.
Volume: Volume storage behaves like a physical hard drive connected to the cloud customer's virtual machine. It can either be file storage, which formats the space like a traditional file system, or block storage, which simply provides space for the user to store anything.
Object: Object storage stores data as objects with unique identifiers associated with metadata, which can be used for data labeling. (A short sketch follows this list.)
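As noted in the list above, here is what storing an object with attached metadata might look like using AWS's boto3 SDK for S3. This is a sketch for illustration only; the bucket name, key, and metadata are invented, and other providers' object stores work similarly.

    import boto3

    s3 = boto3.client("s3")
    # Object storage: data is stored as an object under a unique key, with
    # metadata attached that can be used for data labeling.
    s3.put_object(
        Bucket="example-archive-bucket",        # hypothetical bucket
        Key="reports/q1-summary.pdf",           # the object's unique identifier
        Body=b"...file bytes...",
        Metadata={"classification": "internal"},
    )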
Brocky has been working with a project team analyzing the risks that could occur as this project progresses. The analysis that their team has been performing used descriptive information rather than financial numbers. Which type of assessment have they been performing?
A. Quantitative assessment
B. Fault tree analysis
C. Qualitative assessment
D. Root cause analysis
C. Qualitative assessment
Explanation:
There are two main assessment types that can be done for assessing risk: qualitative assessments and quantitative assessments. While quantitative assessments are data driven, focusing on items such as Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annual Loss Expectancy (ALE), qualitative assessments are descriptive in nature and not data driven.
Fault tree analysis is actually a combination of quantitative and qualitative assessment. Since the question is looking for an analysis that does not use financial numbers, and the quantitative component is financial, fault tree analysis is more than what the question is asking for.
Root cause analysis is done in problem management from ITIL. It analyzes why a bad event happened so that the root cause can be found and fixed and the event does not happen again.
Communication, Consent, Control, Transparency, and Independent and yearly audits are the five key principles found in what standard that cloud providers should adhere to?
A. General Data Protection Regulation (GDPR)
B. ISO/IEC 27018
C. Privacy Management Framework (PMF)
D. ISO/IEC 27001
B. ISO/IEC 27018
Explanation:
ISO/IEC 27018 is a privacy standard for cloud service providers to adhere to. It is focused on five key principles: communication, consent, control, transparency, and independent and yearly audits. ISO/IEC 27018 is for cloud providers acting as data processors handling Personally Identifiable Information (PII). (A major clue in the question is "cloud providers.")
The PMF, formerly known as the Generally Accepted Privacy Principles (GAPP), has nine core principles, one of which is agreement, notice, and communication; another is collection and creation. It is very similar, but the PMF is not specifically for cloud providers.
GDPR is a European Union regulation that protects the personal data of natural persons.
ISO/IEC 27001 is an international standard that is used to create and audit Information Security Management Systems (ISMS).
Orlando has been able to determine that they are experiencing a lot of shadow IT. However, he is unsure of the tool that could be used to determine where the users are connecting to. What tool is designed to assist with this process?
A. Data Leak Prevention (DLP)
B. Cloud Access Security Broker (CASB)
C. Cloud Posture Manager (CPM)
D. Cloud broker
B. Cloud Access Security Broker (CASB)
Explanation:
CASBs were originally designed to determine where users were connecting to and what shadow IT they were using. Shadow IT is technology that the users have signed up for (in the cloud) that did not go through the regular acquisition procedures. Today, CASBs can do additional things like DLP.
DLP is designed to determine if a user has just sent (or is trying to send) data someplace it should not go or in a format it should not be in (e.g., not encrypted). It is not designed to determine what web addresses the users are accessing.
Cloud brokers are people/companies that help cloud customers and cloud providers in their negotiations.
CPM tools are newer still. More commonly known as Cloud Security Posture Management (CSPM), these tools are designed to determine all the paths a user can take to gain access to particular resources. In the cloud, it is normal for there to be multiple paths to a piece of data, and it is also normal to temporarily assume the role of a device or application, which could give a user more access than they should have.
Riki and her team have been working with a Managed Service Provider (MSP) regarding their Infrastructure as a Service (IaaS) deployment. They are working to move an on-premise data center into the cloud. It is essential that they are clear about their availability expectations. These requirements should be spelled out in which part of the contract?
A. Business Associate Agreement (BAA)
B. Master Services Agreement (MSA)
C. Service Level Agreement (SLA)
D. Privacy Level Agreement (PLA)
C. Service Level Agreement (SLA)
Explanation:
The Service Level Agreement (SLA) is made between an organization and a third-party vendor (such as a cloud provider). Availability expectations and needs should be addressed in the SLA.
MSAs define the core responsibilities of each company within a contract. For example, the MSP could be responsible for providing the cloud environment and maintaining the physical data center while the customer builds and manages their IaaS. (The relationship does not have to be exactly this.)
The PLA spells out the types of personal data that would be stored and processed within the cloud and what the expectation of the customer is for the cloud provider to protect that data. A BAA is essentially the same type of document, but it is specific to the US HIPAA regulation.
Jamarcus is looking for a security control that can be used to protect a database within their Platform as a Service (PaaS). The concern within this business is that the data must be protected: it cannot be viewed by anyone who is not approved, and it cannot be sent anywhere it should not be.
What tool can accomplish this?
A. Federated identification
B. Identity and Access Management (IAM)
C. Data Loss Prevention (DLP)
D. Transport Layer Security (TLS)
C. Data Loss Prevention (DLP)
Explanation:
Data loss prevention refers to a set of controls and practices put in place to ensure that data is only accessible to those authorized to access it. DLP also protects data from being lost or improperly used.
IAM is used to control what someone has access to and with what permissions. It is not used to control where data is sent.
TLS is used to encrypt data in transit so that it is not visible to someone who should not be able to see it. It does not control where data can be sent.
Federated identification is another way to control who has access to something. It is not used to control where data is sent.
So, the only tool here that does everything needed by the question is DLP.
Which of the following regulations deals with law enforcement’s access to data that may be located in data centers in other jurisdictions?
A. SOX
B. SCA
C. GLBA
D. US CLOUD Act
D. US CLOUD Act
Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:
General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests from cloud providers. US law enforcement and their counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield is a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens' data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers' personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes-Oxley (SOX): SOX is a US regulation that applies to publicly traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
A cloud information security manager is building the policies and associated documents for handling cloud assets. She is currently detailing how assets will be understood or listed so that access can be controlled, alerts can be created, and billing can be tracked. What tool allows for this?
A. Key
B. Identifier
C. Tags
D. Value
C. Tags
Explanation:
Tags are pervasive in cloud deployments, so it is crucial that the corporation builds a plan for how to tag assets; if tagging is not done consistently, it is not helpful. A tag is made up of two pieces: a key (or name) and a value. "Key" here is not a cryptographic key for encryption and decryption; it is simply the word chosen for the name half of the pair.
You can think of the tag as a type of identifier, but the tool needed to manage assets is called a tag.
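As a minimal sketch (the resource ID and tag names are invented), a tag in an AWS-style API is just such a key/value pair, and attaching it with the boto3 SDK might look like this:

    import boto3

    # A tag is two pieces: a key (the name) and a value.
    tag = {"Key": "CostCenter", "Value": "Finance-1234"}

    ec2 = boto3.client("ec2")
    # Attach the tag to a resource so access control, alerting, and billing
    # can all reference it. The instance ID here is hypothetical.
    ec2.create_tags(Resources=["i-0123456789abcdef0"], Tags=[tag])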
A cloud data center is being built by a new Cloud Service Provider (CSP). The CSP wants to build a data center that has a level of resilience that will classify it as a Tier III. At which tier is it expected to add generators to back up the power supply?
A. Tier I
B. Tier IV
C. Tier II
D. Tier III
A. Tier I
Explanation:
Generators are required beginning at the lowest level, Tier I.
Tier II and above also require generators, and Tiers I and II also require Uninterruptible Power Supply (UPS) units.
Tier III requires a redundant distribution path for the data.
Tier IV requires several independent and physically isolated power supplies.
Abigail is designing the Identity and Access Management (IAM) infrastructure for their future Platform as a Service (PaaS) environment. As she is setting up identities, which of the following does she know to be true of roles?
A. Roles are the same as user identities
B. Roles are temporarily assumed by another identity
C. Roles are assigned to specific users permanently and occasionally assumed
D. Roles are permanently assumed by a user or group
B. Roles are temporarily assumed by another identity
Explanation:
Roles in the cloud are not the same as roles in traditional data centers. They are similar in that they grant a user or group a certain amount of access, but the cloud concept of a group is closer to what we traditionally called roles in Role Based Access Control (RBAC). In the cloud, roles are assumed temporarily. Roles can be assumed in a variety of ways, but, again, the assumption is temporary.
A user is not permanently assigned a specific role. A user logs in with their user identity and then assumes a role. The assumption is temporary (e.g., for a set number of hours or only for the life of that session).
Note the distinction between being assigned a role and assuming one: you might be entitled to certain permissions, but you only assume the role, and use those permissions, occasionally.
An additional resource for your review/study is on the AWS website. Look for the user guide regarding roles.
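As a hedged sketch of temporary role assumption (the account ID, role ARN, and session name below are invented), AWS's STS API returns short-lived credentials when an identity assumes a role:

    import boto3

    sts = boto3.client("sts")
    # Temporarily assume a role; the returned credentials expire on their own.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ExampleAdmin",  # hypothetical role
        RoleSessionName="abigail-temp-session",
        DurationSeconds=3600,  # one hour; role sessions are temporary by design
    )
    credentials = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken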
Which of the following BEST describes the types of applications that create risk in a cloud environment?
A. Small utility scripts
B. Software with administrator privileges
C. Full application suites
D. Every piece of software in the environment
D. Every piece of software in the environment
Explanation:
Any piece of software, from major software suites to small utility scripts, can have possible vulnerabilities. This means that every program and every piece of software in the environment carries an inherent amount of risk with it. Any software that is installed in a cloud environment should be properly vetted and regularly audited.
Rufus is working for a growing manufacturing business. Over the years, they have been upgrading their manufacturing equipment to versions that include internet connectivity for maintenance and management information. This has increased the amount of log data that needs to be filtered, and the volume of log data generated by these systems makes it challenging for his organization to perform log reviews efficiently and effectively.
What can his organization implement to help solve this issue?
A. Secure Shell (SSH)
B. Security Information and Event Manager (SIEM)
C. Data Loss Prevention (DLP)
D. System Logging protocol (syslog) server
B. Security Information and Event Manager (SIEM)
Explanation:
An organization's logs are valuable only if the organization makes use of them to identify activity that is unauthorized or compromising. Due to the volume of log data generated by systems, the organization can implement a Security Information and Event Management (SIEM) system to overcome these challenges. The SIEM system provides the following:
Log centralization and aggregation
Data integrity
Normalization
Automated or continuous monitoring
Alerting
Investigative monitoring
A syslog server is a centralized logging system that collects, stores, and manages log messages generated by various devices and applications within a network. It provides a way to consolidate and analyze logs from different sources, allowing administrators to monitor system activity, troubleshoot issues, and maintain security. However, it does not help to correlate the logs as the SIEM does.
SSH is a networking protocol that encrypts transmissions. It works at layer 5 of the OSI model. It is commonly used to transmit logs to the syslog server. It is not helpful when analyzing logs. It only secures the transmission of the logs.
DLP tools are used to monitor and manage the transmission or storage of data to ensure that it is done properly. With DLP, the concern is that there will be a data breach/leak unintentionally by the users.
Your company is looking for a way to ensure that their most critical servers are online when needed. They are exploring the options that their Platform as a Service (PaaS) cloud provider can offer them. The one that they are most interested in has the highest level of availability possible. After a cost-benefit analysis based on their threat assessment, they think that this will be the best option. The cloud provider describes the option as a grouping of resources with a coordinating software agent that facilitates communication, resource sharing, and routing of tasks.
What term matches this option?
A. Server cluster
B. Server redundancy
C. Storage controller
D. Security group
A. Server cluster
Explanation:
Server clusters are a collection of resources linked together by a software agent that enables communication, resource sharing, and task routing. Server clusters are considered active-active since they include at least two servers (and any other needed resources) that are both active at the same time.
Server redundancy is usually considered active-passive. Only one server is active at a time. The second waits for a failure to occur; then, it will take over.
Storage controllers are used for storage area networks. It is possible that the servers in the question are storage servers, but more likely they contain the applications that the users and/or the customers require. Therefore, server clustering is the correct answer.
Security groups are effectively virtualized local area networks protected by a firewall.
Which SIEM feature is MOST vital to maintaining complete visibility in a multi-cloud environment?
A. Automated Monitoring
B. Log Centralization and Aggregation
C. Investigative Monitoring
D. Normalization
B. Log Centralization and Aggregation
Explanation:
Security information and event management (SIEM) solutions are useful tools for log analysis. Some of the key features that they provide include:
Log Centralization and Aggregation: Combining logs in a single location makes them more accessible and provides additional context by drawing information from multiple log sources.
Data Integrity: The SIEM is on its own system, making it more difficult for attackers to access and tamper with SIEM log files (which should be write-only).
Normalization: The SIEM can ensure that all data is in a consistent format, converting things like dates that can use multiple formats. (A minimal sketch follows this list.)
Automated Monitoring or Correlation: SIEMs can analyze the data provided to them to identify anomalies or trends that could be indicative of a cybersecurity incident.
Alerting: Based on their correlation and analysis, SIEMs can alert security personnel of potential security incidents, system failures, and other events of interest.
Investigative Monitoring: SIEMs support active investigations by enabling investigators to query log files or correlate events across multiple sources.
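Here is the normalization sketch referenced above. The timestamp formats are chosen purely for illustration; the point is that a SIEM converts timestamps from different log sources into one consistent format so events can be correlated:

    from datetime import datetime

    raw_timestamps = ["03/15/2024 14:02:11", "2024-03-15T14:02:11", "15 Mar 2024 14:02:11"]
    known_formats = ["%m/%d/%Y %H:%M:%S", "%Y-%m-%dT%H:%M:%S", "%d %b %Y %H:%M:%S"]

    def normalize(value):
        # Try each known source format and emit a single ISO 8601 form.
        for fmt in known_formats:
            try:
                return datetime.strptime(value, fmt).isoformat()
            except ValueError:
                continue
        raise ValueError("Unrecognized timestamp: " + value)

    print([normalize(t) for t in raw_timestamps])  # all become '2024-03-15T14:02:11'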
When a quantitative risk assessment is performed, it is possible to determine how much a threat can cost a business over the course of a year. What term defines this?
A. Annualized Loss Expectancy (ALE)
B. Annual Rate of Occurrence (ARO)
C. Recovery Time Objective (RTO)
D. Single Loss Expectancy (SLE)
A. Annualized Loss Expectancy (ALE)
Explanation:
How much a single occurrence of a threat will cost a business is the SLE. The number of times that threat is expected to occur within a year is the ARO. The total cost of a threat over a year is therefore calculated by multiplying the SLE by the ARO, which yields the ALE (ALE = SLE x ARO).
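A quick worked example with invented numbers: if a single occurrence of a threat is expected to cost the business $50,000 (SLE) and the threat is expected to occur twice per year (ARO = 2), then ALE = SLE x ARO = $50,000 x 2 = $100,000 per year.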
The RTO is the amount of time that is given to the recovery team to perform the recovery actions after a disaster has been declared.
Cloud Service Providers (CSP) and virtualization technologies offer a form of backup that captures all the data on a drive at a point in time and freezes it. What type of backup is this?
A. Guest OS image
B. Incremental backup
C. Data replication
D. Snapshot
D. Snapshot
Explanation:
CSPs and virtualization technologies offer snapshots as a form of backup. A snapshot will capture all the data on a drive at a point in time and freeze it. The snapshot can be used for a number of reasons, including rolling back or restoring a virtual machine to its snapshot state, creating a new virtual machine from the snapshot that serves as an exact replica of the original server, and copying the snapshot to object storage for eventual recovery.
A guest OS image is a file that, when spun-up or run on a hypervisor, becomes the running virtual machine.
Incremental backups contain only the changes made since the last backup of any kind. That last backup could be a full or an incremental backup, so an incremental basically backs up only "today's changes" (assuming that backups are done once a day).
Data replication is usually immediate backups to multiple places at the same time. That way, if one copy of the data is gone, another still exists.
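For illustration (the volume ID is invented), creating a point-in-time snapshot with AWS's boto3 SDK might look like the sketch below; other providers offer equivalent calls:

    import boto3

    ec2 = boto3.client("ec2")
    # Capture all data on the volume at this point in time and freeze it.
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
        Description="Point-in-time snapshot before upgrade",
    )
    print(snapshot["SnapshotId"])  # usable later to restore or clone the volume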
Hemi is working for a New Zealand bank, and they are growing nicely. They really need to carefully address their information security program, especially as they grow into their virtual data center that they are building using Infrastructure as a Service (IaaS) technology. As they are planning their information security carefully to ensure they are in compliance with all relevant laws and they provide the level of service their customers have come to expect, they are looking for a document that contains best practices.
What would you recommend?
A. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27018
B. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27017
C. Federal Information Processing Standard (FIPS) 140-2/3
D. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53
B. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27017
Explanation:
ISO/IEC 27017 is Information technology — Security techniques — Code of practice for information security controls based on ISO/IEC 27002 for cloud services. This document pulls the security controls from ISO/IEC 27002 that apply to the cloud.
ISO/IEC 27018 is Information technology — Security techniques — Code of practice for protection of Personally Identifiable Information (PII) in public clouds acting as PII processors. A processor is defined in the European Union (EU) General Data Protection Regulation (GDPR) as “a person who processes data solely on behalf of the controller, excluding the employees of the data controller.” Processing is defined to include storage of data, which then applies to cloud services.
NIST SP 800-53 is Security and Privacy Controls for Information Systems and Organizations. It is effectively a list of security controls and is similar to ISO/IEC 27002.
FIPS 140-2/3 is Security Requirements for Cryptographic Modules. This is for products such as TPMs and HSMs that store cryptographic keys.
Since the question is about security in the cloud, ISO/IEC 27017 is the best fit of these four documents.
A large social media company that relies on public Infrastructure as a Service (IaaS) for their virtual Data Center (vDC) had an outage. They were not locatable through Domain Name Service (DNS) queries midafternoon one Thursday. In their virtual routers, a configuration was altered incorrectly. What did they fail to manage properly?
A. Change enablement practice
B. Service level management
C. Input validation
D. User training
A. Change enablement practice
Explanation:
ITIL defines the change enablement practice as the practice of ensuring that risks are properly assessed, authorizing changes to proceed, and managing a change schedule to maximize the number of successful service and product changes. This is what happened to Facebook/Instagram/WhatsApp/Meta: an incorrectly altered configuration took their services offline. They have their own network, but the effect would have been the same using AWS as an IaaS. Change enablement is, in essence, change management.
Service level management is defined in ITIL as the practice of setting clear business-based targets for service performance so that the delivery of a service can be properly assessed, monitored, and managed against these targets.
Input validation needs to be performed by software to ensure that the values entered by the users are correct. The main goal of input validation is to prevent the submission of incorrect or malicious data and to ensure that the software functions as intended. By checking for errors or malicious input, input validation helps to increase the security and reliability of software.
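A minimal input-validation sketch (the field name and pattern are just examples): reject malformed values before they ever reach application logic or a database query.

    import re

    def validate_username(value):
        # Accept only 3-16 letters, digits, or underscores; anything else
        # (including injection attempts) is rejected outright.
        if not re.fullmatch(r"[A-Za-z0-9_]{3,16}", value):
            raise ValueError("Invalid username")
        return value

    validate_username("brocky_42")                   # passes
    # validate_username("'; DROP TABLE users;--")    # raises ValueError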
User training can help reduce the likelihood of errors occurring while using the software. By teaching users how to properly use the software, they become more aware of potential mistakes that may occur and can take measures to prevent them. This can help reduce the occurrence of mistakes, leading to less downtime, more accurate work, and improved outcomes.
U-Jin has been tasked with figuring out how the company should protect the personal information that they have collected about their customers. He knows that they have to be compliant with a couple of different laws from around the world due to the location of their customers.
Under the Payment Card Industry Data Security Standard (PCI DSS), in which of the following states must data be encrypted?
A. Data in use
B. Data at rest
C. Data in transit
D. Data in storage
C. Data in transit
Explanation:
The PCI DSS requires that data be encrypted when it is in transit across public networks.
Data must still be protected when it is being stored, but the 12 PCI DSS requirements do not say that it must be encrypted at rest. When data is at rest or in storage, it would be good to encrypt it as well as to establish and control access through Identity and Access Management (IAM). Encrypting data in use is just emerging as an option and is certainly not a requirement, as it is not yet available in most situations.
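As a minimal sketch of protecting data in transit (the endpoint and token are invented), a client can require TLS and certificate validation when transmitting cardholder data:

    import requests

    # HTTPS encrypts the request in transit; verify=True (the default) also
    # validates the server's certificate.
    response = requests.post(
        "https://payments.example.com/api/charge",  # hypothetical endpoint
        json={"card_token": "tok_abc123"},          # tokenized value, never a raw PAN
        verify=True,
        timeout=10,
    )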
Which of the following techniques uses context and the meaning of text to identify sensitive data in unstructured data?
A. Pattern Matching
B. Hashing
C. Lexical Analysis
D. Schema Analysis
C. Lexical Analysis
Explanation:
When working with unstructured data, there are a few different techniques that a data discovery tool can use:
Pattern Matching: Pattern matching looks for data formats common to sensitive data, often using regular expressions. For example, the tool might look for 16-digit credit card numbers or numbers structured as XXX-XX-XXXX, which are likely US Social Security Numbers (SSNs). (See the sketch below.)
Lexical Analysis: Lexical analysis uses natural language processing (NLP) to analyze the meaning and context of text and identify sensitive data. For example, a discussion of "payment details" or "card numbers" could include a credit card number.
Hashing: Hashing can be used to identify known-sensitive files that change infrequently. For example, a DLP solution may have a database of hashes for files containing corporate trade secrets or company applications.
Schema analysis can’t be used with unstructured data because only structured databases have schemas.
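Here is the pattern-matching sketch referenced above. The regexes are simplified for illustration; real tools add stronger checks, such as the Luhn algorithm for card numbers.

    import re

    ssn_pattern = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # XXX-XX-XXXX
    card_pattern = re.compile(r"\b\d{16}\b")             # 16-digit card number

    text = "Customer SSN 123-45-6789 paid with card 4111111111111111."
    print(ssn_pattern.findall(text))     # ['123-45-6789']
    print(card_pattern.findall(text))    # ['4111111111111111']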
Rise is working for a corporation as their cloud architect. He is designing how the Platform as a Service (PaaS) deployment will be used to store sensitive data for one particular application, and he is designing a trust zone inside of which that data will be handled. Which of the following BEST defines a trust zone?
A. The ability to share pooled resources among different cloud customers
B. Sets of rules that define which employees have access to which resources
C. Physical, logical, or virtual boundaries around network resources
D. Virtual tunnels that connect resources at different locations
C. Physical, logical, or virtual boundaries around network resources
Explanation:
A cloud-based trust zone is a secure environment created within a cloud infrastructure where only authorized users or systems are allowed to access resources and data. This trust zone is typically created by configuring security measures such as firewalls, access controls, and encryption methods to ensure that only trusted sources can gain access to the data and applications within the zone. The goal of a cloud-based trust zone is to create a secure and reliable environment for sensitive data or critical applications by isolating them from potential threats and unauthorized access. This helps to ensure the confidentiality, integrity, and availability of the resources and data within the trust zone.
A virtual tunnel connecting to another location may be something that needs to be added, but it is not part of describing the zone itself.
Rules that define which employees have access to which resources are something that is needed by a business. This is Identity and Access Management (IAM). It should include information about the resources in a trust zone, but it does not define the actual zone.
The ability to share pooled resources is part of the definition of cloud. It is the opposite of a trust zone. Because resources are shared, many companies are very worried about using the cloud.
Which of the following is a seven-step threat model that views things from the attacker’s perspective?
A. PASTA
B. STRIDE
C. ATASM
D. DREAD
A. PASTA
Explanation:
Several different threat models can be used in the cloud. Common examples include:
STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization's attack surfaces and potential threats and how these two would intersect.
PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that tries to look at infrastructure and applications from the viewpoint of an attacker.
An organization’s communications with which of the following is MOST likely to include information about planned and unplanned outages and other information designed to protect the brand image?
A. Regulators
B. Customers
C. Vendors
D. Partners
B. Customers
Explanation:
An organization may need to communicate with various parties as part of its security and risk management process. These include:
Vendors: Companies rely on vendor-provided solutions, and a vendor experiencing problems could result in availability issues or potential vulnerabilities for their customers. Relationships with vendors should be managed via contracts and SLAs, and companies should have clear lines of communication to ensure that customers have advance notice of potential issues and that they can communicate any observed issues to the vendor.
Customers: Communications between a company and its customers are important to set SLA terms, notify customers of planned and unplanned service interruptions, and otherwise handle logistics and protect the brand image.
Partners: Partners often have more access to corporate data and systems than vendors but are independent organizations. Partners should be treated similarly to employees with defined onboarding/offboarding and management processes. Also, the partnership should begin with mutual due diligence and security reviews before granting access to sensitive data or systems.
Regulators: Regulatory requirements also apply to cloud environments. Organizations receive regulatory requirements and may need to demonstrate compliance or report security incidents to relevant regulators.
Organizations may need to communicate with other stakeholders in specific situations. For example, a security incident or business disruption may require communicating with the public, employees, investors, regulators, and other stakeholders. Organizations may also have other reporting requirements, such as quarterly reports to stakeholders, that could include security-related information.
A corporation is planning their move to the cloud. They have decided to use a cloud provided by a Managed Service Provider (MSP). The MSP will retain ownership and management of the cloud and all the infrastructure. This cloud will be more expensive than one from a Cloud Service Provider (CSP), but the level of control it offers is greater than with a CSP.
What type of cloud have they selected?
A. Private cloud
B. Hybrid cloud
C. Community cloud
D. Public cloud
A. Private cloud
Explanation:
A private cloud can be located at the service provider's location or the customer's. It can be owned by either the cloud provider or the customer, and it can be managed by either as well. It could be with an MSP or a CSP.
If it is with a CSP, it could be public, private, or community. If it is with an MSP, it could be either private or community. As it is just one company in the question, it is not a community cloud, so it must be a private cloud. For more on this, the Cloud Security Alliance (CSA) Guidance 4.0 (or 5.0, if it has been released) is a great read.
A hybrid cloud is usually a combination of public and private clouds. The question is specifically about the private cloud though, so this is not the best answer.
Which of the following is a regulation designed to protect the US and Canadian power grids?
A. SCA
B. NERC/CIP
C. SOX
D. GLBA
B. NERC/CIP
Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:
General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests from cloud providers. US law enforcement and their counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield is a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens' data.
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers' personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes-Oxley (SOX): SOX is a US regulation that applies to publicly traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
Which of the following would benefit the MOST from using a hybrid cloud?
A. A small business that doesn’t have much sensitive data and is only looking to move email to the cloud
B. A group of organizations looking to create a shared service for all their customers to use
C. A healthcare company that needs to ensure that all their data is kept extremely secure and private, no matter the expense
D. An organization that only requires that certain items are kept very secure, but can’t afford a full private cloud
D. An organization that only requires that certain items are kept very secure, but can’t afford a full private cloud
Explanation:
Hybrid clouds are the best solution for any organization that requires the security of a private cloud for some, but not all, of their data. By only needing some of the data to be kept in a private cloud, the expense of building a full private cloud can be greatly reduced.
A small business that doesn’t have much sensitive data and only wants email could benefit from a public Software as a Service (SaaS).
A healthcare company, by that description, needs a private cloud.
A group of organizations looking to create a shared service is in need of a community cloud.
Who should have access to the management plane in a cloud environment?
A. A highly vetted and limited set of administrators
B. A single, highly vetted administrator
C. Software developers deploying virtual machines
D. Security Operation Center (SOC) personnel
A. A highly vetted and limited set of administrators
Explanation:
If compromised, the management plane would provide full control of the cloud environment to an attacker. Due to this, only a highly vetted and limited set of administrators should have access to the management plane. However, you will want more than a single administrator. If the single administrator leaves or is no longer able to perform management duties, the ability of the business to manage their cloud environment would be compromised.
Software developers deploying virtual machines may need access, but they would be in the highly vetted group of administrators if that is the case. The same would be true for SOC personnel. They need to be vetted and trusted.
A company using Platform as a Service (PaaS) has discovered that their computing environment has gotten very complex. They are looking for a technology that will assist them in managing deployment and provisioning of all the resources that they now have.
Which technology can this organization implement to assist the administrators in a more agile and efficient manner than manual management?
A. Controller plane
B. Dynamic Host Configuration Protocol
C. Orchestration
D. Management plane
C. Orchestration
Explanation:
Orchestration enables agile, efficient provisioning and management on demand and at great scale. Common tools in use today include Puppet, Chef, Ansible, and Salt.
Dynamic Host Configuration Protocol (DHCP) served a loosely similar purpose in the early days of local area networks: it allows computers to obtain IP addresses dynamically. DHCP is still needed, but it is insufficient for managing and provisioning cloud assets.
The management plane is the administrators’ connection into the cloud. This allows them to configure and manage, but it is not going to automate anything. It is the equivalent of establishing an SSH connection to a router. It is simply a protected connection.
The controller plane is found within Software Defined Networking (SDN). It allows switches to communicate with the controller for forwarding decisions.
Which of the following data classification labels might impact where data can be transferred or stored?
A. Type
B. Ownership
C. Criticality
D. Jurisdiction
D. Jurisdiction
Explanation:
Data owners are responsible for data classification, and data is classified based on organizational policies. Some of the criteria commonly used for data classification include:
Type: Specifies the type of data, including whether it has personally identifiable information (PII), intellectual property (IP), or other sensitive data protected by corporate policy or various laws.
Sensitivity: Sensitivity refers to the potential results if data is disclosed to an unauthorized party. The Unclassified, Confidential, Secret, and Top Secret labels used by the U.S. government are an example of sensitivity-based classifications.
Ownership: Identifies who owns the data if the data is shared across multiple organizations, departments, etc.
Jurisdiction: The location where data is collected, processed, or stored may impact which regulations apply to it. For example, GDPR protects the data of EU citizens.
Criticality: Criticality refers to how important data is to an organization's operations.
Which storage type is used by virtual machines as their local drive for processing purposes?
A. Ephemeral
B. Block storage
C. Unstructured storage
D. Raw storage
A. Ephemeral
Explanation:
Temporary storage and data are stored in ephemeral storage solely for processing purposes. Ephemeral storage is not intended to provide long-term data storage. Ephemeral storage is similar to Random Access Memory (RAM) and other non-permanent storage technologies. When a VM shuts down, anything stored in ephemeral storage will be lost.
Raw storage refers to the actual drive such as a Solid State Drive (SSD) or a Hard Disk Drive (HDD). It is not accessed by a virtual machine directly. It will be organized into structured or unstructured storage.
Block storage is another name for structured storage. It is a way to organize the drive space into blocks of space for specific virtual machines or applications. The block could appear as a volume, all depending on the actual allocation by the cloud provider.
Unstructured storage is another name for object storage. Objects are files and can include word documents, spreadsheets, databases stored as a file, and even virtual machine images.
What is the final step in deploying a newly upgraded application into production?
A. Deployment management
B. Service level management
C. Continuity management
D. Configuration management
A. Deployment management
Explanation:
Deployment management includes moving new or changed hardware or software to production.
Configuration management involves managing configuration items or parameters. The question is about an upgraded application. It does not mention the configuration of it, so deployment management is a better fit to answer the question.
Continuity management involves maintaining availability during a disaster.
Service level management is about setting clear business targets for service performance. There is no mention of a service level in the question.
Which of the following involves verifying that the software has completed all of its required tests and manages the logistics of moving it to the next step?
A. Deployment Management
B. Release Management
C. Configuration Management
D. Change Management
B. Release Management
Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization's security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manage the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT's ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
An organization has just completed the design phase of developing their Business Continuity and Disaster Recovery (BC/DR) plan. What is the next step for this organization?
A. Revise
B. Test the plan
C. Assess risk
D. Implement the plan
D. Implement the plan
Explanation:
The steps of developing a BC/DR plan are as follows: define scope, gather requirements, analyze, assess risk, design, implement, test, report, and finally, revise. Once an organization has completed the design phase, it is ready to implement its BC/DR plan. Even though the plan has already gone through design, it will likely require some changes (both technical and policy-wise) during implementation. The key here is that the word "implement" is used in many different ways. To people who work in a production environment, it means that "it" is placed into the production environment, whatever "it" is. When dealing with BC/DR, however, the alternate site or cloud must be built before it can be tested, which hopefully all occurs before it is needed.
Which of the following BEST describes the “create” phase of the cloud data lifecycle?
A. The creation of new or the alteration of existing content
B. The creation or modification of content stored onto a solid state drive (SSD)
C. The creation of new content
D. The creation of new content stored on a hard disk drive (HDD)
A. The creation of new or the alteration of existing content
Explanation:
The Cloud Security Alliance (CSA) defined the create phase of the data lifecycle as the creation of new or the alteration of existing content in their guidance 4.0 document. This exam is a joint venture between (ISC)2 and the CSA, so it is worth knowing what the CSA says. If you disagree with this definition, that is fine, but know that the CSA says this even though most people would put the alteration of content in the use phase. If you know these two options, it will be possible to work through exam questions.
If it is stored on a HDD or SSD, that means that data has moved from the create phase into the store phase. The question only involves the create phase.
In log management, what defines which categories of events are and are NOT written into logs?
A. Quality level
B. Retention level
C. Transparency level
D. Clipping level
D. Clipping level
Explanation:
Many systems and apps allow you to customize what data is written to log files based on the importance of the data. The clipping level determines which events, such as user authentication events, informational system messages, and system restarts, are written in the logs and which are ignored. Clipping levels are used to ensure that the correct logs are being accounted for. They are commonly called thresholds.
Transparency is the visibility of something, quality is how good something is, and retention is holding on to something. None of those three words quite applies when paired with "level," so this question is mainly about knowing the term clipping level.
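A minimal sketch using Python's standard logging module: the configured level acts as a clipping level, determining which event categories are written and which are ignored.

    import logging

    # Only events at WARNING severity or above are written; DEBUG and INFO
    # events fall below the clipping level and are ignored.
    logging.basicConfig(level=logging.WARNING)

    logging.info("User logged in")             # below the threshold: not logged
    logging.warning("Repeated failed logins")  # at/above the threshold: logged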
Which of the following is NOT one of the three main objectives of IRM?
A. Provisioning
B. Enforcement
C. Access Models
D. Data Rights
B. Enforcement
Explanation:
Information rights management (IRM) involves controlling access to data, including implementing access controls and managing what users can do with the data. The three main objectives of IRM are:
Data Rights: Data rights define what users are permitted to do with data (read, write, execute, forward, etc.). It also deals with how those rights are defined, applied, changed, and revoked.
Provisioning: Provisioning is when users are onboarded to a system and rights are assigned to them. Often, this uses roles and groups to improve the consistency and scalability of rights management, as rights can be defined granularly for a particular role or group and then applied to everyone that fits in that group.
Access Models: Access models take the means by which data is accessed into account when defining rights. For example, data presented via a web application has different potential rights (read, copy-paste, etc.) than data provided in files (read, write, execute, delete, etc.).
Enforcement is not a main objective of IRM.
Which of the following is TRUE regarding virtualization?
A. Virtual images are susceptible to attacks only when they are online and running
B. It’s more important to secure the virtual images than the management plane in a virtualized environment
C. Virtual images are susceptible to attacks whether they are running or not
D. The most important component to secure in a virtualized environment is the hypervisor
C. Virtual images are susceptible to attacks whether they are running or not
Explanation:
Virtual images are susceptible to attacks, even when they are not running. Due to this, it’s extremely important to ensure the security of where the images are housed.
Ensuring that the management plane and the hypervisor are secured is the first step to ensuring the virtual images are secure. The management plane is the most important component to secure first because a compromise of the management plane would lead to a compromise of the entire environment.
Hypervisor security is critical, but the management plane is arguably more important. If the management plane is compromised, then everything that a corporation has built can be deleted in a moment. If a hypervisor is compromised, it could cause problems for all customers of a cloud provider. It is, arguably, more likely that the management plane will be a target.
A DLP solution is inspecting the contents of an employee’s email. What stage of the DLP process is it MOST likely at?
A. Discovery
B. Mapping
C. Monitoring
D. Enforcement
C. Monitoring
Explanation:
Data loss prevention (DLP) solutions are designed to prevent sensitive data from being leaked or accessed by unauthorized users. In general, DLP solutions consist of three components:
Discovery: During the discovery phase, the DLP solution identifies data that needs to be protected. Often, this is accomplished by looking for data stored in formats associated with sensitive data. For example, credit card numbers are usually 16 digits long, and US Social Security Numbers (SSNs) have the format XXX-XX-XXXX. The DLP will identify storage locations containing these types of data that require monitoring and protection.
Monitoring: After completing discovery, the DLP solution will perform ongoing monitoring of these identified locations. This includes inspecting access requests and data flows to identify potential violations. For example, a DLP solution may be integrated into email software to look for data leaks or monitor for sensitive data stored outside of approved locations.
Enforcement: If a DLP solution identifies a violation, it can take action. This may include generating an alert for security personnel to investigate and/or blocking the unapproved action.
Mapping is not a stage of the DLP process.
Alison is concerned that a malicious individual had gained access to her online health account in which her mental health history was listed. If this is true, what regulation is the health company in violation of?
A. Health Insurance Portability and Accountability Act (HIPAA)
B. Federal Information Security Management Act (FISMA)
C. General Data Protection Regulation (GDPR)
D. Personal Information Protection and Electronic Documents Act (PIPEDA)
A. Health Insurance Portability and Accountability Act (HIPAA)
Explanation:
HIPAA is a U.S. regulation that demands the protection of Protected Health Information (PHI), which includes mental health information, physical health information, medical history, and test and lab results, as well as a number of other items.
GDPR is a European Union (EU) regulation that requires the protection of personal data. Personal data is often referred to as Personally Identifiable Information (PII) outside of the EU and GDPR. It could include health information, but HIPAA is a more direct fit for the question.
FISMA is a U.S. act that requires U.S. government agencies to build information security programs within their business.
PIPEDA is a Canadian law that requires the protection of personal data.
Which of the following is a strategy for maintaining operations during a business-disrupting event?
A. Operational continuity plan
B. Disaster recovery plan
C. Business continuity plan
D. Ongoing operations plan
C. Business continuity plan
Explanation:
A business continuity plan is a strategy for maintaining operations during a business-disrupting event. A disaster recovery plan is a strategy for restoring normal operations after such an event.
Ongoing operations and operational continuity plans are fabricated terms.