Pocket Prep 20 Flashcards
A financial organization has purchased an Infrastructure as a Service (IaaS) cloud service from their cloud provider. They are consolidating and migrating their on-prem data centers (DC) into the cloud. Once they are set up in the cloud, they will have their servers, routers, and switches configured as needed, along with network-based security appliances such as firewalls and Intrusion Detection Systems (IDS).
What type of billing model should this organization expect to see?
A. Locked-in monthly payment that never changes
B. Up-front equipment purchase, then a locked-in monthly fee afterward
C. Metered usage that changes based upon resource utilization
D. One up-front cost to purchase cloud equipment
C. Metered usage that changes based upon resource utilization
Explanation:
In an IaaS environment (as well as Platform as a Service (PaaS) and Software as a Service (SaaS)), the customer can expect to pay only for the resources they are using. This is far more cost-effective and allows for greater scalability. However, this type of billing means the price is not locked in, and it can change as the need for resources increases or decreases from month to month.
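As a rough illustration of metered billing, the minimal sketch below totals a month's bill from per-hour compute, per-GB storage, and per-GB egress rates. All of the rates and usage figures are made up for illustration and are not real CSP pricing.

```python
# Hypothetical metered-billing calculation: pay only for what was used.
# All rates and usage numbers below are made-up examples, not real CSP pricing.

compute_hours = 1440          # two VM instances running for the whole month
compute_rate = 0.045          # dollars per instance-hour (example rate)
storage_gb_months = 500       # average GB stored over the month
storage_rate = 0.023          # dollars per GB-month (example rate)
egress_gb = 120               # data transferred out of the cloud
egress_rate = 0.09            # dollars per GB (example rate)

bill = (compute_hours * compute_rate
        + storage_gb_months * storage_rate
        + egress_gb * egress_rate)

print(f"Estimated monthly bill: ${bill:.2f}")  # changes as utilization changes
```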
There is no equipment to purchase with cloud services (IaaS, PaaS, or SaaS). You could purchase equipment if you wanted to build a private cloud, but there is no mention of that in the question. The standard cloud definition excludes a locked-in monthly payment. A provider could offer one, but it falls outside the cloud as defined in NIST SP 800-145 and ISO/IEC 17788.
Cloud Service Providers (CSP) and virtualization technologies offer a form of backup that captures all the data on a drive at a point in time and freezes it. What type of backup is this?
A. Data replication
B. Guest OS image
C. Snapshot
D. Incremental backup
C. Snapshot
Explanation:
CSPs and virtualization technologies offer snapshots as a form of backup. A snapshot will capture all the data on a drive at a point in time and freeze it. The snapshot can be used for a number of reasons, including rolling back or restoring a virtual machine to its snapshot state, creating a new virtual machine from the snapshot that serves as an exact replica of the original server, and copying the snapshot to object storage for eventual recovery.
A guest OS image is a file that, when spun up or run on a hypervisor, becomes the running virtual machine.
Incremental backups capture only the changes since the last backup of any kind. The last backup could be a full or an incremental backup. In practice, it backs up only "today's changes" (assuming backups are done once a day).
Data replication usually writes data to multiple places at (nearly) the same time. That way, if one copy of the data is lost, another still exists.
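To make the distinction concrete, here is a minimal sketch using made-up file modification times: an incremental backup copies only the files changed since the last backup of any kind, while a snapshot freezes everything at a point in time.

```python
from datetime import datetime

# Hypothetical file catalog: name -> last-modified timestamp (made-up data).
files = {
    "orders.db":   datetime(2024, 5, 3, 14, 0),
    "app.log":     datetime(2024, 5, 4, 9, 30),
    "archive.zip": datetime(2024, 4, 1, 8, 0),
}

last_backup = datetime(2024, 5, 4, 0, 0)   # time of the previous backup (full or incremental)

# Incremental backup: copy only files changed since the last backup of any kind.
incremental = [name for name, mtime in files.items() if mtime > last_backup]

# A snapshot, by contrast, freezes everything at a point in time.
snapshot = dict(files)

print("Incremental would copy:", incremental)
print("Snapshot captures:", list(snapshot))
```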
Shai has been working with the Disaster Recovery (DR) teams to build the DR Plans (DRP) for their critical transaction database. They process a great number of commercial transactions per hour. They have determined that they need a configuration that will nearly eliminate the risk of losing any completed transactions. They have a Recovery Point Objective (RPO) of less than one second.
What technology should they implement?
A. Load balancers that span multiple servers in a single data center
B. A server cluster that spans multiple availability zones with load balancers
C. Redundant servers that are served through multiple data centers
D. A set of redundant servers across multiple availability zones
B. A server cluster that spans multiple availability zones with load balancers
Explanation:
The Recovery Point Objective (RPO) is the maximum amount of data loss, measured in time, that an organization is willing to accept. With server clusters in a cloud environment that span multiple availability zones and sit behind load balancers, it is unlikely that a single completed transaction would be lost. Incomplete transactions may be lost, but that is probably acceptable for this business.
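A sub-second RPO is simply a bound on acceptable data loss measured in time. The minimal sketch below, using invented replication-lag measurements, checks whether a configuration satisfies that bound.

```python
# Hypothetical check of replication lag against a sub-second RPO.
# The lag samples are invented for illustration.

rpo_seconds = 1.0                                     # requirement: lose < 1 second of transactions
replication_lag_samples = [0.12, 0.30, 0.08, 0.45]    # seconds behind the primary

worst_case_loss = max(replication_lag_samples)
print(f"Worst-case data loss: {worst_case_loss:.2f} s")
print("RPO met" if worst_case_loss < rpo_seconds else "RPO violated")
```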
Redundant servers are not as robust as clusters. In a cluster, all the servers are active all the time. A redundant server does not actively process data unless the primary fails, at which point it takes over. Redundant servers are often described as active-passive, whereas clusters are active-active.
Having multiple servers in a single data center is not as robust as having them in different availability zones. If a massive fire occurred in that one data center, the customer would be offline for a while (depending on additional configurations) and would likely lose some transactions. There are configurations that help with this, but they are missing from the answers, so they cannot be assumed to be there. One such configuration is data mirroring or database shadowing, which provides nearly instantaneous copies to another server or drive.
Rhonda is working with the company's public cloud provider to determine what technologies and tools they will need to set up to ensure they have a functional configuration. The topic she is currently working on is the connection from users at the office to their new Platform as a Service (PaaS) server-based solution. Her concern is others being able to see and access sensitive corporate data in transit between the office and the cloud.
What solution would work best for this scenario?
A. Distributed Resource Scheduling (DRS)
B. Virtual Private Network (VPN)
C. Software Defined Network (SDN)
D. Advanced Encryption Standard (AES)
B. Virtual Private Network (VPN)
Explanation:
A VPN is an encrypted tunnel or connection between two points. It could be used site-to-site or client-to-server. AES would likely be the encryption algorithm that would be used within the VPN. However, the question is looking for a solution and that makes the VPN a better answer.
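To show why data crossing the tunnel is unreadable to outsiders, here is a minimal sketch using AES-GCM from the third-party cryptography package (assumed to be installed). A real VPN also handles key exchange, peer authentication, and tunneling, which this sketch omits.

```python
# Minimal sketch of AES-GCM encryption, the kind of cipher a VPN tunnel might use.
# Requires the third-party 'cryptography' package; key handling here is simplified.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in a VPN, keys come from a key-exchange protocol
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # unique per message

plaintext = b"sensitive corporate data headed for the PaaS solution"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # what an eavesdropper would see
recovered = aesgcm.decrypt(nonce, ciphertext, None)   # only possible with the key

assert recovered == plaintext
```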
DRS is used to automatically (as opposed to manually) find the best servers to start Virtual Machines (VM) on.
SDN is a technology that separates the network control plane from the data plane so that switches and routers within a network can be managed and used more effectively.
Rhonda is working with her team to determine whether they should code their own API, use an open-source one, or use one from a vendor. Which of the following is true about the benefits of each of the API options?
A. A vendor API will be managed and patched by that vendor. An open-source API has the benefit that you can see the code for review purposes. And coding your own API means you have greater control.
B. A vendor API will be managed and patched by that vendor. An open-source API has the benefit that you can see the code for review purposes. And coding your own API means you have to do all the reviews yourself.
C. A vendor API code is not open, so you do not need to review it. An open-source API has the benefit that you can see the code for review purposes. And coding your own API means you have greater control.
D. A vendor API will be managed and patched by that vendor. An open-source API has the benefit that you know who is behind it and that they are updating it. And coding your own API means you have greater control.
A. A vendor API will be managed and patched by that vendor. An open-source API has the benefit that you can see the code for review purposes. And coding your own API means you have greater control.
Explanation:
With a vendor’s API, it is good to have a company with formal processes behind it so that you know it will be managed and patched by that vendor. A disadvantage is that you cannot see the code for reviewing yourself.
An open-source API has the benefit that you can see the code for review purposes. You do not know for sure who is behind it or whether they are updating it.
And coding your own API means you have greater control. You also have to do all the review and testing yourself, but whether that is a benefit depends on how you look at it.
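As an illustration of the greater control (and the review burden) of coding your own API, here is a minimal endpoint using only the Python standard library. The /status route and its payload are hypothetical examples, not part of any particular product.

```python
# Minimal self-coded API endpoint using only the standard library.
# The /status route and its payload are hypothetical examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # You control every line here -- and you must also review and test every line.
    HTTPServer(("127.0.0.1", 8080), StatusHandler).serve_forever()
```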
Keep in mind this is a theoretical exam. Each of these statements could be (and should be) debated in the real world, but this is a good starting point, which is usually where the exam sits.
Noah needs to take a backup of a virtual machine. As the cloud operator, he knows that the most common method for doing this is to use which of the following backup solutions?
A. Agent based
B. Agentless
C. Differential
D. Snapshots
B. Agentless
Explanation:
Taking a snapshot to back up a Virtual Machine (VM) typically uses an agentless approach. In most virtualization environments, VM snapshots are performed at the hypervisor level without requiring any specific software or agents installed within the VM itself.
Agentless backups first take a snapshot, which serves as a point-in-time copy. After that, only the changed blocks need to be tracked, which reduces the storage required for future backups.
Agent-based backup products require the installation of a lightweight piece of software on each virtual machine. The agent lives at the kernel level in a protected system, so it can easily detect block-level changes on the machine. With agents installed on a large number of VMs, management becomes more difficult.
Yamin works for a Cloud Service Provider (CSP) as a technician in one of their data centers. She has been setting up the Fibre Channel equipment over the last week. What part of the cloud is she building?
A. Cabling
B. Network
C. Compute
D. Storage
D. Storage
Explanation:
The three things that must be built to create a cloud data center are Compute, Network, and Storage. Storage is where the data will be at rest. This involves building Storage Area Networks (SAN). There are two primary SAN protocols: Fibre Channel and the IP-based Internet Small Computer Systems Interface (iSCSI). What also needs to be decided is how the storage is allocated: will it be block storage, file storage, raw storage, etc.?
Compute is the computation capability that comes along with a computer. That could be a virtual server or a virtual desktop.
The network element is the ability to transmit data to or from storage, to or from the compute elements, and out of the cloud to other destinations. This involves both physical networks and the virtual networks created within a server.
Cables are needed to connect all the physical equipment together. There are even virtual cables within Infrastructure as a Service (IaaS) environments. Cabling, however, is part of the network element.
Which of the following is MOST closely related to data loss prevention (DLP)?
A. Denial-of-Service Prevention
B. Security Function Isolation
C. Boundary Protection
D. Separation of System and User Functionality
C. Boundary Protection
Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations, defines 51 security controls for system and communications protection. Among these are:
Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
Volume and object are the names of the storage types used in which cloud service model?
A. Platform as a Service (PaaS)
B. Database as a Service (DBaaS)
C. Infrastructure as a Service (IaaS)
D. Software as a Service (SaaS)
C. Infrastructure as a Service (IaaS)
Explanation:
Each cloud service model uses different types of storage as shown below:
Infrastructure as a Service (IaaS): Volume, Object
Platform as a Service (PaaS): Structured, Unstructured
Software as a Service (SaaS): Content and file storage
DBaaS is not a term used by NIST SP 800-145 or ISO/IEC 17788. If it were, a database would be the storage type, or more specifically, structured data.
The question is really based on the Cloud Security Alliance guidance 4.0 document. If you read the OSG or the CBK, you will see completely different descriptions in each book. What is wise for the exam is to be familiar with the names and what they mean, not how they link to IaaS, PaaS, or SaaS.
Imani is working with their cloud data architect to design a Storage Area Network (SAN) that will provide the cloud storage needed by the users. They want users to be able to have access to mountable volumes within the Fibre Channel (FC) SAN.
Of the following, which term describes the allocated storage space that is presented to the user as a mountable drive?
A. Logical Unit Number (LUN)
B. World Wide Port Name (WWPN)
C. World Wide Node Name (WWNN)
D. World Wide Names (WWN)
A. Logical Unit Number (LUN)
Explanation:
Storage management is a complex topic that is worth learning about beyond the (ISC)2 books for the exam. When building SANs that are accessed with Fibre Channel (FC) or iSCSI protocols, the server is often built with a Redundant Array of Independent Disks (RAID), and it is necessary to slice storage into pieces that are visible to individuals, groups, applications, and so on. The mechanism for identifying the space allocated to a user that presents as a mountable drive is called a Logical Unit Number (LUN).
FC also uses additional addressing: World Wide Names (WWN). A WWN is allocated to FC devices on the SAN. The WWN allocated to a node is a WWNN; the WWN allocated to a port is a WWPN.
IBM is a good source of information for study.
Celiene is a cloud data architect. She has been designing cloud solutions for years and has worked with databases, big data, object storage, and so on. One of the types of data that she works with provides the context and details about another data asset. What would that be called?
A. Data Lake
B. Unstructured data
C. Structured data
D. Metadata
D. Metadata
Explanation:
Metadata refers to descriptive information that provides context and details about a particular data asset. It gives information about the characteristics, attributes, and properties of the data, enabling better understanding, organization, and management of the data. Metadata can be found in various domains, including digital content, databases, documents, and research materials.
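For example, the metadata for a single object in object storage might look like the following sketch; the fields and values are made up for illustration.

```python
# Hypothetical metadata record describing a data asset (an object in object storage).
asset_metadata = {
    "object_key": "reports/2024/q1-sales.csv",   # which asset this describes
    "owner": "data-engineering",
    "created": "2024-04-02T08:15:00Z",
    "size_bytes": 1_048_576,
    "content_type": "text/csv",
    "classification": "confidential",            # supports labeling and DLP decisions
    "retention": "7y",
}

# Metadata enables organization and management without reading the data itself.
print(asset_metadata["classification"], asset_metadata["retention"])
```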
Data lakes and structured and unstructured data are the data assets that metadata can describe.
A data lake is a centralized repository that stores large volumes of structured, semi-structured, and unstructured data in its raw, unprocessed form. It is designed to handle massive amounts of data from diverse sources, such as transactional systems, log files, social media feeds, IoT devices, and more. Unlike traditional data warehouses that require data to be structured and organized upfront, data lakes store data in its native format, allowing for flexible analysis and exploration.
Structured data refers to data that is organized and formatted according to a predefined data model or schema. It follows a strict and consistent structure, where each data element is assigned a specific data type and resides in a well-defined field or column within a table or database. Structured data is typically stored in relational databases or data warehouses.
Unstructured data refers to data that does not have a predefined or organized format. It lacks a specific data model or schema and does not fit neatly into traditional rows and columns like structured data. Unstructured data is typically found in various forms, such as text documents, images, videos, audio files, social media posts, emails, sensor data, and more.
According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), what is the ideal temperature range for a data center?
A. 64.4 - 80.6 degrees F/ 18 - 27 degrees C
B. 70.2 - 85.0 degrees F / 21.2 - 29.4 degrees C
C. 55.7 - 78.5 degrees F / 13.1 - 25.8 degrees C
D. 49.8 - 70.6 degrees F / 9.3 - 21.4 degrees C
A. 64.4 - 80.6 degrees F/ 18 - 27 degrees C
Explanation:
Due to the number of systems running, data centers produce a lot of heat. If the systems in the data center overheat, they could be damaged and made unusable. To protect the systems, adequate and redundant cooling is needed. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends an ideal data center temperature range of 64.4 - 80.6 degrees F / 18 - 27 degrees C.
During which phase of the software development lifecycle should testing needs be defined?
A. Requirements definition
B. Testing
C. Coding
D. Design
A. Requirements definition
Explanation:
Testing requirements are defined during the requirements definition phase of the software development lifecycle, sometimes referred to as requirements gathering and feasibility. Having these requirements in place before development and testing even begin helps to ensure the success of the project.
The coding phase is when the developers are building the application, which is too late to define the test requirements. Testing is when the requirements are verified and, again, is too late. The design phase is also a little late; it is a better answer than coding or testing, but requirements definition is the best answer.
An information security architect is developing a business disaster recovery plan (DRP) for her organization. They have been progressing through the steps to develop their plans that will be utilized in the event of major disruptions to their private cloud datacenter. They have just finished developing the procedural documentation.
What is the next step for them to take?
A. Develop recovery strategies
B. Implement the plan
C. Conduct the Business Impact Analysis (BIA)
D. Test the plan
B. Implement the plan
Explanation:
When developing a Disaster Recovery Plan (DRP), the following order should be followed:
1. Project management and initiation
2. Business Impact Analysis (BIA)
3. Develop recovery strategies
4. Develop the documentation
5. Implement the plan
6. Test the plan
7. Report and revise
8. Embed the plan in the user community
As they have just developed the documentation, the next step is to implement the plan. The instinct for most people is to move to testing so that the plan can then be implemented. However, since the plan describes the steps to be taken after a significant failure, the alternate cloud environment to fail over into must be built (implemented) before the plan can be tested.
A medium-sized business is looking to utilize a Storage Area Network in their private cloud. They are looking for the easiest route to build the SAN, utilizing the existing and traditional Local Area Network technology that they already have. Which storage protocol would you recommend?
A. Fibre Channel
B. iSCSI
C. HyperSCSI
D. NVMe over Fabrics
B. iSCSI
Explanation:
iSCSI allows for the transmission of SCSI commands over a TCP/IP network. It gives systems block-level storage that behaves as a SAN would on physical servers but leverages the existing TCP/IP network, including within virtualized environments. iSCSI is the most commonly used communications protocol for network-based storage.
Fibre Channel (FC) requires a change of cabling from wire to fiber. It also involves different equipment to connect the storage devices for transmission.
HyperSCSI is a little-adopted protocol that transmits SCSI commands and data directly over Ethernet frames, bypassing TCP/IP. Because it is not widely adopted or supported, it is probably not a good choice. The question gives no information other than that they are using traditional LAN technology.
NVMe over Fabrics (NVMe-oF) is an emerging protocol that allows direct access to Non-Volatile Memory Express (NVMe) storage devices over a network. NVMe-oF enables low-latency, high-performance storage connectivity by leveraging high-speed interconnects like Fibre Channel, Ethernet, or InfiniBand. This can use Ethernet, which is the traditional LAN layer 2 protocol, but the storage devices are different.
Daria is working with the software developers to ensure that the sensitive data the application handles will be well protected. The application they are creating will contain information about their customers, their orders, conversations that have occurred, and so on. She needs to make sure that the payment card numbers stored in the application are properly protected, because the company must comply with the Payment Card Industry Data Security Standard (PCI DSS). When the customer service representatives are on the phone with customers and access a specific customer's account, they are not allowed to see the full payment card number.
What mechanism can be used to protect the card number?
A. Tokenization
B. Anonymization
C. Masking
D. Obfuscation
C. Masking
Explanation:
Masking is the common way card numbers are protected in this scenario. Instead of seeing the card number, the customer service representative would see only asterisks (*) plus the last four or five digits of the card number. Masking is also used to protect passwords as a user types them into an application, guarding against shoulder surfing.
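Here is a minimal sketch of display masking, assuming a 16-digit card number retrieved from storage; only the last four digits survive in what the representative sees. The card number is a made-up example.

```python
# Minimal display-masking sketch: the representative never sees the full number.
def mask_card_number(card_number: str, visible_digits: int = 4) -> str:
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - visible_digits) + digits[-visible_digits:]

print(mask_card_number("4111 1111 1111 1234"))   # ************1234
```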
Masking is not formally defined by ISO, NIST, or the CSA, and software developers sometimes use the term in other ways, though not consistently. Treating masking as a way to cover or hide information should be sufficient for this exam.
Tokenization replaces data with a surrogate value. For example, if a credit card number were being transmitted across a network that could be compromised, it would be better to replace that number with a token. The bank needs a separate database to convert the token back to the card number to determine whether a purchase can be made. This is how Apple Pay, Google Pay, PayPal, and similar payment methods work.
Obfuscation means to confuse. Encryption can be considered a form of obfuscation: if you looked at the encrypted text, it would not make sense, leaving you "confused" about what the text actually says. However, not all obfuscation is encryption; there are other methods. As an example, changing the font of a document to Wingdings in Microsoft Word is arguably a form of obfuscation.
Anonymization removes data. It strips the direct and indirect identifiers from a data set, and once they are removed, they cannot be recovered. So, if a lab researching a new medicine needed to review medical records to see how the treatment was working, it would be good to remove, or anonymize, the Personally Identifiable Information (PII) from the researchers' view.
An organization’s communications with which of the following is MOST likely to include information about planned and unplanned outages and other information designed to protect the brand image?
A. Regulators
B. Partners
C. Vendors
D. Customers
D. Customers
Explanation:
An organization may need to communicate with various parties as part of its security and risk management process. These include:
Vendors: Companies rely on vendor-provided solutions, and a vendor experiencing problems could result in availability issues or potential vulnerabilities for their customers. Relationships with vendors should be managed via contracts and SLAs, and companies should have clear lines of communication to ensure that customers have advance notice of potential issues and that they can communicate any observed issues to the vendor.
Customers: Communications between a company and its customers are important to set SLA terms, notify customers of planned and unplanned service interruptions, and otherwise handle logistics and protect the brand image.
Partners: Partners often have more access to corporate data and systems than vendors but are independent organizations. Partners should be treated similarly to employees with defined onboarding/offboarding and management processes. Also, the partnership should begin with mutual due diligence and security reviews before granting access to sensitive data or systems.
Regulators: Regulatory requirements also apply to cloud environments. Organizations receive regulatory requirements and may need to demonstrate compliance or report security incidents to relevant regulators.
Organizations may need to communicate with other stakeholders in specific situations. For example, a security incident or business disruption may require communicating with the public, employees, investors, regulators, and other stakeholders. Organizations may also have other reporting requirements, such as quarterly reports to stakeholders, that could include security-related information.
During which phase of the cloud data lifecycle would data undergo cryptographic erasure?
A. Use
B. Archive
C. Destroy
D. Store
C. Destroy
Explanation:
As the name suggests, the destroy phase is where data is removed completely from a system (or "destroyed") and should be unable to be recovered. In cloud environments, methods such as degaussing and shredding can't be used by the customer because they require physical access to the hardware. Instead, a PaaS or IaaS customer can use cryptographic erasure to destroy the data. The cloud provider can also use cryptographic erasure or any of the traditional destruction techniques, such as shredding; that is worth inquiring about when evaluating cloud providers.
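A minimal crypto-shredding sketch, using the third-party cryptography package (assumed to be installed): data encrypted under a key becomes effectively unrecoverable once every copy of that key is destroyed. Key storage and management are simplified here.

```python
# Cryptographic erasure (crypto-shredding) sketch: destroy the key, not the media.
# Requires the third-party 'cryptography' package; key storage is simplified here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
stored_ciphertext = AESGCM(key).encrypt(nonce, b"customer records", None)

# Destroy phase: delete every copy of the key. The ciphertext left on the
# provider's drives is now effectively unrecoverable.
key = None

# Without the key, decryption is no longer possible for anyone.
```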
The use phase is when someone goes back to existing data and utilizes it in some way. Reading this question right now would actually fit into the use phase.
Store and archive are two different stages of holding data. Store is normal storage on persistent drives such as HDDs or SSDs. Archive is long-term storage, for example, holding on to your tax records for the next seven years just in case you need them. (The IRS actually says three, not seven, but consult your accountant or lawyer.)
Which of the following storage types acts like a physical hard drive connected to a VM?
A. Ephemeral
B. Volume
C. Raw
D. Long-Term
B. Volume
Explanation:
Cloud-based infrastructure can use a few different forms of data storage, including:
Ephemeral: Ephemeral storage mimics RAM on a computer. It is intended for short-term storage that will be deleted when an instance is deleted.
Long-Term: Long-term storage solutions like Amazon Glacier, Azure Archive Storage, and Google Coldline and Archive are designed for long-term data storage. Often, these provide durable, resilient storage with integrity protections.
Raw: Raw storage provides direct access to the underlying storage of the server rather than a storage service.
Volume: Volume storage behaves like a physical hard drive connected to the cloud customer's virtual machine. It can either be file storage, which formats the space like a traditional file system, or block storage, which simply provides space for the user to store anything.
Object: Object storage stores data as objects with unique identifiers associated with metadata, which can be used for data labeling.
Which of the following is MOST relevant in public cloud environments?
A. Access Controls
B. Tenant Partitioning
C. HVAC
D. Multivendor Pathway Connectivity
B. Tenant Partitioning
Explanation:
Multitenant public cloud environments run the security risk of one tenant being able to access or affect another’s data, applications, etc. Cloud providers enforce tenant partitioning using access controls and similar means, but cloud customers are responsible for protecting their data by using encryption and properly configuring CSP-provided security controls.
Access controls, multivendor pathway connectivity, and HVAC are important in any data center, regardless of who owns it.
In their Infrastructure as a Service (IaaS) cloud environment, an organization encounters a catastrophic business impact event. The event occurred as a result of an outage in the eastern U.S. region, but the Cloud Service Provider's (CSP) failover between availability zones was not triggered.
Who would be responsible for configuring the cloud-based resiliency functions?
A. Cloud service auditor
B. Cloud Service Provider (CSP)
C. Cloud service broker
D. Cloud Service Customer (CSC)
D. Cloud Service Customer (CSC)
Explanation:
The consumer will always be responsible for configuring resiliency functions such as automated data replication, failover between CSP availability zones, and network load balancing.
The CSP's responsibility is to provide the capabilities that make these solutions possible, but the consumer must configure their cloud systems to meet their own resiliency requirements.
A cloud service broker is an intermediary between the CSC and the CSP. They can be used to negotiate contracts or manage the services.
Cloud service auditors are third parties that audit the CSP on behalf of the CSC's interests.
Haile is a cloud operator who has been reviewing the Indicators of Compromise (IoC) from the company's Security Information and Event Manager (SIEM). The SIEM reviews log outputs to find these possible compromises. Where should detailed logging be in place within the cloud?
A. Only access to the hypervisor and the management plane
B. Wherever the client accesses the management plane only
C. Only specific levels of the virtualization structure
D. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane
D. Each level of the virtualization infrastructure as well as wherever the client accesses the management plane
Explanation:
Logging is imperative for a cloud environment. Role-based access should be implemented, and logging should be done at each and every level of the virtualization infrastructure as well as wherever the client accesses the management plane (such as a web portal).
The SIEM cannot analyze the logs to find possible compromise points unless logging is enabled and the logs are delivered to that central point. This is necessary because a compromise could happen anywhere within the cloud.
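As a minimal sketch of the application side of this, the example below uses Python's standard logging module to emit access events that a log collector could forward to the SIEM. The logger name, file name, and event details are hypothetical.

```python
# Minimal sketch: emit log events that a collector or agent can ship to a SIEM.
# The logger name, file name, and event details are hypothetical.
import logging

logging.basicConfig(
    filename="mgmt-plane-access.log",   # a collector/agent would forward this file to the SIEM
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

log = logging.getLogger("mgmt-plane")
log.info("portal login user=%s src_ip=%s result=%s", "rhonda", "203.0.113.7", "success")
log.warning("portal login user=%s src_ip=%s result=%s", "unknown", "198.51.100.9", "failure")
```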