Pocket Prep 8 Flashcards
A college student is looking to set up their own cloud server so that they can install a few programs and create a lab. They need a cloud option that is cost-effective and will allow them to only pay for what they need. They don’t have the funds to purchase and maintain their own hardware.
Which cloud model would suit this student’s needs the BEST?
A. Public cloud
B. Private cloud
C. Community cloud
D. Hybrid cloud
A. Public cloud
Explanation:
A public cloud would be the best option for this student because it is the least expensive and will allow them to pay only for the resources that they use. Since the student is planning to use the server as a lab environment, it’s unlikely that it will cost much money. In fact, the big public cloud providers even have a free tier of services.
Community cloud is a close second for this question: a university could set up a community cloud to be shared among colleges and universities to provide services to specific types of students. However, the question mentions nothing of the sort. You could even look at the question from a selfish perspective: the college student wants to set something up just for themselves, so community is not the right answer.
A private cloud is cost prohibitive. The server, wherever it is, is dedicated to this student, which would cost more than most university students could ever afford.
Since community and private are not suitable answers, hybrid cannot be correct either. A hybrid cloud is a combination of two or more of the other models: public, private, and community.
If a data center has:
- Fault tolerant Uninterruptible Power Supply (UPS)
- Dual, diverse power path
- 12-hour fault tolerant standby power
- Standby power with a continuous or unlimited runtime rating
- Fault tolerant standby power generation
What tier data center would you have?
A. Tier II
B. Tier IV
C. Tier III
D. Tier I
B. Tier IV
Explanation:
A Tier IV data center is fault tolerant, and several items in that list (fault tolerant UPS, fault tolerant standby power, fault tolerant standby power generation) confirm this level.
A Tier I data center has the basic capacity and some UPS devices.
A Tier II data center has redundant power and cooling capability.
A Tier III data center has multiple power paths that provide a concurrently maintainable environment. Repair work should not affect production systems.
Switch has patented a Tier V data center standard, which you can find more information about on their website. There is a link under their awards to an interesting video that gives you a great idea of what the data center tiers are all about.
Donna is working with her customer to set up the security controls they need to be able to share their content. They have case studies and suggested configurations for their products. They want to ensure that only their existing customers can access these files. What tool would you recommend?
A. Secure Shell (SSH)
B. Information Rights Management (IRM)
C. Data Rights Management (DRM)
D. Transport Layer Security (TLS)
B. Information Rights Management (IRM)
Explanation:
First, this question is not meant to trick you if you answered DRM; it is meant to point out one possible view of the terms IRM and DRM. DRM is often used for public content (Kindle, iTunes, Netflix), while IRM is more likely corporate information intended for customers in some manner. Cisco uses Locklizard as its IRM of choice for classroom courseware; (ISC)² uses another IRM tool.
IRM and DRM control how the content can be used, covering everything from how long you have access to the file to printing and screenshot capability.
TLS and SSH are not suitable answers because they would only encrypt information in transit. The question is asking about security controls related to sharing, which is a bigger topic and requires IRM or DRM. In this case it is IRM, because the sharing is corporate, not public.
Which of the following is concerned with the proper restoration of systems after a disaster or unexpected outage?
A. Information security management
B. Continuity management
C. Incident management
D. Change management
B. Continuity management
Explanation:
Continuity management, sometimes known as business continuity management, is concerned with restoring systems and devices after a disaster or unexpected outage has occurred. Business Continuity and Disaster Recovery (BCDR) plans are a part of continuity management. ITIL defines continuity management as preparation for large-scale interruptions; outside the ITIL discipline, many people would call this Disaster Recovery (DR).
Change management is the process of tracking and managing a change throughout its entire lifecycle.
Incident management is the process of responding to an adverse event.
Information security management arguably covers everything done by information security managers within a business, including both cloud and traditional data centers.
Service level management is concerned with the oversight of service level agreements (SLAs). SLAs typically are used in contracts between service providers, such as cloud service providers, and their customers, especially when the provider is a public cloud provider. If the Information Technology (IT) department is building a private cloud in their on-premises data center, what would be the equivalent term used between IT and the business units?
A. Annual Rate of Occurrence (ARO)
B. Operational Level Agreement (OLA)
C. Master Services Agreement (MSA)
D. Recovery Time Objectives (RTO)
B. Operational Level Agreement (OLA)
Explanation:
Operational Level Agreements (OLAs) are similar to SLAs, but rather than existing between a customer and an external provider, OLAs exist between two units within the same organization.
MSAs are less specific than SLAs. SLAs typically define items such as uptime or bandwidth requirements. MSAs are used to define the relationships between the two parties in the contract.
AROs define the expected number of times that an incident or disaster could happen within a given year. The RTO is the window of time allocated for the recovery team to bring backup solutions online.
Simone has been working within the Information Technology (IT) department, analyzing the golden images used when new virtual servers are started. Using a software tool to analyze a running instance in a virtual environment, they have detected two fixes that need to be applied as the result of a recently released CVE notice. It has been determined that there is a fix from the vendor that they can apply.
What would be the next action they should take?
A. Store a new golden image
B. Confirm the Common Vulnerability Score
C. Patching
D. Run a new scan
C. Patching
Explanation:
Patching is used to fix bugs found in software, apply security vulnerability fixes, introduce new software features, and more. Before patches are applied, they should be properly tested and validated. In this scenario, however, testing is not among the options, and the patch cannot be tested until it has been applied, so patching is the logical next step before the image is replaced.
Once it is patched, it is good to test it to ensure that everything is still working as it should be. Part of this could involve running a new scan. Once it is verified as good, a new golden image is stored and made available for use.
When a CVE is released, a score is given to it based on the Common Vulnerability Scoring System (CVSS). This is arguably a good thing to check, but the software tool that pointed to the CVE should also show the CVSS score.
Reference:
(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 221-223.
The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 186.
A cloud customer who has been using a Hardware Security Module (HSM) is migrating to a newer model. They want to ensure that their keys can never be recovered by anyone and are taking actions to that end. One of the steps they are taking is to overwrite the erased data with arbitrary data and zero values.
What are they doing?
A. Zeroization
B. Cryptographic erasure
C. Degaussing
D. Data hijacking
A. Zeroization
Explanation:
Zeroization is another term for overwriting. In this process, erased data is overwritten with arbitrary data and zero values as a means of data sanitization.
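To make the overwriting idea concrete, here is a minimal Python sketch of file-level zeroization: a few passes of arbitrary (random) data followed by zero values. It is an illustration only, not a vendor tool; the path and pass count are hypothetical, and on SSDs or cloud storage a file-level overwrite may not reach every physical cell (one reason cryptographic erasure exists).

```python
import os
import secrets

def zeroize_file(path: str, passes: int = 3) -> None:
    """Overwrite a file with random data, then zeros, before deleting it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # arbitrary (random) data
            f.flush()
            os.fsync(f.fileno())
        f.seek(0)
        f.write(b"\x00" * size)                 # final pass: zero values
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```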
Cryptographic erasure is when data is encrypted and then the key that was used is destroyed. This is a possible option for customers who do not have access to the physical drives on which their data resides, which is true for PaaS, SaaS, and possibly IaaS in public clouds.
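A minimal sketch of cryptographic erasure, assuming the third-party Python cryptography package is available: once every copy of the key is destroyed, the remaining ciphertext is computationally unrecoverable.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept in the HSM/KMS, never with the data
ciphertext = Fernet(key).encrypt(b"sensitive customer records")

# Only the ciphertext is stored on the drives. To "erase" the data,
# destroy every copy of the key (in practice: delete it from the HSM/KMS).
key = None
# With the key gone, the ciphertext cannot feasibly be decrypted.
```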
Degaussing is a process of disrupting the magnetic state of magnetic drives—Hard Disk Drives (HDDs). It renders the drive unusable. The cloud provider could perform this sanitization, as could the operator of a private cloud.
Data hijacking is an odd term, but it means that bad actors take control of someone's data; ransomware is an example.
Your organization would like to automate a process that involves two applications. The data that moves between the applications must be synced in real time, and one system needs to boot up before the other. What can be used to synchronize the operations of these applications?
A. Application Programming Interface (API) Gateway
B. Tokenization
C. Sandboxing
D. Orchestration
D. Orchestration
Explanation:
Orchestration is a technique for synchronizing and coordinating the operations of multiple apps that work together to complete a business activity. These are managed groups of applications whose actions are choreographed based on the rules you establish.
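As a loose illustration (the app names and health probe are hypothetical, not a specific orchestration product), the ordering rule from the question, one system booting before the other, might be expressed like this in Python:

```python
import time

def wait_until_healthy(check, timeout_s: float = 60.0, interval_s: float = 2.0) -> None:
    """Block until a dependency's health check passes, so apps start in order."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return
        time.sleep(interval_s)
    raise TimeoutError("dependency never became healthy")

app_a_started = True  # stand-in for a real probe (e.g., an HTTP health endpoint)
wait_until_healthy(lambda: app_a_started)  # app B waits for app A...
print("app A is healthy; starting app B, which syncs data in real time")
```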
Tokenization is a process that replaces a piece of sensitive data, such as a credit card number, with another unrelated value. For credit cards, the bank keeps an extra database that allows it to map the token back to the card number. This prevents card numbers from needing to be sent across the internet, for example. This is how PayPal, ApplePay, and others work.
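A toy sketch of the token-vault concept (an in-memory dictionary standing in for the bank's extra database; real vaults are secured and audited):

```python
import secrets

vault: dict[str, str] = {}  # token -> original value (the bank's lookup database)

def tokenize(card_number: str) -> str:
    token = secrets.token_urlsafe(16)  # random value, unrelated to the card number
    vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    return vault[token]  # only the vault holder can reverse the mapping

t = tokenize("4111 1111 1111 1111")
print(t)  # the merchant stores and transmits only the token
```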
An API gateway is a piece of software that enables APIs to be processed through a single entry point when they are actually all processed by different backend services. It also provides threat protection. Gateways in general can be thought of as layer 7 firewalls.
Sandboxing, or process isolation, is a tool used to contain a piece of code, an application, a virtual machine, or similar. It isolates the item so that nothing can interact across the sandbox boundary, in or out, except as designed.
Reference:
(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 78-79, 158.
The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 13.
Which of the following is NOT true of the responsibility for securing network and communication infrastructure in the cloud?
A. The cloud provider is responsible for securing the links between the customer’s and the provider’s environments.
B. The cloud provider offers tools for securing cloud environments, but the customer is responsible for properly configuring and using them.
C. The cloud provider is responsible for securing the physical infrastructure in its environment.
D. The cloud customer is responsible for securing the physical infrastructure in its data center.
A. The cloud provider is responsible for securing the links between the customer’s and the provider’s environments.
Explanation:
The responsibility for securing network and communication infrastructure is typically shared between the CSP and the cloud customer. The CSP and cloud customer are each responsible for the security of the infrastructure within their facilities, and they share responsibility for securing traffic between them (over the Internet). This is often accomplished using cryptography, with the CSP offering secure protocols and the customer using them. Also, in cloud environments, the CSP is responsible for offering the tools needed to secure an environment (encryption, logging, etc.), but the customer is responsible for configuring and using these tools properly.
Which of the following network security controls involves the principle of least privilege and individually evaluating each access request?
A. Network Security Groups
B. Traffic Inspection
C. Zero Trust Network
D. Geofencing
C. Zero Trust Network
Explanation:
Network security controls that are common in cloud environments include:
- Network Security Groups: Network security groups (NSGs) limit access to certain resources, such as firewalls or sensitive VMs or databases. This makes it more difficult for an attacker to access these resources during their attacks.
- Traffic Inspection: In the cloud, traffic monitoring can be complex since traffic is often sent directly to virtual interfaces. Many cloud environments have traffic mirroring solutions that allow an organization to see and analyze all traffic to its cloud-based resources.
- Geofencing: Geofencing limits the locations from which a resource can be accessed. This is a helpful security control in the cloud, which is accessible from anywhere.
- Zero Trust Network: Zero trust networks apply the principle of least privilege, where users, applications, systems, etc. are only granted the access and permissions that they need for their jobs. All requests for access to resources are individually evaluated, so an entity can only access those resources for which they have the proper permissions (see the sketch after this list).
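Here is the zero-trust sketch referenced above: a toy Python policy check (the roles, resources, and actions are invented for illustration) showing default deny, least privilege, and per-request evaluation:

```python
# Each role is granted only the minimum actions it needs (least privilege).
policy = {
    ("analyst", "reports"): {"read"},
    ("admin",   "reports"): {"read", "write"},
}

def authorize(role: str, resource: str, action: str) -> bool:
    """Every request is evaluated individually; anything not granted is denied."""
    return action in policy.get((role, resource), set())

assert authorize("analyst", "reports", "read")        # explicitly granted
assert not authorize("analyst", "reports", "write")   # never granted: denied
assert not authorize("guest", "reports", "read")      # unknown entity: denied
```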
Reference:
(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 126-127.
The Official (ISC)² CCSP CBK Reference, 4th Edition. Pg 37-39.
ITIL provides many different processes that can be instituted within the Information Technology (IT) department. Which management strategy is focused on preventing issues from occurring within a system or process in a proactive manner by looking for the root cause of previous bad events?
A. Release management
B. Availability management
C. Problem management
D. Incident management
C. Problem management
Explanation:
Problem management is focused on preventing potential issues from occurring within a system or process. It is usually performed after an incident that has had a big impact on the business: the root cause of the incident is sought so that the incident can be prevented from occurring again.
Incident management is reactive in nature. This process is run when something does happen that is causing problems or even damage to systems, applications, data, or even the business.
Release management is the practice that makes available for use new or changed services and features.
Availability management is the process that is followed to ensure that services deliver agreed levels of availability (e.g., 99.99% uptime or 25 CPU hours/month) to the users.
Which of the following is NOT a physical or environmental control?
A. Biometric lock
B. Intrusion Detection System (IDS)
C. Intrusion Prevention System (IPS)
D. Uninterruptible Power Supply (UPS)
C. Intrusion Prevention System (IPS)
Explanation:
An intrusion prevention system helps protect a network from malicious activity and intrusions, and therefore, is not considered a physical or environmental control.
IDSs actually do exist in physical security: a sensor on a door or window that alerts when it is opened is a type of IDS. A biometric lock is a physical control even though it involves biometrics. A UPS is a battery that provides a power source if there is a power outage, which would be considered a physical control.
Which type of security testing impersonates a normal user with no special knowledge or access to the system?
A. White-box
B. Gray-box
C. Clear-box
D. Black-box
D. Black-box
Explanation:
Software testing can be classified as one of a few different types, including:
- White-box: In white-box or clear-box testing, the tester has full access to the software and its source code and documentation. Static application security testing (SAST) is an example of this technique.
- Gray-box: The tester has partial knowledge of and access to the software. For example, they may have access to user documentation and high-level architectural information.
- Black-box: In this test, the attacker has no specialized knowledge or access. Dynamic application security testing (DAST) is an example of this form of testing.
Sachio is working for a financial trading corporation and has been working with the Incident Response Team (IRT) to do practice drills for a variety of different incidents that they have prepared for. As they move through the phases, there is a point when they should take actions to prevent further damage from this incident to the corporation.
Which phase would this be?
A. Respond phase
B. Post-incident phase
C. Prepare phase
D. Recover phase
A. Respond phase
Explanation:
The first step in the respond phase is containment. The purpose of containment is to protect the organization from further damage caused by a known incident. Disconnecting affected systems, disabling hardware, and disconnecting storage are only a few of the possible containment actions.
The prepare phase is the process of building plans to be able to respond to incidents as they happen. It is the crucial step in which response teams are determined and built and the procedural documents that will be used during an incident are written. The question says to "prevent further damage from this incident," so an incident is already underway: we are responding to it, not preparing for it. The same logic rules out the post-incident and recover phases.
The recover phase is when actions are taken to return things to a normal condition. It is possible that more controls could be added or that configurations of existing controls could be changed; however, those changes are aimed at future events, not at limiting the current one.
The post-incident phase primarily involves breaking down the steps that were just taken in a specific incident. The point is to find what needs to be improved, not to point fingers, and to get better at the response capability.
An information security manager is concerned about the security of portable devices in the organization which have been given access to corporate resources. What can this information security manager implement to manage and maintain the devices?
A. Symmetric encryption technology
B. Mobile Device Management (MDM)
C. Remote control to be able to delete files
D. Remote control to be able to disable it
B. Mobile Device Management (MDM)
Explanation:
Mobile Device Management (MDM) is the term used to describe the management and maintenance of mobile devices (such as tablets and mobile phones) that have access to corporate resources. Usually, MDM software will be installed on the devices so that the IT staff can manage the devices remotely in the case of a lost or stolen device.
MDM software usually has the following:
- Symmetric encryption technology for the drive on the mobile device
- Remote control to be able to disable or even 'brick' the device if it is lost or stolen
- Remote control to be able to delete files in the event the phone is lost or stolen
All the answer options are good, but MDM is the all-inclusive answer, which makes it the best choice.
Sade, the information security manager, has been working with the software development team to ensure that the customer’s desired functionality has been fully defined while ensuring that security is also taken into consideration. Which software development phase are they in?
A. Coding
B. Requirements
C. Design
D. Planning
B. Requirements
Explanation:
The requirements phase of the SDLC is when the desired functionality is defined. This plan will outline the specifications for the features and functionality of the software or application to be created.
The planning phase is where the idea of developing a specific piece of software is considered based on feasibility and costs.
The design phase takes the desired functionality and defines the architecture, integration points, data flows, and so on, producing the design that can then be followed in the next phase, the coding phase.
The coding phase is where the software developers write the lines of code that will become the application. There is usually functional testing that begins in this phase as well as Static Application Security Testing (SAST).
Which of the following BEST defines the Annual Rate of Occurrence (ARO)?
A. Jenna has determined that the server that could be hit by ransomware is valued at two million USD
B. Jenna has been able to determine that if the server is offline, it must be restored to a functional state within three hours
C. Jenna has been able to determine that ransomware is likely to occur once every three years
D. Jenna has determined that the cost of a ransomware event will likely be five million USD
C. Jenna has been able to determine that ransomware is likely to occur once every three years
Explanation:
ARO stands for Annualized Rate of Occurrence, which is defined by the estimated number of times a threat will successfully exploit a vulnerability in a given year. By multiplying the Single Loss Expectancy (SLE) by the ARO, you are able to determine the Annual Loss Expectancy (ALE).
ARO = Jenna has been able to determine that ransomware is likely to occur once every three years.
SLE = Jenna has determined that the cost of a ransomware event will likely be five million USD.
Asset value = Jenna has determined that the server that could be hit by ransomware is valued at two million USD.
Maximum Tolerable Downtime (MTD) = Jenna has been able to determine that if the server is offline, it must be restored to a functional state within three hours.
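Putting the question's numbers together as a quick worked check (using the values stated above):

```latex
\text{ALE} = \text{SLE} \times \text{ARO}
           = \$5{,}000{,}000 \times \tfrac{1}{3}
           \approx \$1{,}666{,}667 \text{ per year}
```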
Your organization must be able to rapidly scale resources up or down, as required, to meet future needs and from a variety of cloud geographical regions. Which cloud characteristic is required in this scenario?
A. Elasticity
B. Resource pooling
C. On-demand
D. High availability
A. Elasticity
Explanation:
Elasticity increases and decreases resources as needed; unlike scalability, this happens automatically. Resources are added or removed dynamically, based on current needs, from a variety of geographical locations.
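As a rough sketch of how elasticity decisions are automated (the target utilization and bounds here are invented for illustration):

```python
def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.6, min_n: int = 1, max_n: int = 20) -> int:
    """Toy elasticity rule: scale the instance count toward a target utilization."""
    want = round(current * cpu_utilization / target) or min_n
    return max(min_n, min(max_n, want))

print(desired_instances(4, 0.9))  # high load: scale out to 6
print(desired_instances(4, 0.3))  # low load: scale in to 2
```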
On-demand is the cloud characteristic in which you will be able to access a web page and request, configure, and build a cloud service without having to interact with the cloud provider.
High availability is a characteristic often found with firewalls and similar devices: redundant devices communicate with each other so that if one fails, the other can continue the communication without disruption to the user.
Resource pooling includes the pooling of resources within a server, such as CPU and memory, as well as the pooling of resources within a data center, for example, servers.
Which of the following organizations publishes Top 10 lists describing the most common vulnerabilities in various types of software (web apps, web APIs, etc.)?
A. NIST
B. OWASP
C. SANS
D. CSA
B. OWASP
Explanation:
Several organizations provide resources designed to teach about the most common vulnerabilities in different environments. Some examples include:
- Cloud Security Alliance Top Threats to Cloud Computing: These lists name the most common threats, such as the Egregious 11. According to the CSA, the top cloud security threats include data breaches; misconfiguration and inadequate change control; lack of cloud security architecture and strategy; and insufficient identity, credential, access, and key management.
- OWASP Top 10: The Open Web Application Security Project (OWASP) maintains multiple top 10 lists, but its web application list is the most famous and is updated every few years. The top four threats in the 2021 list were broken access control, cryptographic failures, injection, and insecure design.
- SANS CWE Top 25: SANS maintains a Common Weakness Enumeration (CWE) that describes all common security errors. Its Top 25 list highlights the most dangerous and impactful weaknesses each year. In 2021, the top four were out-of-bounds write, improper neutralization of input during web page generation (cross-site scripting), out-of-bounds read, and improper input validation.
NIST doesn’t publish regular lists of top vulnerabilities.
There are many reasons a company must work to ensure that the information that they possess is managed and handled properly. One of the key elements of the European Union’s (EU) General Data Protection Regulation (GDPR) is that data shall not be stored longer than needed.
This speaks to the requirement in information security to create which of the following?
A. Data classification policy
B. Retention periods
C. Retention policy
D. Archiving and retrieval procedures
C. Retention policy
Explanation:
Businesses need to create a retention policy as part of information security. The policy should specify how long data can be retained within the business, and it is often connected to data classification. Retention policy is the better answer because the question asks how long data should be stored, and that is exactly what the retention policy specifies.
The retention period is what the GDPR requirement points to, but the period is specified within the retention policy. The wording also does not work: it is not a matter of "creating" a retention period; it is a matter of specifying it in the policy.
Archiving and retrieval procedures should also be spelled out with details on handling data.
A large financial institution is using a Platform as a Service (PaaS) deployment on a public Cloud Service Provider's (CSP) network. The company has been carefully moving their systems from a traditional data center to the CSP over the last year. They are now going to deploy a new service that will handle sensitive data, so they need to ensure that the information is properly protected. The first protection they are setting up is encryption. A cloud administrator has been tasked with safeguarding the encryption keys in a centralized setting.
What tool can be used for this?
A. Key Management Interoperability Protocol (KMIP)
B. Key Management Service (KMS)
C. Public-Key Cryptography Standards (PKCS)
D. Tokenization
B. Key Management Service (KMS)
Explanation:
Key Management Service (KMS) is a cloud-based service that provides secure and centralized management of cryptographic keys for encrypting and decrypting sensitive data in cloud environments. It is designed to simplify the process of key management and enhance the security of data at rest or in transit.
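A common KMS pattern is envelope encryption: the KMS holds a master key centrally, and applications handle only short-lived data keys. The sketch below is conceptual, the ToyKMS class is a stand-in rather than any provider's real SDK, and it assumes the third-party Python cryptography package:

```python
from cryptography.fernet import Fernet

class ToyKMS:
    """Stand-in for a real KMS: the master key never leaves it."""
    def __init__(self) -> None:
        self._master = Fernet(Fernet.generate_key())
    def wrap(self, data_key: bytes) -> bytes:
        return self._master.encrypt(data_key)
    def unwrap(self, wrapped: bytes) -> bytes:
        return self._master.decrypt(wrapped)

kms = ToyKMS()
data_key = Fernet.generate_key()                     # per-object key, used locally
ciphertext = Fernet(data_key).encrypt(b"sensitive record")
wrapped_key = kms.wrap(data_key)                     # store ciphertext + wrapped key
plaintext = Fernet(kms.unwrap(wrapped_key)).decrypt(ciphertext)
assert plaintext == b"sensitive record"
```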
Key Management Interoperability Protocol (KMIP) is a communication protocol that enables secure and standardized management of cryptographic keys and related objects across different key management systems. It allows organizations to centralize and streamline their key management operations, ensuring interoperability and ease of integration between different key management solutions. This is the second-best answer behind KMS: the focus of the question is centralizing the storage of the keys, which is the KMS itself. KMIP then works with the KMS.
PKCS stands for Public-Key Cryptography Standards, a set of standards developed by RSA Laboratories to facilitate the secure use and implementation of public-key cryptography. The PKCS documents cover various aspects of public-key cryptography, including key management, digital signatures, encryption, and certificate handling.
Tokenization is a data protection technique that replaces sensitive information, such as credit card numbers or Personally Identifiable Information (PII) with a unique identifier called a token. The original data is securely stored and replaced with a randomly generated token that has no meaningful correlation to the original data.
Ada has been tasked with implementing a solution for her organization that will assist with the Incident Response (IR) process. She is looking for a tool that will help analyze all the logs that are coming in from all the different virtual network devices, security products, and end systems.
Which of the following is a solution that Ada could implement?
A. Security Information Event Manager (SIEM)
B. Data Loss Prevention (DLP)
C. Intrusion Detection System (IDS)
D. Intrusion Prevention System (IPS)
A. Security Information Event Manager (SIEM)
Explanation:
The product that is useful for Incident Response, as well as general management of a network, is a SIEM. A SIEM collects logs from all the products in the network, cloud or traditional. These products include routers, switches, servers, virtual servers, firewalls, IDS, IPS, DLP, and more, whether virtual or physical. The SIEM then correlates all these events and looks for Indications of Compromise (IoC). These IoCs are then analyzed, probably by a Security Operations Center (SOC). If a compromise is found, the IR process can be initiated.
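To make "correlation" concrete, here is a toy Python sketch (the log format and threshold are invented) of the kind of rule a SIEM evaluates across sources:

```python
from collections import Counter

events = [  # normalized events collected from different products
    {"source": "firewall", "type": "failed_login", "ip": "203.0.113.7"},
    {"source": "vpn",      "type": "failed_login", "ip": "203.0.113.7"},
    {"source": "server",   "type": "failed_login", "ip": "203.0.113.7"},
]

failures = Counter(e["ip"] for e in events if e["type"] == "failed_login")
for ip, count in failures.items():
    if count >= 3:  # threshold chosen arbitrarily for the sketch
        print(f"IoC: repeated failed logins from {ip} across multiple sources")
```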
DLP is a tool that was originally developed to watch the network for outbound traffic that would be considered a leak or loss of data. It can do more now, such as watching emails for phishing attacks.
IDS and IPS products watch for intrusive traffic on either a network or end system.
A cloud provider is offering an online data storage solution in a Software as a Service (SaaS) environment. As a customer, there is a Graphical User Interface (GUI) that allows the user to store data and easily access it when needed. The Cloud Service Provider (CSP) actually stores the data within a database that they maintain.
What type of storage is described here?
A. Block storage
B. Object storage
C. Ephemeral storage
D. Information storage and management
D. Information storage and management
Explanation:
Information storage and management is the classic form of storing data within databases that the application uses and maintains. This storage method is used in Software as a Service (SaaS) offerings.
Ephemeral storage is temporary storage used by virtual machines. When a Windows server runs in a virtual environment, it must believe that it has local storage, because that is how the server software was written; until there is a major rewrite, we must give it temporary storage to use while it is running. Anything stored only in ephemeral storage is lost if the virtual machine shuts down before the data is moved to persistent storage.
Block storage is a term used in Infrastructure as a Service (IaaS). Blocks are assigned in set amounts of space at a time (e.g., 1 terabyte of space that could be increased by 1 terabyte at a time). A block is like a drive on a traditional computer: fixed in size, although in the cloud it can be expanded. Inside blocks you find volumes; a volume is like a file folder.
Inside volumes you find objects. An object is a file. The file could be a document, a movie, a sound file, etc.
Many CSPs that want to work with US government contractors have cloud services that are audited against which of the following?
A. G-Cloud
B. ISO/IEC 27017
C. PCI DSS
D. FedRAMP
D. FedRAMP
Explanation:
Cloud service providers may have their environments verified against certain standards, including:
ISO/IEC 27017 and 27018: The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) publishes various standards, including those describing security best practices. ISO 27017 and ISO 27018 describe how the information security management systems and related security controls described in ISO 27001 and 27002 should be implemented in cloud environments and how PII should be protected in the cloud. PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) was developed by major credit card brands to protect the personal data of payment card users. This includes securing and maintaining compliance with the underlying infrastructure when using cloud environments. Government Standards: FedRAMP-compliant offerings and UK G-Cloud are cloud services designed to meet the requirements of the US and UK governments for computing resources.