Pocket Prep 19 Flashcards
Dawson is an information security manager for a Fortune 500 company. He and his team have been working on revising their data governance strategy and the resulting policy. They have decided that they will need to deploy more Data Loss Prevention systems to inspect data on their file systems. They have been experiencing small breaches of data, and they are looking for the source.
What phase of the cloud data lifecycle are they in?
A. Store
B. Use
C. Archive
D. Share
A. Store
Explanation:
Since the data is sitting on a file server, the data is in storage. Archival is a type of storage, but there is nothing in the question to lead us to archive. So, store fits the environment of the question better.
They might be losing control of data when it is shared, but using DLP to inspect the file systems addresses data at rest, not data in transit. Traditionally, DLP systems only helped us when data was in transit, but that is no longer the case.
The data breach may be caused by some action a user is taking when they are in the use phase, but again, the DLP system is inspecting the file system, which is data at rest.
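The data-at-rest scanning described here can be sketched as a simple pattern search over a file system. This is only a toy illustration: the regular expressions and file handling below are invented assumptions, and real DLP products ship far richer detectors and classifiers.

```python
import re
from pathlib import Path

# Illustrative detection patterns only; commercial DLP rules are far more sophisticated.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Walk a directory tree and flag files containing sensitive-looking patterns."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; a real product would log this
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings
```

A scan like this over a file server is inspecting data at rest, which is exactly why the scenario sits in the store phase rather than share or use.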
Which of the following common contractual terms is MOST related to customers’ efforts to avoid vendor lock-in?
A. Compliance
B. Litigation
C. Right to Audit
D. Access to Cloud/Data
D. Access to Cloud/Data
Explanation:
A contract between a customer and a vendor can have various terms. Some of the most common include:
Right to Audit: CSPs rarely allow customers to perform their own audits, but contracts commonly include acceptance of a third-party audit in the form of a SOC 2 or ISO 27001 certification.
Metrics: The contract may define metrics used to measure the service provided and assess compliance with service level agreements (SLAs).
Definitions: Contracts will define various relevant terms (security, privacy, breach notification requirements, etc.) to ensure a common understanding between the two parties.
Termination: The contract will define the terms by which it may be ended, including failure to provide service, failure to pay, a set duration, or with a certain amount of notice.
Litigation: Contracts may include litigation terms such as requiring arbitration rather than a trial in court.
Assurance: Assurance requirements set expectations for both parties. For example, the provider may be required to provide an annual SOC 2 audit report to demonstrate the effectiveness of its controls.
Compliance: Cloud providers will need to have controls in place and undergo audits to ensure that their systems meet the compliance requirements of regulations and standards that apply to their customers.
Access to Cloud/Data: Contracts may ensure access to services and data to protect a customer against vendor lock-in.
A corporation has systems that apply the principles of data science to uncover useful information from the data that enables them to make better business decisions. It is using which of the following?
A. Artificial intelligence
B. Quantum computing
C. Blockchain
D. Machine learning
D. Machine learning
Explanation:
Machine Learning (ML) applies the logic of data science to uncover information hidden within the massive quantity of data that corporations collect today. ML can be used in a few ways: to analyze data to confirm a hypothesis, or to uncover information without a preconceived hypothesis.
ML is a subset of Artificial Intelligence (AI). AI seeks to mimic human thought processes. Arguably, we have only achieved narrow AI today. AI is not the correct answer because there is nothing in the question about mimicking human brain capability.
Blockchain is a technology that creates an unalterable record of transactions.
Quantum computing uses a fundamentally different physical structure for a computer. Instead of processing electrical bits like our current computers, quantum computers process multidimensional quantum bits, or qubits.
An information security manager has been working with the Security Operations Center (SOC) to prepare plans and put processes in place that will allow the impact of something like ransomware to be minimized if/when it does occur. What type of management process is this engineer involved in?
A. Incident management
B. Problem management
C. Change management
D. Deployment management
A. Incident management
Explanation:
Any event that causes disruptions within an organization is known as an incident; this includes security events as well. Processes and procedures put in place to limit the effects of these incidents are known as incident management.
Problem management includes the processes that allow an organization to get to the root cause of incidents that continue to happen.
Deployment management involves the processes of adding products to the production environment.
Change management is about managing alterations that need to be made to the production environment.
Each of these processes is defined within ITIL and within IT Service Management (ITSM) as standardized in ISO/IEC 20000.
Nicole has been evaluating a potential Cloud Service Provider (CSP). She has been looking at the requirements needed in the contract, what kind of ongoing monitoring they will need to put in place, what audits and certifications the CSP has been through, and their exit strategy. What has she been doing?
A. Vendor risk management
B. Dynamic software management
C. Due process
D. Verified secure software
A. Vendor risk management
Explanation:
Vendor risk management involves all the items listed in the question. In effect, Nicole is asking: how risky is it to use this CSP, and what can be done to satisfy the cloud customer's needs?
Due process is a legal principle that ensures fair treatment and protection of individual rights in legal proceedings, such as the right to a fair trial. It refers to the set of procedures and safeguards that must be followed by the government or any entity with authority when depriving a person of life, liberty, or property.
Dynamic software management, also known as dynamic application management, refers to the process of managing and controlling software applications in a dynamic and flexible manner. It involves the deployment, configuration, monitoring, and updating of software applications to ensure optimal performance, security, and efficiency.
Verified secure software refers to software that has undergone rigorous testing and verification processes to ensure its security and reliability. It involves the use of formal verification methods, code analysis, and testing techniques to detect and eliminate vulnerabilities and weaknesses in the software.
An organization has implemented a new client-server application. The security and compliance officer has been tasked with the responsibility of ensuring that the foundations for all security actions are covered in documentation by setting purpose, scope, roles, and responsibilities. What control is being described?
A. Transport Layer Security (TLS)
B. Policy and baselines
C. Guidelines
D. Procedures and guidelines
B. Policy and baselines
Explanation:
The question is asking about documentation. Policies and baselines are a critical start to the controls needed to ensure systems are protected properly. By defining the purpose, scope, roles, and responsibilities of all security actions, policies and baselines establish a codified framework for all security actions.
Guidelines are documentation, but they only serve as suggestions. It is not part of setting purpose, scope, etc.
Procedures and guidelines should be defined, but this does not get us to purpose and scope. Procedures are step-by-step instructions on how to do something.
TLS should be part of what is defined within the baselines to fulfill the policy requirement. But again, the question is asking about documentation. TLS is the technology.
When Alastair decided to use a Software as a Service (SaaS) provider, he inquired into how the data would be handled. He was told that pieces of his file are stored on different servers throughout a data center (DC). What technology is the cloud provider describing?
A. Data dispersion
B. Data mining
C. Neural networks
D. Bit splitting
A. Data dispersion
Explanation:
Data dispersion takes a file and breaks it into many pieces, sometimes called fragments, shards, or chunks. These pieces are then stored on different servers/storage nodes.
Bit splitting is a different technology that also spreads data throughout a data center, but it works at the bit level rather than with the larger pieces, fragments, shards, or chunks described in the question.
Neural networks are found within machine learning. A neural network models the stages or decision points that a computer emulating the human brain works through to make decisions.
Data mining is looking for information within databases or other storage mechanisms.
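The fragmentation described above can be sketched in a few lines. The chunk size, node names, and round-robin placement below are illustrative assumptions; real dispersion schemes typically add erasure coding so the data survives the loss of some nodes.

```python
def disperse(data: bytes, nodes: list[str], chunk_size: int = 4) -> dict[str, list[tuple[int, bytes]]]:
    """Break data into chunks and assign each (with its index) to a storage node round-robin."""
    placement: dict[str, list[tuple[int, bytes]]] = {n: [] for n in nodes}
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for idx, chunk in enumerate(chunks):
        node = nodes[idx % len(nodes)]
        placement[node].append((idx, chunk))
    return placement

def reassemble(placement: dict[str, list[tuple[int, bytes]]]) -> bytes:
    """Collect chunks from all nodes and restore the original order by index."""
    all_chunks = sorted(c for chunks in placement.values() for c in chunks)
    return b"".join(chunk for _, chunk in all_chunks)
```

Note that no single node ever holds the whole file, which is the property the SaaS provider is describing to Alastair.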
Which of the following is a seven-step threat model that views things from the attacker’s perspective?
A. ATASM
B. PASTA
C. DREAD
D. STRIDE
B. PASTA
Explanation:
Several different threat models can be used in the cloud. Common examples include:
STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization’s attack surfaces and potential threats and how these two would intersect.
PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that tries to look at infrastructure and applications from the viewpoint of an attacker.
Your company is looking for a way to ensure that their most critical servers are online when needed. They are exploring the options that their Platform as a Service (PaaS) cloud provider can offer them. The one that they are most interested in has the highest level of availability possible. After a cost-benefit analysis based on their threat assessment, they think that this will be the best option. The cloud provider describes the option as a grouping of resources with a coordinating software agent that facilitates communication, resource sharing, and routing of tasks.
What term matches this option?
A. Storage controller
B. Server redundancy
C. Security group
D. Server cluster
D. Server cluster
Explanation:
Server clusters are a collection of resources linked together by a software agent that enables communication, resource sharing, and task routing. Server clusters are considered active-active since they include at least two servers (and any other needed resources) that are both active at the same time.
Server redundancy is usually considered active-passive. Only one server is active at a time. The second waits for a failure to occur; then, it will take over.
Storage controllers are used for storage area networks. It is possible that the servers in the question are storage servers, but more likely they contain the applications that the users and/or the customers require. Therefore, server clustering is the correct answer.
Security groups are effectively virtualized local area networks protected by a firewall.
Containerization is an example of which of the following?
A. Serverless
B. Microservices
C. Application virtualization
D. Sandboxing
C. Application virtualization
Explanation:
Application virtualization creates a virtual interface between an application and the underlying operating system, making it possible to run the same app in various environments. One way to accomplish this is containerization, which combines an application and all of its dependencies into a container that can be run on an OS running the containerization software (Docker, etc.). Microservices and containerized applications commonly require orchestration solutions such as Kubernetes to manage resources and ensure that updates are properly applied.
Sandboxing is when applications are run in an isolated environment, often without access to the Internet or other external systems. Sandboxing can be used for testing application code without placing the rest of the environment at risk or evaluating whether a piece of software contains malicious functionality.
Serverless applications are hosted in a Platform as a Service (PaaS) cloud environment, where management of the underlying servers and infrastructure is the responsibility of the cloud provider, not the cloud customer.
A cloud architect is designing a Disaster Recovery (DR) solution for the bank that they work at. For their most critical server, they have determined that it can only be offline at any point in time for no more than 10 minutes, and they cannot lose more than 2 seconds worth of data.
When choosing if they should fail within their current cloud provider to another region or to another cloud provider, they need to base that decision mainly on which of the following?
A. The Recovery Point Objective (RPO)
B. The Maximum Tolerable Downtime (MTD)
C. The Recovery Service Level (RSL)
D. The Recovery Time Objective (RTO)
B. The Maximum Tolerable Downtime (MTD)
Explanation:
The MTD is the maximum amount of time that a server can be offline. There are several considerations when choosing between the two options, such as: 1) Will it be possible to fail over to another region, or will that region also be offline? 2) How long will it take to fail over to the other provider? There are other considerations for choosing the best option, such as a cost/benefit analysis, but that is not an option within this question.
The RPO is how much data that they can lose. In this question, it is two seconds. Since the question is not asking which data backup service they have to choose from, that is not the right answer.
RSL is the percentage of functionality that must be in the DR alternative. That would be a consideration, but the question does not go far enough to indicate that we are talking about the RSL.
RTO is the time allotted for administrators to do the work of switching services to the other region or the other provider; it must be shorter than the MTD. But this is not part of what is discussed in the question.
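The numbers in the scenario (10 minutes of allowable downtime, 2 seconds of allowable data loss) can be checked against candidate failover options with simple arithmetic. The recovery times and replication lags below are made-up figures purely for illustration:

```python
MTD_SECONDS = 10 * 60   # maximum tolerable downtime from the scenario
RPO_SECONDS = 2         # maximum tolerable data loss from the scenario

# Hypothetical failover options: (estimated recovery time, replication lag), in seconds.
options = {
    "same-provider, other region": (180, 1),
    "different cloud provider": (900, 30),
}

def meets_objectives(recovery_seconds: int, lag_seconds: int) -> bool:
    """An option qualifies only if recovery fits within the MTD and lag within the RPO."""
    return recovery_seconds <= MTD_SECONDS and lag_seconds <= RPO_SECONDS

viable = {name for name, (rt, lag) in options.items() if meets_objectives(rt, lag)}
```

With these assumed figures, only the same-provider region failover satisfies both constraints, which is the kind of comparison the architect must make.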
Which of the following blockchain types requires permission to join but can be open and utilized by a group of different organizations working together?
A. Private
B. Permissioned
C. Public
D. Consortium
D. Consortium
Explanation:
Consortium blockchains are a hybrid of public and private blockchains. They are operated by a consortium or a group of organizations that have a shared interest in a particular industry or use case. Consortium blockchains provide a controlled and permissioned environment while still allowing multiple entities to participate in the consensus and decision-making process.
Public blockchains, such as Bitcoin and Ethereum, are open to anyone and allow anyone to participate in the network, verify transactions, and create new blocks. They are decentralized and provide a high level of transparency and security. Public blockchains use consensus mechanisms, such as Proof of Work (PoW) or Proof of Stake (PoS), to validate transactions and secure the network.
Private blockchains are restricted to a specific group of participants who are granted access and permission to the network. They are typically used within organizations or consortia where participants trust each other and require more control over the network. Private blockchains offer higher transaction speeds and privacy but sacrifice decentralization compared to public blockchains.
Permissioned blockchains require users to have permission to join and participate in the network. They are typically used in enterprise settings where access control and governance are critical. Permissioned blockchains offer faster transaction speeds and are more scalable than public blockchains, but they sacrifice some decentralization and censorship resistance.
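The Proof of Work consensus mechanism mentioned for public chains can be illustrated with a toy hash puzzle: find a nonce so the block's hash starts with a required number of zero digits. The difficulty here is tiny and purely for demonstration; real networks use vastly larger targets.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 3) -> tuple[int, str]:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # work is hard to find but trivial to verify
        nonce += 1
```

Any participant can verify the result with a single hash, which is what lets an open, permissionless network agree on valid blocks.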
Olivia, an information security manager, is working on the Disaster Recovery (DR) team for a medium-sized government contractor. They provide a service for the government that has a requirement of being highly available. Which cloud-based strategy can provide the fastest Recovery Time Objective (RTO) for a critical application in the event of a disaster?
A. Creating regular backups of the application and data to an on-premises storage system
B. Replicating the application and data to multiple geographically dispersed regions within a cloud provider’s infrastructure
C. Implementing a hybrid cloud model with a secondary data center for failover and recovery
D. Leveraging a cloud provider’s infrastructure for real-time replication and failover of the application and data
D. Leveraging a cloud provider’s infrastructure for real-time replication and failover of the application and data
Explanation:
Leveraging the cloud provider’s infrastructure with real-time replication allows for immediate failover in case of a disaster. With real-time replication, the application and data are continuously synchronized between primary and secondary environments, ensuring minimal data loss and the ability to quickly switch to the secondary environment for seamless operation.
Regular backups are always a good idea, and an even better idea is to test those backups. However, the question is about the speed of the recovery work. If the question were about the Recovery Point Objective (RPO), then the data backup strategy would be critical to look at.
A secondary data center is an expensive option, especially when we are trying to leverage the cloud.
Replicating the application and data to multiple geographically dispersed regions is the next best answer. However, the question does not give us specifics that drive us to that answer. So, the more generic “leveraging a cloud provider’s infrastructure” is a better answer.
Which cloud service role negotiates relationships between cloud customers and cloud providers?
A. Cloud service partner
B. Cloud service broker
C. Cloud auditor
D. Cloud service user
B. Cloud service broker
Explanation:
The cloud service broker is responsible for negotiating relationships between the customer and the provider. They would be considered independent of both.
Cloud service partners are defined in ISO/IEC 17788 as a party that is engaged in support of, or auxiliary to, either the cloud service customer or the cloud service provider.
The cloud auditor is defined in ISO/IEC 17788 as a partner that audits the provision and use of cloud services.
The cloud auditors and cloud service broker would be considered cloud service partners. The partner is a more generic role.
The cloud service user is defined in ISO/IEC 17788 as a natural person, or an entity acting on their behalf, associated with a cloud service customer that uses cloud services.
When enforcing OS baselines, which of the following is LEAST likely to be covered?
A. Data retention
B. Approved protocols
C. Approved access methods
D. Compliance requirements
A. Data retention
Explanation:
OS baselines establish and enforce known good states of system configuration, focusing on least privilege and other OS and application security best practices. Each configuration option should map to a risk mitigation (a security control objective). Security objectives often address compliance requirements, and meeting an acceptable level of risk could require disabling certain protocols within the OS, such as telnet or ping.
Data retention and other data-specific requirements are not commonly part of an OS baseline.
Which of the following tools attempts to identify vulnerabilities with NO knowledge of an application’s internals?
A. SCA
B. SAST
C. DAST
D. IAST
C. DAST
Explanation:
Some common tools for application security testing include:
Static Application Security Testing (SAST): SAST tools inspect the source code of an application for vulnerable code patterns. It can be performed early in the software development lifecycle but can’t catch some vulnerabilities, such as those visible only at runtime.
Dynamic Application Security Testing (DAST): DAST bombards a running application with anomalous inputs or attempted exploits for known vulnerabilities. It has no knowledge of the application’s internals, so it can miss vulnerabilities. However, it is capable of detecting runtime vulnerabilities and configuration errors (unlike SAST).
Interactive Application Security Testing (IAST): IAST places an agent inside an application and monitors its internal state while it is running. This enables it to identify unknown vulnerabilities based on their effects on the application.
Software Composition Analysis (SCA): SCA is used to identify the third-party dependencies included in an application and may generate a software bill of materials (SBOM). This enables the developer to identify vulnerabilities that exist in this third-party code.
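DAST's black-box approach can be sketched as feeding anomalous inputs to a running component and judging only its external behavior. The target function and fuzz inputs below are invented stand-ins, not any real tool's payload list:

```python
# A deliberately fragile function standing in for a running application endpoint.
def parse_age(value: str) -> int:
    return int(value)  # raises on non-numeric input: a "runtime" defect

# Anomalous inputs a DAST-style tool might send, with no knowledge of the internals.
fuzz_inputs = ["42", "", "-1e99", "'; DROP TABLE users;--", "\x00", "9" * 100]

def black_box_fuzz(target, inputs):
    """Record which inputs make the target raise, observing only from the outside."""
    failures = []
    for value in inputs:
        try:
            target(value)
        except Exception as exc:
            failures.append((value, type(exc).__name__))
    return failures
```

Because the tester never reads the target's source, this is dynamic (black-box) testing; a SAST tool would instead inspect the code itself.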
A cloud security architect and administrator are working together to determine the best configuration for their virtual machines in an Infrastructure as a Service (IaaS) environment. They are looking for a technology that would allow their Virtual Machines (VM) to be dynamically managed and moved as necessary across a cluster of physical servers.
What would you recommend?
A. Transport Layer Security (TLS)
B. Dynamic Optimization (DO)
C. Software Defined Networking (SDN)
D. Distributed Resource Scheduling (DRS)
D. Distributed Resource Scheduling (DRS)
Explanation:
DRS and DO are two similar yet distinct technologies. DRS allows for automatic load balancing of VMs across a cluster of physical servers. DO allows for the automatic adjustment of Virtual Machine (VM) resources, such as CPU, memory, and storage, based on changing workload demands.
SDN optimizes the way that routers and switches function. It adds a centralized management server called a controller to manage flow path decisions, among a few other capabilities.
TLS is a networking protocol that establishes secure (encrypted) sessions, most commonly for web traffic but also for other communications.
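As an aside, Python's standard-library ssl module shows what a hardened TLS client configuration looks like. This sketch only builds and inspects a client context; it does not open a real connection:

```python
import ssl

# Build a client-side TLS context with the library's recommended secure defaults.
context = ssl.create_default_context()

# Defaults worth confirming when hardening a baseline:
assert context.verify_mode == ssl.CERT_REQUIRED  # server certificates are validated
assert context.check_hostname is True            # hostnames must match the certificate

# Explicitly refuse legacy protocol versions.
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

Wrapping a socket with `context.wrap_socket(sock, server_hostname="example.com")` would then perform the actual encrypted handshake.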
Which of the following focuses on personally identifiable information (PII) as it pertains to financial institutions?
A. Gramm-Leach-Bliley Act (GLBA)
B. General Data Protection Regulation (GDPR)
C. Sarbanes-Oxley (SOX)
D. Health Insurance Portability and Accountability Act (HIPAA)
A. Gramm-Leach-Bliley Act (GLBA)
Explanation:
The Gramm-Leach-Bliley Act is a U.S. act officially named the Financial Modernization Act of 1999. It focuses on PII as it pertains to financial institutions, such as banks.
HIPAA is a U.S. regulation concerned with the privacy of protected health information (PHI) held by healthcare facilities and related entities.
GDPR is an EU-specific regulation that encompasses organizations across all industries.
SOX is a U.S. regulation about protecting financial data.
Which essential characteristic of the cloud says that an organization only pays for what it uses rather than maintaining dedicated servers, operating systems, virtual machines, and so on?
A. On-demand self-service
B. Measured service
C. Multi-tenancy
D. Broad network access
B. Measured service
Explanation:
Measured service means that Cloud Service Providers (CSP) bill for resources consumed. With a measured service, everyone pays for the resources they are using.
On-demand self-service means that the user/customer/tenant can go to a web portal, select their service, configure it, and get it up and running without interaction with the CSP.
Broad network access means that as long as the user/customer/tenant has access to the network (the “cloud” is on), they will be able to use that service using standard mechanisms.
Multi-tenancy is a characteristic that exists with all cloud deployment models (public, private, and community). It means that there are multiple users/customers/tenants using the same physical server. The hypervisor has the responsibility of isolating them from each other. In a private cloud, the different users or tenants would be different business units or different projects. A good read is the free ISO standard 17788. Pay particular attention to the definition of multi-tenancy.
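Measured-service billing comes down to pay-per-use arithmetic: consumption in each metered dimension times a rate. The rates and usage figures below are invented for illustration only; real CSP pricing varies widely.

```python
# Hypothetical metered rates (per VM-hour, per GB-month stored, per GB egress).
RATES = {"vm_hours": 0.05, "storage_gb_months": 0.02, "egress_gb": 0.09}

def monthly_bill(usage: dict[str, float]) -> float:
    """Bill only for measured consumption: sum of usage * rate for each dimension."""
    return round(sum(RATES[key] * amount for key, amount in usage.items()), 2)
```

A tenant who shuts down its VMs simply stops accruing the vm_hours line item, which is the contrast with maintaining dedicated servers that the question draws.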
Controlling your corporation’s intellectual property (IP) is an essential element of information security. Your organization is considering using a data rights management (DRM) solution that provides persistent protection. One of the biggest concerns is that once the IP is in the customers’ hands, it could be stolen and used inappropriately.
Which characteristic of DRM would be of most interest to the corporation?
A. Permissions can be modified after a document has been shared
B. The illicit or unauthorized copying of data is prohibited
C. Dates and time-limitations can be applied
D. Data is secure no matter where it is stored
D. Data is secure no matter where it is stored
Explanation:
There are many options that DRM tools can provide. This includes having dates and time-limitations applied, the ability to change permissions once the document has been shared, and prohibiting the illicit or unauthorized copying of data.
So, what you have here is a question that matches how (ISC)2 does “all of the above” answers. “Data is secure no matter where it is stored” is a summary of the other three options.