Pocket Prep 17 Flashcards
A cloud administrator needs to ensure that data has been completely removed from cloud servers after data migration. The company has an Infrastructure as a Service (IaaS) NoSQL (Not Only SQL) database server on a public cloud provider. Which data sanitization technique can be used successfully in this cloud environment?
A. A contract that has a clause that the Cloud Service Provider (CSP) physically degausses all drives that held customer data combined with the customer performing an overwrite of their data
B. A contract that has a clause that the Cloud Service Provider (CSP) physically destroys all drives that held customer data combined with the customer performing an overwrite of their data
C. A contract that has a clause that the Cloud Service Provider (CSP) physically shreds all drives that held customer data combined with cryptographic erasure of the database encryption key
D. A contract that has a clause that the Cloud Service Provider (CSP) performs 11 overwrites on all drives that held customer data combined with proper erasure of the database encryption key
C. A contract that has a clause that the Cloud Service Provider (CSP) physically shreds all drives that held customer data combined with cryptographic erasure of the database encryption key
Explanation:
In a cloud environment, where the customer cannot access or control the physical hardware, sanitization methods such as incineration, destruction, and degaussing are not an option. As a result, it is critical to ensure that the CSP handles drives securely. This should be discussed and addressed through the use of the contract.
If given a choice, physical destruction, preferably shredding, is better than 11 overwrites of the data, and combining the two is acceptable.
Degaussing can only be done on magnetic Hard Disk Drives (HDD). There is no guarantee that the drives the customer’s data is on would be HDD versus Solid State Drives (SSD). So shredding is the better choice.
Even with an IaaS implementation, it is not possible for the customer to perform a true overwrite of their data; the way the cloud stores data makes that impossible. If the data appears overwritten from the customer's view, the original data was deleted and the new data was written elsewhere, not to the same sectors on the drives. A deletion also only removes pointers to the data; it does not actually remove the data from the drives.
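To make cryptographic erasure concrete, here is a minimal Python sketch (using the third-party cryptography package; the data and key handling are purely illustrative). Once every copy of the key is destroyed, the ciphertext left behind on the provider's drives is computationally unrecoverable, which is why this technique works even when the customer cannot touch the physical media.

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

# Encrypt the record while the key exists.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"customer PII stored in the cloud")

# "Cryptographic erasure": destroy every copy of the key.
del key

# Without the original key, the ciphertext on the provider's drives
# cannot be recovered; any decryption attempt with another key fails.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("Data is effectively destroyed: no key, no plaintext.")
```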
Taking out an insurance policy is an example of which of the following risk treatment strategies?
A. Transference
B. Avoidance
C. Mitigation
D. Acceptance
A. Transference
Explanation:
Risk treatment refers to the ways that an organization manages potential risks. There are a few different risk treatment strategies, including:
Avoidance: The organization chooses not to engage in the risky activity. This creates potential opportunity costs for the organization.
Mitigation: The organization puts controls in place that reduce or eliminate the likelihood or impact of the risk. Any risk that is left over after the security controls are in place is called residual risk.
Transference: The organization transfers the risk to a third party. Insurance is a prime example of risk transference.
Acceptance: The organization takes no action to manage the risk. Risk acceptance depends on the organization's risk appetite, or the amount of risk that it is willing to accept.
Which of the following data security techniques is commonly used to ensure data integrity?
A. Encryption
B. Tokenization
C. Masking
D. Hashing
D. Hashing
Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:
Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 is the US government standard that defines approved hash functions.
Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don't require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
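As a quick illustration of hashing for integrity, here is a minimal Python sketch using the standard library (the file name is hypothetical and assumed to exist): because the same input always produces the same digest, a changed digest means the file changed.

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest now; recompute later and compare.
baseline = file_digest("payroll.csv")  # hypothetical file
# ... time passes, the file may have been tampered with ...
if file_digest("payroll.csv") != baseline:
    print("Integrity check failed: the file has changed.")
```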
Which of the following operational controls and standards defines how the organization will respond to a business-disrupting event?
A. Information Security Management
B. Availability Management
C. Service Level Management
D. Continuity Management
D. Continuity Management
Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization's security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and the potential for improvement.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manage the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT's ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users could use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
Jonas is working to design their new web application’s encryption requirements. The current decision he and his team are making is which encryption protocol to use. Which protocol would you recommend?
A. Transport Layer Security (TLS)
B. Secure Shell (SSH)
C. Advanced Encryption Standard (AES)
D. Internet Protocol Security (IPSec)
A. Transport Layer Security (TLS)
Explanation:
Transport Layer Security (TLS) is a protocol that was designed for web applications. It is most commonly used to encrypt Hypertext Transfer Protocol (HTTP) traffic.
Secure Shell (SSH) is most commonly used to encrypt administrator and operator traffic as they configure and manage network-attached devices.
AES is an encryption algorithm that is commonly used within TLS, SSH, and IPSec. The key word that makes this the wrong answer is algorithm; the question asks for a protocol.
IPSec is commonly used to connect sites to each other, for example, an on-premises router to a router in the cloud.
TLS, SSH, and IPSec can all be used in other places, but the only clue in the question is "web application."
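A minimal Python sketch of TLS in action, using only the standard library (example.com stands in for any web server): the TLS protocol is negotiated first, and an algorithm such as AES is then used inside it as the cipher, which is exactly the protocol-versus-algorithm distinction the question tests.

```python
import socket
import ssl

# The default context verifies the server certificate and hostname
# and negotiates the strongest TLS version both sides support.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Protocol:", tls.version())   # e.g. 'TLSv1.3'
        print("Cipher:", tls.cipher())      # typically an AES-based suite
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```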
Dion is working with the operation team to deploy security tools within the cloud. They are looking for something that could detect, identify, isolate, and analyze an attack by distracting them. What would you recommend?
A. Honeypot
B. Network Security Group (NSG)
C. Firewall
D. Intrusion Detection System (IDS)
A. Honeypot
Explanation:
A honeypot consists of a computer, data, or a network site that appears to be part of a network but is actually isolated and monitored and that seems to contain information or a resource of value to attackers.
What makes honeypot a better answer than IDS is the final part of the question: "by distracting them." An IDS could detect and identify the attack, but the bad actor would not know it was there, so it would not distract them. An IDS is a tool that only monitors traffic.
A firewall might distract the bad actor but not in the same sense as the question indicates. The bad actor might take some time to explore the firewall, but a firewall is a real device. It is not advisable to add a firewall with the intention of distracting the bad actor, unless it was part of a honeypot.
An NSG is effectively a firewalled group in the cloud. So the statement above about firewalls applies the same to the NSG.
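For illustration only, here is a toy low-interaction honeypot in Python (the port and banner are made up; a real honeypot would run on an isolated, closely monitored system): nothing legitimate ever connects to it, so every connection is a suspicious event worth logging and analyzing.

```python
import socket
from datetime import datetime, timezone

DECOY_PORT = 2222  # pretends to be a carelessly exposed SSH service

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    while True:
        conn, (ip, port) = srv.accept()
        with conn:
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} probe from {ip}:{port}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # fake banner keeps the attacker engaged
            conn.recv(1024)  # capture whatever the attacker sends first
```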
Which of the following data and media sanitization methods is MOST effective in the cloud?
A. Physical destruction
B. Cryptographic erasure
C. Overwriting
D. Degaussing
B. Cryptographic erasure
Explanation:
When disposing of potentially sensitive data, organizations can use a few different data and media sanitization techniques, including:
Overwriting: Overwriting involves writing random data or all 0s or all 1s over sensitive data. This is less effective in the cloud because the customer cannot guarantee access to the physical storage locations on the underlying server.
Cryptographic Erasure: Cryptographic erasure involves destroying the encryption keys used to protect sensitive data. This can easily be accomplished in the cloud by deleting keys from the key management system (KMS).
Physical destruction of media and degaussing are not options in the cloud because the customer lacks access to and control over the physical media used to store data.
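In AWS, for example, cryptographic erasure comes down to destroying the key inside the KMS. A minimal boto3 sketch, assuming AWS credentials are already configured and using a placeholder key ID:

```python
# pip install boto3
import boto3

kms = boto3.client("kms")

# Scheduling deletion of the key that encrypted the data set is the
# cloud-native form of cryptographic erasure: once the key is gone,
# every ciphertext it protected becomes unrecoverable.
kms.schedule_key_deletion(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder
    PendingWindowInDays=7,  # the minimum waiting period AWS allows
)
```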
Willa is working for an insurance company as an information security manager. They have recently started using a Platform as a Service (PaaS) implementation for their database that contains their current customers’ personal data. She has created a new privacy policy that will appear on their website to explain their practices to their customers.
What principle of the Privacy Management Framework (PMF) [formerly the Generally Accepted Privacy Principles (GAPP)] is she addressing?
A. Agreement, notice, and communication
B. Disclosure to third parties
C. Monitoring and enforcement
D. Use, retention, and disposal
A. Agreement, notice, and communication
Explanation:
The PMF was developed by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA). It includes nine key privacy principles, listed below:
Management
Agreement, notice, and communication
Collection and creation
Use, retention, and disposal
Access
Disclosure to third parties
Security for privacy
Data integrity and quality
Monitoring and enforcement
Agreement, notice, and communication is what Willa is doing by creating the privacy policy and adding it to the website for the customers to be able to view. What the privacy policy should state is their practices regarding use, retention, disposal, disclosure to third parties, monitoring, and enforcement.
The use, retention, and disposal elements should be clear as to what the business will be using the data for, how long they will be storing that data, and when and how they dispose of that data.
Disclosure to third parties involves selling the data or making it available to business partners.
Monitoring involves logging and reviewing the logs of people who access the data and what it is used for. Enforcement includes the proper removal of the data when the retention period is reached.
Ellie has been working with her team of information security professionals at a major financial institution on their data retention plan. They are required to retain customer data for seven years after an account has been closed. What phase of the data lifecycle are they addressing?
A. Use phase
B. Share phase
C. Archive phase
D. Destroy phase
C. Archive phase
Explanation:
The archive phase is when data is removed from the system and moved to long-term storage. In many cases, archived data is stored offsite for disaster recovery purposes.
The destroy phase would occur at the end of the seven years. Data should be properly destroyed at that time.
If data is exchanged with someone else, that would be the share phase.
If a user is looking at the data at any point, that is considered the use phase.
From a legal perspective, what is the MAIN factor that differentiates a cloud environment from a traditional data center?
A. Multitenancy
B. Self-service
C. Repudiation
D. Rapid elasticity
A. Multitenancy
Explanation:
Multitenancy is the main factor, from a legal perspective, which differentiates a cloud environment from a traditional data center. Multitenancy is a concept in cloud computing in which multiple cloud customers may be housed in the same cloud environment and share the same resources. Because of this, the cloud provider has legal obligations to all cloud customers housed on its hardware. If a server is ever seized from a cloud provider by law enforcement, it will likely contain assets from many different customers.
Rapid elasticity allows the system to utilize more resources as needed. That includes CPU, memory, network speed, etc.
Repudiation is the ability for a user to deny that they did something.
Self-service is the web portal that customers can use to set up and manage virtual machines of many different types.
The information security manager is working with the cloud deployment team as they prepare to move their data center to the cloud. An important part of their plan is how they are going to get out of the cloud. They would like to reduce the risk of vendor lock-in. What cloud shared consideration should the administrator be looking for?
A. Portability
B. Availability
C. Reversibility
D. Interoperability
C. Reversibility
Explanation:
Reversibility is defined in ISO/IEC 17788 as the “process for cloud service customers to retrieve their cloud service customer data and application artifacts and for the cloud service provider to delete all cloud service customer data as well as contractually specified cloud service derived data after an agreed period.” Based on this definition, reversibility is the best fit for this scenario.
Interoperability is defined in ISO/IEC 17788 as the “ability of two or more systems or applications to exchange information and to mutually use the information that has been exchanged.” That is not the correct answer because they are planning on how to get out.
Portability is defined in ISO/IEC 17788 as the “ability to easily transfer data from one system to another without being required to re-enter the data.” This is not the correct answer because they are planning on how to get out of the cloud.
Availability is defined in ISO/IEC 17788 as the “property of being accessible and usable upon demand by an authorized entity.”
Which of the following emerging technologies REDUCES the amount of computation performed on cloud servers?
A. Blockchain
B. Internet of Things
C. Edge Computing
D. Artificial Intelligence
C. Edge Computing
Explanation:
Cloud computing is closely related to many emerging technologies. Some examples include:
Machine Learning and Artificial Intelligence (ML/AI): Machine learning is a subset of AI and includes algorithms that are designed to learn from data and build models to identify trends, perform classifications, and handle other tasks. Cloud computing is linked to the rise of ML/AI because it provides the computing power needed to train the models used by ML/AI and operate these technologies at scale.
Blockchain: Blockchain technology creates an immutable digital ledger in a decentralized fashion. It is used to support cryptocurrencies, track ownership of assets, and implement various other functions without relying on a centralized authority or single point of failure. Cloud computing is related to blockchain because many of the nodes used to maintain and operate blockchain networks run on cloud computing platforms.
Internet of Things (IoT): IoT systems include smart devices that can perform data collection or interact with their environments. These devices often have poor security and rely on cloud-based servers to process collected data and issue commands back to the IoT systems (which have limited computational power, etc.).
Edge and Fog Computing: Edge and fog computing move computations from centralized servers to devices at the network edge, enabling faster responses and less usage of bandwidth and computational power by cloud servers. Edge computing performs computing on IoT devices, while fog computing uses gateways at the edge to collect data from these devices and perform computation there.
Which of the following types of testing verifies that a module fits properly into the system as a whole?
A. Usability Testing
B. Integration Testing
C. Regression Testing
D. Unit Testing
B. Integration Testing
Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:
Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users' needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven't introduced bugs or broken functionality.
Non-functional testing tests the quality of the software and verifies that it provides necessary functionality not explicitly listed in requirements. Load and stress testing or verifying that sensitive data is properly secured and encrypted are examples of non-functional testing.
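A minimal Python sketch of the unit-versus-integration distinction (the discount functions are invented for the example): the unit test checks one component in isolation, while the integration test checks that the components fit together through their interfaces.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """One component: the discount rule."""
    return round(price * (1 - percent / 100), 2)

def checkout(prices: list[float], discount: float) -> float:
    """Composes the discount component with a summing step."""
    return round(sum(apply_discount(p, discount) for p in prices), 2)

class UnitTests(unittest.TestCase):
    def test_single_discount(self):            # unit: one component alone
        self.assertEqual(apply_discount(100.0, 10), 90.0)

class IntegrationTests(unittest.TestCase):
    def test_checkout_uses_discount(self):     # integration: components together
        self.assertEqual(checkout([100.0, 50.0], 10), 135.0)

if __name__ == "__main__":
    unittest.main()
```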
A corporation is working with their lawyers to ensure that they have accounted for the laws that they must be in compliance with. The corporation is located in Brazil, but a lot of their customers are in the European Union. When collecting and storing personal data regarding their customers, which law must they be in compliance with?
A. Health Information Portability and Accountability Act (HIPAA)
B. General Data Protection Regulation (GDPR)
C. Payment Card Industry Data Security Standard (PCI-DSS)
D. Asia Pacific Economic Cooperation (APEC)
B. General Data Protection Regulation (GDPR)
Explanation:
The General Data Protection Regulation (GDPR) is the European Union's (EU) law that requires the personal data of natural persons within the EU to be protected when it is collected and stored by corporations. This applies to any corporation anywhere in the world.
HIPAA is a United States of America (USA) law that requires that Protected Health Information (PHI) is protected.
APEC is an agreement among 21 member economies around the Pacific Ocean that is designed to promote free trade.
PCI-DSS is the standard from payment card providers that requires a certain level of information security around systems that store and process credit card data.
Dana is working for a cloud provider responsible for architecting solutions for their customers' Platform as a Service (PaaS) server-based options. As she and her team develop new server deployment options, they want to ensure that the virtual machine files are configured with the best possible options to satisfy their customers' needs.
These configurations should be documented in a:
A. Baseline procedures
B. Security procedures
C. Security baseline
D. Security policy
C. Security baseline
Explanation:
Security baselines contain the configuration information for systems, both physical and virtual. They should reflect the best practices that best fit the specific systems.
A security policy could be either a corporate level policy or the configuration within a firewall, which is commonly referred to as policies. Either way, it does not match the needs of the question, which is the configuration of a server.
Security procedures are the step-by-step documented procedure to complete a process or a configuration or something else of that sort.
Baseline procedures is not the best answer here. A baseline is the configuration. The procedure is how you do something. The term baseline procedure is not common security language.
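To illustrate the idea that a baseline is the configuration itself, here is a minimal Python sketch that audits a system's settings against a documented baseline (the setting names and values are hypothetical):

```python
# Hypothetical security baseline: approved settings and values.
SECURITY_BASELINE = {
    "ssh_root_login": "no",
    "password_min_length": 14,
    "firewall_default": "deny",
}

def audit(actual: dict) -> list[str]:
    """Return every setting that drifts from the documented baseline."""
    return [
        f"{name}: expected {expected!r}, found {actual.get(name)!r}"
        for name, expected in SECURITY_BASELINE.items()
        if actual.get(name) != expected
    ]

drift = audit({"ssh_root_login": "yes",
               "password_min_length": 14,
               "firewall_default": "deny"})
print(drift or "System matches the baseline")
```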
Kody has been working with technicians in the Security Operations Center (SOC) to resolve an issue that has just occurred. Three of the cloud provider's biggest customers have experienced a Denial of Service (DoS). It appears that there is an issue with the configuration of the Dynamic Optimization (DO) functionality. Kody is worried that this could occur again in the future, so she wants to uncover the root cause in their investigation.
What would this process be called?
A. Change management
B. Incident management
C. Problem management
D. Release and deployment management
C. Problem management
Explanation:
The focus of problem management is to identify and analyze potential issues in an organization to determine the root cause of that issue. Problem management is responsible for implementing processes to prevent these issues from occurring in the future.
Incident management is the initial response to the denial of service. This is what Kody was initially working on with the technicians. This is not the right answer because the question moves on to uncovering the root cause.
Release and deployment management is the process of releasing new hardware, software, or functions into the production environment.
Change management is the process of controlling any changes to the production environment. That ITIL definition is broad and covers many different scenarios, but it does not describe the root cause analysis the question asks about.
Which of the following types of data is the HARDEST to perform data discovery on?
A. Unstructured
B. Structured
C. Loosely structured
D. Semi-structured
A. Unstructured
Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:
Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.
Loosely structured is not a common classification for data.
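A minimal Python sketch of why structure helps (the record and keyword list are illustrative): with semi-structured JSON, the field names alone often reveal where sensitive data lives, whereas unstructured text would force the tool to pattern-match on the values themselves.

```python
import json
import re

# Field names that suggest sensitive content (illustrative list).
SENSITIVE_KEYS = re.compile(r"ssn|card|passport|dob", re.IGNORECASE)

def discover(node, path=""):
    """Walk a parsed JSON document and flag suspicious field names."""
    if isinstance(node, dict):
        for key, value in node.items():
            here = f"{path}.{key}" if path else key
            if SENSITIVE_KEYS.search(key):
                print(f"possible sensitive field: {here} = {value!r}")
            discover(value, here)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            discover(item, f"{path}[{i}]")

record = json.loads('{"name": "Ana", "ssn": "078-05-1120", '
                    '"orders": [{"card_last4": "4242"}]}')
discover(record)
```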
Certain data will require more advanced security controls in addition to traditional security controls. This may include extra access control lists placed on the data or additional permission requirements to access the data, especially when that data is the Intellectual Property (IP) of the organization.
This extension of normal data protection is known as which of the following?
A. Data rights management (DRM)
B. Threat and vulnerability management
C. Identity and Access Management (IAM)
D. Infrastructure and access management
A. Data rights management (DRM)
Explanation:
Data Rights Management (DRM), also known as Information Rights Management (IRM), is an extension of normal data protection. Normal controls would include encryption, hashes, and access rights. In DRM, advanced security controls are placed onto the data itself, such as extra ACLs and constraints on capabilities like printing.
Infrastructure and access management would be access to the data center. Infrastructure is the actual physical routers, switches, servers, and so on.
IAM involves controlling access by all the users to all the servers, applications, and so on. IAM also includes access, for example, by contractors and vendors. It is a very big topic in comparison to the question, which is looking for protection of the data.
Threat and vulnerability management includes risk assessment, analysis, and mitigations. It is not about controlling access, although that would be a topic of concern at some point in the discussion on threat management.
Load balancing and redundancy are solutions designed to address which of the following?
A. Interoperability
B. Resiliency
C. Reversibility
D. Availability
B. Resiliency
Explanation:
Some important cloud considerations have to do with its effects on operations. These include:
Availability: The data and applications that an organization hosts in the cloud must be available to provide value to the company. Contracts with cloud providers commonly include service level agreements (SLAs) mandating that the service is available a certain percentage of the time.
Resiliency: Resiliency refers to the ability of a system to weather disruptions. Resiliency in the cloud may include the use of redundancy and load balancing to avoid single points of failure.
Performance: Cloud contracts also often include SLAs regarding performance. This ensures that the cloud-based services can maintain an acceptable level of operations even under heavy load.
Maintenance and Versioning: Maintenance and versioning help to manage the process of changing software and other systems. Updates should only be made via clear, well-defined processes.
Reversibility: Reversibility refers to the ability to recover from a change that went wrong. For example, how difficult it is to restore original operations after a transition to an outsourced service.
Portability: Different cloud providers have different infrastructures and may do things in different ways. If an organization's cloud environment relies too much on a provider's unique implementation or the provider doesn't offer easy export, the company may be stuck with that provider due to vendor lock-in.
Interoperability: With multi-cloud environments, an organization may have data and services hosted in different providers' environments. In this case, it is important to ensure that these platforms and the applications hosted on them are capable of interoperating.
Outsourcing: Using cloud environments requires handing over control of a portion of an organization's infrastructure to a third party, which introduces operational and security concerns.
Which of the following regulations deals with law enforcement’s access to data that may be located in data centers in other jurisdictions?
A. GLBA
B. US CLOUD Act
C. SCA
D. SOX
B. US CLOUD Act
Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:
General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests from cloud providers. US law enforcement and its counterparts in countries with similar laws can request data hosted in a data center in a different country.
Privacy Shield: Privacy Shield was a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data to the US. The main reason the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens' data. (Privacy Shield was invalidated by the Court of Justice of the EU in 2020.)
Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers' personal data.
Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
Sarbanes-Oxley (SOX): SOX is a US regulation that applies to publicly traded companies and requires annual disclosures to protect investors.
North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
Which of the following threat models focuses on identifying intersections between an organization’s attack surface and an attacker’s capabilities?
A. ATASM
B. STRIDE
C. DREAD
D. PASTA
A. ATASM
Explanation:
Several different threat models can be used in the cloud. Common examples include:
STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization's attack surfaces and potential threats and how these two would intersect.
PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that looks at infrastructure and applications from the perspective of an attacker.
Which of the following measures the LONGEST amount of time that a system can be down before causing significant harm to the organization?
A. RSL
B. MTD
C. RPO
D. RTO
B. MTD
Explanation:
A business continuity and disaster recovery (BC/DR) plan uses various business requirements and metrics, including:
Recovery Time Objective (RTO): The RTO is the amount of time that an organization is willing to have a particular system be down. This should be less than the maximum tolerable downtime (MTD), which is the maximum amount of time that a system can be down before causing significant harm to the business.
Recovery Point Objective (RPO): The RPO measures the maximum amount of data that the company is willing to lose due to an event. Typically, this is based on the age of the last backup when the system is restored to normal operations.
Recovery Service Level (RSL): The RSL measures the percentage of compute resources needed to keep production environments running while shutting down development, testing, etc.
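A small worked example in Python (all timestamps invented): the gap between the last backup and the failure is measured against the RPO, and the outage duration is measured against the RTO and, ultimately, the MTD.

```python
from datetime import datetime

# Illustrative scenario: backups every few hours, outage mid-morning.
last_backup = datetime(2024, 1, 1, 8, 0)
failure     = datetime(2024, 1, 1, 9, 30)
restored    = datetime(2024, 1, 1, 12, 30)

data_loss = failure - last_backup   # compare against the RPO
downtime  = restored - failure      # compare against the RTO and MTD

print(f"data lost: {data_loss}")    # 1:30:00 -> the RPO must tolerate 1.5 h
print(f"downtime:  {downtime}")     # 3:00:00 -> must stay under the MTD
```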
Rise is working for a corporation as their cloud architect. He is designing how the Platform as a Service (PaaS) deployment will be used to store sensitive data for one particular application. He is designing a trust zone within which that data will be handled. Which of the following BEST defines a trust zone?
A. Sets of rules that define which employees have access to which resources
B. The ability to share pooled resources among different cloud customers
C. Virtual tunnels that connect resources at different locations
D. Physical, logical, or virtual boundaries around network resources
D. Physical, logical, or virtual boundaries around network resources
Explanation:
A cloud-based trust zone is a secure environment created within a cloud infrastructure where only authorized users or systems are allowed to access resources and data. This trust zone is typically created by configuring security measures such as firewalls, access controls, and encryption methods to ensure that only trusted sources can gain access to the data and applications within the zone. The goal of a cloud-based trust zone is to create a secure and reliable environment for sensitive data or critical applications by isolating them from potential threats and unauthorized access. This helps to ensure the confidentiality, integrity, and availability of the resources and data within the trust zone.
A virtual tunnel connecting to another location may be something that needs to be added, but it is not part of describing the zone itself.
Rules that define which employees have access to which resources are something that is needed by a business. This is Identity and Access Management (IAM). It should include information about the resources in a trust zone, but it does not define the actual zone.
The ability to share pooled resources is part of the definition of cloud. It is the opposite of a trust zone. Because resources are shared, many companies are very worried about using the cloud.
A cloud information security manager is building the policies and associated documents for handling cloud assets. She is currently detailing how assets will be understood or listed so that access can be controlled, alerts can be created, and billing can be tracked. What tool allows for this?
A. Value
B. Key
C. Identifier
D. Tags
D. Tags
Explanation:
Tags are pervasive in cloud deployments. It is crucial that the corporation builds a plan for how to tag assets; if tagging is not done consistently, it is not helpful. A tag is made up of two pieces: a key (or name) and a value. The key here is not a cryptographic key for encryption and decryption; it is simply the name half of the pair.
You can think of the tag as a type of identifier, but the tool needed to manage assets is called a tag.
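As an example of tagging in practice, here is a minimal boto3 sketch for AWS EC2 (the instance ID and tag values are placeholders, and AWS credentials are assumed to be configured); consistent keys like these are what let access control, alerting, and billing reports all work from the same metadata.

```python
# pip install boto3
import boto3

ec2 = boto3.client("ec2")

# Each tag is a key/value pair attached to the asset.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[
        {"Key": "CostCenter", "Value": "finance-emea"},
        {"Key": "DataClassification", "Value": "confidential"},
    ],
)
```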
As part of the risk management process, an information security professional has been asked to perform an assessment of hard values. The values that the corporation is looking for involve understanding how much money a specific threat could cost the business. To understand that, they need to be clear about how many times each specific threat that they analyze could happen each year.
Which type of assessment has this engineer been asked to perform?
A. Bow tie analysis
B. Quantitative assessment
C. Cost-benefit analysis
D. Qualitative assessment
B. Quantitative assessment
Explanation:
The two main types of assessment used in the risk management process are quantitative assessments and qualitative assessments. Quantitative assessments use values such as Single Loss Expectancy (SLE), Annual Loss Expectancy (ALE), and Annual Rate of Occurrence (ARO) for a numeric approach. The ARO is how many times they believe a specific threat could occur in a year. The SLE calculates how much a single incident would cost.
Qualitative assessments are nonnumerical assessments.
From that information, it is possible to do a cost-benefit analysis comparing the cost of the controls and the benefits gained by reducing the likelihood or impact of the threats.
A bow tie analysis creates a visual display of proactive and reactive controls.
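A small worked example of the quantitative formulas (all figures invented for illustration): SLE = asset value x exposure factor, and ALE = SLE x ARO; the cost-benefit analysis then compares the ALE reduction against the cost of the control.

```python
# Illustrative numbers: a $200,000 asset, a threat that destroys 25%
# of its value per event, expected to occur twice a year.
asset_value     = 200_000
exposure_factor = 0.25                  # fraction of value lost per event
sle = asset_value * exposure_factor     # Single Loss Expectancy = 50,000
aro = 2                                 # Annual Rate of Occurrence
ale = sle * aro                         # Annual Loss Expectancy = 100,000

# Cost-benefit: a control costing $30,000/year that halves the ARO.
residual_ale = sle * (aro / 2)          # 50,000
annual_benefit = ale - residual_ale - 30_000
print(sle, ale, annual_benefit)         # 50000.0 100000.0 20000.0
```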
As the information security manager working in a neurological doctor’s office, you are accountable for the security of medical records. Which types of data are you safeguarding?
A. Personal data
B. Credit and debit card data
C. Personally Identifiable Information (PII)
D. Protected Health Information (PHI)
D. Protected Health Information (PHI)
Explanation:
You are safeguarding Protected Health Information (PHI) that may be contained within the medical records you are accountable for. These can be in the form of lab reports, visit summaries, or other types of medical records.
Personally Identifiable Information (PII) is unique to an individual, such as a social security number or phone number. Personal data is effectively the same. It is the term used within the European Union’s General Data Protection Regulation (EU GDPR).
Credit and debit card data is just what it says—it is the data relevant to the cards in our wallets (the names, addresses, card numbers, expiration dates, and so on).
Traditional encryption methods may become obsolete as the cloud’s computing power and innovative technology improve optimization issues. What kind of advanced technology is potentially capable of defeating today’s encryption methods?
A. Artificial intelligence
B. Machine learning
C. Blockchain
D. Quantum computing
D. Quantum computing
Explanation:
Quantum computing is capable of solving problems that traditional computers are incapable of solving. When quantum computing becomes widely accessible to the general public, it will almost certainly be via the cloud due to the substantial processing resources necessary to do quantum calculations.
A side note: The encryption we have today will likely be broken, especially algorithms such as RSA and Diffie-Hellman. NIST began a competition in 2016 to get ahead of this and design encryption algorithms that can be used safely in the age of quantum computers. For information about this, refer to NIST's Computer Security Resource Center (csrc.nist.gov) and look for post-quantum cryptography and the post-quantum cryptography standardization project.
Machine learning gives computers the ability to process large amounts of data and provide us with information. It may aid us in verifying a hypothesis, or it may surface the issues that we need to address, or can address.
Machine learning is arguably a subset of Artificial Intelligence (AI). We keep making advances in technology that are getting us closer to true AI. We have robots that can navigate terrain all on their own, and we have ChatGPT, which can answer questions as if it were thinking on its own rather than just citing or quoting a source.
Blockchains give us the ability to track something, such as cryptocurrency, with an immutable or unchangeable record.
We have been working with different Artificial Intelligence (AI) methods for years. While we may not be at a true AI just yet, there have been several great advances in this technology. Which method has the software working to understand, interpret, and generate text that seems to be from a live human?
A. Natural Language Processing (NLP)
B. Bayesian Networks
C. Machine Learning (ML)
D. Deep Learning (DL)
A. Natural Language Processing (NLP)
Explanation:
Natural Language Processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. NLP methods involve techniques such as text classification, sentiment analysis, named entity recognition, machine translation, and question-answering systems.
AI methods, such as the one above (NLP) and the three below (the other answer options), refer to the various techniques and approaches used in the field of Artificial Intelligence to solve problems, make predictions, and perform tasks that typically require human intelligence. These methods encompass a broad range of algorithms, models, and methodologies that enable machines to learn, reason, and make decisions autonomously.
Machine Learning (ML) is a subset of AI that focuses on designing algorithms and models that allow computers to learn from data without being explicitly programmed. ML methods include supervised learning, unsupervised learning, and reinforcement learning, where algorithms learn patterns and make predictions based on training examples or feedback.
Deep Learning (DL) is a subfield of ML that involves training artificial neural networks with multiple layers to recognize patterns and extract complex representations from data. DL methods, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), have been particularly successful in image recognition, natural language processing, and other complex tasks.
Bayesian networks are probabilistic graphical models that represent relationships among variables using a directed acyclic graph. They apply Bayesian inference to update beliefs and make predictions based on observed evidence and prior knowledge.
An organization has built and deployed an Infrastructure as a Service (IaaS) cloud environment. This company has several laws and regulations that they must be compliant with, one of which is the European Union's (EU) General Data Protection Regulation (GDPR). To be able to respond as quickly as possible, they have hired a group of information security administrators and operators to focus solely on dealing with security issues that arise within the organization. The group will be centralized at the headquarters location. The group is responsible for monitoring the logs in the Security Information and Event Management (SIEM) system, responding to incidents, and analyzing threats that arise.
What would this group be called?
A. Cloud provider
B. Regulator
C. Network Operations Center (NOC)
D. Security Operations Center (SOC)
D. Security Operations Center (SOC)
Explanation:
A Security Operations Center (SOC) is a group of individuals who focus solely on the monitoring, reporting, and handling of any security issues for an organization. SOC operators and administrators will typically be responsible for monitoring the logs within a SIEM if there is one in place. SOCs are usually staffed 24/7 to ensure that someone is available in the event of a security incident.
A Network Operations Center (NOC) is responsible for managing the network. This includes temperature, humidity, hardware, operating systems, and software. They would take care of equipment that has broken, network connections that have dropped, users who cannot connect due to network issues, etc. Some of what they find might be passed to the Security Operations Center (SOC).
Regulators are not going to monitor a company’s network, logs, or SIEM. They might need to be contacted, though, depending on the actual incidents that the SOC is dealing with.
Cloud providers have their own NOC and SOC. They would not normally manage an organization’s environment. If the cloud is a private or community cloud, it would be possible that the cloud provider is monitoring and managing all that the NOC and SOC manage. However, the question does not specify either of those conditions, so the assumption to be made is that this is a customer of a public cloud provider.
Sa’id is working on configuring the cloud environment for his company. He works for a multinational bank that has offices primarily in the USA, India, and Europe. They have been working within their own data centers and are now migrating to a public cloud provider. As the number of attacks continues to rise and the number of laws they must comply with increases, he is looking for a security tool to add to the cloud environment. They are building an Infrastructure as a Service (IaaS) environment and have already added their first Network Security Group (NSG). Now he is looking for the next tool to add, one that would give them information regarding any suspicious activity around a particular cluster of servers.
Which tool would work the best for that?
A. Honeypot
B. Network Intrusion Prevention System (NIPS)
C. Network Intrusion Detection System (NIDS)
D. Host Intrusion Detection System (HIDS)
C. Network Intrusion Detection System (NIDS)
Explanation:
A Network Intrusion Detection System (NIDS) analyzes all the traffic on the network and detects possible intrusions. It can send an alert out to administrators to investigate.
A Host Intrusion Detection System (HIDS) runs on a single host and analyzes all inbound and outbound traffic for that host to detect possible intrusions. Since the question specifies a cluster of servers, the NIDS is a better choice. It is possible to add a HIDS to every server in the cluster; it is just not what the question is driving at.
A Network Intrusion Prevention System (NIPS) works in the same manner as a NIDS, but it also has the capability to prevent attacks rather than just detect them. This is not the best answer because the question only asks for information about suspicious activity, not prevention.
A honeypot is an isolated system used to trick a bad actor into believing that it is a production system. This should distract them long enough for the Security Operations Center (SOC) to detect the bad actor’s presence and take action to remove them from the systems and network.
A corporation is looking for a way to improve their software development. They use Agile Application Security as their main methodology. They know that they need to improve the testing and analysis of their products. They also know that they need to improve the planning of the application’s security as too many things have been overlooked and only caught well into the development process.
What tool can they use for this?
A. Source code review
B. International Organization for Standardization / International Electrotechnical Commission (ISO/IEC) 27034
C. Application Security Verification Standard (ASVS)
D. Closed box testing
C. Application Security Verification Standard (ASVS)
Explanation:
The Application Security Verification Standard (ASVS) is from OWASP. It can be used in many different ways, one of which is as a driver for Agile Application Security. It can also replace off-the-shelf secure coding checklists, support secure development training, and guide automated unit and integration tests, in addition to its primary use as a list of application security requirements or tests that can be used by anyone building, designing, developing, or buying secure applications.
ISO/IEC 27034 provides guidelines and best practices for implementing and managing Application Security. It focuses specifically on the protection of applications throughout their lifecycle, from the design and development stages to deployment, operation, maintenance, and disposal.
Closed box testing is a software testing approach where the tester evaluates the functionality of a software application without having access to its internal structure, code, or implementation details. The tester focuses solely on the inputs and outputs of the software, treating it as a "black box" whose internal workings are not known or visible. The term black box is falling out of favor, as it is considered insensitive; it is included here only because it may still appear in the (ISC)2 question database.
Source code review, also known as code review or static code analysis, is a process of systematically examining the source code of a software application to identify and address potential issues, vulnerabilities, and areas for improvement. It involves manual or automated analysis of the codebase to ensure its quality, maintainability, security, and adherence to coding standards.
FedRAMP-compliant cloud environments offered by a cloud services provider are MOST likely to be an example of which of the following?
A. Multi-Cloud
B. Community Cloud
C. Hybrid Cloud
D. Public Cloud
B. Community Cloud
Explanation:
Cloud services are available under a few different deployment models, including:
Private Cloud: In private clouds, the cloud customer builds their own cloud in-house or has a provider do so for them. Private clouds have dedicated servers, making them more secure but also more expensive.
Public Cloud: Public clouds are multi-tenant environments where multiple cloud customers share the same infrastructure managed by a third-party provider.
Hybrid Cloud: Hybrid cloud deployments mix both public and private cloud infrastructure. This allows data and applications to be hosted on the cloud that makes the most sense for them.
Multi-Cloud: Multi-cloud environments use cloud services from multiple different cloud providers. This enables customers to take advantage of price differences or optimizations offered by different providers.
Community Cloud: A community cloud is essentially a private cloud used by a group of related organizations rather than a single organization. It could be operated by that group or a third party, such as FedRAMP-compliant cloud environments operated by cloud service providers.
When configuring a new hypervisor, the cloud administrator forgot to change the default administrative credentials. Which type of vulnerability listed on the Open Web Application Security Project (OWASP) Top 10 is this an example of?
A. XML external entities
B. Security misconfiguration
C. Cross-site scripting
D. Insecure deserialization
B. Security misconfiguration
Explanation:
A security misconfiguration occurs whenever systems or applications are configured in a way that makes them insecure. Systems regularly come preconfigured with default administrative credentials. These default credentials are generally easy to find online, so the failure to change them makes it possible for an attacker to gain access. This is an example of a security misconfiguration.
Insecure deserialization is a security vulnerability that arises when an application blindly trusts and processes data received in a serialized format without properly validating or sanitizing it. It can lead to serious consequences, including remote code execution, data tampering, or denial of service attacks.
Cross-Site Scripting (XSS) is a type of security vulnerability that occurs when a web application does not properly validate and sanitize user-supplied input, allowing malicious code to be injected into web pages viewed by other users. It is a prevalent attack vector that can have severe consequences if exploited.
XML External Entity (XXE) refers to a security vulnerability that occurs when an application parses XML input without proper validation, allowing an attacker to include external entities and potentially exploit the system. It can lead to sensitive information disclosure, denial of service attacks, or even remote code execution.
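A minimal Python sketch of catching this class of misconfiguration before deployment (the credential list and config fields are hypothetical): refuse to deploy anything still carrying well-known factory credentials.

```python
# Hypothetical list of vendor factory defaults.
FACTORY_DEFAULTS = {("admin", "admin"), ("root", "changeme"), ("admin", "")}

def check_config(config: dict) -> None:
    """Raise if the hypervisor config still uses default credentials."""
    pair = (config.get("admin_user"), config.get("admin_password"))
    if pair in FACTORY_DEFAULTS:
        raise ValueError(f"security misconfiguration: default credentials {pair}")

try:
    check_config({"admin_user": "root", "admin_password": "changeme"})
except ValueError as err:
    print(err)  # security misconfiguration: default credentials ('root', 'changeme')
```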
Which of the following can be used on the network to stop attacks automatically when a pattern of packets that is indicative of an attack has been detected?
A. eXtensible Markup Language (XML) firewall
B. Intrusion Prevention System (IPS)
C. Intrusion Detection System (IDS)
D. Database Integrity Monitor (DIM)
B. Intrusion Prevention System (IPS)
Explanation:
An Intrusion Prevention System (IPS) is placed at the network level. It analyzes all traffic on the network in the same way as an IDS. However, rather than simply alerting administrators when an intrusion is detected, it can actually stop and block the malicious traffic and prevent an attack from occurring automatically.
A firewall is used to allow wanted traffic and block everything else. A firewall should block by default. This would stop attacks, except the question is asking for a device that can see that the traffic is specifically an attack. That is an IPS.
A DIM is used to monitor user activity within a database (DB) from the outside. This makes it much harder for a bad actor to know that the device is there. A bad actor would know that there are logs within the DB and try to delete or alter them, but the DIM's logs would not be seen by the bad actor.
Alexis, the information security manager for a retail company, is looking to find the best cloud solutions to meet their needs. When assessing the different cloud providers, Alexis and her team request the auditor’s reports from the cloud providers. They request the Service Organization Controls (SOC) 1 and 2 reports. These reports are generated after an audit company has completed their third-party audit.
What standard did the cloud providers likely follow when performing the audits?
A. Generally Accepted Privacy Principles (GAPP)
B. Statement on Standards for Attestation Engagements (SSAE)
C. Statement of Auditing Standards (SAS)
D. International Organization for Standardization (ISO) 27050
B. Statement on Standards for Attestation Engagements (SSAE)
Explanation:
The Statement on Standards for Attestation Engagements (SSAE) is a set of standards defined by the American Institute of Certified Public Accountants (AICPA) to be used when creating SOC reports.
Statement on Auditing Standards (SAS) reports have been replaced by SSAE reports. This is the predecessor to SSAE.
Generally Accepted Privacy Principles (GAPP) is not a reporting standard. It has been replaced by the Privacy Management Framework (PMF). The PMF states the things that a company should do to protect personal data. If you are familiar with the requirements of the EU's General Data Protection Regulation (GDPR), you will recognize those ideas in the PMF.
ISO 27050 is a standard for eDiscovery. eDiscovery is critical once there has been an incident, and forensics must be performed.
Amelia works for a medium-sized company as their lead information security manager. She has been working with the development and operations teams on their new application that they are building. They are building an application that will interact with their customers through the use of an Application Programming Interface (API). Due to the nature of the application, it has been decided that they will use SOAP.
That means that the data must be formatted using which of the following?
A. JavaScript Object Notation (JSON)
B. CoffeeScript Object Notation (CSON)
C. YAML (YAML Ain’t Markup Language)
D. eXtensible Markup Language (XML)
D. eXtensible Markup Language (XML)
Explanation:
SOAP (Simple Object Access Protocol) only permits the use of XML-formatted data, while REpresentational State Transfer (REST) allows for the use of a variety of data formats, including both XML and JSON. SOAP is most commonly used when the use of REST is not possible.
XML, JSON, YAML, and CSON are all data formats.
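To make this concrete, here is a minimal Python sketch of a SOAP call; the endpoint URL and operation name are hypothetical, and the point is simply that the payload must be XML.

```python
# Minimal SOAP 1.2 request: the body is an XML envelope posted over HTTPS.
import requests  # third-party HTTP library, assumed available

soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetCustomer xmlns="http://example.com/api"><!-- hypothetical operation -->
      <CustomerId>42</CustomerId>
    </GetCustomer>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/soap-endpoint",  # hypothetical URL
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml"},
)
print(response.status_code)
```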
An attack on confidentiality would fall under which letter of the STRIDE acronym for cybersecurity threat modeling?
A. S
B. T
C. D
D. I
D. I
Explanation:
Microsoft’s STRIDE threat model defines threats based on their effects, including:
Spoofing: The attacker pretends to be someone else
Tampering: The attacker damages data integrity
Repudiation: The attacker can deny that they took some action that they did take
Information Disclosure: The attacker gains unauthorized access to sensitive data, harming confidentiality
Denial of Service: The attacker can harm the availability of a service
Elevation of Privilege: The attacker can access resources that they should not be able to access
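One way to see the mapping at a glance is a small lookup table; this is just an illustrative Python structure, not part of any Microsoft tooling.

```python
# Each STRIDE threat maps to the security property it violates.
STRIDE = {
    "Spoofing": "authentication",
    "Tampering": "integrity",
    "Repudiation": "non-repudiation",
    "Information Disclosure": "confidentiality",
    "Denial of Service": "availability",
    "Elevation of Privilege": "authorization",
}

# An attack on confidentiality is Information Disclosure: the "I" in STRIDE.
print([name for name, prop in STRIDE.items() if prop == "confidentiality"])
```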
Julez has been tasked with updating the data governance policy for his corporation, a bank. He is currently addressing the requirements that need to be defined for how long data should be stored into the future. What stage of the cloud data lifecycle is he addressing?
A. Use
B. Archive
C. Store
D. Share
B. Archive
Explanation:
The archive phase fits the best because the question says “stored into the future,” which implies archival.
Store would be the second-best answer; storing happens as soon as the data is created. That initial storage location could actually be where the data sits when the bank looks for it somewhere in the future, but again, "into the future" implies archival.
The use phase is when a user is utilizing the data, just as you are right now reading this.
Share is when data is passed from one user to another, inside or outside the business.
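In practice, a retention requirement like the one Julez is defining is often enforced with storage lifecycle rules. Here is a minimal sketch assuming AWS S3 purely as one example provider; the bucket name, prefix, and retention periods are hypothetical.

```python
# Archive-phase control: transition data to cold storage after a year,
# then expire it at the end of a (hypothetical) seven-year retention period.
import boto3  # AWS SDK for Python, assumed available

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="bank-records",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retain-then-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": "statements/"},
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```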
Which of the following SaaS risks is MOST related to how SaaS offerings are made available to customers?
A. Persistent Backdoors
B. Web Application Security
C. Virtualization
D. Proprietary Formats
B. Web Application Security
Explanation:
A Software as a Service (SaaS) environment has all of the risks that IaaS and PaaS environments have, as well as new risks of its own. Some risks unique to SaaS include:
Proprietary Formats: With SaaS, a customer is using a vendor-provided solution. This may use proprietary formats that are incompatible with other software or create a risk of vendor lock-in if the organization's systems are built around these formats.
Virtualization: SaaS uses even more virtualized environments than PaaS, increasing the potential for VM escapes, information bleed, and similar threats.
Web Application Security: Most SaaS offerings are web applications with a provided application programming interface (API). Both web apps and APIs have potential vulnerabilities and security risks that could exist in these solutions.
A software development corporation has built an Infrastructure as a Service (IaaS) environment for their software developers to use when building their products. When a virtual machine is running, the software developer will use that platform to build and test their code. The running machines require a type of storage that the operating system can use to store temp files and as swap space.
What type of storage is used for that?
A. Structured
B. Object
C. Volume
D. Ephemeral
D. Ephemeral
Explanation:
Cloud storage comes in many shapes and flavors. The storage used by virtual machines to temporarily store files and to use for swap files is called ephemeral. Ephemeral means temporary or fleeting. It disappears when the virtual machine shuts down; it is not for persistent storage.
Persistent storage includes structured, object, volume, unstructured, block, etc.
Each cloud service model uses a different method of storage as shown below:
Software as a Service (SaaS): content and file storage, information storage and management
Platform as a Service (PaaS): structured, unstructured, or block and blob
Infrastructure as a Service (IaaS): volume, object
Structured is confusing because it is used to describe both a type of data (databases) and a type of data storage. They are not the same thing. Structured storage is a specific allocation of storage space. Block storage is a type of structured storage: it allocates space in fixed-size units (e.g., 16 KB).
A volume is analogous to the C: drive on a computer. It is often allocated using block storage.
Objects are files. Object storage is a flat file system, and objects can have metadata attached to them. Under the hood, objects are ultimately stored on volumes or blocks.
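To make the distinction concrete, here is a minimal sketch assuming AWS (via boto3) purely as one example provider; the bucket, volume, and instance identifiers are hypothetical. Note that ephemeral (instance-store) storage is not provisioned this way at all: it comes with the instance and vanishes when the machine stops.

```python
import boto3  # AWS SDK for Python, assumed available

# Object storage: files ("objects") in a flat namespace, with metadata.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-bucket",         # hypothetical bucket name
    Key="builds/app-v1.tar.gz",
    Body=b"...build artifact...",
    Metadata={"owner": "dev-team"},  # metadata attached to the object
)

# Volume (block) storage: raw space attached to an instance like a disk.
ec2 = boto3.client("ec2")
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=8)  # 8 GiB
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Device="/dev/sdf",
)
```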
A cloud information security manager needs to ensure their organization is aware of all nine key principles of the Privacy Management Framework (PMF). Which of the following is the principle that is addressed through input validation and hashing?
A. Collection and creation
B. Data integrity and quality
C. Management
D. Security for privacy
B. Data integrity and quality
Explanation:
The PMF replaced GAPP, which was last updated in 2009. The ISC2 outline lists the GAPP document, but it would be good to know the current one, or both. Both of these documents are from the American Institute of Certified Public Accountants (AICPA).
Input validation, sanitization, hashing, integrity checks, syntactic and semantic checks, and more address this concern.
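As one small example, here is a minimal Python sketch of a hash-based integrity check, one common way this principle is enforced in practice; the record contents are hypothetical.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

record = b"account=12345;balance=100.00"
stored_digest = sha256_digest(record)  # computed when the record is written

# Later, recompute and compare the digest before trusting the record.
tampered = b"account=12345;balance=999.00"
assert sha256_digest(record) == stored_digest    # unchanged: check passes
assert sha256_digest(tampered) != stored_digest  # modified: tampering detected
```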
The PMF has nine components:
Management
Agreement, notice, and communication
Collection and creation
Use, retention, and disposal
Access
Disclosure to third parties
Security for privacy
Data integrity and quality
Monitoring and enforcement
Generally Accepted Privacy Principles (GAPP) include 10 key privacy principles:
Management
Notice
Choice and consent
Collection
Use, retention, and disposal
Access
Disclosure to third parties
Security for privacy
Quality
Monitoring and enforcement
Amber is building a new spreadsheet in a Software as a Service (SaaS) environment. As she is working from her computer, what security control can be implemented within the create phase?
A. Intrusion Detection System (IDS)
B. Data Loss Prevention (DLP)
C. Encryption
D. Firewall
C. Encryption
Explanation:
The create phase is an ideal time to implement technologies such as Transport Layer Security (TLS) as the data is entered or imported. The client-server connection should be protected through encryption.
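As a minimal sketch of what that looks like from the client side, here is a Python example that sends newly created data over a TLS-protected connection; the SaaS endpoint URL and payload are hypothetical.

```python
import requests  # third-party HTTP library, assumed available

response = requests.post(
    "https://saas.example.com/api/spreadsheets",  # hypothetical endpoint
    json={"name": "q1-budget", "cells": [["A1", "100"]]},
    timeout=10,
    # verify=True is the default: the server certificate is validated and
    # the payload travels over an encrypted TLS channel.
)
response.raise_for_status()
```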
DLP is a good tool when a user is sending data to an inappropriate place, such as a credit card number through email, or storing information on an inappropriate server. It is not a great match for a user creating a spreadsheet in SaaS.
Firewalls are used to either block or allow traffic. Filtering can be based on a layer 7 command, such as get or put in File Transfer Protocol (FTP), or on a lower-layer address or port number, for example.
IDS devices are on the watch for intruders; here, a user is simply doing their work. An IDS would not detect and log events such as creating a spreadsheet.