Pocket Prep 17 Flashcards
A cloud administrator needs to ensure that data has been completely removed from cloud servers after data migration. The company has an Infrastructure as a Service (IaaS) NoSQL ("not only SQL") server on a public cloud provider. Which data sanitization technique can be used successfully in this cloud environment?
A. A contract that has a clause that the Cloud Service Provider (CSP) physically degausses all drives that held customer data combined with the customer performing an overwrite of their data
B. A contract that has a clause that the Cloud Service Provider (CSP) physically destroys all drives that held customer data combined with the customer performing an overwrite of their data
C. A contract that has a clause that the Cloud Service Provider (CSP) physically shreds all drives that held customer data combined with cryptographic erasure of the database encryption key
D. A contract that has a clause that the Cloud Service Provider (CSP) performs 11 overwrites on all drives that held customer data combined with proper erasure of the database encryption key
C. A contract that has a clause that the Cloud Service Provider (CSP) physically shreds all drives that held customer data combined with cryptographic erasure of the database encryption key
Explanation:
In a cloud environment, where the customer cannot access or control the physical hardware, sanitization methods such as incineration, destruction, and degaussing are not an option for the customer. As a result, it is critical to ensure that the CSP handles drives securely, and this should be discussed and addressed in the contract.
Given the choice, physical destruction, preferably shredding, is better than 11 overwrites of the data, and combining the two is acceptable.
Degaussing can only be done on magnetic Hard Disk Drives (HDD). There is no guarantee that the drives the customer’s data is on would be HDD versus Solid State Drives (SSD). So shredding is the better choice.
Even with an IaaS implementation, it is not possible for the customer to perform a true overwrite of their data; the way the cloud stores data makes that impossible. What appears to be an overwrite from the customer's view is actually a deletion of the original data followed by a write of new data, which will not land on the same sectors of the drives. A deletion, in turn, only removes the pointers to the data; it does not actually remove the data from the drives.
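For illustration, here is a minimal sketch of cryptographic erasure ("crypto-shredding") using Python's third-party cryptography package; the sample data and variable names are invented, not from the question. Destroying the key renders any ciphertext left on the CSP's drives unreadable:

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                       # the database encryption key
ciphertext = Fernet(key).encrypt(b"customer record")

# Normal operation: the key holder can recover the plaintext.
assert Fernet(key).decrypt(ciphertext) == b"customer record"

# Cryptographic erasure: destroy every copy of the key. The ciphertext may
# still sit on the provider's drives, but it is now computationally unreadable.
del key

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)   # any other key fails
except InvalidToken:
    print("data is unrecoverable without the destroyed key")
```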
Taking out an insurance policy is an example of which of the following risk treatment strategies?
A. Transference
B. Avoidance
C. Mitigation
D. Acceptance
A. Transference
Explanation:
Risk treatment refers to the ways that an organization manages potential risks. There are a few different risk treatment strategies, including:
- Avoidance: The organization chooses not to engage in the risky activity. This creates potential opportunity costs for the organization.
- Mitigation: The organization puts controls in place that reduce or eliminate the likelihood or impact of the risk. Any risk that is left over after the security controls are in place is called residual risk.
- Transference: The organization transfers the risk to a third party. Insurance is a prime example of risk transference.
- Acceptance: The organization takes no action to manage the risk. Risk acceptance depends on the organization's risk appetite, which is the amount of risk that it is willing to accept.
Which of the following data security techniques is commonly used to ensure data integrity?
A. Encryption
B. Tokenization
C. Masking
D. Hashing
D. Hashing
Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:
- Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
- Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4 is the US government standard for secure hash algorithms. (A short sketch follows this list.)
- Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
- Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
- Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don't require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
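As a rough illustration of three of these techniques, here is a standard-library Python sketch; the sample record, card number, and variable names are invented:

```python
import hashlib
import secrets

# Hashing (integrity): the same input always yields the same digest.
record = b"balance=1000"
digest = hashlib.sha256(record).hexdigest()
assert hashlib.sha256(b"balance=1000").hexdigest() == digest   # unchanged
assert hashlib.sha256(b"balance=9999").hexdigest() != digest   # tampered

# Masking: hide all but the last four digits of a card number.
card = "4111111111111111"
masked = "*" * (len(card) - 4) + card[-4:]       # '************1111'

# Tokenization: substitute a random token; the mapping table lives in a
# secure location so the original can be looked up when needed.
vault = {}
token = secrets.token_hex(8)
vault[token] = card
assert vault[token] == card
print(digest[:16], masked, token)
```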
Which of the following operational controls and standards defines how the organization will respond to a business-disrupting event?
A. Information Security Management
B. Availability Management
C. Service Level Management
D. Continuity Management
D. Continuity Management
Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
- Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
- Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
- Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
- Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization's security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state of processes and their potential for improvement.
- Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
- Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
- Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manage the logistics of the release (scheduling, post-release testing, etc.).
- Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
- Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
- Service Level Management: Service level management deals with IT's ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
- Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
- Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users could use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
Jonas is designing the encryption requirements for his team's new web application. He and his team are currently deciding which encryption protocol to use. Which protocol would you recommend?
A. Transport Layer Security (TLS)
B. Secure Shell (SSH)
C. Advanced Encryption Standard (AES)
D. Internet Protocol Security (IPSec)
A. Transport Layer Security (TLS)
Explanation:
Transport Layer Security (TLS) is a protocol that was designed for web applications. It is most commonly used to encrypt Hypertext Transfer Protocol (HTTP) traffic.
Secure Shell (SSH) is most commonly used to encrypt administrator and operator traffic as they configure and manage network-attached devices.
AES is an encryption algorithm that is commonly used in TLS, SSH, and IPSec. The key word here that makes this the wrong answer is algorithm. The question is asking for a protocol.
IPSec is commonly used to connect sites to each other, for example, a router on-prem to a router in the cloud.
TLS, SSH, and IPSec can all be used in other places; however, the only clue in the question is "web application."
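As a small illustration, the Python standard library can show TLS protecting HTTP; the host name below is a placeholder, and the exact version and cipher printed depend on the server:

```python
import socket
import ssl

context = ssl.create_default_context()   # validates the server certificate

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print("protocol:", tls.version())   # e.g. 'TLSv1.3'
        print("cipher:", tls.cipher()[0])   # AES commonly appears here
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(120).decode(errors="replace"))
```

Note how the protocol (TLS) and the algorithm negotiated inside it (often AES) are separate things, which is the distinction the question tests.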
Dion is working with the operation team to deploy security tools within the cloud. They are looking for something that could detect, identify, isolate, and analyze an attack by distracting them. What would you recommend?
A. Honeypot
B. Network Security Group (NSG)
C. Firewall
D. Intrusion Detection System (IDS)
A. Honeypot
Explanation:
A honeypot consists of a computer, data, or a network site that appears to be part of a network but is actually isolated and monitored and that seems to contain information or a resource of value to attackers.
What makes a honeypot a better answer than an IDS is the final part of the question: "by distracting them." An IDS could detect and identify the attack, but the bad actor would not know it was there, so it would not distract them. An IDS is a tool that only monitors traffic.
A firewall might distract the bad actor but not in the same sense as the question indicates. The bad actor might take some time to explore the firewall, but a firewall is a real device. It is not advisable to add a firewall with the intention of distracting the bad actor, unless it was part of a honeypot.
An NSG is effectively a firewalled group in the cloud. So the statement above about firewalls applies the same to the NSG.
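To make the idea concrete, here is a toy Python listener in the spirit of a honeypot; the port and banner are arbitrary, and a real honeypot is an isolated, monitored decoy system rather than a dozen lines of code:

```python
import socket
from datetime import datetime, timezone

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))        # pose as a misconfigured SSH service
    srv.listen()
    conn, addr = srv.accept()          # wait for an attacker to take the bait
    with conn:
        conn.sendall(b"SSH-2.0-OpenSSH_7.2\r\n")   # enticing fake banner
        data = conn.recv(1024)
        # The point of the decoy: observe, identify, and analyze the attack
        # while the bad actor wastes time on a system with no real value.
        print(datetime.now(timezone.utc).isoformat(), addr, data)
```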
Which of the following data and media sanitization methods is MOST effective in the cloud?
A. Physical destruction
B. Cryptographic erasure
C. Overwriting
D. Degaussing
B. Cryptographic erasure
Explanation:
When disposing of potentially sensitive data, organizations can use a few different data and media sanitization techniques, including:
- Overwriting: Overwriting involves writing random data or all 0's or all 1's over sensitive data. This is less effective in the cloud because the customer cannot guarantee access to the specific regions of storage on the underlying server.
- Cryptographic Erasure: Cryptographic erasure involves destroying the encryption keys used to protect sensitive data. This can easily be accomplished in the cloud by deleting keys from the key management system (KMS), as sketched after the next paragraph.
Physical destruction of media and degaussing are not options in the cloud because the customer lacks access to and control over the physical media used to store data.
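For example, in AWS this could be a single call, assuming boto3 and a customer-managed KMS key; the key ID below is a placeholder:

```python
import boto3

kms = boto3.client("kms")

# Schedule the key for deletion; once the window elapses, everything
# encrypted under it is effectively sanitized.
kms.schedule_key_deletion(
    KeyId="replace-with-key-id",   # placeholder, not a real key
    PendingWindowInDays=7,         # grace period before permanent deletion
)
```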
Willa is working for an insurance company as an information security manager. They have recently started using a Platform as a Service (PaaS) implementation for their database that contains their current customers’ personal data. She has created a new privacy policy that will appear on their website to explain their practices to their customers.
What principle of the Privacy Management Framework (PMF) [formerly the Generally Accepted Privacy Principles (GAPP)] is she addressing?
A. Agreement, notice, and communication
B. Disclosure to third parties
C. Monitoring and enforcement
D. Use, retention, and disposal
A. Agreement, notice, and communication
Explanation:
The PMF was developed by the American Institute of Certified Public Accountants and the Canadian Institute of Chartered Accountants. It includes the nine key privacy principles listed below:
1. Management
2. Agreement, notice, and communication
3. Collection and creation
4. Use, retention, and disposal
5. Access
6. Disclosure to third parties
7. Security for privacy
8. Data integrity and quality
9. Monitoring and enforcement
Agreement, notice, and communication is what Willa is addressing by creating the privacy policy and adding it to the website for customers to view. The privacy policy should state the company's practices regarding use, retention, disposal, disclosure to third parties, monitoring, and enforcement.
The use, retention, and disposal elements should be clear as to what the business will be using the data for, how long they will be storing that data, and when and how they dispose of that data.
Disclosure to third parties involves selling the data or making it available to business partners.
Monitoring involves logging and reviewing the logs of people who access the data and what it is used for. Enforcement includes the proper removal of the data when the retention period is reached.
Ellie has been working with her team of information security professionals at a major financial institution on their data retention plan. They are required to retain customer data for seven years after an account has been closed. What phase of the data lifecycle are they addressing?
A. Use phase
B. Share phase
C. Archive phase
D. Destroy phase
C. Archive phase
Explanation:
The archive phase is when data is removed from the system and moved to long-term storage. In many cases, archived data is stored offsite for disaster recovery purposes.
The destroy phase would occur at the end of the seven years. Data should be properly destroyed at that time.
If data is exchanged with someone else, that would be the share phase.
If a user is looking at the data at any point, that is considered the use phase.
From a legal perspective, what is the MAIN factor that differentiates a cloud environment from a traditional data center?
A. Multitenancy
B. Self-service
C. Repudiation
D. Rapid elasticity
A. Multitenancy
Explanation:
Multitenancy is the main factor, from a legal perspective, which differentiates a cloud environment from a traditional data center. Multitenancy is a concept in cloud computing in which multiple cloud customers may be housed in the same cloud environment and share the same resources. Because of this, the cloud provider has legal obligations to all cloud customers housed on its hardware. If a server is ever seized from a cloud provider by law enforcement, it will likely contain assets from many different customers.
Rapid elasticity allows the system to utilize more resources as needed. That includes CPU, memory, network speed, etc.
Repudiation is the ability for a user to deny that they did something.
Self-service refers to the web portal that customers can use to set up and manage virtual machines of many different types.
The information security manager is working with the cloud deployment team as they prepare to move their data center to the cloud. An important part of their plan is how they are going to get out of the cloud. They would like to reduce the risk of vendor lock-in. What cloud shared consideration should the administrator be looking for?
A. Portability
B. Availability
C. Reversibility
D. Interoperability
C. Reversibility
Explanation:
Reversibility is defined in ISO/IEC 17788 as the “process for cloud service customers to retrieve their cloud service customer data and application artifacts and for the cloud service provider to delete all cloud service customer data as well as contractually specified cloud service derived data after an agreed period.” Based on this definition, reversibility is the best fit for this scenario.
Interoperability is defined in ISO/IEC 17788 as the “ability of two or more systems or applications to exchange information and to mutually use the information that has been exchanged.” That is not the correct answer because they are planning on how to get out.
Portability is defined in ISO/IEC 17788 as the “ability to easily transfer data from one system to another without being required to re-enter the data.” This is not the correct answer because they are planning on how to get out of the cloud.
Availability is defined in ISO/IEC 17788 as the “property of being accessible and usable upon demand by an authorized entity.”
Which of the following emerging technologies REDUCES the amount of computation performed on cloud servers?
A. Blockchain
B. Internet of Things
C. Edge Computing
D. Artificial Intelligence
C. Edge Computing
Explanation:
Cloud computing is closely related to many emerging technologies. Some examples include:
- Machine Learning and Artificial Intelligence (ML/AI): Machine learning is a subset of AI and includes algorithms that are designed to learn from data and build models to identify trends, perform classifications, and carry out other tasks. Cloud computing is linked to the rise of ML/AI because it provides the computing power needed to train the models used by ML/AI and operate these technologies at scale.
- Blockchain: Blockchain technology creates an immutable digital ledger in a decentralized fashion. It is used to support cryptocurrencies, track ownership of assets, and implement various other functions without relying on a centralized authority or single point of failure. Cloud computing is related to blockchain because many of the nodes used to maintain and operate blockchain networks run on cloud computing platforms.
- Internet of Things (IoT): IoT systems include smart devices that can perform data collection or interact with their environments. These devices often have poor security and rely on cloud-based servers to process collected data and issue commands back to the IoT systems (which have limited computational power, etc.).
- Edge and Fog Computing: Edge and fog computing move computations from centralized servers to devices at the network edge, enabling faster responses and less usage of bandwidth and computational power by cloud servers. Edge computing performs computing on IoT devices, while fog computing uses gateways at the edge to collect data from these devices and perform computation there.
Which of the following types of testing verifies that a module fits properly into the system as a whole?
A. Usability Testing
B. Integration Testing
C. Regression Testing
D. Unit Testing
B. Integration Testing
Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:
- Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
- Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
- Usability Testing: Usability testing verifies that the software meets users' needs and provides a good user experience.
- Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven't introduced bugs or broken functionality.
Non-functional testing tests the quality of the software and verifies that it provides necessary functionality not explicitly listed in requirements. Load and stress testing or verifying that sensitive data is properly secured and encrypted are examples of non-functional testing.
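To make the unit/integration distinction concrete, here is a minimal sketch using Python's standard unittest module; the two functions under test are invented for illustration:

```python
import unittest

def parse_amount(text: str) -> int:
    return int(text.strip())

def apply_discount(amount: int) -> int:
    return amount - amount // 10          # 10% off

class UnitTests(unittest.TestCase):
    def test_parse_amount_alone(self):
        # Unit test: one component verified in isolation.
        self.assertEqual(parse_amount(" 100 "), 100)

class IntegrationTests(unittest.TestCase):
    def test_parse_then_discount(self):
        # Integration test: the modules fit together into the whole.
        self.assertEqual(apply_discount(parse_amount("100")), 90)

if __name__ == "__main__":
    unittest.main()
```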
A corporation is working with their lawyers to ensure that they have accounted for the laws that they must be in compliance with. The corporation is located in Brazil, but a lot of their customers are in the European Union. When collecting and storing personal data regarding their customers, which law must they be in compliance with?
A. Health Information Portability and Accountability Act (HIPAA)
B. General Data Protection Regulation (GDPR)
C. Payment Card Industry Data Security Standard (PCI-DSS)
D. Asia Pacific Economic Cooperation (APEC)
B. General Data Protection Regulation (GDPR)
Explanation:
The General Data Protection Regulation (GDPR) is the European Union's (EU) law demanding that the personal data of natural persons within the EU be protected when collected and stored by corporations. That includes any corporation around the planet.
HIPAA is a United States of America (USA) law that requires that Protected Health Information (PHI) is protected.
APEC is an agreement among 21 member economies around the Pacific Ocean that is designed to promote free trade.
PCI-DSS is the standard from payment card providers that requires a certain level of information security present around systems that hold and process credit card data.
Dana is working for a cloud provider responsible for architecting solutions for their customer’s Platform as a Service (PaaS) server-based options. As she and her team develop new server deployment options, they want to ensure that the virtual machine files are configured with the best possible options to satisfy their customers’ needs.
These configurations should be documented in a:
A. Baseline procedures
B. Security procedures
C. Security baseline
D. Security policy
C. Security baseline
Explanation:
Security baselines contain the configuration information for systems, both physical and virtual. They should reflect the best practices that best fit the specific systems.
A security policy could be either a corporate level policy or the configuration within a firewall, which is commonly referred to as policies. Either way, it does not match the needs of the question, which is the configuration of a server.
Security procedures are the step-by-step documented procedure to complete a process or a configuration or something else of that sort.
Baseline procedures is not the best answer here. A baseline is the configuration. The procedure is how you do something. The term baseline procedure is not common security language.
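As an illustration of how a documented baseline can be used, here is a short Python sketch; the settings and values are invented, not from any real standard:

```python
# The documented security baseline: the approved configuration for a
# class of virtual machines.
BASELINE = {
    "ssh_root_login": False,
    "disk_encryption": True,
    "min_tls_version": "1.2",
}

def audit(config: dict) -> list:
    """Return the settings that deviate from the documented baseline."""
    return [k for k, v in BASELINE.items() if config.get(k) != v]

vm = {"ssh_root_login": True, "disk_encryption": True, "min_tls_version": "1.2"}
print(audit(vm))   # ['ssh_root_login'] -> this VM drifted from the baseline
```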
Kody has been working with technicians in the Security Operation Center (SOC) to resolve an issue that has just occurred. Three of the cloud providers’ biggest customers have experienced a Denial of Service (DoS). It appears that there is an issue with the configuration of the Dynamic Optimization (DO) functionality. Kody is worried that this could occur again in the future, so she wants to uncover the root cause in their investigation.
What would this process be called?
A. Change management
B. Incident management
C. Problem management
D. Release and deployment management
C. Problem management
Explanation:
The focus of problem management is to identify and analyze potential issues in an organization to determine the root cause of that issue. Problem management is responsible for implementing processes to prevent these issues from occurring in the future.
Incident management is the initial response to the denial of service. This is what Kody was initially working on with the technicians. This is not the right answer because the question moves on to uncovering the root cause.
Release and deployment management is the process of releasing new hardware, software, or functions into the production environment.
Change management is the process of controlling any changes to the production environment. That definition may seem too brief to capture everything change management does, but it is the description within ITIL, and it covers many different scenarios.
Which of the following types of data is the HARDEST to perform data discovery on?
A. Unstructured
B. Structured
C. Loosely structured
D. Semi-structured
A. Unstructured
Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:
- Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data, where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
- Unstructured: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
- Semi-Structured: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.
Loosely structured is not a common classification for data.
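A rough Python sketch of why the discovery effort differs across the three types; the sample records and the simplistic 16-digit pattern are illustrative only:

```python
import json
import re

CARD = re.compile(r"\b\d{16}\b")   # naive pattern for a card number

# Structured: the column name alone tells the tool where to look.
row = {"name": "Ann", "card_number": "4111111111111111"}
hits = [v for k, v in row.items() if k == "card_number"]

# Semi-structured: tags narrow the search to the relevant subtree.
doc = json.loads('{"payment": {"card": "4111111111111111"}}')
hits += CARD.findall(json.dumps(doc["payment"]))

# Unstructured: the tool must recognize the data entirely on its own.
email = "Hi, please charge 4111111111111111 for the March invoice."
hits += CARD.findall(email)

print(hits)   # the same value found three ways, with increasing effort
```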
Certain data will require more advanced security controls in addition to traditional security controls. This may include extra access controls lists placed on the data or having additional permission requirements to access the data, especially when that data is the Intellectual Property (IP) of the organization.
This extension of normal data protection is known as which of the following?
A. Data rights management (DRM)
B. Threat and vulnerability management
C. Identity and Access Management (IAM)
D. Infrastructure and access management
A. Data rights management (DRM)
Explanation:
Data Rights Management (DRM), also known as Information Rights Management (IRM), is an extension of normal data protection. Normal controls would include encryption, hashes, and access rights. In DRM, advanced security controls, such as extra ACLs, and constraints, such as enabling or disabling printing, are placed onto the data.
Infrastructure and access management would be access to the data center. Infrastructure is the actual physical routers, switches, servers, and so on.
IAM involves controlling access by all the users to all the servers, applications, and so on. IAM also includes access, for example, by contractors and vendors. It is a very big topic in comparison to the question, which is looking for protection of the data.
Threat and vulnerability management includes risk assessment, analysis, and mitigations. It is not about controlling access, although that would be a topic of concern at some point in the discussion on threat management.
Load balancing and redundancy are solutions designed to address which of the following?
A. Interoperability
B. Resiliency
C. Reversibility
D. Availability
B. Resiliency
Explanation:
Some important cloud considerations have to do with its effects on operations. These include:
- Availability: The data and applications that an organization hosts in the cloud must be available to provide value to the company. Contracts with cloud providers commonly include service level agreements (SLAs) mandating that the service is available a certain percentage of the time.
- Resiliency: Resiliency refers to the ability of a system to weather disruptions. Resiliency in the cloud may include the use of redundancy and load balancing to avoid single points of failure, as sketched after this list.
- Performance: Cloud contracts also often include SLAs regarding performance. This ensures that the cloud-based services can maintain an acceptable level of operations even under heavy load.
- Maintenance and Versioning: Maintenance and versioning help to manage the process of changing software and other systems. Updates should only be made via clear, well-defined processes.
- Reversibility: Reversibility refers to the ability to recover from a change that went wrong, for example, how difficult it is to restore original operations after a transition to an outsourced service.
- Portability: Different cloud providers have different infrastructures and may do things in different ways. If an organization's cloud environment relies too much on a provider's unique implementation or the provider doesn't offer easy export, the company may be stuck with that provider due to vendor lock-in.
- Interoperability: With multi-cloud environments, an organization may have data and services hosted in different providers' environments. In this case, it is important to ensure that these platforms and the applications hosted on them are capable of interoperating.
- Outsourcing: Using cloud environments requires handing over control of a portion of an organization's infrastructure to a third party, which introduces operational and security concerns.
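A minimal Python sketch of the pairing, assuming an invented pool of backends: redundancy supplies multiple copies of the service, and the load balancer routes around the failed one:

```python
from itertools import cycle

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # redundancy: three copies
healthy = {"10.0.0.1": True, "10.0.0.2": False, "10.0.0.3": True}

rotation = cycle(backends)

def pick_backend() -> str:
    """Round-robin over the pool, skipping unhealthy nodes."""
    for _ in range(len(backends)):
        host = next(rotation)
        if healthy[host]:
            return host
    raise RuntimeError("no healthy backends: availability lost")

print([pick_backend() for _ in range(4)])   # 10.0.0.2 is never chosen
```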
Which of the following regulations deals with law enforcement’s access to data that may be located in data centers in other jurisdictions?
A. GLBA
B. US CLOUD Act
C. SCA
D. SOX
B. US CLOUD Act
Explanation:
A company may be subject to various regulations that mandate certain controls be in place to protect customers’ sensitive data or ensure regulatory transparency. Some examples of regulations that can affect cloud infrastructure include:
- General Data Protection Regulation (GDPR): GDPR is a regulation protecting the personal data of EU citizens. It defines required security controls for their data, export controls, and rights for data subjects.
- US CLOUD Act: The US CLOUD Act creates a framework for handling cross-border data requests from cloud providers. US law enforcement and its counterparts in countries with similar laws can request data hosted in a data center in a different country.
- Privacy Shield: Privacy Shield is a program designed to bring the US into partial compliance with GDPR and allow US companies to transfer EU citizen data outside of the US. The main reason that the US is not GDPR compliant is that federal agencies have unrestricted access to non-citizens' data.
- Gramm-Leach-Bliley Act (GLBA): GLBA requires financial services organizations to disclose to customers how they use those customers' personal data.
- Stored Communications Act of 1986 (SCA): SCA provides privacy protection for the electronic communications (email, etc.) of US citizens.
- Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) Act: HIPAA and HITECH are US regulations that protect the protected health information (PHI) that patients give to medical providers.
- Payment Card Industry Data Security Standard (PCI DSS): PCI DSS is a standard defined by major payment card brands to secure payment data and protect against fraud.
- Sarbanes-Oxley (SOX): SOX is a US regulation that applies to publicly-traded companies and requires annual disclosures to protect investors.
- North American Electric Reliability Corporation/Critical Infrastructure Protection (NERC/CIP): NERC/CIP are regulations designed to protect the power grid in the US and Canada by ensuring that power providers have certain controls in place.
Which of the following threat models focuses on identifying intersections between an organization’s attack surface and an attacker’s capabilities?
A. ATASM
B. STRIDE
C. DREAD
D. PASTA
A. ATASM
Explanation:
Several different threat models can be used in the cloud. Common examples include:
- STRIDE: STRIDE was developed by Microsoft and identifies threats based on their effects/attributes. Its acronym stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
- DREAD: DREAD was also created by Microsoft but is no longer in common use. It classifies risk based on Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.
- ATASM: ATASM stands for Architecture, Threats, Attack Surfaces, and Mitigations and was developed by Brook Schoenfield. It focuses on understanding an organization's attack surfaces and potential threats and how these two would intersect.
- PASTA: PASTA is the Process for Attack Simulation and Threat Analysis. It is a seven-stage framework that examines infrastructure and applications from an attacker's perspective.
Which of the following measures the LONGEST amount of time that a system can be down before causing significant harm to the organization?
A. RSL
B. MTD
C. RPO
D. RTO
B. MTD
Explanation:
A business continuity and disaster recovery (BC/DR) plan uses various business requirements and metrics, including:
- Recovery Time Objective (RTO): The RTO is the amount of time that an organization is willing to have a particular system be down. This should be less than the maximum tolerable downtime (MTD), which is the maximum amount of time that a system can be down before causing significant harm to the business.
- Recovery Point Objective (RPO): The RPO measures the maximum amount of data that the company is willing to lose due to an event. Typically, this is based on the age of the last backup when the system is restored to normal operations.
- Recovery Service Level (RSL): The RSL measures the percentage of compute resources needed to keep production environments running while shutting down development, testing, etc.
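A short worked example of how these metrics relate; the hour values are invented:

```python
MTD_HOURS = 24   # longest outage before significant harm to the business
RTO_HOURS = 8    # target time to restore the system
RPO_HOURS = 4    # maximum acceptable data loss

BACKUP_INTERVAL_HOURS = 6   # current backup schedule

assert RTO_HOURS < MTD_HOURS, "the RTO must be less than the MTD"

# Worst case, a failure just before the next backup loses a full interval.
if BACKUP_INTERVAL_HOURS > RPO_HOURS:
    print("backing up every 6h cannot meet a 4h RPO; back up at least every 4h")
```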
Rise is working for a corporation as their cloud architect. He is designing how the Platform as a Service (PaaS) deployment will be used to store sensitive data for one particular application. He is designing a trust zone for that data to be handled inside of. Which of the following BEST defines a trust zone?
A. Sets of rules that define which employees have access to which resources
B. The ability to share pooled resources among different cloud customers
C. Virtual tunnels that connect resources at different locations
D. Physical, logical, or virtual boundaries around network resources
D. Physical, logical, or virtual boundaries around network resources
Explanation:
A cloud-based trust zone is a secure environment created within a cloud infrastructure where only authorized users or systems are allowed to access resources and data. This trust zone is typically created by configuring security measures such as firewalls, access controls, and encryption methods to ensure that only trusted sources can gain access to the data and applications within the zone.
The goal of a cloud-based trust zone is to create a secure and reliable environment for sensitive data or critical applications by isolating them from potential threats and unauthorized access. This helps to ensure the confidentiality, integrity, and availability of the resources and data within the trust zone.
A virtual tunnel connecting to another location may be something that needs to be added, but it is not part of describing the zone itself.
Rules that define which employees have access to which resources are something that is needed by a business. This is Identity and Access Management (IAM). It should include information about the resources in a trust zone, but it does not define the actual zone.
The ability to share pooled resources is part of the definition of cloud. It is the opposite of a trust zone. Because resources are shared, many companies are very worried about using the cloud.
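To make the boundary idea concrete, here is a small Python sketch of NSG-style rules guarding a trust zone; the CIDR ranges and port are invented:

```python
import ipaddress

ZONE_RULES = [
    # (source CIDR, destination port, action)
    ("10.10.0.0/24", 5432, "allow"),   # app subnet may reach the database
    ("0.0.0.0/0",    5432, "deny"),    # everyone else stays outside the zone
]

def admitted(src_ip: str, port: int) -> bool:
    """First matching rule wins, mirroring typical firewall evaluation."""
    for cidr, rule_port, action in ZONE_RULES:
        if port == rule_port and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr):
            return action == "allow"
    return False   # default deny at the zone boundary

print(admitted("10.10.0.7", 5432))    # True: inside the trust zone
print(admitted("203.0.113.9", 5432))  # False: blocked at the boundary
```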
A cloud information security manager is building the policies and associated documents for handling cloud assets. She is currently detailing how assets will be identified and organized so that access can be controlled, alerts can be created, and billing can be tracked. What tool allows for this?
A. Value
B. Key
C. Identifier
D. Tags
D. Tags
Explanation:
Tags are pervasive in cloud deployments. It is crucial that the corporation builds a plan for how to tag assets; if tagging is not done consistently, it is not helpful. A tag is made up of two pieces: a key (or name) and a value. Key here is not a cryptographic key for encryption and decryption; it is simply the name half of the pair.
You can think of the tag as a type of identifier, but the tool needed to manage assets is called a tag.
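As a sketch of what consistent tagging looks like in practice, assuming AWS-style tags applied via boto3; the required keys, tag values, and instance ID are placeholders:

```python
import boto3

REQUIRED_KEYS = {"Owner", "CostCenter", "Environment"}   # the corporate plan

tags = [
    {"Key": "Owner", "Value": "infosec"},
    {"Key": "CostCenter", "Value": "CC-1234"},
    {"Key": "Environment", "Value": "prod"},
]
# Enforce the plan before tagging: inconsistent tags are not helpful.
assert REQUIRED_KEYS <= {t["Key"] for t in tags}

ec2 = boto3.client("ec2")
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],   # placeholder instance ID
    Tags=tags,
)
```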