Pocket Prep 15 Flashcards
Wyatt has just been made aware of a problem that they need to fix. They have a server that will successfully accept a Uniform Resource Identifier (URI) of “file:///etc/passwd” on one of their websites. What is this an example of?
A. Injection
B. Insecure design
C. Cryptographic failure
D. Security Misconfiguration
D. Security Misconfiguration
Explanation:
This is an example of CWE-611: Improper Restriction of XML External Entity Reference. The OWASP Top 10 merged XML External Entities (XXE) into Security Misconfiguration in 2021.
Injection would be something like adding a Structured Query Language (SQL) command to the URL. This URI instead requests a specific directory and file on the server, which is not the same.
Insecure design could be the problem here, but that is a bit removed from the actual flaw. Insecure design is all about doing threat modeling and following secure design patterns and principles.
Cryptographic failure was known as Sensitive Data Exposure on the previous OWASP list. Exposure is the symptom; the cause is a failure to encrypt the data, or to encrypt it properly.
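To make the XXE pattern concrete, here is a minimal Python sketch of what such a payload looks like, along with a naive screen for it. The XML content and function name are illustrative assumptions, not a real defense; the proper mitigation is to disable DTD and external entity processing in the XML parser.

```python
# Hypothetical illustration of the CWE-611 pattern: an XML document that
# declares an external entity pointing at a local file. Names are made up
# for demonstration purposes.
MALICIOUS_XML = """<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<data>&xxe;</data>"""

def has_external_entity(xml_text: str) -> bool:
    """Naive screen for external entity declarations (SYSTEM/PUBLIC).

    A real defense is to disable DTD/entity processing in the parser,
    not to pattern-match, but this shows what the payload looks like.
    """
    upper = xml_text.upper()
    return "<!ENTITY" in upper and ("SYSTEM" in upper or "PUBLIC" in upper)

print(has_external_entity(MALICIOUS_XML))         # True
print(has_external_entity("<data>hello</data>"))  # False
```

In Python, for example, the defusedxml library is commonly recommended when parsing untrusted XML, since it disables this kind of entity expansion by default.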
Antonia has recently been hired by a cancer treatment facility. One of the first training programs that she is required to go through at the office is related to the protection of individually identifiable health information. Which law is this related to and which country does it apply to?
A. Health Insurance Portability and Accountability Act (HIPAA), Canada
B. Gramm-Leach-Bliley Act (GLBA), USA
C. Health Insurance Portability and Accountability Act (HIPAA), USA
D. General Data Protection Regulation (GDPR), Germany
C. Health Insurance Portability and Accountability Act (HIPAA), USA
Explanation:
The Health Insurance Portability and Accountability Act (HIPAA) is concerned with the security controls and confidentiality of Protected Health Information (PHI). It’s vital that anyone working in any healthcare facility be aware of HIPAA regulations.
The Gramm-Leach-Bliley Act, officially named the Financial Modernization Act of 1999, focuses on PII as it pertains to financial institutions, such as banks.
GDPR is an EU-specific regulation that encompasses all organizations in all different industries.
The Privacy Act of 1988 is an Australian law that requires the protection of personal data.
Defining clear, measurable, and usable metrics is a core component of which of the following operational controls and standards?
A. Continuity Management
B. Change Management
C. Information Security Management
D. Continual Service Improvement Management
D. Continual Service Improvement Management
Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
- Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
- Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
- Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
- Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and progress.
- Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
- Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
- Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manage the logistics of the release (scheduling, post-release testing, etc.).
- Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
- Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
- Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
- Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
- Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider has fewer resources than all of its users could consume at once and relies on them not using all of the resources simultaneously. Often, capacity guarantees are mandated in SLAs.
Holand is working with the US government and is in charge of securing some of the information systems within her agency. What regulation requires her agency to protect the government systems and the data that they hold?
A. Federal Information Security Management Act (FISMA)
B. Health Insurance Portability and Accountability Act (HIPAA)
C. Sarbanes-Oxley Act (SOX)
D. Privacy Act of 1988
A. Federal Information Security Management Act (FISMA)
Explanation:
FISMA was enacted to provide a comprehensive framework for securing federal government information and systems. Its primary objectives are to enhance the security of federal information systems, promote a risk-based approach to information security management, and establish a consistent level of security across federal agencies.
The Sarbanes-Oxley Act is a US federal law enacted in 2002 in response to accounting scandals at several major corporations. It aims to protect investors and improve the accuracy and reliability of corporate financial disclosures.
HIPAA is a United States federal law enacted in 1996 that establishes privacy and security standards for protecting individuals’ health information. HIPAA applies to various entities, including healthcare providers, health plans, and healthcare clearinghouses as well as their business associates.
The Australian Privacy Act of 1988 is a federal law in Australia that governs the handling of personal information by Australian government agencies and certain private sector organizations. The Act aims to protect the privacy of individuals by regulating the collection, use, disclosure, and storage of their personal information.
Eila works for a large government contractor. As their lead information security professional working on the business case for their potential move to the cloud, she knows that it is critical to define and defend her reasons for moving to the cloud. Of the following statements, which is the MOST accurate?
A. Cloud platforms offer increased scalability and performance
B. Traditional data centers and cloud environments have the exact same risks
C. There are no security risks associated with moving to a cloud environment
D. Cloud platforms are always less expensive than on-prem solutions
A. Cloud platforms offer increased scalability and performance
Explanation:
Cloud environments are attractive to organizations because they offer increased scalability and performance.
While it’s possible that moving to the cloud can be less expensive than traditional data centers, that is not always the case. Sometimes cloud platforms can come with hidden costs that weren’t initially expected. Cloud platforms come with their own set of security risks and, while some are the same as the risks you’d see in a traditional data center, some are different as well.
Which of the following characteristics of cloud computing enables a cloud provider to operate cost-effectively by distributing costs across multiple cloud customers?
A. On-Demand Self-Service
B. Metered Service
C. Resource Pooling
D. Rapid Elasticity and Scalability
C. Resource Pooling
Explanation:
The six common characteristics of cloud computing include:
- Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
- On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
- Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
- Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
- Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
- Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure.
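As a concrete illustration of the measured or metered service characteristic, here is a small Python sketch of per-resource billing. The rates and resource names are invented for the example; real providers use far more granular meters.

```python
# Sketch of metered billing: the provider records usage and bills per unit.
# Rates and resource names are hypothetical, for illustration only.
RATES = {"vcpu_hours": 0.04, "gb_storage_month": 0.02, "gb_egress": 0.09}

def monthly_bill(usage: dict) -> float:
    """Sum usage * rate for each metered resource, rounded to cents."""
    return round(sum(RATES[r] * qty for r, qty in usage.items()), 2)

bill = monthly_bill({"vcpu_hours": 720, "gb_storage_month": 100, "gb_egress": 50})
print(bill)  # 720*0.04 + 100*0.02 + 50*0.09 = 28.8 + 2.0 + 4.5 = 35.3
```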
The government’s Unclassified/Confidential/Secret/Top Secret system classifies data based on which of the following?
A. Ownership
B. Type
C. Sensitivity
D. Criticality
C. Sensitivity
Explanation:
Data owners are responsible for data classification, and data is classified based on organizational policies. Some of the criteria commonly used for data classification include:
- Type: Specifies the type of data, including whether it has personally identifiable information (PII), intellectual property (IP), or other sensitive data protected by corporate policy or various laws.
- Sensitivity: Sensitivity refers to the potential results if data is disclosed to an unauthorized party. The Unclassified, Confidential, Secret, and Top Secret labels used by the U.S. government are an example of sensitivity-based classifications.
- Ownership: Identifies who owns the data if the data is shared across multiple organizations, departments, etc.
- Jurisdiction: The location where data is collected, processed, or stored may impact which regulations apply to it. For example, GDPR protects the data of EU citizens.
- Criticality: Criticality refers to how important data is to an organization’s operations.
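The sensitivity-based ordering described above can be sketched in a few lines of Python. The simplified "read at or below your clearance" rule is an illustrative assumption, not the full government access-control model.

```python
# Sketch of sensitivity-based classification: labels are ordered, and a
# subject may read data only at or below their clearance level (a
# simplified "no read up" rule). Ordering follows the U.S. government
# labels mentioned in the text.
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]
RANK = {label: i for i, label in enumerate(LEVELS)}

def can_read(clearance: str, data_label: str) -> bool:
    """True if the clearance dominates (is at or above) the data label."""
    return RANK[clearance] >= RANK[data_label]

print(can_read("Secret", "Confidential"))     # True
print(can_read("Confidential", "Top Secret")) # False
```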
Which of the following roles is defined as the role that authorizes the processing of personal data according to the European Union (EU) General Data Protection Regulation (GDPR)?
A. Data owner
B. Data processor
C. Data custodian
D. Data controller
D. Data controller
Explanation:
Under the GDPR, the data controller is the party that, alone or jointly with others, determines the purposes and means of the processing of personal data; it is the controller who authorizes processing.
The data processor is the party that processes personal data solely on behalf of the controller, excluding the controller's own employees.
The data owner is defined by the Cloud Security Alliance in its Guidance 4.0 as the person responsible for a piece or set of data, including classifying that data. Governments have also used the term this way for decades.
The data custodian is in possession of the data and needs to follow the corporate policies regarding its handling.
Yair and his team are building a piece of software that will be deployed into their cloud environment. They have a variety of virtual machines from virtual servers to virtual desktops that they use throughout the business. The software that will be deployed needs to be able to run on multiple different operating systems. So, they need something that allows for portability of the application.
Which of the following technologies could they use?
A. Hypervisors
B. Virtual machines
C. Application virtualization
D. Orchestration
C. Application virtualization
Explanation:
Application virtualization is a technology that allows applications to run in isolated environments, separate from the underlying operating system and hardware. It enables the delivery of applications to end users without the need for traditional installation or compatibility issues. Instead of installing applications directly on individual machines, they are encapsulated into virtualized packages that can be executed on demand. Examples are Microsoft App-V, VMware ThinApp, and Citrix Virtual Apps.
Hypervisors are software or firmware components that enable the virtualization of physical computer hardware. They allow multiple Virtual Machines (VMs) to run on a single physical machine, effectively abstracting and managing the underlying hardware resources. VMs are built for a specific hypervisor and generally only run on that hypervisor, which makes them a poor fit when the question asks for portability across many operating systems.
Cloud orchestration is the automated management and coordination of multiple cloud resources and services to ensure efficient and optimized delivery of cloud-based applications and workflows. It involves the automation of various tasks, such as provisioning, configuration, deployment, scaling, and monitoring of cloud resources.
Padma is the information security manager involved with the DevOps teams. Their goal is to create an environment that allows the teams to control the roll out to production of the infrastructure. They are looking for a way that integrates software development techniques that include version control and continuous integration techniques.
What are they looking for?
A. Database as a Service (DBaaS)
B. Identity as a Service (IDaaS)
C. Immutable infrastructure
D. Infrastructure as Code (IaC)
D. Infrastructure as Code (IaC)
Explanation:
The use of Infrastructure as Code (IaC) allows the DevOps team to control the deployment of infrastructure, such as virtual servers, to production. Deployment to production needs to be carefully controlled, a lesson that has been understood for quite some time. If the infrastructure is defined and stored as files, it is possible to deploy defined and updated systems without worry. IaC integrates version control and continuous integration techniques.
Immutable infrastructure is the idea that infrastructure is not upgraded or changed in place. Once a VM is deployed, it stays as it is configured. If an upgrade is necessary, a new VM is built and deployed; if it works correctly, traffic can be redirected from the old VM to the new one. Mutable infrastructure is when the deployed VM can be changed, often with orchestration tools such as Chef or Puppet.
DBaaS is a Platform as a Service (PaaS) option from a cloud provider. It could be a Structured Query Language (SQL) database, a NoSQL database, etc.
IDaaS is a service to identify and authenticate users. Facebook (Meta), Google, and others provide this service using Security Assertion Markup Language (SAML) or other options.
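As a rough illustration of the IaC concept, the sketch below treats a server definition as data that can be validated before an automated pipeline deploys it. The schema and field names are invented for this example; real IaC tools such as Terraform or Ansible use their own formats.

```python
# Minimal sketch of the IaC idea: infrastructure is described as data in a
# version-controlled file, then validated before an automated pipeline
# deploys it. The schema and field names are hypothetical.
REQUIRED_FIELDS = {"name", "image", "cpu", "memory_gb"}

def validate_server_definition(definition: dict) -> list:
    """Return a list of validation errors (an empty list means deployable)."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - definition.keys())]
    if definition.get("cpu", 1) < 1:
        errors.append("cpu must be >= 1")
    return errors

good = {"name": "web-1", "image": "ubuntu-22.04", "cpu": 2, "memory_gb": 4}
bad = {"name": "web-2", "cpu": 0}
print(validate_server_definition(good))  # []
print(validate_server_definition(bad))
```

Because the definitions are plain files, they can be stored in version control and checked by a continuous integration pipeline before anything reaches production, which is the property the question highlights.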
Which of the Trust Services principles must be included in a Service Organization Controls (SOC) 2 audit?
A. Availability
B. Privacy
C. Security
D. Confidentiality
C. Security
Explanation:
The Trust Services Criteria from the American Institute of Certified Public Accountants (AICPA) for the Service Organization Controls (SOC) 2 audit report are made up of five key principles: Availability, Confidentiality, Processing Integrity, Privacy, and Security. Security is always required as part of a SOC 2 audit; the other four principles are optional.
Royce works for a Cloud Service Provider (CSP). She has been involved with the setup and configuration of the servers in the data center. The hypervisors they have installed allow for the virtualization of servers and desktops for the customer to purchase and use.
If the CSP is selling Infrastructure as a Service (IaaS), what is the breakdown of responsibility under the cloud shared security model?
A. The Cloud Service Provider (CSP) is responsible for the hypervisor and the Cloud Service Customer (CSC) is responsible for the Virtual Machines (VMs)
B. The Cloud Service Customer (CSC) is responsible for the hypervisor and the Cloud Service Provider (CSP) is responsible for the Virtual Machines (VMs)
C. The Cloud Service Customer (CSC) is responsible for the hypervisor and the Cloud Service Customer (CSC) is responsible for the Virtual Machines (VMs)
D. The Cloud Service Provider (CSP) is responsible for the hypervisor and the Cloud Service Provider (CSP) is responsible for the Virtual Machines (VMs)
A. The Cloud Service Provider (CSP) is responsible for the hypervisor and the Cloud Service Customer (CSC) is responsible for the Virtual Machines (VMs)
Explanation:
The shared security model does differ with each cloud provider. However, we can make the assumption that the CSP is responsible for the hypervisor. It is effectively the Operating System (OS) for the servers that Royce is installing.
In IaaS, the customer then buys and brings their OSs with them for the virtual machines. Since the OSs belong to the customer, it is their responsibility to care for them.
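The split described above can be sketched as a simple lookup table. The layer names and boundaries below are a generalization for illustration; the exact split varies by provider and by service model.

```python
# Sketch of a simplified IaaS shared-responsibility matrix. Layer names
# and boundaries are generalized assumptions, not any provider's official
# documentation.
IAAS_RESPONSIBILITY = {
    "physical data center": "CSP",
    "hypervisor": "CSP",
    "virtual machines / guest OS": "CSC",
    "applications": "CSC",
    "data": "CSC",
}

def responsible_party(layer: str) -> str:
    """Look up who is responsible for a given layer under IaaS."""
    return IAAS_RESPONSIBILITY[layer]

print(responsible_party("hypervisor"))                # CSP
print(responsible_party("virtual machines / guest OS"))  # CSC
```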
Which of the following tools looks for vulnerabilities in the source code of an application?
A. DAST
B. IAST
C. SCA
D. SAST
D. SAST
Explanation:
Static Application Security Testing (SAST): SAST tools inspect the source code of an application for vulnerable code patterns. SAST can be performed early in the software development lifecycle but can’t catch some vulnerabilities, such as those visible only at runtime.
Dynamic Application Security Testing (DAST): DAST bombards a running application with anomalous inputs or attempted exploits for known vulnerabilities. It has no knowledge of the application’s internals, so it can miss vulnerabilities. However, it is capable of detecting runtime vulnerabilities and configuration errors (unlike SAST).
Interactive Application Security Testing (IAST): IAST places an agent inside an application and monitors its internal state while it is running. This enables it to identify unknown vulnerabilities based on their effects on the application.
Software Composition Analysis (SCA): SCA is used to identify the third-party dependencies included in an application and may generate a software bill of materials (SBOM). This enables the developer to identify vulnerabilities that exist in this third-party code.
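To illustrate the SAST idea from the list above, here is a toy Python scanner that flags risky patterns in source text without executing it. Real SAST tools parse the code and model data flow; the two patterns here are illustrative assumptions only.

```python
import re

# Toy illustration of SAST: scan source text for risky code patterns
# without running it. Real SAST tools build a syntax tree and track data
# flow; this regex screen is only a sketch.
RISKY_PATTERNS = {
    "eval_use": re.compile(r"\beval\s*\("),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"]", re.IGNORECASE),
}

def scan_source(source: str) -> list:
    """Return the names of risky patterns found in the source text."""
    return [name for name, pattern in RISKY_PATTERNS.items() if pattern.search(source)]

sample = "password = 'hunter2'\nresult = eval(user_input)\n"
print(scan_source(sample))  # ['eval_use', 'hardcoded_password']
```

Note how the scan works on the text of the program, which is why it can run early in the lifecycle but cannot see runtime-only issues.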
Properly scaling network and computing resources is an important part of which of the following system and communication protections?
A. Cryptographic Key Establishment and Management
B. Security Function Isolation
C. Denial-of-Service Prevention
D. Boundary Protection
C. Denial-of-Service Prevention
Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations defines 51 security controls for systems and communication protection. Among these are:
- Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
- Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
- Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
- Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
- Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
- Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
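One common building block for the denial-of-service prevention control above is rate limiting. Below is a minimal token-bucket sketch in Python; the capacity and refill values are arbitrary examples, and production systems would apply this per client at the network edge.

```python
# Sketch of a token-bucket rate limiter, one building block for DoS
# mitigation: each request spends a token, and tokens refill over time.
# Capacity and refill rate are arbitrary illustrative values.
class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last_time = 0.0

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        elapsed = now - self.last_time
        self.last_time = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_second)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_second=1.0)
# Four requests in the same instant: only the first three pass.
print([bucket.allow(0.0) for _ in range(4)])  # [True, True, True, False]
# After two seconds, two tokens have refilled.
print(bucket.allow(2.0))  # True
```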
Which of the following is NOT an example of a “something you know” factor for MFA?
A. Password
B. PIN
C. Security Question
D. OTP
D. OTP
Explanation:
Multi-factor authentication requires a user to provide multiple authentication factors to gain access to their account. These factors must come from two or more of the following categories:
- Something You Know: Passwords, security questions, and PINs are examples of knowledge-based factors.
- Something You Have: These factors include hardware tokens, smart cards, or smartphones that can receive or generate a one-time password (OTP).
- Something You Are: Biometric factors include fingerprints, facial recognition, and similar technologies.
While these are the most common types of MFA factors, others can be used as well. For example, a “somewhere you are” factor could use an IP address or geolocation to determine the likelihood that a request is authentic.
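As an example of a "something you have" factor, the sketch below computes a time-based one-time password (TOTP, RFC 6238) using only the Python standard library, the same scheme authenticator apps implement. The secret shown is the RFC's published test key, not a real credential.

```python
import hmac
import hashlib
import struct

# Sketch of TOTP (RFC 6238), the OTP scheme behind most authenticator
# apps: HMAC over a time-step counter, dynamically truncated to digits.
def totp(secret: bytes, unix_time: int, digits: int = 8, step: int = 30) -> str:
    counter = unix_time // step                      # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: at time 59, the 8-digit SHA-1 TOTP is 94287082.
print(totp(b"12345678901234567890", 59))  # 94287082
```

Because both the server and the device derive the code from a shared secret and the current time, possession of the enrolled device is what the factor actually proves.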
Padma works for a financial trading company and is in charge of revising their data retention policy. She knows that it is essential to control how long data is maintained. There are several laws that demand that data will not be in their control longer than it should be. Which phase of the data lifecycle requires the most attention over time?
A. Share
B. Use
C. Archive
D. Destroy
C. Archive
Explanation:
The data retention policy will have an effect on the data lifecycle’s archive phase. It is necessary to review the data that is in the archive phase to make sure that data is pulled out and destroyed when necessary.
The other phases do demand that data is protected appropriately. The share phase occurs when data is being shared with someone else. The use phase is when a user is utilizing data that was created previously.
The destroy phase occurs when data is pulled out of storage or archival and removed from existence within the business. Destroy is almost the right answer, but it is the archived data that needs ongoing review, and properly reviewing that data can take considerable time and energy.
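The archive-phase review described above can be sketched as a simple retention check. The seven-year period below is an arbitrary example, not a specific legal requirement, and the record identifiers are invented.

```python
from datetime import date, timedelta

# Sketch of an archive-phase retention review: find archived records whose
# retention period has expired so they can be destroyed. The 7-year period
# is an arbitrary illustrative value.
RETENTION = timedelta(days=7 * 365)

def records_due_for_destruction(archive: dict, today: date) -> list:
    """archive maps record id -> archival date; return ids past retention."""
    return sorted(rid for rid, archived in archive.items()
                  if today - archived > RETENTION)

archive = {"trade-001": date(2015, 3, 1), "trade-002": date(2024, 6, 1)}
print(records_due_for_destruction(archive, date(2025, 1, 1)))  # ['trade-001']
```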
Which of the following is a system/subsystem product certification?
A. Common Criteria
B. G-Cloud
C. PCI DSS
D. FedRAMP
A. Common Criteria
Explanation:
The Common Criteria is a system/subsystem product certification that shows the level of testing that a particular system or subsystem has undergone. Cloud service providers may have their environment as a whole verified against certain standards, including:
- ISO/IEC 27017 and 27018: The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) publishes various standards, including those describing security best practices. ISO 27017 and ISO 27018 describe how the information security management systems and related security controls described in ISO 27001 and 27002 should be implemented in cloud environments and how PII should be protected in the cloud.
- PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) was developed by major credit card brands to protect the personal data of payment card users. This includes securing and maintaining compliance with the underlying infrastructure when using cloud environments.
- Government Standards: FedRAMP-compliant offerings and UK G-Cloud are cloud services designed to meet the requirements of the US and UK governments for computing resources.
Which of the following schemes relies on a lookup table stored in a secure environment?
A. Tokenization
B. Encryption
C. Hashing
D. Masking
A. Tokenization
Explanation:
Cloud customers can use various strategies to protect sensitive data against unauthorized access, including:
- Encryption: Encryption performs a reversible transformation on data that renders it unreadable without knowledge of the decryption key. If data is encrypted with a secure algorithm, the primary security concerns are generating random encryption keys and protecting them against unauthorized access. FIPS 140-3 is a US government standard used to evaluate cryptographic modules.
- Hashing: Hashing is a one-way function used to ensure the integrity of data. Hashing the same input will always produce the same output, but it is infeasible to derive the input to the hash function from the corresponding output. Applications of hash functions include file integrity monitoring and digital signatures. FIPS 180-4, the Secure Hash Standard, is a US government standard for hash functions.
- Masking: Masking involves replacing sensitive data with non-sensitive characters. A common example of this is using asterisks to mask a password on a computer or all but the last four digits of a credit card number.
- Anonymization: Anonymization and de-identification involve destroying or replacing all parts of a record that can be used to uniquely identify an individual. While many regulations require anonymization for data use outside of certain contexts, it is very difficult to fully anonymize data.
- Tokenization: Tokenization replaces sensitive data with a non-sensitive token on untrusted systems that don’t require access to the original data. A table mapping tokens to the data is stored in a secure location to enable the original data to be looked up when needed.
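The tokenization scheme the question asks about can be sketched in a few lines of Python: random tokens replace the sensitive values, and the lookup table itself is the part that must live in a secure environment. The class and naming below are illustrative assumptions.

```python
import secrets

# Sketch of tokenization: sensitive values are replaced by random tokens,
# and a lookup table kept in a secure environment maps tokens back to the
# originals. Class and token format are hypothetical.
class TokenVault:
    def __init__(self):
        self._table = {}  # token -> original value (the secure lookup table)

    def tokenize(self, value: str) -> str:
        """Issue a random, non-sensitive token for the given value."""
        token = "tok_" + secrets.token_hex(8)
        self._table[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Recover the original value; only the secure environment can do this."""
        return self._table[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
print(token != "4111-1111-1111-1111")  # True: untrusted systems see only the token
print(vault.detokenize(token))         # 4111-1111-1111-1111
```

Unlike encryption, the token has no mathematical relationship to the original value, which is why stealing tokens from an untrusted system reveals nothing without the vault.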
Which of the following common characteristics of cloud computing enables cloud customers to access cloud resources on an as-needed basis?
A. On-Demand Self-Service
B. Multitenancy
C. Metered Service
D. Broad Network Access
A. On-Demand Self-Service
Explanation:
The six common characteristics of cloud computing include:
- Broad Network Access: Cloud services are widely available over the network, whether using web browsers, secure shell (SSH), or other protocols.
- On-Demand Self-Service: Cloud customers can redesign their cloud infrastructure at need, leasing additional storage or processing power or specialized components and gaining access to them on-demand.
- Resource Pooling: Cloud customers lease resources from a shared pool maintained by the cloud provider at need. This enables the cloud provider to take advantage of economies of scale by spreading infrastructure costs over multiple cloud customers.
- Rapid Elasticity and Scalability: Cloud customers can expand or contract their cloud footprint at need, much faster than would be possible if they were using physical infrastructure.
- Measured or Metered Service: Cloud providers measure their customers’ usage of the cloud and bill them for the resources that they use.
- Multitenancy: Public cloud environments are multitenant, meaning that multiple different cloud customers share the same underlying infrastructure.
During which phase of the TLS process is the connection between the two parties negotiated and established?
A. TLS Functional Protocol
B. TLS Negotiation Protocol
C. TLS Handshake Protocol
D. TLS Record Protocol
C. TLS Handshake Protocol
Explanation:
Transport Layer Security (TLS) is broken up into two main phases: TLS Handshake Protocol and TLS Record Protocol. During the TLS Handshake Protocol, the TLS connection between the two parties is negotiated and established.
The TLS Record Protocol handles the actual secure transmission of data.
It is not called the TLS Negotiation Protocol; the correct name is the Handshake Protocol.
TLS functional protocol is not a real phase.
This may be a bit more detail regarding TLS than is needed, but a little extra technical knowledge is useful for this test.
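As a small concrete example, Python's ssl module shows where handshake parameters live on the client side: the SSLContext created below carries the negotiation settings (protocol versions, certificate verification policy) that are used during the Handshake Protocol before any Record Protocol data flows.

```python
import ssl

# Sketch of client-side TLS configuration: the SSLContext holds the
# parameters negotiated during the Handshake Protocol. No network
# connection is made here; this only inspects the default policy.
context = ssl.create_default_context()

# Sensible defaults: server certificates are verified and hostnames checked.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

Wrapping a socket with this context (`context.wrap_socket(...)`) is what triggers the handshake itself; the record-protocol encryption of application data only begins once that negotiation completes.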
Which of the following organizations publishes security standards applicable to any systems used by the federal government and its contractors?
A. International Standards Organization (ISO)
B. Service Organization Controls (SOC)
C. National Institute of Standards and Technology (NIST)
D. Information Systems Audit and Control Association (ISACA)
C. National Institute of Standards and Technology (NIST)
Explanation:
The National Institute of Standards and Technology (NIST) is part of the United States government and is responsible for publishing security standards applicable to any systems used by the federal government and its contractors, although the standards are available for anyone to use.
SOC is the type of audit report that results from SSAE 16/18 or ISAE 3402 audits. ISACA is the association behind the CISM and CISA certifications; it is fundamentally an organization of IT auditors, although it has expanded greatly over the years. ISO is the international body that creates standards for the world to use.
As you are drafting your organization’s cloud data destruction policy, which of the following is NOT a consideration that may affect the policy?
A. Data discovery
B. Compliance and governance
C. Retention requirements
D. Business processes
A. Data discovery
Explanation:
You should not consider data discovery when determining an organization's data destruction policy. While you may discover data during other stages of the data lifecycle, it is irrelevant at the time of destruction; data discovery should have been done much earlier in the lifecycle. Remember, this exam is about the theory of what we should be doing within a business, not what often happens in practice.
It is necessary to consider the laws that the corporation must be in compliance with when writing the data destruction policy. For example, GDPR says that you can retain the personal data collected by the business for a reasonable period of time. After that point, the data should be destroyed properly.
GDPR saying you can only retain data for a reasonable period of time is addressing data retention requirements.
Business processes should be considered while developing a data destruction policy for any data that is not regulated by law.
In which of the following cloud models will an organization’s infrastructure DEFINITELY NOT be hosted in its own data center?
A. Community Cloud
B. Hybrid Cloud
C. Private Cloud
D. Public Cloud
D. Public Cloud
Explanation:
The physical environment where cloud resources are hosted depends on the cloud model in use:
Public Cloud: Public cloud infrastructure will be hosted by the CSP within their own data centers.
Private Cloud: Private clouds are usually hosted by an organization within its own data center. However, third-party CSPs can also offer virtual private cloud (VPC) services.
Community Cloud: In a community cloud, one member of the community hosts the cloud infrastructure in their data center. Third-party CSPs can also host community clouds in an isolated part of their environment.
Hybrid and multi-cloud environments will likely have infrastructure hosted by different organizations. A hybrid cloud combines public and private cloud environments, and a multi-cloud infrastructure uses multiple cloud providers’ services.
Having the ability to move data to another cloud provider without having to re-enter it is known as:
A. Interoperability
B. Reversibility
C. Portability
D. Multitenancy
C. Portability
Explanation:
The ability to move data between multiple cloud providers is known as cloud data portability, while cloud application portability refers, instead, to the ability to move an application between cloud providers.
Multitenancy is the term used to describe a cloud provider housing multiple customers and/or applications within one server.
Interoperability is the ability of two different systems to exchange and use a piece of data, such as one user creating a Microsoft Word document on a Mac and then another user opening and using that Word document on a Microsoft Windows machine.
Reversibility is the ability to retrieve data from a cloud provider upon termination of the contract as well as having the data be removed securely from the cloud provider’s systems.
These terms are defined in ISO 17788, which is the ISO version of NIST 800-145. Both are freely available documents, and it would be a great idea to look at both of them.
Alexei works for a Russian bank and knows as an information security professional that they must be careful with the personal information about their customers that they collect and maintain. Which Russian law states that any collecting, processing, or storing of data on Russian citizens must be done from systems that are physically located in the Russian Federation?
A. General Data Protection Regulation (GDPR)
B. Act on the Protection of Personal Information
C. Gramm-Leach-Bliley Act (GLBA)
D. Federal Law 526-FZ
D. Federal Law 526-FZ
Explanation:
Russian law 526-FZ was enacted in September of 2015. The law states that any collecting, processing, or storing of personal or private data that pertains to Russian citizens must be done from systems and databases that are physically located within the Russian Federation.
GDPR is a European Union law that requires member countries to have privacy laws that are equal or stronger than GDPR. It also governs collection, processing, protection, and storage of personal data.
GLBA is a U.S. Act that requires financial holding companies to protect the personal data that they have in their possession.
Act on the Protection of Personal Information is a law in Japan that is similar to the requirements of GDPR.
HIPAA protects which of the following types of private data?
A. Payment Data
B. Personally Identifiable Information
C. Protected Health Information
D. Contractual Private Data
C. Protected Health Information
Explanation:
Private data can be classified into a few different categories, including:
Personally Identifiable Information (PII): PII is data that can be used to uniquely identify an individual. Many laws, such as the GDPR and CCPA/CPRA, provide protection for PII.
Protected Health Information (PHI): PHI includes sensitive medical data collected regarding patients by healthcare providers. In the United States, HIPAA regulates the collection, use, and protection of PHI.
Payment Data: Payment data includes sensitive information used to make payments, including credit and debit card numbers, bank account numbers, etc. This information is protected under the Payment Card Industry Data Security Standard (PCI DSS).
Contractual Private Data: Contractual private data is sensitive data that is protected under a contract rather than a law or regulation. For example, intellectual property (IP) covered under a non-disclosure agreement (NDA) is contractual private data.
A Fortune 500 company has just performed a Disaster Recovery (DR) test. They gathered people representing a wide variety of roles and responsibilities. The gathered individuals talked through the steps in order while verifying that the DR document had the needed steps. The team members describe how they would carry out their responsibilities in a certain BC/DR scenario.
Which type of disaster recovery plan testing are they conducting?
A. Full
B. Parallel
C. Tabletop
D. Simulation
C. Tabletop
Explanation:
In a tabletop exercise, participants are provided with scenarios and asked to describe how they will carry out their assigned activities in a certain business continuity/disaster recovery scenario. This enables members to comprehend their roles amid a disaster.
A good example of a simulation today is a fire drill. A fire is not started to practice exiting the building; it is simulated. These exercises were more common in the past, especially in on-premises data centers. The exercise would involve walking through the data center looking for the spare server or drive in the closet, locating the CD that has the operating system on it, and so on. This is not normal for cloud environments.
A parallel test involves starting the alternate location’s servers and services to see that they function. In a parallel test, the primary business servers and services should not be disturbed.
A full test does shut down the servers and services that the company is actively using so that a failover to the alternate servers and services can be done.
Which of the following solutions helps to reduce the number of passwords that a user needs to maintain?
A. Single Sign-On
B. Multi-Factor Authentication
C. Secrets Management
D. Federated Identity
A. Single Sign-On
Explanation:
Identity and Access Management (IAM) is critical to application security. Some important concepts in IAM include:
Federated Identity: Federated identity allows users to use the same identity across multiple organizations. The organizations set up their IAM systems to trust user credentials developed by the other organization.
Single Sign-On (SSO): SSO allows users to use a single login credential for multiple applications and systems. The user authenticates to the SSO provider, and the SSO provider authenticates the user to the apps using it.
Identity Providers (IdPs): IdPs manage a user’s identities for an organization. For example, Google, Facebook, and other organizations offer identity management and SSO services on the Web.
Multi-Factor Authentication (MFA): MFA requires a user to provide multiple authentication factors to log into a system. For example, a user may need to provide a password and a one-time password (OTP) sent to a smartphone or generated by an authenticator app.
Cloud Access Security Broker (CASB): A CASB sits between cloud applications and users and manages access and security enforcement for these applications. All requests go through the CASB, which can perform monitoring and logging and can block requests that violate corporate security policies.
Secrets Management: Secrets include passwords, API keys, SSH keys, digital certificates, and anything that is used to authenticate identity and grant access to a system. Secrets management includes ensuring that secrets are randomly generated and stored securely.
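The secrets management point can be illustrated with a minimal sketch using only Python's standard library: generate a random, unguessable secret and compare submitted values in constant time. (This is illustrative only; real deployments would store secrets in a dedicated vault service rather than in application code.)

```python
import hmac
import secrets

# Secrets should be randomly generated, never hard-coded or guessable.
api_key = secrets.token_urlsafe(32)

def verify_key(submitted: str, stored: str) -> bool:
    # Compare in constant time to avoid leaking information
    # through timing side channels.
    return hmac.compare_digest(submitted, stored)

print(verify_key(api_key, api_key))      # a correct key verifies
print(verify_key("wrong-key", api_key))  # an incorrect key does not
```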
An organization is concerned about running afoul of GDPR regulations regarding jurisdictional boundaries. Which phase of the cloud data lifecycle are they MOST likely to be at?
A. Archive
B. Create
C. Destroy
D. Share
D. Share
Explanation:
The cloud data lifecycle has six phases, including:
Create: Data is created or generated. Data classification, labeling, and marking should occur in this phase.
Store: Data is placed in cloud storage. Data should be encrypted in transit and at rest using encryption and access controls.
Use: The data is retrieved from storage to be processed or used. Mapping and securing data flows becomes relevant in this stage.
Share: Access to the data is shared with other users. This sharing should be managed by access controls and should include restrictions on sharing based on legal and jurisdictional requirements. For example, the GDPR limits the sharing of EU citizens’ data.
Archive: Data no longer in active use is placed in long-term storage. Policies for data archiving should include considerations about legal data retention and deletion requirements and the rotation of encryption keys used to protect long-lived sensitive data.
Destroy: Data is permanently deleted. This should be accomplished using secure methods such as cryptographic erasure/crypto shredding.
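The idea behind cryptographic erasure can be sketched in a few lines: if data is stored only in encrypted form, destroying the key renders the ciphertext permanently unreadable. The toy stream construction below (hashing key-plus-counter blocks) is purely illustrative, not a vetted cipher; production systems should use something like AES-GCM.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Derive a keystream by hashing key || counter blocks.
    # Illustrative only -- not a vetted cipher.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)
ciphertext = xor_crypt(key, b"customer record")
# Crypto shredding: once the key is destroyed, the ciphertext is
# unrecoverable even though the stored bytes still exist.
key = None
```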
After seeing “Broken Access Control” listed as one of the top vulnerabilities on the OWASP Top 10, a cloud application architect has started looking into options to protect against this. Which of the following could the engineer implement to help protect against broken access control?
A. Data Leak Prevention (DLP)
B. Multi-Factor Authentication (MFA)
C. Input validation
D. Proper logging
B. Multi-Factor Authentication (MFA)
Explanation:
Multi-Factor Authentication (MFA) is an authentication method in which a user is required to provide two or more types of factors proving they are who they claim to be. For example, a user would need both a password and a randomly generated code sent to their smartphone to access an application. MFA factors are broken into categories: something you know (passwords, PINs), something you have (key card, smartphone), something you are (biometrics), and something you do (behavioral characteristics).
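The randomly generated codes mentioned above are typically produced by the HOTP/TOTP algorithms (RFC 4226/6238). A minimal HOTP sketch using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then
    # dynamic truncation down to a short numeric code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test secret; TOTP (authenticator apps) is simply
# HOTP with the counter derived from the current time window.
print(hotp(b"12345678901234567890", 0))  # "755224" per the RFC test vectors
```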
Input validation is a critical control that can and should be added in many places in many applications. A good rule to follow is to never trust any input, and definitely do not trust input from a user. Many problems can be prevented or stopped by validating input, including Cross-Site Scripting (XSS) and injection attacks.
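As a small illustration of the never-trust-input rule, an allow-list check accepts only what is explicitly permitted rather than trying to filter out known-bad input. (The field name and pattern here are hypothetical examples.)

```python
import re

# Allow-list: usernames may contain only letters, digits, and
# underscores, 3-20 characters long. Everything else is rejected.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def is_valid_username(value: str) -> bool:
    return bool(USERNAME_RE.fullmatch(value))

print(is_valid_username("wyatt_42"))                     # True
print(is_valid_username("<script>alert(1)</script>"))    # False
print(is_valid_username("admin'; DROP TABLE users;--"))  # False
```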
Logging is necessary to understand what has been happening throughout the technical environment. It is often only with the logs that we know that there has been a compromise which starts the Incident Response (IR) process.
DLP tools are beneficial to add to networks, the cloud, and end systems. They are used to detect traffic heading in the wrong direction or in the wrong format (not encrypted). They can also be used to detect data on servers that should not be there.
At what phase of the SSDLC does the coding of software components and integration occur?
A. Development
B. Design
C. Operations & Maintenance
D. Deployment
A. Development
Explanation:
The development phase entails the coding of software components as well as the integration and construction of the overall solution.
The design phase is the planning of what this software will do, who will use it, what needs to be built into the software, what components are needed, and so on.
The deployment phase is when it is put into use. This is when it is moved into production.
Operations and maintenance will see the software being used on a regular basis. It will be necessary to patch the software as new fixes come out.
Which of the following is only a concern if an organization chooses to BUILD a data center rather than rent cloud services?
A. Tenant partitioning
B. Access control
C. Multivendor pathway connectivity
D. Location
C. Multivendor pathway connectivity
Explanation:
Multivendor pathway connectivity refers to the use of multiple ISPs with cables routed over different paths to reduce the risk of an outage. This is more of a concern with a data center that an organization owns than one which they are using through the cloud.
Tenant partitioning is only applicable in multitenant environments like public clouds. Location and access control are important in both customer-owned and provider-owned data centers.
Bruis has been working with the developers for a new cloud-based application that will operate within their Platform as a Service (PaaS) environment. He has brought the focus of information security to the effort since he is an information security manager. He has been working to ensure that they plan, develop, and assess the application as well as they can, as appropriate to the application and the corporation’s needs.
What fundamental cloud application idea does this work represent?
A. Developing collective responsibility
B. Security as a business objective
C. Shared security responsibility
D. Security by design
D. Security by design
Explanation:
The Cloud Security Alliance (CSA) and Software Assurance Forum for Excellence in Code (SAFECode) present the idea that there is a collective responsibility to secure applications, as they are developed for use within corporations and the cloud. That responsibility can be broken down into three parts:
Security by design refers to the inclusion of security at every stage of the development process rather than after an application has been released or in reaction to a security exploit or vulnerability. From application feasibility to retirement, security is an integral element of the process. Bruis is the representation of that consistent effort in this question.
Shared security responsibility means that everyone within the corporation and/or the project has a responsibility to pay attention to security as they are doing their work.
Security as a business objective is the idea that an organization should have a compliance-driven approach to security.
Deploying redundant and resilient systems such as load balancers is MOST relevant to an organization’s efforts in which of the following areas?
A. Availability Management
B. Problem Management
C. Service Level Management
D. Capacity Management
A. Availability Management
Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential for improvement.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manage the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users could consume at once, relying on them not all using their full allocations at the same time. Often, capacity guarantees are mandated in SLAs.
Simulations and tabletop exercises are part of which stage of developing a BCP?
A. Creation
B. Testing
C. Implementation
D. Auditing
B. Testing
Explanation:
Managing a business continuity/disaster recovery plan (BCP/DRP) has three main stages:
Creation: The creation stage starts with a business impact assessment (BIA) that identifies critical systems and processes and defines what needs to be covered by the plan and how quickly certain actions must be taken. Based on this BIA, the organization can identify critical, important, and support processes and prioritize them effectively. For example, if critical applications can only be accessed via a single sign-on (SSO), then SSO should be restored before them. BCPs are typically created first and then used as a template for prioritizing operations within a DRP.
Implementation: Implementation involves identifying the personnel and resources needed to put the BCP/DRP into place. For example, an organization may take advantage of cloud-based high availability features for critical processes or use redundant systems in an active/active or active/passive configuration (dependent on criticality). Often, decisions on the solution to use depend on a cost-benefit analysis.
Testing: Testing should be performed regularly and should consider a wide range of potential scenarios, including cyberattacks, natural disasters, and outages. Testing can be performed in various ways, including tabletop exercises, simulations, or full tests.
Auditing is not one of the three stages of developing a BCP/DRP.
Which sort of testing watches and analyzes application performance while analyzing the code that is in use in real time to identify potential security issues?
A. Runtime Application Self-Protection (RASP)
B. Interactive Application Security Testing (IAST)
C. Static Application Security Testing (SAST)
D. Dynamic Application Security Testing (DAST)
B. Interactive Application Security Testing (IAST)
Explanation:
Interactive Application Security Testing (IAST) is a testing technique that instruments an active, running application, allowing the tester to see what code is in use at any specific moment and to identify potential security issues in real time.
Static Application Security Testing (SAST) analyzes the source code. It is static because the application is sitting still on the computer. Dynamic Application Security Testing (DAST) is watching the application in a running condition.
Runtime Application Self-Protection (RASP) is a tool that can be added to software to protect it in real time. It can spot vulnerabilities and prevent real-time attacks.
Many cloud customers have legal requirements to protect data that they place on the cloud provider’s servers. There are some legal responsibilities for the cloud provider to protect that data. Therefore, it is normal for the cloud provider to have their data centers audited using which of the following?
A. Internal auditor
B. External auditor
C. Cloud architect
D. Cloud operators
B. External auditor
Explanation:
An external auditor is not employed by the company being audited. An external auditor will often use industry standards such as ISO 27001 and SOC2 to perform an audit of a cloud provider. Due to the legal requirements, this work needs to be done by an independent party. Therefore, internal auditors are not the correct answer here.
Cloud architects design cloud structures, and cloud operators do the daily maintenance and monitoring of the cloud, according to the Cloud Security Alliance (CSA).
Adelita is the cloud administrator, and she is beginning the process of introducing a new information security tool to the business. There is a concern that data is on servers that it should not be on, so they have decided to use a Data Loss Prevention (DLP) system. As she is at the beginning of the implementation, she is at the first stage of DLP. This is a very difficult and critical stage. It must be done carefully and effectively.
What is the FIRST stage of DLP implementation?
A. Monitoring
B. Enforcement
C. Discovery and classification
D. Data de-identification
C. Discovery and classification
Explanation:
DLP is made up of three common stages: discovery and classification, monitoring, and, finally, enforcement.
Discovery and classification is the first phase, as the security requirements of the data must be addressed. Data must be understood for the DLP tool to do its job. This is not building a classification scheme within the business as that should already exist. This is teaching the DLP tool about the data that the business has, where it can be, where it can be sent, and in what format.
If data is understood, then monitoring can begin. This is when the DLP tool is able to watch and analyze data. DLP traditionally analyzed traffic that was flowing out of the business so that it could prevent a loss or leak of data. It can now analyze servers to look for data that should not be on a particular machine.
If data is being sent incorrectly or improperly or if it is at rest on a server it should not be on, then enforcement can occur. Enforcement can drop data, stop data, encrypt data, delete data, and so on.
Data de-identification is the removal of direct identifiers. Anonymization is the removal of direct and indirect identifiers.
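The discovery stage described above often works through pattern matching. A toy sketch that flags candidate payment card numbers using a regex plus a Luhn checksum follows; real DLP engines use far richer detection, so treat the patterns here as illustrative assumptions.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated
# by spaces or hyphens.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_ok(number: str) -> bool:
    # Luhn checksum weeds out random digit strings that merely
    # look like card numbers.
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    # Keep only regex hits that also pass the Luhn check.
    return [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]

sample = "Invoice for card 4111 1111 1111 1111, order ref 1234567890123."
print(find_card_numbers(sample))  # ['4111 1111 1111 1111']
```

A real deployment would pair matches like these with the classification scheme taught to the DLP tool so enforcement can act on them.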
Padma and her team have been updating the information security policies with information related to their new corporate Infrastructure as a Service (IaaS) cloud structure. For the Identity and Access Management (IAM) policy, it is critical to add the right cloud-specific details. It is critical that…
A. The policy specifies that the management plane is controlled with multi-factor authentication and that each department has its own distinct login
B. The policy must specify that all users have their accounts set up with multi-factor authentication and that only trusted administrators should be able to log in into the shared corporate account
C. The policy specifies that the primary corporate account is carefully controlled with multi-factor authentication and that each department has its own separate account under corporate accounts
D. The policy must specify that all users are set up with multi-factor authentication for their email accounts and that each network administrator must set up all network equipment with multi-factor authentication as well
C. The policy specifies that the primary corporate account is carefully controlled with multi-factor authentication and that each department has its own separate account under corporate accounts
Explanation:
The primary corporate account should be tightly controlled with multi-factor authentication, and it should be with a hardware token. Once that account is set up, then each department or possibly each project has their own sub-account controlled by the primary. That way, if a bad actor accesses one of the sub-accounts, they will not be able to access and destroy all the corporate systems.
The answer that has each department with its own login is not wise. It implies that the department will be sharing a single login. Shared accounts are not wise. For that same reason, the answer with a “shared corporate account” is equally unwise.
The answer that includes “each network administrator…” is an odd answer because it focuses on putting multi-factor authentication on network equipment. Also, the question is about IaaS, which could include virtual network routers, switches, and the like, but that answer implies actual hardware equipment.
A cloud service provider has published a SOC 2 report. Which of the following cloud considerations is this MOST relevant to?
A. Regulatory Oversight
B. Security
C. Auditability
D. Governance
C. Auditability
Explanation:
When deploying cloud infrastructure, organizations must keep various security-related considerations in mind, including:
Security: Data and applications hosted in the cloud must be secured just like in on-prem environments. Three key considerations are the CIA triad of confidentiality, integrity, and availability. Privacy: Data hosted in the cloud should be properly protected to ensure that unauthorized users can’t access the data of customers, employees, and other third parties. Governance: An organization’s cloud infrastructure is subject to various laws, regulations, corporate policies, and other requirements. Governance manages cloud operations in a way that ensures compliance with these various constraints. Auditability: Cloud computing outsources the management of a portion of an organization’s IT infrastructure to a third party. A key contractual clause is ensuring that the cloud customer can audit (directly or indirectly) the cloud provider to ensure compliance with contractual, legal, and regulatory obligations. A SOC 2 report shows that a cloud service provider meets certain requirements regarding the protection of the customer's data. Regulatory Oversight: An organization’s responsibility for complying with various regulations (PCI DSS, GDPR, etc.) also extends to its use of third-party services. Cloud customers need to be able to ensure that cloud providers are compliant with applicable laws and regulations.