Pocket Prep 14 Flashcards
An engineer has been asked by her supervisor to determine how fast each system must be back up and running after a disaster has occurred to meet BCDR objectives. What has this engineer been asked to determine?
A. Recovery Service Level (RSL)
B. Recovery Time Objective (RTO)
C. Maximum Tolerable Outage (MTO)
D. Maximum Tolerable Downtime (MTD)
D. Maximum Tolerable Downtime (MTD)
Explanation:
The MTD is the time within which each system must be brought back up and running after a disaster occurs to meet Business Continuity and Disaster Recovery (BCDR) objectives. It effectively defines how long the system can be non-operational.
The RTO is the amount of time allowed for the actions required to bring a system back to an operational state.
The RSL is the level of functionality the system must recover to. When we are discussing BC/DR, it is not normal for a system to recover to 100% functionality immediately after a failure; the plan is usually to fail over to another site (physical or virtual). When that happens, the question that must be addressed is what level of functionality must be available for the business to tolerate the situation.
If the functionality is less than 100%, that will only be tolerable for a certain amount of time. That time window is defined as the MTO. By the end of the MTO, the system needs to be returned to normal status.
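As a rough illustration of how these objectives fit together, here is a minimal Python sketch (the systems and hour values are hypothetical): a recovery plan is only viable when the time to restore service (RTO) fits within the maximum tolerable downtime (MTD).

```python
# Minimal sketch of how BCDR time objectives relate (all values hypothetical).
# A recovery plan is only viable if the time to restore service (RTO) fits
# within the maximum tolerable downtime (MTD).

def plan_is_viable(rto_hours: float, mtd_hours: float) -> bool:
    """A system's RTO must not exceed its MTD."""
    return rto_hours <= mtd_hours

systems = {
    "order-processing": {"rto": 2, "mtd": 4},    # restore in 2h, business tolerates 4h
    "reporting":        {"rto": 24, "mtd": 12},  # plan fails: 24h recovery, 12h tolerance
}

for name, t in systems.items():
    status = "OK" if plan_is_viable(t["rto"], t["mtd"]) else "REVISE PLAN"
    print(f"{name}: RTO={t['rto']}h, MTD={t['mtd']}h -> {status}")
```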
An organization’s cloud infrastructure is scattered over multiple organizations’ data centers. Which of the following is the MOST likely cloud model in use?
A. Private Cloud
B. Hybrid Cloud
C. Public Cloud
D. Community Cloud
B. Hybrid Cloud
Explanation:
The physical environment where cloud resources are hosted depends on the cloud model in use:
Public Cloud: Public cloud infrastructure will be hosted by the CSP within their own data centers.
Private Cloud: Private clouds are usually hosted by an organization within its own data center. However, third-party CSPs can also offer virtual private cloud (VPC) services.
Community Cloud: In a community cloud, one member of the community hosts the cloud infrastructure in their data center. Third-party CSPs can also host community clouds in an isolated part of their environment.
Hybrid and multi-cloud environments will likely have infrastructure hosted by different organizations. A hybrid cloud combines public and private cloud environments, and a multi-cloud infrastructure uses multiple cloud providers’ services.
Which of the following is MOST closely related to an organization's efforts to ensure features like confidentiality and non-repudiation?
A. Cryptographic Key Establishment and Management
B. Security Function Isolation
C. Separation of System and User Functionality
D. Boundary Protection
A. Cryptographic Key Establishment and Management
Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations, defines 51 security controls for system and communications protection. Among these are:
Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
Managing the automation of CI/CD pipelines to support Agile and DevOps practices falls under which of the following?
A. Release Management
B. Deployment Management
C. Configuration Management
D. Change Management
B. Deployment Management
Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization's security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential progress.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manage the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT's ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users could use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
In which cloud service model does the CSP’s responsibility extend to securing operating systems, database management systems (DBMSs), and similar components made available to the cloud customer?
A. IaaS
B. SaaS
C. PaaS
D. All service models
C. PaaS
Explanation:
Compute resources include the components that offer memory, CPU, disk, networking, and other services to the customer. In all cases, the cloud service provider (CSP) is responsible for the physical infrastructure providing these services.
However, at the software level, responsibility depends on the cloud service model in use, including:
Infrastructure as a Service (IaaS): In an IaaS environment, the CSP provides and manages the physical components, virtualization software, and networking infrastructure. The customer is responsible for configuring and securing their VMs and the software installed in them.
Platform as a Service (PaaS): In a PaaS environment, the CSP's responsibility extends to offering and securing the operating systems, database management systems (DBMSs), and other services made available to a customer's applications. The customer is responsible for properly configuring and using these services and the security of any software that they install or use.
Software as a Service (SaaS): In a SaaS environment, the CSP is responsible for everything except the custom settings made available to the cloud customer. For example, if a cloud storage drive can be set to be publicly accessible, that is the customer's responsibility, not the CSP's.
Gaby has been working for a large pharmaceutical company for many years now. She is transitioning into the role of cloud administrator. In her role, she will set up and configure many of the elements needed for the Research and Development (R&D) department. She has just finished building the Virtual Machines (VMs) that some of the researchers need as well as the applications and the appropriate Identity and Access Management (IAM) needs.
What has she been configuring?
A. Network
B. Database
C. Compute
D. Storage
C. Compute
Explanation:
The fundamental pieces that must be built within the cloud for it to work are compute, storage, and network.
Compute refers to the Virtual Machines (VMs), containers, applications, and anything needed to be able to process the user’s data. Software as a Service fits into this category.
Storage is where the data sits. When data is at rest, it is in storage. So, this includes block storage, object storage, file storage, databases, big data, and so on.
The answer option database fits into the storage group.
Network is the ability to move the data (by data we mean data, voice, and video). This includes virtual routers, switches, IP addresses, Virtual Private Networks (VPN), load balancers, Domain Name Service (DNS), and so on.
Vaeda is working with her company as the lead cloud information security specialist. In her job, she has been working with the developers as they are building new capabilities into their software portfolio. They are creating software that will be able to analyze large amounts of data for their customers. The customer will have to help train the systems’ algorithms through reinforcement.
What are they building?
A. Machine learning with Reinforced Learning
B. General Artificial Intelligence
C. Natural Language Processing
D. Narrow Artificial Intelligence
A. Machine learning with Reinforced Learning
Explanation:
Machine learning involves training algorithms that are able to learn from data and make predictions. Reinforcement learning is one of the three subfields; the other two are supervised learning and unsupervised learning.
Narrow Artificial Intelligence (AI) has the ability to perform specific tasks within a limited domain. Siri and Alexa are good examples.
General AI represents machines that possess the ability to understand, learn, and apply knowledge across various domains similar to human intelligence.
Natural Language Processing (NLP) involves the interaction between the computer and human language. This allows machines to understand, interpret language, and generate human language responses. Chatbots and translation tools are examples.
An information security professional working with the Development/Operation (Dev/Ops) teams is helping them identify the threat modeling approach that most aligns with their needs. They are looking for a threat modeling technique that prioritizes the vulnerabilities that they have for the software they are currently building. Which of the following is an OWASP recommended model that can be used to perform this task?
A. Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of privilege (STRIDE)
B. Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)
C. Process for Attack Simulation and Threat Analysis (PASTA)
D. Architecture, Threats, Attack Surfaces and Mitigations (ATASM)
B. Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)
Explanation:
DREAD is a threat modeling technique that helps prioritize vulnerabilities based on the five criteria that make up its name. It uses a scoring system, usually 1-10, for each criterion. The five scores are summed and divided by 5 to produce an average score used for prioritization.
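A minimal sketch of that scoring arithmetic, with hypothetical scores for a single finding:

```python
# Minimal sketch of DREAD scoring (the criteria scores are hypothetical).
# Each of the five criteria is rated 1-10; the average is the priority score.

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# Example: a hypothetical injection flaw in the software under development
score = dread_score(damage=8, reproducibility=9, exploitability=7,
                    affected_users=8, discoverability=6)
print(f"DREAD score: {score:.1f} / 10")  # higher scores are remediated first
```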
STRIDE is a threat modeling framework used to identify and categorize potential threats or attacks. It provides a set of six categories that make up its name.
ATASM is a threat modeling approach that breaks a system into its logical and functional components so that all surfaces that could potentially be attacked are identified.
PASTA is also a threat modeling approach that takes business objectives into consideration to align them with the technical requirements.
Giada is working with developers on a new application that will be used in a Platform as a Service (PaaS) deployment. The software will be handling and processing credit cards when their customers purchase access to their Software as a Service (SaaS). The SaaS will allow the customers a way to build and test software code for their own applications. It is imperative that the software protects the customers’ source code.
What can the developers add to their software that will ensure the values of the credit card numbers so that they can be charged for the continued use of the SaaS?
A. Tokenization
B. Obfuscation
C. Hashing
D. Data Loss Prevention (DLP)
C. Hashing
Explanation:
Hashing feeds data into a one-way algorithm that generates a unique value called a hash or message digest. Generating another hash of the same file in the future will produce the exact same value only if the data has not been modified; this ensures data integrity. If the data has been altered, the new hash will differ from the original.
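As a concrete illustration, here is a minimal sketch of hash-based integrity checking using Python's standard hashlib (the card number shown is a well-known test value, not real data):

```python
# Minimal sketch of integrity checking with a hash, using the standard library.
import hashlib

original = b"4111111111111111"  # well-known test card number, not real data
stored_digest = hashlib.sha256(original).hexdigest()  # saved at enrollment

# Later, re-hash the presented value and compare digests.
presented = b"4111111111111111"
if hashlib.sha256(presented).hexdigest() == stored_digest:
    print("Value unchanged: digests match")
else:
    print("Value altered: digests differ")
# Note: in practice a keyed hash (HMAC) would better resist guessing of
# low-entropy values such as card numbers.
```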
Tokenization could be used to store the credit card numbers so that the real card number is never known or stored, but this does not satisfy the needs of the question. The question asks for a way to ensure the value of the card number, and a token would also need to be verified as the correct number. Hashing can be used there as well.
DLP can be used to make sure that the card numbers are not sent insecurely in an email, for example. Again, this is not protecting the values.
Obfuscation is “to confuse.” Encryption is a type of obfuscation. However, that is not as specific to protecting the values as hashing is. So, hashing is a better answer.
An attacker sent commands through an application’s input and data fields. By doing this, the attacker was able to get the application to execute the code they sent as part of its normal processing. The attacker was able to use this technique to get the application to expose sensitive data that they should not have access to.
What type of attack was used?
A. Cross-site scripting
B. Identification & authentication failures
C. Denial of service
D. Injection
D. Injection
Explanation:
An injection attack occurs when an attacker sends (injects) malicious code or commands to an application’s input or data fields. The goal of the attacker is to get the application to execute the code as part of its normal processing. The best way to prevent injection attacks is by ensuring that all data and input fields include proper input validation and sanitization.
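As an illustration of input sanitization, here is a minimal sketch using Python's built-in sqlite3 with a parameterized query (the table and values are hypothetical):

```python
# Minimal sketch of input handling that blunts SQL injection, using the
# built-in sqlite3 module. The table and values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Parameterized query: the driver treats user_input as data, never as SQL.
rows = conn.execute("SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload finds nothing, instead of dumping every row
```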
Identification & authentication failures is in position 7 on the OWASP Top 10 2021. Formerly, it was called broken authentication and was in position 2 in 2017. This is a lack of protection of the authentication process. When this is not done properly, someone can gain access to existing accounts.
Cross-Site Scripting (XSS) has now (2021) been combined with injection from the 2017 OWASP top 10 list. XSS allows attackers to execute scripts within a user’s browser. Input validation can be used to reduce the chance of this happening.
Denial of service is when something is done to prevent the user from being able to do their job.
Which of the following is NOT a threat for which the CSP bears some responsibility?
A. Denial of Service
B. Improper Disposal
C. Theft or Media Loss
D. Unauthorized Provisioning
D. Unauthorized Provisioning
Explanation:
Data storage in the cloud faces various potential threats, including:
Unauthorized Access: Cloud customers should implement access controls to prevent unauthorized users from accessing data. Also, a cloud service provider (CSP) should implement controls to prevent data leakage in multitenant environments.
Unauthorized Provisioning: The ease of setting up cloud data storage may lead to shadow IT, where cloud resources are provisioned outside of the oversight of the IT department. This can incur additional costs to the organization and creates security and compliance challenges since the security team can't secure data that they don't know exists.
Regulatory Non-Compliance: Various regulations mandate security controls and other requirements for certain types of data. A failure to comply with these requirements — by failing to protect data or allowing it to flow outside of jurisdictional boundaries — could result in fines, legal action, or a suspension of the business's ability to operate.
Jurisdictional Issues: Different jurisdictions have different laws and regulations regarding data security, usage, and transfer. Many CSPs have locations around the world, which can violate these laws if data is improperly protected or stored in an unauthorized location.
Denial of Service: Cloud environments are publicly accessible and largely accessible via the Internet. This creates the risk of Denial of Service attacks if the CSP does not have adequate protections in place.
Data Corruption or Destruction: Data stored in the cloud can be corrupted or destroyed by accident, malicious intent, or natural disasters.
Theft or Media Loss: CSPs are responsible for the physical security of their data centers. If these security controls fail, an attacker may be able to steal the physical media storing an organization's data.
Malware: Ransomware and other malware increasingly target cloud environments as well as local storage. Access controls, secure backups, and anti-malware solutions are essential to protecting cloud data against theft or corruption.
Improper Disposal: The CSP is responsible for ensuring that physical media is disposed of correctly at the end of life. Cloud customers can also protect their data by using encryption to make the data stored on a drive unreadable.
A cloud provider wants to assure potential cloud customers that their environment is secure. What is one way for the cloud provider to achieve this without needing to provide audit access to every potential customer?
A. Undergo a Service Organization Control (SOC) 2 Type II audit
B. Undergo a Service Organization Control (SOC) 3 audit
C. Undergo a Payment Card Industry Data Security Standard (PCI DSS) audit
D. Undergo a Service Organization Control (SOC) 2 Type I audit
A. Undergo a Service Organization Control (SOC) 2 Type II audit
Explanation:
A Service Organization Control 2 (SOC 2) audit reports on various organizational controls related to security, availability, processing integrity, and confidentiality or privacy. A cloud provider may choose to have a SOC 2 audit performed and make the report available to the public. This allows potential customers to have a sense of confidence that the environment is secure without needing to do an audit of their own.
A Type II audit covers a span of time, which demonstrates the operating effectiveness of the controls. This is more informative than a Type I audit, which only shows that the design of the controls at a moment in time is good.
A SOC 3 report is a result of a SOC 2 audit. The information in the SOC 2 report may be more than the company wants to release. So there is a SOC 3, which shows that the SOC 2 was performed and that the customer can have confidence in the provider without divulging sensitive information. (The short version is that a SOC 3 is for public release.)
A PCI DSS audit is a good thing to do, but this would be done because of the handling of payment cards of some type. It has a much smaller reach as an audit. A SOC 2 would give the customer much more information and confidence in the provider.
The information security manager, Rohan, is working with the network and application teams to determine the best data protection methods to use for a new application that is being developed. Their concerns are that the integrity and confidentiality of the data must be protected.
Of the following, which is the BEST combination of technologies to meet their concerns?
A. A Representational State Transfer (REST) API and the Rivest-Shamir-Adleman (RSA) algorithm
B. Transport Layer Security (TLS) using Rivest-Shamir-Adleman (RSA) and Message Digest 5
C. A Representational State Transfer (REST) API with the Secure Hash Algorithm (SHA256)
D. Transport Layer Security (TLS) using Advanced Encryption Standard (AES) and Message Digest 5
D. Transport Layer Security (TLS) using Advanced Encryption Standard (AES) and Message Digest 5
Explanation:
Protecting the data in transit with TLS using AES provides for the protection of the confidentiality of the data. AES is a symmetric algorithm, which is a common choice for use with TLS. If MD5 is used, then the integrity of the data can be determined.
TLS with RSA does not indicate confidentiality protection as clearly as TLS with AES. RSA is an asymmetric algorithm and is not commonly used to protect the confidentiality of bulk data; it is commonly used to exchange the symmetric key, so this answer is almost correct. However, since the correct answer pairs TLS with AES, it is the better choice.
REST does not encrypt data. TLS can be added to REST, but the answer does not specify that. So REST with SHA256 does not protect the confidentiality.
REST with RSA does not work either because RSA is not a logical addition. It could be used in TLS to exchange the symmetric key. However, there is nothing in that answer to protect the integrity of the data.
The Cloud Service Provider (CSP) will not permit your business to conduct an independent examination of cloud service controls and has indicated that this role must be performed by an independent third party and the results provided to your organization. What type of activity is this?
A. Auditability
B. Resiliency
C. Governance
D. Regulatory
A. Auditability
Explanation:
CSPs rarely permit a Cloud Service Customer (CSC) to audit their service controls. Instead, the CSP engages third parties to conduct independent examinations of its cloud service controls and to offer an opinion on how they function relative to their purpose. Service Organization Controls (SOC) 2 assessments are examples of these types of assessments.
An audit could be done to determine if a CSP is in compliance with a regulation, but there is no indication of a regulation or law in the question, so regulatory is not the best answer here.
Governance is the oversight provided by the Board of Directors (BoD) and the Chief Executive Officer (CEO). (The CCSP exam covers corporate governance, security governance, and data governance.)
Resiliency is a concern for businesses today and is addressed by redundancy throughout the network, the cloud, and the processes of the business.
Adela has been working with her new company for just a week now. As the information security manager, she has been analyzing the Platform as a Service (PaaS) serverless cloud provider they are using. She has found that the security settings on the current service are not what needs to be in place for the data that they have to protect; the settings that are available are insufficient. She is aware of another cloud provider that does have a suitable service for them to use. However, the function that the system is running is specifically written for the current provider.
What is the term used to describe this type of scenario?
A. Vendor lock-in
B. Vendor lock-out
C. Provider exit
D. Guest hopping
A. Vendor lock-in
Explanation:
Cloud customers should avoid vendor lock-in. Vendor lock-in occurs when an organization is unable to easily move from one cloud provider to another without doing a lot of work. The function in this case would have to be redesigned, and there might be an additional element of having to reenter the data if they moved to another provider.
Provider exit is when a cloud provider decides to shut down one of their offerings, or perhaps they will sell it off to another provider.
Vendor lock-out would occur when a cloud provider files for bankruptcy and the courts take over control of their assets. When they do that, it is possible that the systems are shut down. If that happens, lock-out is the problem.
Guest hopping is an attack where an attacker jumps from one guest machine to another, possibly on another tenant.
An organization purchases their accounting program through the cloud. The accounting program is hosted entirely by the cloud provider on cloud hosted servers. The cloud customer is not responsible for maintaining any of the items needed to access the accounting program; they are simply able to access the program from anywhere that they have an internet connection.
What type of cloud service is being described here?
A. Platform as a Service (PaaS)
B. Database as a Service (DBaaS)
C. Infrastructure as a Service (IaaS)
D. Software as a Service (SaaS)
D. Software as a Service (SaaS)
Explanation:
Software as a Service (SaaS) is a type of cloud service in which the cloud provider maintains and manages everything on the back-end (including the infrastructure, platform, and server OS), and the cloud customer can simply access the software without needing to do any maintenance on it.
PaaS allows the customer to have access to a server-based or serverless environment into which they can load their software.
IaaS allows the customer to bring all of the operating systems, including those of virtual routers, switches, servers, firewalls, intrusion detection systems, and so on. This allows a customer to build a virtual data center without having to worry about buying and maintaining the physical equipment.
DBaaS allows the customer to have a database without having to maintain the hardware or even the operating system underneath the database.
Biometrics and passwords are part of which stage of IAM?
A. Accountability
B. Authentication
C. Identification
D. Authorization
B. Authentication
Explanation:
Identity and Access Management (IAM) services have four main practices, including:
Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user's actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
Bridgit is working for a manufacturing corporation that must protect the personal information of their employees and of their customers. She is looking for a document to provide guidance on how they can and should protect that information. Which of the following standards was developed by a joint privacy task force consisting of the American Institute of Certified Public Accountants and the Canadian Institute of Chartered Accountants?
A. Privacy Management Framework (PMF)
B. Sarbanes Oxley (SOX)
C. General Data Protection Regulation (GDPR)
D. ISO/IEC 27018
A. Privacy Management Framework (PMF)
Explanation:
The Privacy Management Framework, formerly the GAPP (Generally Accepted Privacy Principles), is a privacy standard developed by the American Institute of Certified Public Accountants and the Canadian Institute of Chartered Accountants. PMF contains the main privacy principles and is focused on managing and preventing threats to privacy.
The GDPR is the European Union's (EU) regulation that drives the member states' privacy laws.
ISO/IEC 27018 is an international standard to provide guidance to cloud providers acting as data processors. Data processors process data but cannot be employees of the company. The EU GDPR defines processing to include holding or storage of data.
SOX is a U.S. regulation that requires publicly traded companies to protect the integrity of their financial statements.
You are working with the Information Technology (IT) manager to establish a secure storage technology within their Infrastructure as a Service (IaaS) environment that they are building within their new virtual datacenter. They are worried that the data needs to be accessible within a short amount of time so that the users can perform the tasks that they need to do for their jobs. The users regularly access large amounts of data for analysis.
What technology should they use to enable access to the data?
A. Software Defined Network (SDN)
B. Hardware Security Module (HSM)
C. Fibre Channel
D. Content Distribution Network (CDN)
C. Fibre Channel
Explanation:
iSCSI, Fibre Channel, and Fibre Channel over Ethernet (FCoE) are storage area network technologies that create dedicated networks for data storage and retrieval.
SDN is a technology that enables more efficient switch and router-based networks. It is not for storage of data.
CDN is a technology that does enable data access in a more efficient manner, but it does not match the scenario of the question. CDN is useful if there is a piece of data such as a video, movie, etc. that is being accessed by people within a certain area. The content can be temporarily cached on servers closer to the users who need it.
An HSM is used to store cryptographic keys. That is a good addition for securing the data, but the question is looking for a storage technology first. An HSM does not secure the data itself; it secures the keys that are used to protect the data.
An application uses application-specific access control, and users must authenticate with their own credentials to gain their allowed level of access to the application. A bad actor accessed corporate data after having stolen credentials. According to the STRIDE threat model, what type of threat is this?
A. Spoofing identity
B. Insufficient due diligence
C. Broken authentication
D. Tampering with data
A. Spoofing identity
Explanation:
The STRIDE threat model has six threat categories: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, and Elevation of privileges (STRIDE). A bad actor logging in as a user is known as identity spoofing. Ensuring that credentials are protected in transmission and when stored by any system is critical. Using Multi-Factor Authentication (MFA) is also essential to prevent this. If you have any internet-accessible account (your bank, Amazon, etc.), you should enable MFA. The same advice is true in the cloud.
Broken authentication is the entry on the OWASP top 10 list that includes identity spoofing (and more). The question is about STRIDE.
Insufficient due diligence is a problem in the cloud (and elsewhere) when corporations do not think carefully before putting their systems and data into the cloud and do not ensure all the right controls are in place.
Tampering with data could occur once the bad actor is logged in as a user, but the question does not go that far. It is not necessary for someone to log in to tamper with data.
Tatum has been working with the cloud data architect and the cloud architect to plan the access control model that they will use. They are looking for something that will allow them to grant access based on characteristics such as job title, department, location, time of day, and so on.
What would you recommend they use?
A. Attribute-Based Access Control (ABAC)
B. Access Control Lists (ACLs)
C. Content Dependent Access Control (CDAC)
D. Role-Based Access Control (RBAC)
A. Attribute-Based Access Control (ABAC)
Explanation:
Attribute-Based Access Control (ABAC) is an access control model that grants or denies access to resources based on various attributes associated with the subjects, objects, and environmental conditions. It offers a flexible and dynamic approach to access control, allowing access decisions to be made based on a wide range of attributes rather than relying solely on user roles or permissions. Attributes can include job title, department, location, time of day, and more.
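A minimal sketch of an ABAC decision function (the attribute names and the policy itself are hypothetical, purely to illustrate the model):

```python
# Minimal ABAC sketch: access is granted only when all attribute conditions
# on the subject, resource, and environment hold. Policy is hypothetical.
from datetime import time

def abac_allow(subject: dict, resource: dict, env: dict) -> bool:
    """Grant access only if every attribute condition is satisfied."""
    return (
        subject["department"] == resource["owning_department"]
        and subject["title"] in resource["allowed_titles"]
        and env["location"] == "HQ"
        and time(8, 0) <= env["time_of_day"] <= time(18, 0)  # business hours only
    )

subject  = {"department": "R&D", "title": "Cloud Architect"}
resource = {"owning_department": "R&D",
            "allowed_titles": {"Cloud Architect", "Data Architect"}}
env      = {"location": "HQ", "time_of_day": time(10, 30)}

print(abac_allow(subject, resource, env))  # True
```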
Role-Based Access Control (RBAC) is an access control model widely used in cloud computing environments to manage and enforce access permissions based on user roles. RBAC assigns roles to users or entities and associates permissions with those roles, allowing for simplified and efficient access management.
Access Control Lists (ACLs) are a mechanism used in computer systems and networks to define and enforce permissions or access rights for users or entities to access resources or perform specific actions. ACLs are typically associated with files, directories, network devices, or other system resources.
Content Dependent Access Control (CDAC) is an access control mechanism that grants or denies access to resources based on the content of the information being accessed. Unlike traditional access control models that primarily rely on user identity or resource attributes, CDAC takes into account the actual content of the information to make access decisions.
A cloud administrator has just implemented a new hypervisor that is completely dependent on the host operating system for all operations. What type of hypervisor has this administrator implemented?
A. Full-service hypervisor
B. Type 2 hypervisor
C. Type 1 hypervisor
D. Bare metal hypervisor
B. Type 2 hypervisor
Explanation:
Type 2 hypervisors depend on and run on top of the host operating system rather than being tied directly into the hardware the way Type 1 hypervisors are.
Bare metal hypervisors are another name used for Type 1 hypervisors. Full-service hypervisors are not an actual type of hypervisor.
What networking practice is based on hierarchical, distributed tables, where a change made to the relationship between a domain and a specific IP address is registered at the top of the hierarchical system and filters down to all lower systems?
A. Virtual Private Network (VPN)
B. Dynamic Host Configuration Protocol (DHCP)
C. Software Defined Network (SDN)
D. Domain Name Service (DNS)
D. Domain Name Service (DNS)
Explanation:
The Domain Name Service (DNS) is a hierarchical system that translates domain names into IP addresses. When a user wants to communicate with another machine, the user’s machine queries a DNS server to get the correct address.
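As a small illustration, this Python snippet triggers exactly that query through the operating system's configured resolver (the hostname is just an example):

```python
# Minimal sketch of the DNS lookup a user's machine performs before connecting.
# socket.gethostbyname asks the system's configured DNS resolver, which walks
# the DNS hierarchy on our behalf.
import socket

address = socket.gethostbyname("example.com")
print(f"example.com resolves to {address}")
```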
DHCP is used to assign IP addresses to computers when they join a Local Area Network (LAN).
SDN is a technology that changes how switches know how to forward frames by adding a controller, which makes the decisions for the switches.
A VPN is an encrypted tunnel. The term is used in many places, but it means that the connection across the network is encrypted using TLS, SSH, or IPSec.
Which of the following features of a SIEM helps with identifying potential cybersecurity incidents?
A. Investigative Monitoring
B. Automated Monitoring
C. Log Centralization
D. Data Integrity
B. Automated Monitoring
Explanation:
Security information and event management (SIEM) solutions are useful tools for log analysis. Some of the key features that they provide include:
Log Centralization and Aggregation: Combining logs in a single location makes them more accessible and provides additional context by drawing information from multiple log sources.
Data Integrity: The SIEM is on its own system, making it more difficult for attackers to access and tamper with SIEM log files (which should be write-only).
Normalization: The SIEM can ensure that all data is in a consistent format, converting things like dates that can use multiple formats (see the sketch after this list).
Automated Monitoring or Correlation: SIEMs can analyze the data provided to them to identify anomalies or trends that could be indicative of a cybersecurity incident.
Alerting: Based on their correlation and analysis, SIEMs can alert security personnel of potential security incidents, system failures, and other events of interest.
Investigative Monitoring: SIEMs support active investigations by enabling investigators to query log files or correlate events across multiple sources.
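A minimal sketch of the normalization step mentioned above, converting timestamps from two hypothetical log sources into one consistent format:

```python
# Minimal sketch of SIEM-style normalization: timestamps arrive in different
# formats from different log sources and are converted to one consistent form.
# The sample log sources and stamps are hypothetical.
from datetime import datetime

raw_events = [
    ("firewall", "03/15/2024 14:02:11"),
    ("webserver", "2024-03-15T14:02:13"),
]

formats = {"firewall": "%m/%d/%Y %H:%M:%S", "webserver": "%Y-%m-%dT%H:%M:%S"}

for source, stamp in raw_events:
    normalized = datetime.strptime(stamp, formats[source]).isoformat()
    print(f"{source}: {normalized}")  # every event now shares one timestamp format
```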
Padma has deployed a switch technology that is different from what her company has been using for a very long time. This new technology removes the decision-making process from the switch and moves it to a controller, leaving the switch with the process of forwarding frames.
What technology has been deployed?
A. Internet Small Computer System Interface (iSCSI)
B. Fibre Channel
C. Virtual Local Area Network (VLAN)
D. Software Defined Networking (SDN)
D. Software Defined Networking (SDN)
Explanation:
Within a Software Defined Network (SDN), decisions regarding where traffic is filtered or sent to and the actual forwarding of traffic are completely separate from each other.
A Virtual Local Area Network (VLAN) is used to expand a local area network beyond physical/geographical limitations. It does not remove the decision making from the switch.
Fibre Channel (FC) and Internet Small Computer System Interface (iSCSI) are technologies used in Storage Area Networks (SANs) so that the devices can communicate with the connected switch using a protocol more efficient than Ethernet (IEEE 802.3).
Which of the following is an umbrella agreement that might cover MULTIPLE projects that two companies collaborate on?
A. MSA
B. SLA
C. NDA
D. SOW
A. MSA
Explanation:
Two organizations working together may have various agreements and contracts in place to manage their risks. Some of the common types include:
Master Service Agreement (MSA): An MSA is an over-arching contract for all the work performed between the two organizations.
Statement of Work (SOW): Each new project under the MSA is defined using an SOW.
Service Level Agreement (SLA): An SLA defines the conditions of service that the vendor guarantees to the customer. If the vendor fails to meet these terms, they may be forced to pay some penalty.
Non-Disclosure Agreement (NDA): An NDA is designed to protect confidential information that one or both parties share with the other.
Mia works for a network hardware and software development company. She is currently working on setting up the team that will be testing one of their new products. This particular piece of software is the Operating System (OS) of a network router.
When conducting functional testing, which is NOT an important consideration?
A. Testing must be sufficient to have reasonable assurance there are no bugs
B. Testing must be designed to exercise all requirements
C. Testing must be realistic for all environments
D. Testing must use limited information about the application
D. Testing must use limited information about the application
Explanation:
Testing that uses limited information about the application is called grey box testing, and it occurs after functional testing and deployment.
Functional testing is performed on an entire system, and the following are important considerations: testing must be realistic, must exercise all requirements, and must be sufficient to provide reasonable assurance that there are no bugs.
In a serverless Platform as a Service (PaaS) environment where a customer has set up Function as a Service (FaaS) to run a particular function to analyze some sales data, who allocates and maintains the data storage system?
A. Cloud auditor
B. Cloud Service Provider (CSP)
C. Cloud service broker
D. Cloud Customer (CC)
B. Cloud Service Provider (CSP)
Explanation:
The customer will not see the server or its operating system on the virtual machine that is running their function. They would not see the cloud storage setup either. Both would be handled by the CSP.
The cloud auditor performs audits on the cloud environment to see if they are compliant with a law, regulation, or industry standard like ISO 27001.
A cloud service broker negotiates the contracts and setup between the customer and the provider.
Which of the following types of testing verifies that individual functions or modules work as intended?
A. Regression Testing
B. Integration Testing
C. Usability Testing
D. Unit Testing
D. Unit Testing
Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:
Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended (see the sketch after this list).
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users' needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven't introduced bugs or broken functionality.
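A minimal unit test sketch using Python's built-in unittest (the function under test is hypothetical):

```python
# Minimal unit test sketch: verify a single function in isolation using the
# standard-library unittest framework. The function under test is hypothetical.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_percent(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    unittest.main()
```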
Non-functional testing tests the quality of the software and verifies that it provides necessary functionality not explicitly listed in requirements. Load and stress testing or verifying that sensitive data is properly secured and encrypted are examples of non-functional testing.
An engineer is adding validation processes to an application that will check that session tokens are being submitted by the valid and original obtainer of the token. What OWASP Top 10 vulnerability is this engineer mitigating by doing so?
A. Vulnerable and outdated components
B. Identification and authentication failures
C. Insecure design
D. Injection
B. Identification and authentication failures
Explanation:
The OWASP Top 10 is an up-to-date list of the most critical web application vulnerabilities and risks. Identification and authentication failures, formerly known as broken authentication, refers to the ability for an attacker to hijack a session token and use it to gain unauthorized access to an application. This risk can be mitigated by adding proper validation processes to ensure that session tokens are being submitted by the valid and original obtainer of the token.
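One way to implement such validation is to bind each token to a fingerprint of the client that obtained it. The sketch below is a hypothetical illustration of that idea (hashing the client's user agent and address into the server-side session record), not the only mitigation:

```python
# Minimal sketch of binding a session token to its original obtainer.
import hashlib
import secrets

sessions = {}  # token -> client fingerprint (server-side store)

def fingerprint(user_agent: str, ip: str) -> str:
    return hashlib.sha256(f"{user_agent}|{ip}".encode()).hexdigest()

def issue_token(user_agent: str, ip: str) -> str:
    token = secrets.token_urlsafe(32)
    sessions[token] = fingerprint(user_agent, ip)
    return token

def validate(token: str, user_agent: str, ip: str) -> bool:
    """Reject the token if it was not presented by its original obtainer."""
    return sessions.get(token) == fingerprint(user_agent, ip)

t = issue_token("Mozilla/5.0", "203.0.113.7")
print(validate(t, "Mozilla/5.0", "203.0.113.7"))  # True: original client
print(validate(t, "curl/8.0", "198.51.100.9"))    # False: likely hijacked
```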
Insecure design is exactly what it says. We need to shift left and build security into our software. This includes performing threat modeling and using reference architectures.
Vulnerable and outdated components speak to our current problem of using objects, functions, libraries, APIs, and such from Git, GitHub, GitLab, and so on. Much of this code is abandoned or not kept up to date.
Injection includes SQL and command injection. This is when a bad actor types in inappropriate commands from the user’s interface. Input validation would help to minimize this further. It had been in position one on the Top 10 for well over a decade and has finally fallen to position three on the list.
An organization implemented a Data Rights Management (DRM) program. The cloud security specialist has been tasked with ensuring that an in-depth report on the usage and access history can be generated for all files. Which of the following BEST represents this functionality?
A. Rights revocation
B. Continuous auditing
C. Replication restrictions
D. Persistent protection
B. Continuous auditing
Explanation:
Continuous auditing ensures that you can provide an in-depth report on usage and access history for all files that are protected by data rights management.
Rights revocation is to take away the user’s access to the Data Rights Management (DRM) protected data.
Persistent protection is one of the features of DRM that means that the protection travels with the file. It is agnostic of the location of the data. For example, you can open a book you purchased on Amazon through Kindle for Mac, Windows, Mobile, or even an actual Kindle. But you should not have a copy of the book as a PDF that you can give to someone else.
Replication restrictions prevent users from taking the content and making any copies they want to share with whomever they want. They can also restrict the ability to move content to new devices after a set number of previous moves.
NIST SP 800-137 Information Security Continuous Monitoring would be a good read for this info.
Of the following, which uses known weaknesses and flaws to verify that systems are properly hardened and then produces a report for management regarding discovered weaknesses?
A. Penetration testing
B. Runtime Application Self-Protection (RASP)
C. Static Application Security Test (SAST)
D. Vulnerability scanning
D. Vulnerability scanning
Explanation:
Vulnerability scanning is often run by organizations against their own systems. It uses known attacks and methodologies to ensure that systems are hardened against known vulnerabilities and threats.
Runtime Application Self-Protection (RASP) is a security mechanism that allows an application to protect itself by responding and reacting to ongoing events and threats.
During a penetration test, the tester will try to break into systems using the same techniques that an actual attacker would use.
Static Application Security Testing (SAST) is a test in which the tester has special knowledge of and access to the source code to manually review it for vulnerabilities and weaknesses.
Lily is a data science specialist working for a huge retailer. Lily and her team have been analyzing the use of data within the business. Through this analysis, they have been determining when to move products from stores in one location to another to match current customer needs. For the analysis, they pull in relational data from transactional systems, operational databases, and business applications.
What type of storage is this?
A. Data mart
B. Big data
C. Data mining
D. Data warehouse
D. Data warehouse
Explanation:
A data warehouse is a type of data storage that results from pulling in relational data from transactional systems, operational databases, and business applications.
A data mart is a small specialized collection of data.
Data mining is digging through the data (mining) looking for valuable information.
Big data technologies help to solve storage problems when the data a corporation has is not being managed well in databases anymore. It is characterized by volume, variety, velocity, veracity, and variability.
Maalik works for an international trade company that does a lot of business between the United States, China, and South Korea. He knows that there is a privacy framework that should drive their decisions and handling of personal information. He knows that people have a legitimate expectation of privacy. What principle does this meet?
A. Asia-Pacific Economic Cooperation Privacy Framework, preventing harm
B. General Data Protection Regulation, data minimization
C. Asia-Pacific Economic Cooperation Privacy Framework, Integrity of personal information
D. General Data Protection Regulation, purpose limitation
A. Asia-Pacific Economic Cooperation Privacy Framework, preventing harm
Explanation:
The Asia-Pacific Economic Cooperation (APEC) Privacy Framework has several principles. The principle of preventing harm says that organizations should take measures to prevent harm to individuals resulting from the misuse of their personal information.
The principle of Integrity of personal information says that organizations should ensure the accuracy and completeness of personal information and take measures to protect it against unauthorized access, alteration, or destruction.
The General Data Protection Regulation (GDPR) does not apply here. GDPR protects natural persons in Europe. They must be in Europe for GDPR to apply. If there is a German citizen in the US, China, or Korea, they are not protected under the GDPR. They are protected under the laws of the country they are currently in.
Which of the following is officially known as the “Financial Modernization Act of 1999”?
A. The Gramm-Leach-Bliley Act
B. Asian-Pacific Economic Cooperation
C. The Sarbanes-Oxley Act
D. General Data Protection Regulation
A. The Gramm-Leach-Bliley Act
Explanation:
Although it is officially named the Financial Modernization Act of 1999, it is most commonly known as and referred to as the Gramm-Leach-Bliley Act, or GLBA. This name pays tribute to the lead sponsors and authors of the act. GLBA is focused on protecting Personally Identifiable Information (PII) as it relates to financial institutions. Those same financial institutions must comply with SOX.
The Sarbanes-Oxley Act (SOX) requires publicly traded financial institutions to protect their financial statements. They must tell the truth about their financial status as publicly traded companies. It is a direct result of the Enron failure.
General Data Protection Regulation (GDPR) is a European Union (EU) regulation. It requires member states to have a compliant law. That law must have the same or greater level of protection of personal data for natural persons in the EU. It protects people if they are in the EU. If a German citizen is in the US at some point in time and a company collects their personal data in the US, they are not protected under GDPR.
Asian-Pacific Economic Cooperation (APEC) is headquartered in Singapore and has 21 economies participating. The purpose of APEC is to promote free trade around the Pacific rim by protecting personal data. At least, that is the piece we are concerned with here in CCSP.
Steve is working for a consulting firm. He is being asked for advice regarding the protection of data for a customer who is moving data into a Software as a Service implementation that enables their business to create and store documents of all types in the cloud. He is focused on the first phase of the data lifecycle in which encryption can be added to a created document that is now at rest.
Which phase would that be?
A. Use
B. Create
C. Destroy
D. Store
D. Store
Explanation:
While security controls are implemented in the create phase in the form of Transport Layer Security (TLS), this only protects data in transit and not data at rest. The store phase is the first phase in which security controls are implemented to protect data at rest.
The data lifecycle is Create, Store, Use, Share, Archive, Destroy.
A cloud service provider is building a new data center to provide options for companies that are looking for private cloud services. They are working to determine the size of the data center that they want to build. The Uptime Institute created the Data Center Site Infrastructure Tier Standard: Topology. With this standard, they defined several tiers of data centers. The cloud provider has a goal of reaching Tier III.
How is that characterized in general?
A. Concurrently Maintainable
B. Redundant Capacity Components
C. Fault Tolerance
D. Basic Capacity
A. Concurrently Maintainable
Explanation:
The Uptime Institute publishes one of the most widely used standards on data center tiers and topologies. The standard is based on four tiers, which include:
Tier I: Basic Capacity
Tier II: Redundant Capacity Components
Tier III: Concurrently Maintainable
Tier IV: Fault Tolerance
Zoe has been negotiating with a public cloud company regarding the Infrastructure as a Service (IaaS) deployment that they are working on. They are planning to move their data center into the cloud. This will eliminate the need for the company to maintain physical equipment. One of the concerns that Zoe has is that they must be in compliance with the Family Educational Rights and Privacy Act (FERPA). To ensure compliance, they must be able to review the data center, the configuration, the security controls, and so on that are in place.
What is this need known as that should be stated and agreed to in the contract?
A. Auditability
B. Resiliency
C. Interoperability
D. Measured service
A. Auditability
Explanation:
Auditability is the process of gathering and making available the evidence necessary to demonstrate the operation and use of the cloud. It’s important to remember that a Cloud Service Provider (CSP) rarely allows a customer to perform an audit on their controls. However, a CSP usually does supply third-party attestations under NDA.
Interoperability is the ability of two different systems to exchange information and use it. For example, a picture created on a Mac can be opened on a Windows box.
Measured service is the ability of cloud service providers to bill the customer for the resources that they use.
Resiliency is the ability of a system to withstand failures yet still be able to function.
If software developers and the supporting team were to ask the following four questions, what would they be doing?
What are we working on?
What could go wrong?
What are we going to do about it?
Did we do a good job?
A. Performing threat modeling
B. Determining Maximum Tolerable Downtime (MTD)
C. Evaluating the Recovery Point Objective (RPO)
D. Performing a quantitative risk assessment
A. Performing threat modeling
Explanation:
The four questions are the basic idea behind threat modeling. Threat modeling allows the team to identify, communicate, and understand threats and mitigations. There are several techniques, such as STRIDE, PASTA, TRIKE, and OCTAVE.
STRIDE is one of the most prominent models used for threat modeling. DREAD is another model, which scores and prioritizes vulnerabilities rather than categorizing threats. STRIDE includes the following six categories:
Spoofing identity
Tampering with data
Repudiation
Information disclosure
Denial of service
Elevation of privileges
A quantitative risk assessment is when the Single Loss Expectancy (SLE), Annual Rate of Occurrence (ARO), and Annualized Loss Expectancy (ALE) are calculated based on the threat to a specific asset.
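A minimal sketch of that arithmetic with hypothetical values:

```python
# Minimal sketch of the quantitative risk formulas named above;
# the asset value, exposure factor, and rate are hypothetical.
asset_value = 200_000     # value of the asset in dollars
exposure_factor = 0.25    # fraction of the asset lost per incident

sle = asset_value * exposure_factor  # Single Loss Expectancy = $50,000
aro = 0.5                            # Annual Rate of Occurrence: once every 2 years
ale = sle * aro                      # Annualized Loss Expectancy = $25,000

print(f"SLE=${sle:,.0f}, ARO={aro}, ALE=${ale:,.0f}")
```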
Determining the MTD is a step in Business Continuity/Disaster Recovery/Continuity planning. It answers the question of how long an asset can be unavailable before it is a significant problem for the business.
Evaluating the RPO is also a part of Business Continuity/Disaster Recovery/Continuity planning. The RPO is the value that represents how much data can be lost before it too is a problem for the business.
Rachel is in need of a method of confirming the authenticity of data coming into the application that is being created by the software developers. As the information security professional, she is able to recommend several techniques that can be built into the software. If they achieve their goals, they will be able to hold their users accountable for their actions.
What will they have achieved?
A. Nonrepudiation
B. Encryption
C. Hashing
D. Public key
A. Nonrepudiation
Explanation:
Nonrepudiation is the ability to confirm the origin or authenticity of data to a high degree of certainty. Nonrepudiation is typically done through methods such as hashing and digital signatures.
A hash, or message digest, confirms that the bits are correct: all the ones should be ones and all the zeros should be zeros. It does not provide authenticity, since there is no way to confirm where the data came from or who created it with just the hash. A digital signature needs to be added by encrypting the hash with a private key.
A digital signature is verified by decrypting it with a public key. This is not the correct answer to the question because the question is about what they will achieve. They will not achieve a public key.
Encryption is altering data to an unreadable format. This does not achieve nonrepudiation unless we specifically talk about asymmetric public and private keys.
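As an illustration of the whole card, here is a minimal digital signature sketch using the third-party cryptography package (pip install cryptography). It uses Ed25519, which hashes internally rather than encrypting a separate hash as in the RSA-style description above:

```python
# Minimal sketch of nonrepudiation via a digital signature. Only the holder
# of the private key could have produced the signature, so a valid
# verification ties the data to its originator.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # kept secret by the signer
public_key = private_key.public_key()       # shared with verifiers

message = b"Transfer $100 to account 42"    # hypothetical signed data
signature = private_key.sign(message)       # the signer cannot later deny this

try:
    public_key.verify(signature, message)   # raises if message or signature changed
    print("Signature valid: origin and integrity confirmed")
except InvalidSignature:
    print("Signature invalid")
```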