Pocket Prep 18 Flashcards
Client care representatives in your firm are now permitted to access customer accounts. For added protection, you’d like to build a feature that obscures a portion of the data when a representative reviews a customer’s account. What type of data protection is your firm attempting to implement?
A. Encryption
B. Obfuscation
C. Tokenization
D. Masking
D. Masking
Explanation:
The organization is trying to deploy masking. Masking obscures data by displaying, for example, only the last four or five digits of a Social Security or credit card number. The rest of the value appears to be present, but the user sees only a placeholder such as an asterisk or a dot, so the data is incomplete without the blocked or removed content.
Tokenization is the process of removing the data and placing a token in its place. The question describes part of the data remaining visible, so tokenization does not fit.
Encryption takes the data and makes it unreadable without the key. It would be unusual to encrypt, for example, only the first part of a credit card number, so this does not work either.
Obfuscation means to obscure or “confuse”: if data has been obfuscated, an attacker looking at it is left confused. Encryption is one way to obscure data, and there are others. It does not fit here because the user sees a string of asterisks or dots, and that is masking, not obfuscation.
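As a rough illustration of masking, here is a minimal Python sketch; the function name, the separator handling, and the four-digit cutoff are illustrative assumptions, not taken from any particular product.

import re

def mask_pan(pan: str, visible: int = 4) -> str:
    # Strip separators, then replace all but the last `visible` digits.
    digits = re.sub(r"[ -]", "", pan)
    return "*" * (len(digits) - visible) + digits[-visible:]

print(mask_pan("4111 1111 1111 1234"))  # ************1234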
Which of the following types of testing focuses on software’s interfaces and the experience of the consumer?
A. Integration Testing
B. Regression Testing
C. Usability Testing
D. Unit Testing
C. Usability Testing
Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:
- Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended (a short example appears after this explanation).
- Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
- Usability Testing: Usability testing verifies that the software meets users’ needs and provides a good user experience.
- Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven’t introduced bugs or broken functionality.
Non-functional testing tests the quality of the software and verifies that it provides necessary functionality not explicitly listed in requirements. Load and stress testing or verifying that sensitive data is properly secured and encrypted are examples of non-functional testing.
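To make the unit testing item above concrete, here is a minimal sketch using Python’s built-in unittest module; the discount function is a hypothetical example, not from any real codebase.

import unittest

def apply_discount(price: float, percent: float) -> float:
    # The single "unit" under test: a small, pure pricing function.
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(50.00, 10), 45.00)

if __name__ == "__main__":
    unittest.main()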
Which of the following is NOT one of the critical elements of a management plane?
A. Management
B. Orchestration
C. Scheduling
D. Monitoring
D. Monitoring
Explanation:
According to the CCSP, the three critical elements of a management plane are scheduling, orchestration, and management. Monitoring is not an element of the management plane.
Compliance with which of the following standards is OPTIONAL for cloud consumers and cloud service providers working in the relevant industry?
A. G-Cloud
B. PCI DSS
C. ISO/IEC 27017
D. FedRAMP
C. ISO/IEC 27017
Explanation:
Cloud service providers may have their environments verified against certain standards, including:
- ISO/IEC 27017 and 27018: The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) publishes various standards, including those describing security best practices. ISO 27017 and ISO 27018 describe how the information security management systems and related security controls described in ISO 27001 and 27002 should be implemented in cloud environments and how PII should be protected in the cloud. These standards are optional but considered best practices.
- PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) was developed by major credit card brands to protect the personal data of payment card users. This includes securing and maintaining compliance with the underlying infrastructure when using cloud environments.
- Government Standards: FedRAMP-compliant offerings and UK G-Cloud are cloud services designed to meet the requirements of the US and UK governments for computing resources. Compliance with these standards is mandatory for working with these governments.
Which of the following types of SOC reports provides high-level information about an organization’s controls intended for public dissemination?
A. SOC 1
B. SOC 3
C. SOC 2 Type II
D. SOC 2 Type I
B. SOC 3
Explanation:
Service Organization Control (SOC) reports are generated by the American Institute of CPAs (AICPA). The three types of SOC reports are:
- SOC 1: SOC 1 reports focus on financial controls and are used to assess an organization’s financial stability.
- SOC 2: SOC 2 reports assess an organization's controls in different areas, including Security, Availability, Processing Integrity, Confidentiality, or Privacy. Only the Security area is mandatory in a SOC 2 report.
- SOC 3: SOC 3 reports provide a high-level summary of the controls that are tested in a SOC 2 report but lack the same detail. SOC 3 reports are intended for general dissemination.
SOC 2 reports can also be classified as Type I or Type II. A Type I report is based on an analysis of an organization’s control designs but does not test the controls themselves. A Type II report is more comprehensive, as it tests the effectiveness and sustainability of the controls through a more extended audit.
Which stage of the IAM process relies heavily on logging and similar processes?
A. Identification
B. Authorization
C. Accountability
D. Authentication
C. Accountability
Explanation:
Identity and Access Management (IAM) services have four main practices, including:
- Identification: The user uniquely identifies themselves using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or an identity as a service (IDaaS) offering.
- Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
- Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
- Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing (see the logging sketch below).
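As a rough illustration of accountability through logging, here is a minimal sketch using Python’s standard logging module; the field names and example resource path are assumptions for illustration only.

import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

def log_action(user: str, action: str, resource: str) -> None:
    # Accountability: record who did what, to which resource, and when.
    audit.info(json.dumps({
        "timestamp": time.time(),
        "user": user,
        "action": action,
        "resource": resource,
    }))

log_action("jdoe", "read", "s3://finance-reports/q3.xlsx")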
Your organization wants to address baseline monitoring and compliance by restricting the duration of a host’s non-compliant condition. When the application is deployed again, the organization would like to decommission the old host and replace it with a new Virtual Machine (VM) constructed from the standard baseline image.
What functionality is described here?
A. Blockchain
B. Infrastructure as Code (IaC)
C. Virtual architecture
D. Immutable architecture
D. Immutable architecture
Explanation:
Immutable means unchanging over time or unable to be changed, and immutability is a preferred state for cloud infrastructure. In cloud settings, it is easy to decommission all the virtual infrastructure components used by an older version of software and deploy new virtual infrastructure in their place. Immutable infrastructure solves the problem of systems drifting from baseline settings over time: virtual machines are started from a golden image, and a non-compliant host is replaced rather than repaired.
IaC describes a virtual environment defined and deployed through code: the infrastructure is no longer physical routers, switches, and servers but virtual ones. That could also be called a virtual architecture, although IaC is the common term today.
Blockchain technology has an immutable element: it is, or should be, impossible to alter the record of ownership, as with cryptocurrency. The FBI has even been able to recover stolen bitcoins and return them to their rightful owners. It is a distributed ledger, however, not a mechanism for rebuilding hosts from a baseline image.
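A minimal sketch of the replace-don't-repair pattern described above, assuming AWS EC2 via the boto3 library; the AMI ID, instance type, and instance IDs are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")
GOLDEN_IMAGE = "ami-0123456789abcdef0"  # hardened standard baseline image

def replace_drifted_host(old_instance_id: str) -> str:
    # Immutable pattern: never patch a non-compliant host in place.
    # Launch a fresh VM from the golden image, then retire the old one.
    response = ec2.run_instances(
        ImageId=GOLDEN_IMAGE,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    ec2.terminate_instances(InstanceIds=[old_instance_id])
    return response["Instances"][0]["InstanceId"]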
Information Rights Management (IRM) is the tool that a large manufacturing company has decided to use for their classroom books. It is important for them to control who has access to their content. One of the features that they are most interested in is the ability to recall their books and replace them with up-to-date content, since their training is very technical and they want to ensure that only their customers have access to the books.
Which phase of the data lifecycle does IRM fit best into?
A. Archive phase
B. Use phase
C. Share phase
D. Create phase
B. Use phase
Explanation:
IRM fits best into the use phase. It controls who has access to the content and allows control of copy, paste, print, and other features. It also allows content to be expired, taken out of use, or replaced. IRM is about controlling how the content is used by the customer.
The create phase does not fit the scenario because this phase is where the content is originally created by the authors. Once the data is created, it can be shared with the customers.
The share phase is not the primary phase of IRM. The exchange between the company and the user of the content is the share phase, but that is not where IRM focuses. The primary phase is the use phase because that is where the features fit.
The archive phase is incorrect because this phase is about long-term storage, whereas IRM controls the content while the user is accessing it.
A cloud provider needs to ensure that the data of each tenant in their multitenant environment is only visible to authorized parties and not to the other tenants in the environment. Which of the following can the cloud provider implement to ensure this?
A. Network security groups (NSG)
B. Physical network segregation
C. Geofencing
D. Hypervisor tenant isolation
D. Hypervisor tenant isolation
Explanation:
In a cloud environment, physical network segregation is not possible unless it is a private cloud built that way. However, it is important for cloud providers to ensure separation and isolation between tenants in a multitenant cloud. To achieve this, the hypervisor performs tenant isolation within each physical machine.
An NSG applies firewall-style rules to a virtual Local Area Network (LAN) segment, which is beneficial to use. It controls traffic within a tenant or from the internet to that tenant, not between tenants.
Geofencing is used to control where a user can connect from. It does not isolate tenants from each other. Rather, it restricts access from countries that you do not expect access to come from.
Which of the following SOC duties involves coordinating with a team focused on a particular task?
A. Threat Detection
B. Threat Prevention
C. Incident Management
D. Quality Assurance
C. Incident Management
Explanation:
The security operations center (SOC) is responsible for managing an organization’s cybersecurity. Some of the key duties of the SOC include:
- Threat Prevention: Threat prevention involves implementing processes and security controls designed to close potential attack vectors and security gaps before they can be exploited by an attacker.
- Threat Detection: SOC analysts use Security Information and Event Management (SIEM) solutions and various other security tools to identify, triage, and investigate potential security incidents to detect real threats to the organization.
- Incident Management: If an incident has occurred, the SOC may work with the incident response team (IRT) to contain, investigate, remediate, and recover from the identified incident.
Quality Assurance is not a core SOC responsibility.
Ocean is the information security manager for a new company. They work with the company to ensure that it is in compliance with the specific laws that the company is worried about. Which of the following would they and the company be the least concerned with?
A. Data location
B. Containers
C. Type of data
D. Multi-tenancy
B. Containers
Explanation:
Containers are a form of operating-system-level virtualization used to package applications; on their own, they do not present significant compliance concerns.
For regulated customers, the type of data, the data's location, and multi-tenancy are frequently the primary compliance concerns. GDPR, with its rules on where and how personal data may be stored and processed, is a good example of this scenario.
A cloud provider has the capability to use a large pool of resources for numerous client hosts and applications. They are able to offer scalability and on-demand self-service. Which technology makes all this possible?
A. Software defined networking
B. Virtual media
C. Virtualization
D. Guest operating systems
C. Virtualization
Explanation:
Without virtualization, cloud environments as we know them would not be possible. This is because cloud environments are built on virtualization technology. It is virtualization that allows cloud providers to leverage a pool of resources for various customers and the ability to offer such scalability and on-demand self-service.
A Virtual Machine (VM) is a guest operating system running on top of a hypervisor on the host. This is just part of what virtualization allows.
Virtual media, such as virtual Hard Disk Drives (HDD) or virtual Solid State Drives (SSD), is another benefit of what virtualization allows.
Software Defined Networking (SDN) is an advance in networking technology that is often used alongside virtualization, but by itself it does not provide the resource pooling, scalability, and on-demand self-service described here.
Halo, a cloud information security specialist, is working with the cloud data architect to design a secure environment for the corporation’s data in the cloud. They have decided, based on latency issues, that they are going to build a Storage Area Network (SAN) using Fibre Channel. Halo is working to identify the security mechanisms that need to be configured with the SAN.
Which of the following are security features that they should use to protect the storage controllers and all the sensitive data?
A. Authentication, LUN Masking, Transport Layer Security (TLS)
B. Authentication, Internet Protocol Security (IPSec), LUN masking
C. Authentication, LUN Masking, Secure Shell (SSH)
D. Kerberos, Internet Protocol Security (IPSec), LUN zoning
B. Authentication, Internet Protocol Security (IPSec), LUN masking
Explanation:
Security mechanisms that should be added to Fibre Channel include:
- LUN Masking: Logical Unit Number (LUN) masking is a mechanism used to control access to storage devices on the SAN. LUN masking ensures that only authorized devices can access a particular LUN, helping to prevent unauthorized access or data theft.
- Authentication: Fibre Channel supports several methods of authentication, including Challenge-Handshake Authentication Protocol (CHAP) and Remote Authentication Dial-In User Service (RADIUS). These protocols ensure that only authorized devices are allowed to connect to the SAN.
- Encryption: Fibre Channel traffic can be encrypted using IPsec or other encryption protocols. Encryption helps to protect data in transit against eavesdropping or interception by unauthorized parties.
- Zoning: Fibre Channel switches support the concept of zoning, which allows administrators to control which devices can communicate with each other on the SAN. Zoning can be based on port, WWN (World Wide Name), or a combination of both.
- Auditing and logging: Fibre Channel switches and devices should be configured to generate logs and audit trails of all SAN activity. This can help identify potential security incidents or anomalies and provide a record of activity for compliance purposes.
Kerberos is not commonly used to authenticate to a Fibre Channel SAN; CHAP is more likely.
LUN masking is the mechanism used, not LUN zoning; the word zoning by itself would have been sufficient.
TLS and SSH are not common choices for this encryption. TLS can be used if the traffic is being tunneled over IP, as in Fibre Channel over IP (FCIP).
Frankie has been tasked with finding and understanding how certain data keeps being leaked. She is analyzing the circumstances that data is being used and transmitted under. What type of analysis is she doing as part of Data Loss Prevention (DLP)?
A. Contextual analysis
B. Data permanence
C. Data classification
D. Content analysis
A. Contextual analysis
Explanation:
Data Loss Prevention (DLP) is the set of tools, technologies, and policies that a business can use to keep its sensitive data from being sent to or used by the wrong people or in the wrong places.
Content analysis is when the DLP tools are looking for keywords, patterns, metadata, or anything to identify sensitive data, such as social security numbers or credit card numbers.
Contextual analysis is the analysis of the circumstances (context) in which data is being used, for example, whether an email is being transmitted to or received from an external system rather than an internal one.
Data classification is understanding, labeling, and applying policies based on how sensitive data is.
Data permanence would be simply how long the data exists. It is not a formal term.
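For contrast with contextual analysis, here is a minimal sketch of the content analysis side: a pattern check for likely credit card numbers. The regex is an illustrative assumption; real DLP engines add validation such as Luhn checks and combine matches with contextual signals like the destination domain.

import re

# Match 13-16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def contains_card_number(text: str) -> bool:
    return bool(CARD_PATTERN.search(text))

print(contains_card_number("Invoice total: $42.00"))      # False
print(contains_card_number("Card: 4111 1111 1111 1111"))  # True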
Mateo is working with the cloud provider that his business has chosen to provide Platform as a Service (PaaS) for their server-based needs. It is necessary to specify the Central Processing Unit (CPU) requirements to ensure that this solution works as they require. CPU needs would be specified within the what?
A. Business Associate Agreement (BAA)
B. Service Level Agreement (SLA)
C. Master Services Agreement (MSA)
D. Privacy Level Agreement (PLA)
B. Service Level Agreement (SLA)
Explanation:
A Service Level Agreement (SLA) specifies the conditions of service that will be provided by the cloud provider, such as uptime or CPU needs.
The MSA is a document that covers the basic relationship between the two parties, in this case the customer and the cloud provider. It does not specify metrics such as CPU needs.
The BAA is found within HIPAA. It informs the cloud provider of the requirements to protect health data. In Europe, this would be called a Data Processing Agreement (DPA) under GDPR. More generically, this would be called a Privacy Level Agreement (PLA).
An organization has a team working on an audit plan. They have just effectively defined all the objectives needed to set the groundwork for the audit plan. What is the NEXT step for this team to complete?
A. Perform the audit
B. Define scope
C. Review previous audits
D. Conduct market research
B. Define scope
Explanation:
Audit planning is made up of four main steps, which occur in the following order:
1. Define objectives
2. Define scope
3. Conduct the audit
4. Lessons learned and analysis
Which of the following events is likely to cause the initiation of a Disaster Recovery (DR) plan?
A. A failure in the supply chain for the manufacturing process
B. The loss of the Chief Executive Officer (CEO) and Chief Financial Officer (CFO) in a plane crash
C. A fire in the primary data center
D. The main Internet Service Provider (ISP) experiences a fiber cut
C. A fire in the primary data center
Explanation:
In NIST SP 800-34, NIST defines a Disaster Recovery Plan as a written plan for recovering one or more information systems at an alternate facility in response to a major hardware or software failure or the destruction of a facility. A fire destroying the primary data center fits this definition, so it is the correct answer.
NIST defines a Business Continuity Plan as a written document with instructions or procedures that describe how an organization’s mission/business processes will be sustained during and after a significant disruption. Because the question is looking for a disaster, the fire is the better answer here: it has the potential to destroy the facility, or at least to damage the hardware and software in the data center.
If the ISP has a fiber cut, that could disrupt communications to the data center, but it is unlikely to require a move to an alternate site. This is a BC issue, not a disaster, at least according to the NIST definitions.
These NIST definitions work well for this exam. If you disagree with them or use the terms in another way, that is fine; just know that the exam looks at the terms this way.
If the CEO’s and CFO’s lives are lost, that is a sad event for their families and for the business. A succession plan should be created if this is a concern for a business.
A potential failure in the supply chain is something that needs to be managed. ISO/IEC 28000 is a useful document that begins that work. However, a DR plan is not needed for this. Perhaps a BC plan would be useful though.
While building their virtualized data center in a public cloud Infrastructure as a Service (IaaS), a real estate corporation operating in Canada knows that it must take care of all the data and personal information it will be storing in the cloud. Since it is critical to protect the data in their possession, they are working to control access.
Which of the following is NOT a protection technique that they can use for their systems?
A. Privileged access
B. Standard configurations
C. Separation of duty
D. Least privilege
A. Privileged access
Explanation:
Privileged access is what must be strictly limited; it should be governed by least privilege and separation of duties. It is the thing being controlled, not a protection technique in itself.
Least privilege means that any user should be given as little access as possible: access only to what they need, with only the permissions they require and nothing more. This is a great idea to pursue but difficult to achieve in reality.
Separation of duty is the idea of breaking a task into specific steps and assigning those steps to at least two people, each of whom must perform their own steps for the task to be completed. The purpose is to force collusion: committing fraud would require convincing someone else to help rather than acting alone.
Standard configurations are agreed-upon baselines and aid in managing change, which provides protection for virtualization systems.
A large social media company that relies on public Infrastructure as a Service (IaaS) for their virtual Data Center (vDC) had an outage: they could not be located through Domain Name System (DNS) queries midafternoon one Thursday. A configuration in their virtual routers had been altered incorrectly. What did they fail to manage properly?
A. Input validation
B. Service level management
C. User training
D. Change enablement practice
D. Change enablement practice
Explanation:
ITIL defines change enablement practice as the practice of ensuring that risks are properly assessed, authorizing changes to proceed, and managing a change schedule to maximize the number of successful service and product changes. This is what happened to Facebook/Instagram/WhatsApp/Meta. They have their own network, but the effect would have been the same using AWS as an IaaS. This is change management.
Service level management is defined in ITIL as the practice of setting clear business-based targets for service performance so that the delivery of a service can be properly assessed, monitored, and managed against these targets.
Input validation needs to be performed by software to ensure that the values entered by the users are correct. The main goal of input validation is to prevent the submission of incorrect or malicious data and ensure that the software functions as intended. By checking for errors or malicious input, input validation helps to increase the security and reliability of software.
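A minimal sketch of allow-list input validation in Python; the username rule is a hypothetical example of an explicit allow-list.

import re

# Assumed rule for illustration: 3-32 lowercase letters, digits, or underscores.
USERNAME_PATTERN = re.compile(r"[a-z0-9_]{3,32}")

def validate_username(raw: str) -> str:
    # Reject anything that does not fully match the allow-list pattern.
    if not USERNAME_PATTERN.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

print(validate_username("jdoe_42"))              # accepted
# validate_username("jdoe; DROP TABLE users")    # raises ValueError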
User training can help reduce the likelihood of errors occurring while using the software. By teaching users how to properly use the software, they become more aware of potential mistakes that may occur and can take measures to prevent them. This can help reduce the occurrence of mistakes, leading to less downtime, more accurate work, and improved outcomes.
Software Configuration Management (SCM) is widely used in all software development environments today. There are many practices that are part of a secure SCM environment. What are some of these practices?
A. Version control, build automation, release management, issue tracking
B. Secure Software Development Lifecycle, build automation, release management, issue tracking
C. Version control, build automation, Secure Software Development Lifecycle
D. Version control, release management, testing and tracking tools
A. Version control, build automation, release management, issue tracking
Explanation:
Software Configuration Management (SCM) has many practices. Some common activities include:
- Version control: This allows developers to track changes to code, collaborate, and revert changes if needed.
- Build automation: This automates the compiling of the source code into executable software. Jenkins and Travis CI are common build tools today.
- Release management: This helps to automate the process of deploying software and ensures releases are tested and approved.
- Issue tracking: This is used to track and manage bugs, feature requests, and other issues. Jira, Trello, and Asana are common tools.
The Secure Software Development Lifecycle (SSDLC) is a related yet distinct process. SSDLC is about developing software with security in mind. SCM can support the SSDLC process, but SSDLC is not a practice of SCM.
Testing and tracking tools are found within SCM, but they are tools rather than the core practices themselves.
Kathleen works at a large financial institution that has a growing software development group with a desire to “shift left” in their thinking as they build their Platform as a Service (PaaS) environment. The Development and Operations (DevOps) teams are now working together to build and deploy a strong and secure cloud environment that will contain a Software as a Service (SaaS) product to be used by the financial analysts. To ensure the software can withstand the attack attempts that will surely happen, they need to hypothesize what can happen so they can do their best to prevent it.
What would you recommend that they do?
A. Threat modeling using both Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD) and vulnerability assessment
B. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)
C. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and open box testing
D. Threat modeling using both Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD) and penetration testing
B. Threat modeling using both Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege (STRIDE) and Damage, Reproducibility, Exploitability, Affected users, Discoverability (DREAD)
Explanation:
Threat modeling is the process of finding threats and risks that face an application or system once it has gone live. This is an ongoing process that changes as the risk landscape changes and is, therefore, an activity that is never fully completed. DREAD and STRIDE, both conceptualized by Microsoft, are two prominent models recommended by OWASP. Together, they look at what could happen (STRIDE) and how bad it could be (DREAD).
Threat modeling techniques include STRIDE, DREAD, PASTA, ATASM, TRIKE, and a few others.
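As a rough illustration of the “how bad could it be” half, here is a minimal DREAD scoring sketch; the 1-10 rating scale and simple averaging follow the commonly described convention, and the example ratings are invented.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    # Each category is rated 1-10; the overall risk is the average.
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(1 <= r <= 10 for r in ratings):
        raise ValueError("each DREAD rating must be between 1 and 10")
    return sum(ratings) / len(ratings)

# A threat that is easy to reproduce and exploit but limited in damage:
print(dread_score(4, 9, 8, 3, 7))  # 6.2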
Open box testing is a type of software testing where the code is known to the tester. It is not threat modeling; it is an actual test of actual code. Threat modeling is predictive, trying to understand how threats could be realized in the future.
The same is true with vulnerability assessment and penetration testing. They are looking for problems that exist, not predicting what could happen in the future.
Which of the following attributes of evidence deals with the fact that it is real and relevant to the investigation?
A. Convincing
B. Admissible
C. Authentic
D. Complete
C. Authentic
Explanation:
Typically, digital forensics is performed as part of an investigation or to support a court case. The five attributes that define whether evidence is useful include:
- Authentic: The evidence must be real and relevant to the incident being investigated.
- Accurate: The evidence should be unquestionably truthful and not tampered with (integrity).
- Complete: The evidence should be presented in its entirety without leaving out anything that is inconvenient or would harm the case.
- Convincing: The evidence supports a particular fact or conclusion (e.g., that a user did something).
- Admissible: The evidence should be admissible in court, which places restrictions on the types of evidence that can be used and how it can be collected (e.g., no illegally collected evidence).
Which of the following risks associated with PaaS environments includes hypervisor attacks and VM escapes?
A. Interoperability Issues
B. Persistent Backdoors
C. Virtualization
D. Resource Sharing
C. Virtualization
Explanation:
Platform as a Service (PaaS) environments inherit all the risks associated with IaaS models, including personnel threats, external threats, and a lack of relevant expertise. Some additional risks added to the PaaS model include:
- Interoperability Issues: With PaaS, the cloud customer develops and deploys software in an environment managed by the provider. This creates the potential that the customer’s software may not be compatible with the provider’s environment or that updates to the environment may break compatibility and functionality.
- Persistent Backdoors: PaaS is commonly used for development purposes since it removes the need to manage the development environment. When software moves from development to production, security settings and tools designed to provide easy access during testing (i.e., backdoors) may remain enabled and leave the software vulnerable to attack in production.
- Virtualization: PaaS environments use virtualized OSs to provide an operating environment for hosted applications. This creates virtualization-related security risks such as hypervisor attacks, information bleed, and VM escapes.
- Resource Sharing: PaaS environments are multitenant environments where multiple customers may use the same provider-supplied resources. This creates the potential for side-channel attacks, breakouts, information bleed, and other issues with maintaining tenant separation.
At which stage of the process of developing a BCP should an organization plan out personnel and resource requirements?
A. Implementation
B. Testing
C. Auditing
D. Creation
A. Implementation
Explanation:
Managing a business continuity/disaster recovery plan (BCP/DRP) has three main stages:
- Creation: The creation stage starts with a business impact assessment (BIA) that identifies critical systems and processes and defines what needs to be covered by the plan and how quickly certain actions must be taken. Based on this BIA, the organization can identify critical, important, and support processes and prioritize them effectively. For example, if critical applications can only be accessed via a single sign-on (SSO), then SSO should be restored before them. BCPs are typically created first and then used as a template for prioritizing operations within a DRP.
- Implementation: Implementation involves identifying the personnel and resources needed to put the BCP/DRP into place. For example, an organization may take advantage of cloud-based high availability features for critical processes or use redundant systems in an active/active or active/passive configuration (dependent on criticality). Often, decisions on the solution to use depend on a cost-benefit analysis.
- Testing: Testing should be performed regularly and should consider a wide range of potential scenarios, including cyberattacks, natural disasters, and outages. Testing can be performed in various ways, including tabletop exercises, simulations, or full tests.
Auditing is not one of the three stages of developing a BCP/DRP.
A cloud security engineer working for a financial institution needs to determine how long specific financial records must be stored and preserved. Which of the following specifies how long financial records must be preserved?
A. Gramm-Leach-Bliley Act (GLBA)
B. Privacy Act of 1988
C. General Data Protection Regulation (GDPR)
D. Sarbanes-Oxley (SOX)
D. Sarbanes-Oxley (SOX)
Explanation:
The Sarbanes-Oxley Act (SOX) regulates how long financial records must be kept. SOX is enforced by the Securities and Exchange Commission (SEC). SOX was passed as a way to protect stakeholders and shareholders from improper practices and errors. This was passed as a result of the fraudulent reporting by Enron.
GDPR is the European Union’s (EU) requirement for member countries, including Germany, to protect personal data in their possession.
GLBA requires financial institutions to protect their customers’ personal data; it does not specify how long financial records must be preserved.
The Privacy Act of 1988 is an Australian law that requires personal data to be protected.
A SOC report is MOST related to which of the following common contractual terms?
A. Litigation
B. Right to Audit
C. Metrics
D. Compliance
B. Right to Audit
Explanation:
A contract between a customer and a vendor can have various terms. Some of the most common include:
- Right to Audit: CSPs rarely allow customers to perform their own audits, but contracts commonly include acceptance of a third-party audit in the form of a SOC 2 or ISO 27001 certification.
- Metrics: The contract may define metrics used to measure the service provided and assess compliance with service level agreements (SLAs).
- Definitions: Contracts will define various relevant terms (security, privacy, breach notification requirements, etc.) to ensure a common understanding between the two parties.
- Termination: The contract will define the terms by which it may be ended, including failure to provide service, failure to pay, a set duration, or with a certain amount of notice.
- Litigation: Contracts may include litigation terms such as requiring arbitration rather than a trial in court.
- Assurance: Assurance requirements set expectations for both parties. For example, the provider may be required to provide an annual SOC 2 audit report to demonstrate the effectiveness of its controls.
- Compliance: Cloud providers will need to have controls in place and undergo audits to ensure that their systems meet the compliance requirements of regulations and standards that apply to their customers.
- Access to Cloud/Data: Contracts may ensure access to services and data to protect a customer against vendor lock-in.
Occhave is consulting on a new project for a company that is expanding its retail operations to vacation destinations around the world. Some of these remote vacation spots have limited internet access at times. Credit card processing is their primary concern as it pertains to network access.
If they do not have internet access at a location, what options do they have?
A. Use Voice over Internet Protocol (VoIP) to contact the bank to confirm available funds
B. Capture the credit card details locally and wait for internet access to return
C. Access the local cloud server to process the credit card rather than the bank server
D. Use the internet access on their mobile cell phones until the internet is back
B. Capture the credit card details locally and wait for internet access to return
Explanation:
If there is no internet access, the only choice is to capture the card details locally and wait for access to return before actually charging the credit card. There is a financial risk that a card may turn out to be invalid after the customer has already left the store with the product, but if the company wants to make sales at those times, that is the risk they have to take. They could also capture contact information so they can reach the customer later.
One of the conditions for cloud is broad network access. If you have access to the network (the internet), then you have access to the cloud. If you do not have access to the network, then there is no cloud.
No internet access means no internet access, so it is not possible to use the internet through mobile cell phones either.
There is no local cloud server since there is no internet. There could be a private cloud that has been built, but this company is not big enough in those remote locations to build a private cloud every place the internet is not reliable.
VoIP is the transmission of a phone call over an IP-based network. For this remote location, the IP-based network would be the internet. In the scenario, there is no internet access at all, so this is not an option.
A cloud information security manager is working with the data architect to determine the best way to implement encryption in a specific database. They are analyzing the data that is stored in this particular database, and they have discovered that there are a few fields with very sensitive data.
What type of encryption would work best to protect this data and not overwhelm the administrators and systems with too much work?
A. Application-level encryption
B. Fully Homomorphic Encryption
C. Column-level encryption
D. Transparent Data Encryption
C. Column-level encryption
Explanation:
Column-level encryption can be performed to encrypt the few columns (fields or attributes) that are particularly sensitive. This allows for granular control where the columns that contain sensitive information such as social security numbers or credit card numbers can be encrypted.
Application-level encryption could be used for this, but it does require more work to manage and maintain. This involves encrypting the data at the application layer before it is stored in the database.
Transparent Data Encryption (TDE) encrypts the entire database or specific columns, and the application code does not need to be changed when TDE is used. It is a possible answer, but given a choice, the answer closest to the question is usually better: since the question mentions fields, which is another name for columns along with attributes, column-level encryption is the best answer here.
Fully Homomorphic Encryption (FHE) is a new and emerging technique to keep the data encrypted while it is in use. The three techniques above are for encrypting data at rest.
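A minimal sketch of encrypting just a sensitive column at the application layer, assuming the Python cryptography package and an in-memory SQLite table; the table, column names, and key handling are illustrative assumptions (a real deployment would fetch the key from a key management service).

import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only; store keys in a KMS
f = Fernet(key)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, ssn BLOB)")
# Only the sensitive column (ssn) is encrypted; the rest stays plaintext.
db.execute("INSERT INTO customers VALUES (?, ?)",
           ("Ana", f.encrypt(b"078-05-1120")))

name, enc_ssn = db.execute("SELECT name, ssn FROM customers").fetchone()
print(name, f.decrypt(enc_ssn).decode())  # Ana 078-05-1120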
Which of the following types of private data is protected by the GDPR, CCPA/CPRA, and similar data protection laws?
A. Personally Identifiable Information
B. Protected Health Information
C. Payment Data
D. Contractual Private Data
A. Personally Identifiable Information
Explanation:
Private data can be classified into a few different categories, including:
- Personally Identifiable Information (PII): PII is data that can be used to uniquely identify an individual. Many laws, such as the GDPR and CCPA/CPRA, provide protection for PII.
- Protected Health Information (PHI): PHI includes sensitive medical data collected regarding patients by healthcare providers. In the United States, HIPAA regulates the collection, use, and protection of PHI.
- Payment Data: Payment data includes sensitive information used to make payments, including credit and debit card numbers, bank account numbers, etc. This information is protected under the Payment Card Industry Data Security Standard (PCI DSS).
- Contractual Private Data: Contractual private data is sensitive data that is protected under a contract rather than a law or regulation. For example, intellectual property (IP) covered under a non-disclosure agreement (NDA) is contractual private data.
A startup cloud provider is building their first Data Center (DC). They have been researching the constraints that the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends for temperature and humidity inside the DC. The DC will be close to a desert, and they are concerned about it being too dry in the DC. At the same time, they want to ensure that the moisture level is not too high in their data center.
What is the recommended maximum moisture level for a data center?
A. 60 percent relative humidity
B. 70 percent relative humidity
C. 50 percent relative humidity
D. 80 percent relative humidity
A. 60 percent relative humidity
Explanation:
The recommended maximum moisture level in a data center is 60 percent relative humidity.
The recommended minimum is 40 percent relative humidity. When there is too much moisture in the air, it can cause condensation to form, which may damage the systems. In addition, having the humidity levels too low may cause an excess of electrostatic discharge.
Which of the following SIEM features may be necessary when dealing with data sets from various sources?
A. Automated Monitoring
B. Data Integrity
C. Normalization
D. Alerting
C. Normalization
Explanation:
Security information and event management (SIEM) solutions are useful tools for log analysis. Some of the key features that they provide include:
- Log Centralization and Aggregation: Combining logs in a single location makes them more accessible and provides additional context by drawing information from multiple log sources.
- Data Integrity: The SIEM is on its own system, making it more difficult for attackers to access and tamper with SIEM log files (which should be write-only).
- Normalization: The SIEM can ensure that all data is in a consistent format, converting things like dates that can use multiple formats (see the sketch below).
- Automated Monitoring or Correlation: SIEMs can analyze the data provided to them to identify anomalies or trends that could be indicative of a cybersecurity incident.
- Alerting: Based on their correlation and analysis, SIEMs can alert security personnel of potential security incidents, system failures, and other events of interest.
- Investigative Monitoring: SIEMs support active investigations by enabling investigators to query log files or correlate events across multiple sources.
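A minimal sketch of the normalization item above, converting timestamps to one canonical form; the input formats are assumptions chosen to resemble common log sources, not an exhaustive list.

from datetime import datetime

# Assumed source formats: Apache-style, ISO-like, and US 12-hour logs.
KNOWN_FORMATS = ["%d/%b/%Y:%H:%M:%S", "%Y-%m-%d %H:%M:%S", "%m/%d/%Y %I:%M %p"]

def normalize_timestamp(raw: str) -> str:
    # Convert any recognized format to canonical ISO 8601.
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {raw}")

print(normalize_timestamp("07/Mar/2024:13:05:22"))  # 2024-03-07T13:05:22
print(normalize_timestamp("03/07/2024 1:05 PM"))    # 2024-03-07T13:05:00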
Amir is working for a large organization that has a Platform as a Service (PaaS) application they created for their internal users. It is a web application that uses browser cookies for sessions and state. However, when a user logs out, the cookies are not properly destroyed. This has allowed another user with access to the same browser to log in using the cookies from the previous session.
What is this an example of?
A. Security misconfiguration
B. Sensitive data exposure
C. Broken authentication
D. Broken access control
C. Broken authentication
Explanation:
Broken authentication is one of the OWASP Top 10 vulnerabilities. Broken authentication occurs when an issue with a session token or cookie makes it possible for an attacker to gain unauthorized access to a web application. This can occur when session tokens are not properly validated, making it possible for an attacker to hijack the token and gain access. Another example of this can occur when cookies are not properly destroyed after a user logs out, making it possible for the next user to gain access with their cookies.
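A minimal sketch of a logout handler that avoids this flaw, assuming the Flask framework; the route, the secret key placeholder, and the cookie name (Flask's default is "session") are illustrative.

from flask import Flask, make_response, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder for illustration

@app.route("/logout")
def logout():
    session.clear()  # destroy the session state on logout
    resp = make_response("logged out")
    resp.delete_cookie("session")  # expire the cookie in the browser
    return resp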
Security misconfiguration occurs when software is configured incorrectly or insecurely, for example, because the person deploying it does not understand how it should be configured or which settings need to be in place.
A great resource for the OWASP Top 10 can be found on OWASP’s website. It is good to be familiar with the Top 10 and some of the solutions or fixes to prevent those issues from occurring.
Broken access control is also on the OWASP Top 10 list. It occurs in a variety of ways, such as failing to set up access based on the logic of least privilege, or when an average user can elevate their permissions when they should not be able to.
JoAnn has been configuring a server that will handle all network forwarding decisions, which allows the network device to simply perform frame forwarding. This allows for dynamic changes to traffic flows based on customer needs and demands. What is the name of the network approach described here?
A. Domain Name System Security Extensions (DNSSEC)
B. Virtual Private Cloud (VPC)
C. Software-defined networking (SDN)
D. Dynamic Host Configuration Protocol (DHCP)
C. Software-defined networking (SDN)
Explanation:
In software-defined networking, decisions about where traffic is filtered and sent are separated from the actual forwarding of the traffic. This separation allows network administrators to quickly and dynamically adjust network flows based on the needs of customers. Software-defined networking used as the backbone of a wide area network is often referred to as Software-Defined Wide Area Network (SD-WAN).
DNSSEC is an extension to DNS. DNS converts domain names, such as pocketprep.com, to IP addresses, and is a hierarchically organized set of servers within the internet and corporate networks. DNSSEC adds authentication to allow verification of the source of DNS information.
DHCP is used to dynamically allocate IP addresses to devices when they join a network.
VPC is a simulation of a private cloud within a public cloud environment.
An organization is looking to balance concerns about data security with the desire to leverage the scalability and cost savings of the cloud. Which of the following cloud models is the BEST choice for this?
A. Hybrid Cloud
B. Private Cloud
C. Public Cloud
D. Community Cloud
A. Hybrid Cloud
Explanation:
Cloud services are available under a few different deployment models, including:
- Private Cloud: In private clouds, the cloud customer builds their own cloud in-house or has a provider do so for them. Private clouds have dedicated servers, making them more secure but also more expensive.
- Public Cloud: Public clouds are multi-tenant environments where multiple cloud customers share the same infrastructure managed by a third-party provider.
- Hybrid Cloud: Hybrid cloud deployments mix both public and private cloud infrastructure. This allows data and applications to be hosted on the cloud that makes the most sense for them. For example, sensitive data can be stored on the private cloud, while less-sensitive applications can take advantage of the benefits of the public cloud.
- Multi-Cloud: Multi-cloud environments use cloud services from multiple different cloud providers. This enables customers to take advantage of price differences or optimizations offered by different providers.
- Community Cloud: A community cloud is essentially a private cloud used by a group of related organizations rather than a single organization. It could be operated by that group or a third party, such as FedRAMP-compliant cloud environments operated by cloud service providers.
A cloud provider has assembled all the cloud resources together, from routers to servers and switches, as well as the Central Processing Unit (CPU), Random Access Memory (RAM), and storage within the servers. Then they made these resources available for allocation to their customers. Which term BEST describes this process?
A. On-demand self-service
B. Data portability
C. Reversibility
D. Resource pooling
D. Resource pooling
Explanation:
Cloud providers may choose to do resource pooling, which is the process of aggregating all the cloud resources together and allocating them to their cloud customers. Physical equipment is pooled into the data center, and within each server there is a pool of resources allocated to running virtual machines: the Central Processing Unit (CPU), the Random Access Memory (RAM), and the available network bandwidth.
Reversibility is the ability to get all of a company’s artifacts out of the cloud provider’s equipment and have whatever remains on the provider’s equipment appropriately deleted.
Portability is the ability to move data from one provider to another without having to reenter the data.
On-demand self-service is the ability for the customer/tenant to use a portal to purchase and provision cloud resources without having much, if any, interaction with the cloud provider.
Communication, consent, control, transparency, and independent yearly audits are the five key principles focused on by which of the following standards?
A. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27001
B. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27018
C. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27050
D. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 31000
B. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27018
Explanation:
ISO/IEC 27018 is a standard for providing security and privacy within cloud computing. It focuses on five key principles:
- Communication: relaying information to cloud customers.
- Consent: receiving permission before using customer data for any reason.
- Control: cloud customers retain full control over their own data in the cloud.
- Transparency: cloud providers inform customers of any potential exposure to support staff and contractors.
- Independent yearly audits: cloud providers must undergo yearly audits performed by a third party.
ISO/IEC 27001 covers information security, cybersecurity, and privacy protection via an Information Security Management System (ISMS). It is good for building and auditing a corporation’s ISMS.
ISO/IEC 27050 is information technology - electronic discovery (e-discovery).
ISO/IEC 31000 is risk management guidelines.
Cryptoshredding falls under which classification in NIST’s methods of media sanitization?
A. Purge
B. Wipe
C. Clear
D. Destroy
A. Purge
Explanation:
When data is no longer needed, it should be disposed of using an approved and appropriate mechanism. NIST SP 800-88, Guidelines for Media Sanitization, defines three levels of data destruction:
- Clear: Clearing is the least secure method of data destruction and involves using mechanisms like deleting files from the system and the Recycle Bin. These files still exist on the system but are no longer visible. This form of data destruction is inappropriate for sensitive information.
- Purge: Purging destroys data by overwriting it with random or dummy data or performing cryptographic erasure (cryptoshredding; see the sketch below). Often, purging is the only available option for sensitive data stored in the cloud, since an organization doesn’t have the ability to physically destroy the disks where their data is stored. However, in some cases, data can be recovered from media where sensitive data has just been overwritten with other data.
- Destroy: Destroying damages the physical media in a way that makes it unusable and the data on it unreadable. The media could be pulverized, incinerated, shredded, dipped in acid, or undergo similar methods.
Wipe is not a NIST-defined method of media sanitization.
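A minimal sketch of cryptographic erasure, assuming the Python cryptography package: data is stored only as ciphertext, so discarding every copy of the key renders it permanently unreadable. The record content is invented for illustration.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"sensitive customer record")

# Cryptoshredding: securely discard every copy of the key.
del key

# Without the original key, the ciphertext cannot be decrypted;
# trying with any other key fails.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("data is unrecoverable")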
A multinational conglomerate company manufactures smart appliances, including washing machines and espresso machines. Some of their products have ended up being used by a consulting firm: in the buildings (lights and such) and in the breakrooms (refrigerators). These products are connected to the network and send their logs to the Security Information and Event Manager (SIEM). An analyst in the Security Operations Center (SOC) has been analyzing an Indicator of Compromise (IoC). The IoC correctly indicates that a bad actor compromised a virtual desktop, which then led to a compromise of the database.
What does this say about the smart appliances?
A. True positive
B. True negative
C. False positive
D. False negative
B. True negative
Explanation:
To understand true negatives, it is essential to grasp the concept of a confusion matrix, which is a table that summarizes the performance of a classification model. The confusion matrix consists of four elements:
- True Positives (TP): The model correctly predicts positive outcomes when the actual outcomes are indeed positive.
- True Negatives (TN): The model correctly predicts negative outcomes when the actual outcomes are indeed negative.
- False Positives (FP): The model incorrectly predicts positive outcomes when the actual outcomes are negative.
- False Negatives (FN): The model incorrectly predicts negative outcomes when the actual outcomes are positive.
Because the analyst sees nothing concerning the smart appliances and the compromise runs from the virtual desktop to the database, there is no problem with the smart appliances. Therefore, it is true that there are no (negative) IoCs regarding the smart appliances being attacked.
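A minimal sketch of tallying these four outcomes from paired (alert raised, attack actually occurred) observations; the observations are invented.

from collections import Counter

def confusion_counts(observations):
    # Each observation is (alert_raised, attack_actually_occurred).
    counts = Counter()
    for alerted, actual in observations:
        if alerted and actual:
            counts["TP"] += 1
        elif alerted and not actual:
            counts["FP"] += 1
        elif not alerted and actual:
            counts["FN"] += 1
        else:
            counts["TN"] += 1
    return counts

# The smart appliances: no alert raised, no attack -> a true negative.
print(confusion_counts([(False, False), (True, True)]))  # TN=1, TP=1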
Any information relating to past, present, or future medical status that can be tied to a specific individual is known as which of the following?
A. Payment Card Industry (PCI) information
B. Gramm Leach Bliley Act (GLBA)
C. Health Insurance Portability and Accountability Act (HIPAA)
D. Protected Health Information (PHI)
D. Protected Health Information (PHI)
Explanation:
Protected Health Information (PHI) is a subset of Personally Identifiable Information (PII). PHI applies to any entity defined under the U.S. Health Insurance Portability and Accountability Act (HIPAA). Any information that can be tied to a unique individual as it relates to their past, current, or future health status is considered PHI.
The payment card industry defines the Data Security Standard (DSS), fully known as PCI DSS, which demands that payment card information be protected.
GLBA is a U.S. act that requires the personal data belonging to the customers of financial institutions to be protected.
Teo is concerned about future attacks and about his team’s ability to perform the forensic analysis that their corporation requires of them. If they move into the cloud, they are concerned they will not be able to obtain the forensic evidence they require. Which standard provides guidelines for handling digital evidence?
A. ISO/IEC 27037
B. ISO/IEC 27036
C. ISO/IEC 27041
D. ISO/IEC 27042
A. ISO/IEC 27037
Explanation:
ISO/IEC 27037 provides guidelines for handling digital evidence, with specific guidance on the identification, collection, acquisition, and preservation of digital evidence that could serve as proof of malicious activity.
ISO/IEC 27041 provides guidance for methods and processes used in investigations to make sure they are “fit for purpose.”
ISO/IEC 27042 provides guidance on analysis and interpretation of digital evidence.
ISO/IEC 27036 provides guidance on cybersecurity and supplier relationships. It is an overview intended to assist corporations in securing their information and systems when working with suppliers.