CCSP Domain 2: Architecture and Design Flashcards
An email is an example of what type of data?
A. Structured data
B. Semi-structured data
C. RFC-defined data
D. Unstructured data
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 24). Wiley. Kindle Edition.
D. Unstructured data
Explanation:
Emails and other freeform text are examples of unstructured data. Structured data, like the data found in databases, is carefully defined, whereas semi-structured data like XML or JSON applies structure without being tightly controlled. While email itself is defined by an RFC, the term "RFC-defined data" is not used in this context.
Nick wants to ensure that data is properly handled once it is classified. He knows that data labeling is important to the process and will help his data loss prevention tool in its job of preventing data leakage and exposure. When should data be labeled in his data lifecycle?
A. Creation
B. Storage
C. Use
D. Destruction
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 24). Wiley. Kindle Edition.
A. Creation
Explanation:
Data labeling typically occurs during the Creation phase of the data lifecycle. Data labels may also be changed or added during use as data is modified.
Jacinda is planning to deploy a data loss prevention (DLP) system in her cloud environment. Which of the following challenges is most likely to impact the ability of her DLP system to determine whether sensitive data is being transmitted outside of her organization?
A. Lack of data labeling
B. Use of encryption for data in transit
C. Improper data labeling
D. Use of encryption for data at rest
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 24). Wiley. Kindle Edition.
B. Use of encryption for data in transit
Explanation:
Jacinda is likely to face challenges using her DLP system due to the broad and consistent use of encryption for data in transit or data in motion in cloud environments. She will need to take particular care to design and architect her environment to allow the DLP system to have access to the traffic it needs.
Susan wants to ensure that super user access in her cloud environment can be properly audited. Which of the following is not a common item required for auditing of privileged user access?
A. The remote IP address
B. The account used
C. The password used
D. The local IP address
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 24). Wiley. Kindle Edition.
C. The password used
Explanation:
Passwords are intentionally not captured or logged since creating an audit log that contains passwords would be a significant security issue.
Ben’s organization uses the same data deletion procedure for their on-site systems and their third-party-provided, cloud-hosted systems. Ben believes there is a problem with the process currently in use, which involves performing a single-pass zero-wipe of the disks and volumes in use before they are reused. What problem with this approach should Ben highlight for the cloud environment?
A. Crypto-shredding is a secure option for third-party-hosted cloud platforms.
B. Zero-wiping alone is not sufficient, and random patterns should also be used.
C. Zero-wiping requires multiple passes to ensure that there will be no remnant data.
D. Drives should be degaussed instead of wiped or crypto-shredded to ensure that data is fully destroyed at the physical level.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 24). Wiley. Kindle Edition.
A. Crypto-shredding is a secure option for third-party-hosted cloud platforms.
Explanation:
Cryptographic erasure, or crypto-shredding, is the only way to ensure that drives and volumes hosted by third parties are securely cleared. Zero-wiping may result in remnant data, particularly where drives are dynamically allocated space in a hosted environment. Degaussing or other physical destruction is typically not possible with third-party-hosted systems without a special contract and dedicated hardware.
Jason has been informed that his organization needs to place a legal hold on information related to pending litigation. What action should he take to place the hold?
A. Restore the files from backups so that they match the dates for the hold request.
B. Search for all files related to the litigation and provide them immediately to opposing counsel.
C. Delete all the files named in the legal hold to limit the scope of litigation.
D. Identify scope files and preserve them until they need to be produced.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 25). Wiley. Kindle Edition.
D. Identify scope files and preserve them until they need to be produced.
Explanation:
Legal holds require organizations to identify and preserve data that meets the hold's scope. Jason should identify the files and preserve them until they are required. Restoring files might erase important data, deleting files is completely contrary to the concept of the hold, and holds do not require immediate production - they are just what they sound like: a requirement to hold the data.
Murali is reviewing a customer’s file inside of his organization’s customer relationship management tool and sees the customer’s Social Security number listed as XXX-XX-8945. What data obfuscation technique has been used?
A. Anonymization
B. Masking
C. Randomization
D. Hashing
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 25). Wiley. Kindle Edition.
B. Masking
Explanation:
Masking data involves replacing data with alternate characters like X or *. This is typically done via controls in the software or database itself, as the underlying data remains intact in the database. Anonymization or de-identification removes data that might allow individuals to be identified. Randomization replaces data with random values, while shuffling moves data around, disassociating it from its subjects but leaving real data in place for testing.
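As a sketch of the technique, a minimal masking function (the function name and visible-digit count are illustrative) that leaves the stored value intact and obfuscates only what is displayed, as Murali's CRM does with XXX-XX-8945:

```python
def mask_ssn(ssn: str, visible: int = 4) -> str:
    """Replace all but the last `visible` digits with X, keeping dashes."""
    total_digits = sum(ch.isdigit() for ch in ssn)
    digits_seen = 0
    out = []
    for ch in ssn:
        if ch.isdigit():
            digits_seen += 1
            # Keep only the trailing `visible` digits; mask the rest.
            out.append(ch if digits_seen > total_digits - visible else "X")
        else:
            out.append(ch)
    return "".join(out)

print(mask_ssn("123-45-8945"))  # XXX-XX-8945
```

The key point is that masking happens at presentation time; the real SSN remains in the database behind the control.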
An XML file is considered what type of data?
A. Unstructured data
B. Restructured data
C. Semi-structured data
D. Structured data
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 25). Wiley. Kindle Edition.
C. Semi-structured data
Explanation:
XML and JSON are both examples of semi-structured data. Other examples include CSV files, NoSQL databases, and HTML files.
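A small illustration of what "structure without tight control" means, using only the standard library (record contents are made up): each record carries its own keys or tags, and nothing forces two records to share the same fields.

```python
import json
import xml.etree.ElementTree as ET

# Two JSON records with different fields still parse fine - no schema
# is enforced, but the key/value structure is self-describing.
record_a = json.loads('{"name": "Ada", "email": "ada@example.com"}')
record_b = json.loads('{"name": "Grace", "phone": "555-0100"}')
print(record_a["email"])

# XML similarly embeds its structure in tags rather than a rigid schema.
root = ET.fromstring("<customer><name>Ada</name></customer>")
print(root.find("name").text)  # Ada
```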
Lucca wants to implement logging in an infrastructure as a service cloud service provider’s environment for his Linux instances. He wants to capture events like creation and destruction of systems, as part of scaling requirements for performance. What logging tool or service should he use to have the most insight into these events?
A. Syslog from the Linux systems
B. The cloud service provider’s built-in logging function
C. Syslog-NG from the Linux systems
D. Logs from both the local event log and application log from the Linux systems
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 25). Wiley. Kindle Edition.
B. The cloud service provider’s built-in logging function
Explanation:
The provider's own logging function is the best option, as information about systems being created and destroyed won't exist on the local systems, and thus syslog, syslog-ng, and local logs won't work.
Joanna’s company uses a load balancer to distribute traffic between multiple web servers. What data point is often lost when traffic passes through load balancers to local web servers in a cloud environment?
A. The source IP address
B. The destination port
C. The query string
D. The destination IP address
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 25). Wiley. Kindle Edition.
A. The source IP address
Explanation:
Original source IP addresses may not be visible in the local web server log. Fortunately, load balancer logs can be used if they are available.
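Many load balancers can preserve the original client address in an `X-Forwarded-For` header; whether Joanna's does, and whether that header can be trusted, depends on the balancer's configuration. A hedged sketch of recovering the client IP on the web server (function and header handling are illustrative):

```python
def client_ip(headers: dict, peer_ip: str) -> str:
    """Prefer the first hop in X-Forwarded-For; fall back to the socket peer.

    Note: X-Forwarded-For is client-supplied unless the load balancer
    strips and rewrites it, so it should only be trusted behind a
    properly configured proxy.
    """
    xff = headers.get("X-Forwarded-For", "")
    first = xff.split(",")[0].strip()
    return first or peer_ip

# Without the header, the log would only show the balancer's address.
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.5"}, "10.0.0.5"))
```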
Isaac is using a hash function for both integrity checking and to allow address data to be referenced without the actual data being exposed. Which of the following attributes of the data will not be lost when the data is hashed?
A. Its ability to be uniquely identified
B. The length of the data
C. The formatting of the data
D. The ability to sort the data based on street number
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (pp. 25-26). Wiley. Kindle Edition.
A. Its ability to be uniquely identified
Explanation:
Hashing converts variable-length data to fixed-length outputs, meaning that the length, formatting, and the ability to perform operations on the data using strings or numbers will be lost. Its ability to be uniquely identified won't be lost - Isaac just needs to know the hash of a given address to continue to reference that data element.
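A minimal demonstration of both halves of that answer using the standard library's SHA-256: addresses of very different lengths hash to identical-length digests (losing length, format, and sortability), yet the same input always yields the same digest, so it still works as a stable identifier. The sample addresses are made up.

```python
import hashlib

def address_token(address: str) -> str:
    """Fixed-length SHA-256 digest of an address string."""
    return hashlib.sha256(address.encode("utf-8")).hexdigest()

short = address_token("1 Main St")
long_ = address_token("12345 Extremely Long Boulevard, Apartment 67890")

print(len(short), len(long_))               # both 64 hex characters
print(short == address_token("1 Main St"))  # True: a stable reference
```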
Amanda’s operating procedures for secure data storage require her to ensure that she is using data dispersion techniques. What does Amanda need to do to be compliant with this requirement?
A. Delete all data not in secure storage.
B. Store data in more than one location or service.
C. Avoid storing data in intact form, requiring data from more than one location to use a data set.
D. Geographically separate data by at least 15 miles to ensure that a single natural disaster cannot destroy it.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 26). Wiley. Kindle Edition.
B. Store data in more than one location or service.
Explanation:
Data dispersion is the practice of ensuring that important data is stored in more than one location or service. It does not require specific distances or geographic limits, doesn't require deletion of data not in secure storage, and doesn't require data from multiple locations in order to use a data set.
Lisa runs Windows instances in her cloud-hosted environment. Each Windows instance is created with a C: drive that houses the operating system and application files. What type of storage best describes the C: drive for these Windows instances?
A. Long-term storage
B. Ephemeral storage
C. Raw storage
D. Volume-based storage
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 26). Wiley. Kindle Edition.
B. Ephemeral storage
Explanation:
Storage that is associated with an instance that will be destroyed when the instance is shut down is ephemeral storage.
Steve is working to classify data based on his organization’s data classification policies. Which of the following is not a common type of classification?
A. Size of the data
B. Sensitivity of the data
C. Jurisdiction covering the data
D. Criticality of the data
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 26). Wiley. Kindle Edition.
A. Size of the data
Explanation:
The size of the data or files is not typically a data classification type or field. Sensitivity, jurisdiction, and criticality are all commonly used to classify data.
Chris is reviewing his data lifecycle and wants to take actions in the data creation stage that can help his data loss prevention system be more effective. Which of the following actions should he take to improve the success rate of his DLP controls?
A. Data labeling
B. Data classification
C. Hashing
D. Geolocation tagging
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 26). Wiley. Kindle Edition.
A. Data labeling
Explanation:
Data labels can help DLP systems identify and manage data, so Chris should ensure that data is labeled as part of its creation process to help his DLP identify and protect it.
Gary is gathering data to support a legal case on behalf of his company. Why might he digitally sign files as they are collected and preserve them along with the data in a documented, validated way?
A. To allow for data dispersion
B. To ensure the files are not copied
C. To keep the files secure by encrypting them
D. To support nonrepudiation
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 27). Wiley. Kindle Edition.
D. To support nonrepudiation
Explanation:
Chain-of-custody documentation, often including actions like hashing or digitally signing files to prove they have not been changed from their original form, is commonly done to support nonrepudiation.
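A sketch of the hashing half of that process (filenames, handler name, and record fields are illustrative): a custody record captures the evidence file's digest, the handler, and the time, so any later change to the file is detectable. A digital signature over the record, not shown here, is what would add nonrepudiation on top of integrity.

```python
import datetime
import hashlib

def custody_record(path: str, handler: str) -> dict:
    """Hash an evidence file and record who handled it and when."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large evidence files don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return {
        "file": path,
        "sha256": h.hexdigest(),
        "handler": handler,
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Hypothetical evidence file for demonstration.
with open("evidence.txt", "wb") as f:
    f.write(b"server log excerpt")
rec = custody_record("evidence.txt", "Gary")
print(rec["sha256"][:12])
```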
Valerie is performing a risk assessment for her cloud environment and wants to identify risks to her organization’s ephemeral volume-based storage used for system drives in a scalable, virtual machine–based environment. Which of the following is not a threat to ephemeral storage?
A. Inadvertent exposure
B. Malicious access due to credential theft
C. Poor performance due to its ephemeral nature
D. Loss of forensic artifacts
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 27). Wiley. Kindle Edition.
C. Poor performance due to its ephemeral nature
Explanation:
Ephemeral storage will have the performance of its underlying storage type, so low performance isn't an expected issue. Inadvertent exposure, malicious access, and loss of forensic artifacts are all concerns for ephemeral storage.
Which storage type is most likely to have remnant data issues in an environment in which the storage is reused for other customers after it is reallocated if it is not crypto-shredded when it is deallocated and instead is zero-wiped?
A. Ephemeral storage
B. Raw storage
C. Long-term storage
D. Magneto-optical storage
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 27). Wiley. Kindle Edition.
B. Raw storage
Explanation:
Raw storage provides direct access to a disk, and without crypto-shredding it is likely to have remnant data on the disk after it is used.
Kathleen wants to perform data discovery across a large data set and knows that some data types are more difficult to perform discovery on than others. Which of the following data types is the hardest to perform discovery actions on?
A. Unstructured data
B. Semi-structured data
C. Rigidly structured data
D. Structured data
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 27). Wiley. Kindle Edition.
A. Unstructured data
Explanation:
Unstructured data is the most difficult to perform discovery against because the data is unlabeled and requires discovery to be done using searches or other techniques that can handle arbitrary data.
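A small sketch of why unstructured discovery is harder: with no fields or labels to query, tools can only scan free text for patterns that look like sensitive data. The SSN-style regex and sample text below are illustrative; production discovery tools layer validation and context analysis on top of pattern matching.

```python
import re

# A naive pattern for US SSN-shaped strings (###-##-####).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

text = "Reached Ada at 555-0100; account note mentions 123-45-6789."
print(SSN_PATTERN.findall(text))  # ['123-45-6789']
```

Contrast this with structured data, where a column named `ssn` can simply be queried directly.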
Isaac wants to filter events based on the country of origin for authentications. What log information should he use to perform a best-effort match for logins?
A. userID
B. IP address
C. Geolocation
D. MAC address
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 27). Wiley. Kindle Edition.
C. Geolocation
Explanation:
While it isn't always perfectly accurate, geolocation data attempts to identify the location of a given IP address. Isaac can use that data to attempt to match authentication events to locations, although VPNs and other tools may obscure the actual login location for users.
Charleen wants to use a data obfuscation method that allows realistic data to be used without the data being actual data associated with specific users or individuals. What data obfuscation method should she use?
A. Hashing
B. Shuffling
C. Randomization
D. Masking
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 28). Wiley. Kindle Edition.
B. Shuffling
Explanation:
While there may be some concerns about real data being used, Charleen's goal is to have realistic data for testing, making shuffling her best option.
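A minimal sketch of shuffling (the row data and column name are made up): real values stay in the data set, so tests see realistic data, but the sensitive column is permuted so rows no longer line up with their true owners.

```python
import random

def shuffle_column(rows: list[dict], column: str, seed: int = 0) -> list[dict]:
    """Permute one column across rows, leaving all other fields in place."""
    values = [r[column] for r in rows]
    random.Random(seed).shuffle(values)  # seeded only for repeatability here
    return [{**r, column: v} for r, v in zip(rows, values)]

people = [
    {"name": "Ada", "salary": 100},
    {"name": "Grace", "salary": 200},
    {"name": "Edsger", "salary": 300},
]
shuffled = shuffle_column(people, "salary")
print(sorted(r["salary"] for r in shuffled))  # same real values: [100, 200, 300]
```

Note the trade-off the flashcard raises: because the values are real, shuffled data can still carry some exposure risk even though the associations are broken.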
Michelle wants to track deletion of files in an object storage bucket. What potential issue should she be aware of if her organization makes heavy use of object-based storage for storage of ephemeral files?
A. The logging may not be accurate.
B. Logging may be automatically disabled if too many events occur.
C. Creation and deletion events cannot be logged in filesystems.
D. The high volume of logging may increase operational costs.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 28). Wiley. Kindle Edition.
D. The high volume of logging may increase operational costs.
Explanation:
Michelle should be aware that logging deletion events, like any other high-volume event, may incur additional costs for her organization.
Diana is outlining the labeling scheme her organization will use for their data. Which of the following is not a common data label?
A. Creation date
B. Data monetary value
C. Date of scheduled destruction
D. Confidentiality level
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 28). Wiley. Kindle Edition.
B. Data monetary value
Explanation:
Data confidentiality level is often contained in a label, but the monetary value of data is not a common data label. Creation date and scheduled destruction date are also common data labels.
Susan wants to be prepared for legal holds. What organizational policy often accounts for legal holds?
A. Data classification policy
B. Retention policy
C. Acceptable use policy
D. Data breach response policy
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 28). Wiley. Kindle Edition.
B. Retention policy
Explanation:
Retention policies often include language that addresses legal holds because holds can impact retention practices and requirements. Data classification, acceptable use, and data breach response policies typically do not include legal hold language.
Henry wants to follow the OWASP guidelines for key storage. Which of the following is not a best practice for key storage?
A. Keys must be stored in plaintext to allow for access.
B. Keys must be protected in both volatile and persistent memory.
C. Keys stored in databases should be encrypted using key encryption keys.
D. Keys should be protected in storage to ensure that they are not modified or changed inadvertently.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 28). Wiley. Kindle Edition.
A. Keys must be stored in plaintext to allow for access.
Explanation:
Keys should never be stored in plaintext format and should instead be stored in a secure manner - typically encrypted in a hardware security module or other key vault.
Marco wants to implement an information rights management tool. What phase of the data lifecycle relies heavily on IRM to ensure the organization retains control of its data?
A. Create
B. Store
C. Share
D. Destroy
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 29). Wiley. Kindle Edition.
C. Share
Explanation:
While IRM is useful through many of the phases of the cloud data lifecycle, Marco knows that sharing data is when an IRM is most heavily used to ensure that data is not inadvertently exposed or misused.
JSON is an example of what type of data?
A. Structured data
B. Semi-structured data
C. Unstructured data
D. Labeled data
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 29). Wiley. Kindle Edition.
B. Semi-structured data
Explanation:
JSON is an example of semi-structured data. Other examples include CSV files, XML, NoSQL databases, and HTML files.
Charleen wants to perform data discovery on her organization’s data, which is stored in archival storage hosted by her organization’s cloud service provider. What issue should she point out about this discovery plan?
A. It may be slow and costly due to how archival storage is designed and priced.
B. The data may not exist because it has been archived.
C. The discovery process cannot be run against archival storage because it is not online under normal circumstances.
D. The data will need to be decrypted before being scanned for discovery purposes.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 29). Wiley. Kindle Edition.
A. It may be slow and costly due to how archival storage is designed and priced.
Explanation:
Cloud archival storage is typically designed primarily as a storage location with infrequent and smaller-scale access, not for large-scale interactive access like a discovery process will use. It is likely to be a slow and potentially costly effort if the data is scanned while in the archival location.
Ujama’s manager has asked him to perform data mapping to prepare for his next task. What will Ujama be doing?
A. Adding data labels to unstructured data
B. Matching fields in one database to fields in another database
C. Identifying the storage locations for files of specific types on system drives
D. Building a file and folder structure for data storage
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 29). Wiley. Kindle Edition.
B. Matching fields in one database to fields in another database
Explanation:
Data mapping is the process of matching fields in databases to allow them to be integrated, or for purposes such as data migration.
Lin wants to ensure that her organization’s data labels remain with the data. How should she label her data to give it the best chance of retaining its labels?
A. Embed the labels in the filename.
B. Add the labels to the file’s content at the beginning of the file.
C. Add labels to the file metadata.
D. Add the labels to the file’s content at the end of the file.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 29). Wiley. Kindle Edition.
C. Add labels to the file metadata.
Explanation:
Adding labels as part of the file's metadata is a common practice and is less likely to be changed than including them in the filename.
Nina’s organization has lost the cryptographic keys associated with one of their cloud-based servers. What can Nina do to recover the data the keys were used to protect?
A. Generate new keys to recover the data.
B. Use the passphrase for the keys to recover the keys.
C. Reverse the hash that was used to encrypt the data.
D. The data is lost and Nina cannot recover it.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 29). Wiley. Kindle Edition.
D. The data is lost and Nina cannot recover it.
Explanation:
Key escrow and backup are incredibly important; once a key is lost, it cannot be recovered and the data should be considered lost. Generating a new key will not decrypt the data, the passphrase isn't sufficient to recover keys, and hashing is not a form of encryption and cannot be reversed.
Madani is planning to perform data discovery on various data sets and files his organization has. On which type of data will discovery be most easily performed?
A. Unstructured data
B. Semi-structured data
C. Encrypted data
D. Structured data
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 30). Wiley. Kindle Edition.
D. Structured data
Explanation:
Madani knows that structured data is well defined and organized and will be the easiest to perform discovery actions on.
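A quick illustration of why structured data is the easy case, using an in-memory SQLite table (the table and sample rows are made up): because the columns are defined up front, discovery can target a known field directly instead of scanning free text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Ada", "123-45-6789"), ("Grace", None)],
)

# Discovery reduces to a query against the well-defined schema.
hits = conn.execute(
    "SELECT name FROM customers WHERE ssn IS NOT NULL"
).fetchall()
print(hits)  # [('Ada',)]
```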
Ashley tracks the handling of a forensic image, including recording who handles it, when it was collected and when each transfer occurs, and why the transfer occurred. What practice is Ashley performing?
A. Documenting chain of custody
B. Ensuring repudiation
C. A legal hold
D. Forensic accounting
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 30). Wiley. Kindle Edition.
A. Documenting chain of custody
Explanation:
Ashley is documenting the chain of custody for the image to ensure it can be used in court.
Which of the following is not a common goal of data classification policies?
A. Identifying classification levels
B. Assigning responsibilities
C. Defining roles
D. Mapping data
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 30). Wiley. Kindle Edition.
D. Mapping data
Explanation:
Data mapping is a term used to describe matching fields in databases to allow data migration or integration. Data classification policies do identify classification levels, assign responsibilities, and define roles.
Hiroyuki wants to optimize his organization’s data labeling process. How and when should he implement data labeling to be most efficient and effective?
A. Manual labeling at the data creation stage
B. Automated labeling at the data creation stage
C. Automated labeling at the data use stage
D. Manual labeling at the data use stage
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 30). Wiley. Kindle Edition.
B. Automated labeling at the data creation stage
Explanation:
Labeling data at creation ensures that it can be properly handled through the rest of its lifecycle. Automated labeling is preferable where possible to avoid human error and to accommodate the volume of data that most organizations create.
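A toy sketch of automated labeling at the creation stage (the rule, label names, and hook are all hypothetical): an ingest function classifies content with a simple detector and attaches the label before the object is stored, so no human has to remember to do it. Real tools use far richer detectors than a single regex.

```python
import re

SSN_LIKE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def label_at_creation(content: str) -> dict:
    """Attach a label to new data automatically, at creation time."""
    label = "confidential" if SSN_LIKE.search(content) else "internal"
    return {"label": label, "content": content}

obj = label_at_creation("Customer SSN: 123-45-6789")
print(obj["label"])  # confidential
```

Downstream controls like DLP can then key off `obj["label"]` for the rest of the data's lifecycle.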
Jen wants to ensure that keys used by individuals in her organization can be handled properly. Which of the following is not a best practice for handling long-term keys in use by humans?
A. Anonymize access using the key
B. Identify the key user
C. Identify when the key is used
D. Uniquely tag the keys
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 30). Wiley. Kindle Edition.
A. Anonymize access using the key
Explanation:
Anonymizing access using a key removes the ability to provide accountability and works against organizational best practices. Identifying the key, the user, and when and how the key is used supports accountability for usage.
Danielle wants to ensure that the data stored in her cloud-hosted datacenter is properly destroyed when it is no longer needed. Which of the following options should she choose?
A. Physical destruction of the media
B. Crypto-shredding
C. Degaussing
D. Overwriting
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 30). Wiley. Kindle Edition.
B. Crypto-shredding
Explanation:
Crypto-shredding is the only viable option in environments controlled or managed by a third-party organization in most circumstances. Physical destruction is not permitted or supported by third-party providers, nor is degaussing, which will also not work on many modern drives.
Charles wants to use tokenization as a security practice for his organization’s data. What technical requirement will he have to meet to accomplish this?
A. He will need to encrypt his data.
B. He will need two distinct databases.
C. He will need to use a FIPS 140-2 capable cryptographic engine to create tokens.
D. He will need to deidentify the data.
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 31). Wiley. Kindle Edition.
B. He will need two distinct databases.
Explanation:
Tokenization relies on two distinct databases: one with actual data and one with tokenized data. Token servers then pull the data the token represents from the real-data database when needed. This process does not require encryption, does not specify FIPS requirements, and does not involve deidentification practices.
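A minimal sketch of that two-database arrangement using in-memory SQLite (table names, token format, and the sample card number are illustrative; in practice the vault lives in a separately secured system): the application database holds only tokens, and a token service exchanges them for real values on demand.

```python
import secrets
import sqlite3

# Vault: the only place real values exist.
vault = sqlite3.connect(":memory:")
vault.execute("CREATE TABLE vault (token TEXT PRIMARY KEY, real_value TEXT)")

# Application database: stores tokens, never real values.
app_db = sqlite3.connect(":memory:")
app_db.execute("CREATE TABLE orders (customer TEXT, card_token TEXT)")

def tokenize(real_value: str) -> str:
    """Store the real value in the vault and hand back a random token."""
    token = secrets.token_hex(8)  # no mathematical relation to the data
    vault.execute("INSERT INTO vault VALUES (?, ?)", (token, real_value))
    return token

def detokenize(token: str) -> str:
    """Look the real value back up from the vault."""
    row = vault.execute(
        "SELECT real_value FROM vault WHERE token = ?", (token,)
    ).fetchone()
    return row[0]

t = tokenize("4111-1111-1111-1111")
app_db.execute("INSERT INTO orders VALUES (?, ?)", ("Ada", t))
print(detokenize(t))  # 4111-1111-1111-1111
```

Because the token is random, a breach of the application database alone reveals nothing about the underlying values.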
Once Charles has his two databases ready, what step comes next in the tokenization process?
A. Data discovery to identify sensitive data
B. Tokenization of the index values
C. Hashing each item in the database
D. Randomization of data in the database to prepare for tokenization
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 31). Wiley. Kindle Edition.
A. Data discovery to identify sensitive data
Explanation:
With the source database ready and a tokenization database prepared to be populated, the next step is to identify which data should be tokenized. Not all data is sensitive, and thus not all data needs to be tokenized. Once you know what data will be tokenized, you can tokenize it - sometimes by hashing the data. Randomization is not part of this process.
Jack wants to understand how data is used in his organization. What tool is often used to help IT professionals understand where and how data is used and moved through an organization?
A. Data classification
B. Data mapping
C. Dataflow diagrams
D. Data policies
Chapple, Mike; Seidl, David. (ISC)2 CCSP Certified Cloud Security Professional Official Practice Tests (p. 31). Wiley. Kindle Edition.
C. Dataflow diagrams
Explanation:
Dataflow diagrams are a critical part of organizational understanding of how data is created, moves, and is used throughout an organization. They often include details like ports, protocols, data elements, and classification - details that can help you understand not only where data is but how it gets there and what data is in use.