CCSP Exam Flashcards
Which of the following is also referred to as the control plane?
A. Forwarding Plane
B. Service Plane
C. Data Plane
D. Management Plane
D. Management Plane
Explanation:
The management plane, also called the control plane, is the plane on a device where changes are made and the device is managed. For example, applying an IP address or changing the name of the device are activities that occur on the management plane.
The data plane is responsible for the sending of packets and is also called the forwarding plane.
The forwarding plane is just another name for the data plane.
There is no service plane.
Objective:
Cloud Platform and Infrastructure Security
Sub-Objective:
Comprehend Cloud Infrastructure Components
In which step of the SDLC are the possible user operations defined?
A. Defining
B. Development
C. Design
D. Testing
C. Design
Explanation:
In the Design step of the Software Development Life Cycle (SDLC), the possible user operations are defined and the interface is envisioned.
The Development step is where the programmers are most involved and is when code is written.
In the Testing step, activities such as initial penetration testing and vulnerability scanning against the application are performed.
In the Defining step, the business requirements of the software are determined, which will drive the design step.
Which consideration is NOT shared by both the vendor and the cloud customer?
A. Security
B. Budget
C. Performance
D. Reversibility
B. Budget
Explanation:
Budget is a consideration of the customer only. The vendor has no say in how the organization allocates its resources for the cloud initiative.
However, that is one of the few considerations that are not shared by both the vendor and the customer. Among the considerations shared are:
Interoperability, Portability, Reversibility, Availability, Security, Privacy, Resiliency, Performance, Governance, Maintenance, Versioning, Service levels, Service Level Agreements (SLAs), Auditability, and Regulatory compliance
Which of the following is NOT a characteristic of the log collection process?
A. Difficult process, easy analysis
B. Often not a priority
C. Mundane and Repetitive
D. Sufficient Experience and training requirements
A. Difficult process, easy analysis
Explanation:
The logging process is not difficult, and analysis is not easy. Logging is fairly easy; most software and devices in modern enterprises can effectively log anything and everything that the organization might want to capture. Reading and analyzing these logs, however, can prove challenging.
Log reviewing is mundane and repetitive. This is not exciting work, and even the best analyst can become lax due to repetition.
Log review and analysis is often not a priority. Most organizations do not have the wherewithal to dedicate the personnel required to effectively analyze log data.
Log analysis requires someone experienced. The person needs to have sufficient experience and training to perform the activity in a worthwhile manner.
During an annual risk assessment of your company’s cloud deployment, you obtain the SLA from the CSP and need to request certain metrics from the CSP for analysis. You need to determine the time required to perform a requested operation or task. Which measurement should you examine?
A. Mean-time to switchover
B. Response time
C. Completion Time
D. Instance Startup time
B. Response time
Explanation:
You should examine the response time, which is the time required to perform a requested operation or task.
Completion time is the time required to complete the initiated or requested task. Completion time measures from the time the request is made until the request fulfillment is completed, while response time measures from the time the request is made until the request fulfillment is started. Response time is always smaller than completion time.
Instance startup time is the time required to initialize a new instance.
Mean-time to switchover is the average time to switch over from a service failure to a replicated failover instance (backup).
A data and media sanitization clause is being included as part of the service level agreement (SLA) with your company’s cloud service provider.
Which sanitization method is the preferred method of sanitization?
A. Physical Destruction
B. Cryptographic Erasure
C. Overwriting
D. Degaussing
A. Physical Destruction
Explanation:
Physical destruction ensures that data is physically unrecoverable and is the preferred method of sanitization.
Degaussing applies a strong magnetic field to the hardware and media, effectively making them blank. This method does not work with solid-state drives.
Overwriting applies multiple passes of random characters to the storage areas, ending with a final pass of all zeroes or ones. Overwriting can be extremely time-consuming for large storage areas. This method does ensure that data cannot be recovered. However, it is not the preferred method of sanitization.
Cryptographic erasure, also referred to as cryptoshredding, encrypts the data. Then it takes the keys generated in that process, encrypts them with a different encryption engine, and destroys the keys. As long as the keys are properly destroyed and a strong encryption algorithm is used, then the data is unrecoverable. However, if a strong encryption algorithm is not used, the encryption can be broken and the data retrieved.
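The cryptoshredding flow can be sketched in a few lines. This is a toy illustration only: the XOR cipher, variable names, and record contents below are hypothetical stand-ins, and real cryptographic erasure would use a vetted algorithm such as AES with a managed key store.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration only; real cryptographic
    # erasure would use a vetted algorithm such as AES.
    return bytes(d ^ k for d, k in zip(data, key))

record = b"customer record #1138"
key = secrets.token_bytes(len(record))   # data-encryption key

ciphertext = xor_cipher(record, key)     # only the ciphertext is stored

# While any copy of the key exists, the data is recoverable.
assert xor_cipher(ciphertext, key) == record

# Cryptographic erasure: destroy every copy of the key. With the key
# gone and a strong cipher, the ciphertext is unrecoverable.
key = None
```

The security of the whole scheme rests on the strength of the cipher and the certainty that no key copy survives, which is exactly the caveat in the explanation above.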
You have recently been hired by a company with a SaaS cloud that was launched earlier this year.
You discover that the appropriate risk assessment was not performed prior to the deployment.
As a result, a number of issues have been encountered since the cloud deployment that could have been mitigated.
What is the threat that has occurred in this scenario?
A. APTs
B. Insufficient due diligence
C. Denial of Service
D. Shared technology issues
B. Insufficient due diligence
Explanation:
In this scenario, the threat that has occurred is insufficient due diligence. If the company had performed a proper risk assessment, it would have likely identified many of the issues that occurred and could have taken actions to prevent or mitigate those threats.
A denial of service (DoS) threat occurs when system slowdowns occur as a result of attackers consuming system resources.
Shared technology issues occur when the underlying components in the cloud model are not properly secured using defense-in-depth techniques. All layers of the cloud must be secured and monitored.
Advanced persistent threats (APTs) are attacks that infiltrate systems to gain a foothold within the targeted company. The attackers maintain a quiet presence in the company’s network and establish more doors into the network, using those doors to steal data.
When performing a risk assessment of your cloud’s logical and physical infrastructure, which value provides an estimate of how often an identified risk will occur over the period of a year?
A. SLE
B. AV
C. EF
D. ARO
D. ARO
Explanation:
Annualized rate of occurrence (ARO) provides an estimate of how often an identified risk will occur over the period of a year. ARO is used in the calculation of annualized loss expectancy (ALE). ALE = ARO x SLE.
Asset value (AV) is the value of the asset at risk and is expressed as a monetary amount. Exposure factor (EF) is the estimated loss that will result if the risk occurs, expressed as a percentage. Single loss expectancy (SLE) is the potential loss when a risk occurs and uses the following formula:
SLE = AV x EF
If an asset is worth $10,000 and the exposure factor is 20%, then SLE = $2,000. If the risk event is expected to occur once every two years (ARO = 0.5), then ALE = $2,000 x 0.5 = $1,000.
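The quantitative risk arithmetic above can be checked with a short sketch; the dollar figures are the worked example's, not prescriptive values:

```python
def sle(asset_value: float, exposure_factor: float) -> float:
    # Single loss expectancy: loss from one occurrence of the risk.
    return asset_value * exposure_factor

def ale(single_loss: float, aro: float) -> float:
    # Annualized loss expectancy: SLE scaled by yearly frequency (ARO).
    return single_loss * aro

av = 10_000    # asset value (AV) in dollars
ef = 0.20      # exposure factor (EF): 20% of the asset's value is at risk
aro = 0.5      # annualized rate of occurrence: once every two years

loss_per_event = sle(av, ef)            # $2,000
yearly_loss = ale(loss_per_event, aro)  # $1,000
assert round(loss_per_event) == 2000 and round(yearly_loss) == 1000
```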
Which of the following consists of managing information in accordance with who has rights to it?
A. PCI-DSS
B. IRM
C. DLP
D. PII
B. IRM
Explanation:
Information Rights Management (IRM) is the process of managing information in accordance with who has rights to it. It is also sometimes known by the name Digital Rights Management.
None of the other options manages information based on who has rights to it.
Data Loss Prevention (DLP) software can be used to monitor the edge of the network or to monitor a single device for the exfiltration of sensitive data.
The Payment Card Industry Data Security Standard (PCI-DSS) covers credit card data.
Personally identifiable information (PII) is any piece of information that can be tied uniquely to an individual, such as a name and Social Security number.
Your company has a public cloud that provides SaaS through a CSP. Your company’s developers have worked with developers at the CSP to create a custom application that uses REST via APIs to interact with the cloud environment. Which of the following is NOT used in REST interactions?
A. Sessions
B. XML
C. Credentials
D. JSON
A. Sessions
Explanation:
Representational State Transfer (REST) interactions do NOT include sessions. The server does not need to store any temporary information about the client, so sessions are not required.
Credentials are used to allow authentication between clients and servers. A RESTful application programming interface (API) can support multiple data formats, including JavaScript Object Notation (JSON) and Extensible Markup Language (XML).
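A stateless REST exchange can be sketched as follows. The endpoint path and field names are hypothetical illustrations; the point is that credentials ride on every request and the server keeps no session:

```python
import base64
import json

def build_request(method: str, path: str, user: str, password: str, body=None):
    """Assemble a stateless REST request (hypothetical endpoint and fields).

    Every call carries its own credentials in the Authorization header,
    so the server never has to store per-client session state.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",  # credentials on every request
        "Accept": "application/json",       # REST commonly exchanges JSON or XML
    }
    request = {"method": method, "path": path, "headers": headers}
    if body is not None:
        request["body"] = json.dumps(body)  # JSON-encoded payload
    return request

req = build_request("GET", "/v1/instances", "admin", "s3cret")
# No cookie jar, no session id: all state lives in the request itself.
assert "Authorization" in req["headers"]
```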
Which data classification process ensures that data that is considered sensitive in one environment is likewise treated as sensitive in another environment?
A. Cross-Referencing
B. Classification
C. Mapping
D. Labeling
C. Mapping
Explanation:
Mapping is the process that ensures that data that is considered sensitive in one environment is treated as sensitive in another environment.
Classification is the process that defines the sensitivity level of a resource. It is attached to the resource as a label and is communicated to other environments through mapping.
Labeling is the process of attaching a data classification level to a resource. The label can include many properties about the resource but, at a minimum, should describe its sensitivity level.
Cross-referencing is not a term used when discussing classification processes.
You are employed by a CSP that must ensure compliance with Chinese national standards. Which CSA STAR level should the CSP pursue?
A. Level 2: CSA C-STAR Assessment
B. Level 1: CSA GDPR Code of Conduct Self-Assessment
C. Level 2: CSA STAR Attestation
D. Level 2: CSA STAR Certification
A. Level 2: CSA C-STAR Assessment
Explanation:
Level 2: The Cloud Security Alliance (CSA) C-STAR (China Security, Trust, and Assurance Registry) Assessment will ensure compliance with Chinese national standards.
Level 1: CSA General Data Protection Regulation (GDPR) Code of Conduct Self-Assessment only includes a self-assessment and does not ensure compliance with Chinese national standards.
Level 2: CSA STAR Attestation includes an independent third-party assessment against SOC 2 standards.
Level 2: CSA STAR Certification includes an independent third-party assessment against ISO/IEC 27001.
CSA also includes Level 3: Continuous Monitoring, which is currently under development and should include third-party continuous monitoring.
A financial institution has recently decided to engage a CSP to provide an IaaS deployment. The board of directors is concerned with the disclosure of personal financial information. Which US federal law should apply in this situation?
A. SCA
B. GLBA
C. SOX
D. HIPAA
B. GLBA
Explanation:
The Gramm-Leach-Bliley Act (GLBA) should apply in this scenario. GLBA regulates the collection and disclosure of private financial information, stipulates that financial institutions must implement security programs to protect such information, and prohibits accessing private information using false pretenses. It also requires financial institutions to give customers written privacy notices regarding information-sharing practices.
Health Insurance Portability and Accountability Act (HIPAA) would apply in a scenario where healthcare organizations are involved. Sarbanes-Oxley (SOX) protects individuals from accounting errors and fraudulent practices in publicly traded companies. The Stored Communications Act (SCA) protects certain electronic communication and computing services from unauthorized access or interception.
What is the first phase of the cloud-secure software development life cycle?
A. Develop
B. Design
C. Test
D. Define
D. Define
Explanation:
The first phase of the cloud-secure software development life cycle is Define. During this phase, all the requirements for the application are defined.
The phases of the cloud-secure software development life cycle are:
Define, Design, Develop, Test, Secure operations, and Disposal
Your company is carrying out some testing of its business continuity plan (BCP). During this testing, all business unit managers and BCP process employees are present and discuss each individual's responsibilities. Which type of test is most likely occurring?
A. Full-Interruption Test
B. Functional Drill
C. Tabletop exercise
D. Walk-through drill
C. Tabletop exercise
Explanation:
A tabletop exercise, also referred to as a structured walk-through test, is most likely occurring. During this test, business unit managers and BCP process employees are present, and each individual's responsibilities are discussed. Sometimes it includes individual and team training on the step-by-step procedures outlined in the BCP. The testing process clarifies and highlights critical plan elements, and problems are noted. This is usually the first type of test completed for a BCP.
None of the other tests is described in the scenario.
A walk-through drill, also referred to as a simulation test, is more complicated than a tabletop exercise. In a walk-through drill, a scenario is selected and applied to the BCP. Usually only the operational and support personnel for the BCP process attend this meeting. Attendees practice certain functional steps to ensure that they have the knowledge and skills needed to complete them. For this test, it is critical for employees to act out the critical steps, recognize difficulties, and resolve problems.
A functional drill, also referred to as a parallel test, involves moving personnel to the recovery site(s) to attempt to establish communications and perform real recovery processing. The drill will help the organization determine whether following the BCP will successfully recover critical systems at an alternate processing site. Because a functional drill fully tests the BCP, all employees are involved. It demonstrates emergency management capabilities and tests procedures for evacuation, medical response, and warnings.
A full-interruption test, also referred to as a full-scale test, is the most comprehensive BCP test. A real-life emergency is simulated as closely as possible. It is important to properly plan this type of test to ensure that business operations are not negatively affected. This usually includes processing data and transactions using backup media at the recovery site. All employees must participate in this type of test, and all response teams must be involved. This test has the highest level of simulation, including notifications, resource mobilization, and communications.
Your company has a contract with a CSP to provide an IaaS model. This model has been deployed for two years now. Company developers have designed different APIs to manage and to provide access to the cloud resources. Management has recently become concerned that competitors who use the same CSP can gain access to your company’s information that is deployed on shared CSP resources. Which common pitfall of cloud deployment models is described in this scenario?
A. Integration Complexity
B. On-premises Transfer
C. Apps not cloud ready
D. Tenancy Separation
D. Tenancy Separation
Explanation:
This scenario is displaying the pitfall of tenancy separation. When an organization deploys a cloud solution, it must usually address the security concerns that accompany a multi-tenancy solution. Most companies will need to ensure that the CSP implements the proper countermeasures for access control, process isolation, and denial of guest/host escape attempts.
On-premises transfer is a pitfall that occurs when a company transfers its applications and configurations to the cloud. APIs that were developed to use on-premises resources may not function properly in the cloud or provide the appropriate security.
Integration complexity is a pitfall that occurs when new applications need to interface with old applications. Often, developers do not have unrestricted access to the supporting services. It can be difficult to design the integration of infrastructure, applications, and integration platforms managed by the CSP with the appropriate level of security.
Apps not being cloud ready is a pitfall that occurs because many applications were not originally developed while considering the security implications of a cloud deployment. These older applications may not work properly in a cloud model.
Your company wants to deploy several virtual machines using resources provided by your cloud service provider (CSP). As part of this deployment, you need to install and configure the virtualization management tools on your network. Which of the following statements is FALSE about installing these tools?
A. Access to the virtualization management tools should be rule-based
B. Virtualization management should take place on an isolated management network
C. Audit and log all access to the virtualization management tools
D. Only a secure kernel based virtual machine should be used to access the hosts
A. Access to the virtualization management tools should be rule-based
Explanation:
Access to the virtualization management tool should be role-based, not rule-based.
All of the other statements regarding virtualization management are true. Virtualization management should take place on an isolated management network. You should audit and log all access to the virtualization management tools. Only a secure kernel-based virtual machine (KVM) should be used to access the hosts.
Users are reporting issues when accessing data on the cloud. When you research the issue, you notice that reads and writes are slow. Which component is most likely causing this problem?
A. CPU
B. Disk
C. Memory
D. Network
B. Disk
Explanation:
When you are experiencing slow reads and writes, the problem is most likely the disk.
None of the other components is the problem. If the network were the problem, you would experience excessive dropped packets. If the memory were the problem, you would experience excessive memory usage or a paging file issue. If the CPU were the problem, you would experience excessive CPU utilization.
With which cloud model is the lack of security surrounding app stores an issue?
A. SaaS
B. PaaS
C. IaaS
D. XaaS
A. SaaS
Explanation:
One of the most important security considerations with Software as a Service (SaaS) is the lack of security surrounding app stores. In many cases, these app stores make apps available that the cloud vendor did not develop but simply resells. These apps may not have been adequately tested from a security standpoint. Google Android has had issues with this in the past. It is possible, and advisable, to restrict access in the app store to only company-developed or approved apps.
None of the other models include the lack of security surrounding app stores as an issue.
Identity and access management (IAM) is important to all cloud deployments, but is critical to Platform as a Service (PaaS). PaaS comprises a development platform that not only provides but encourages shared access. It is critical that IAM systems be robust. Otherwise you lose both security and accountability, which is a key need in a shared environment.
The biggest security consideration for Infrastructure as a Service (IaaS), especially in a shared public cloud, is the proper segmentation of resources between tenants. This includes the isolation of VMs from various tenants.
Anything as a Service (XaaS) is a general term that applies to all forms of cloud service offerings and not to a specific type.
Which of the following technologies or concepts has had the impact of making the cloud intelligent?
A. Quantum Computing
B. Containers
C. AI
D. IoT
C. AI
Explanation:
Artificial intelligence (AI) occurs when a machine system such as a computer is able to take in data and make decisions based on that data unaided by humans. Also sometimes called machine learning, it enables the cloud to “learn” from data or to become “intelligent”.
Quantum computing uses quantum-mechanical phenomena such as superposition and entanglement to perform computations. When quantum supremacy is realized (which a NASA and Google AI partnership claimed to have achieved in mid-2019), quantum computing will be able to solve problems that classical computers practically cannot. When quantum computing becomes available to the masses, it will likely do so through the cloud because of the massive computing resources required to perform quantum calculations.
Containers comprise an alternative method of providing virtualization. While container-based virtualization is not likely to completely replace server virtualization because the security is still better with a virtual machine, it has many advantages in a cloud environment. It can be deployed quicker than a VM, and containers let you pack more computing workloads onto a single server, resulting in less hardware purchase, lower facilities cost for your datacenter, and a reduction in experts required to manage that equipment.
The Internet of Things (IoT) has had the effect of increasing the need for cloud computing services because it is usually preferable to manage IoT resources from the cloud.
You are employed by a cloud service provider (CSP) that has globally distributed data centers and secure cloud computing environments. You decide to deploy remote access to allow authorized employees, customers, and third-party personnel to remotely manage cloud deployments. Which of the following is NOT a benefit of providing this solution?
A. Lower Administrative Overhead
B. Accountability
C. Secure Isolation
D. Session Control
E. Real Time Monitoring
A. Lower Administrative Overhead
Explanation:
Deploying remote access to allow authorized employees, customers, and third-party personnel to remotely manage cloud deployments does NOT lower administrative overhead. In fact, it may actually increase administrative overhead because administrators will need to ensure that the appropriate tools are installed for those needing remote access. In addition, training will need to be provided on the remote access tools.
All of the other options are benefits of providing remote access. Accountability provides information on who is accessing the data center remotely using a tamper-proof audit trail. Session control allows control of who can access the environment and enforcement of workflows. Real-time monitoring allows administrators to view privileged activities as they are happening or as a recorded playback for forensic analysis. Secure isolation between the remote user’s desktop and the target system is provided so that any potential malware does not spread to the target systems.
In which data technique is data sliced into chunks that are encrypted along with parity bits and then written to various drives in the cloud cluster?
A. Data Encoding
B. RAID
C. Erasure Coding
D. Data Dispersion
D. Data Dispersion
Explanation:
While somewhat similar to RAID, data dispersion is the process of slicing the data into what are called shards. Each shard is then encrypted along with parity bits (a process called erasure coding when used in cloud data dispersal) and written to various drives in the cloud cluster.
While there are RAID versions that provide no fault tolerance, in most cases RAID uses multiple drives, strategically located data, and parity information to allow for immediate access to data that may have resided on a failed drive. RAID does not, however, slice the data into shards and then encrypt it along with parity bits.
Data encoding is the process of using coding techniques to prevent the introduction of malicious character strings into web applications. It has nothing to do with data dispersal.
Erasure coding is the process used by data dispersion to encrypt the data along with parity bits.
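The parity idea behind data dispersion can be sketched with a single XOR parity shard. This is a deliberately simplified stand-in: real dispersion schemes use erasure codes such as Reed-Solomon (surviving multiple lost shards) and encrypt each shard before writing it to a node.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def shard_with_parity(data: bytes, k: int):
    """Slice data into k equal shards plus one XOR parity shard.

    A toy stand-in for erasure coding: the one parity shard lets us
    rebuild any single lost shard from the survivors.
    """
    data += b"\x00" * ((-len(data)) % k)         # pad to a multiple of k
    size = len(data) // k
    shards = [data[i * size:(i + 1) * size] for i in range(k)]
    return shards, reduce(xor, shards)

shards, parity = shard_with_parity(b"cloud object ABC", 4)
lost = shards[2]                                  # simulate a failed drive
survivors = shards[:2] + shards[3:]
rebuilt = reduce(xor, survivors + [parity])       # XOR the rest back together
assert rebuilt == lost
```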
Recently, an attacker was able to access several virtual machines deployed in the cloud. Initially, you were able to deploy a workaround. Once the incident was stopped, you were able to identify the root cause. As a result, you now need to deploy a fix for the known error. In which process are you currently working?
A. Configuration Management
B. Problem Management
C. Incident Management
D. Change Management
B. Problem Management
Explanation:
This issue was initially identified as a problem. Problem management is used to minimize the impact of problems. A problem is the root cause of an incident.
Change management manages all changes to configuration items, including any devices. All changes must be tested and formally approved prior to deployment in the live environment. Problem management should hand over the fix for the known error for proper testing and approval. Once the fix is approved, change management will deploy the fix, and the fix will be marked as completed in both the change management and problem management processes.
Incident management occurs when incidents are identified, analyzed, and corrected to prevent a future reoccurrence. The initial attack and its resolution was part of incident management. However, identifying the root cause and deploying a fix to a known error is part of problem management.
Configuration management occurs when the configuration of an item, such as a network device, must be changed.
When secure settings have been implemented on your SIEM system, which system needs to be updated to reflect those changes?
A. WAF
B. SDLC
C. CMDB
D. API
C. CMDB
Explanation:
The Configuration Management Database (CMDB) needs to be updated to reflect any system changes. The CMDB is a repository that should contain all the configuration settings of the various systems and the history or versioning of changes to those settings.
The Software Development Life Cycle (SDLC) is a model to guide software development projects to ensure the timely delivery of systems that are both functional and secure.
A web application firewall (WAF) is a firewall that is designed to examine all traffic to a web server while looking for common web attacks in an attempt to prevent them.
An application programming interface (API) is a software entity created to communicate between applications or between components of an application.
Which of the following cloud service categories is most likely to have web application security issues?
A. None of the categories
B. PaaS
C. IaaS
D. SaaS
D. SaaS
Explanation:
The Software as a Service (SaaS) cloud service is most likely to have web application security issues because software and OS vulnerabilities exist whether software is hosted in the cloud or on-premises.
None of the other listed categories is as likely to have web application security issues.
The security issues for the Infrastructure as a Service (IaaS) model are personnel threats, external threats, and lack of specific skillsets to manage the model.
The Platform as a Service (PaaS) model includes the security issues listed for the IaaS model, and also includes issues with interoperability, persistent backdoors, virtualization, and resource sharing.
The SaaS model includes all the security issues listed for the IaaS and PaaS models, and also includes issues with proprietary formats and web application security.
Which of the following statements does NOT correspond to the Store phase of the cloud data lifecycle?
A. Data should be classified according to sensitivity and value
B. The Store phase typically occurs at the same time as the Create phase
C. Data should be protected according to its classification level
D. Access control lists (ACLs) should be created to control access to cloud data
A. Data should be classified according to sensitivity and value
Explanation:
While data should be classified according to sensitivity and value, this classification does NOT take place during the Store phase. It takes place during the Create phase.
All of the other statements correspond to the Store phase of the cloud data lifecycle. Data should be protected according to its classification level, which is assigned during the Create phase. ACLs should be created to control access to cloud data. The Store phase typically occurs at the same time as the Create phase.
Which of the following, when used to switch between multiple devices connected to a KVM unit, physically breaks the current connection before a new one is made?
A. Secure Data Ports
B. Fixed Firmware
C. Tamper labels
D. Air-gapped pushbuttons
D. Air-gapped pushbuttons
Explanation:
Air-gapped pushbuttons on KVM switches physically break the current connection before a new one is made.
Tamper labels are used to alert you that someone has physically accessed the system and torn the labels. They are applied to the cases of devices that you need to remain secure. While they do not prevent physical access, they alert you if physical access has occurred.
Fixed firmware is device software that cannot be erased or altered. Fixed firmware is installed on internal chips in the device.
Secure data ports reduce the likelihood of data leaking between computers that are connected through the KVM by protecting the ports. But they do not break a connection before a new one is made.
Which of the following is the maximum amount of time you can continue without a resource?
A. MTD
B. RTO
C. ALE
D. RPO
A. MTD
Explanation:
Maximum tolerable downtime (MTD) is the maximum amount of time you can continue without a resource.
Recovery time objective (RTO) is the target time in which recovery occurs. It should be less than the MTD to provide extra time.
Recovery point objective (RPO) is measured in data loss, not time. It is the maximum allowable amount of data you can afford to lose during an issue. This value can be reduced by more frequent backups.
Annualized loss expectancy (ALE) is the financial loss expected from an event spread across the time period between events based on the event’s historical frequency. This yields a yearly average amount.
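The relationships among these recovery metrics can be expressed as simple checks. The hour values below are hypothetical, chosen only to illustrate that the RTO must sit below the MTD and that backup frequency bounds the RPO:

```python
# Hypothetical business-continuity targets for a single workload.
mtd_hours = 24    # maximum tolerable downtime
rto_hours = 16    # targeted recovery time
rpo_hours = 4     # maximum tolerable data loss, as hours since last backup

# RTO should be less than MTD to leave headroom for overruns.
assert rto_hours < mtd_hours

# More frequent backups shrink the effective RPO: backing up every hour
# instead of every four reduces the worst-case data-loss window.
backup_interval_hours = 1
assert backup_interval_hours <= rpo_hours
```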
You are designing the encryption system to use for your cloud solution. What type of cryptography would be appropriate when most of the access to the cloud will be from smartphones and tablets?
A. ECC
B. AES
C. DES
D. Triple DES
A. ECC
Explanation:
Elliptic curve cryptography (ECC) is an approach to public key cryptography that uses much smaller keys than traditional cryptography to provide the same level of security. Smaller key sizes place a lighter load on the CPU of the device, and because smartphones and tablets have less processing power, this is a good thing.
Advanced Encryption Standard (AES) is the most secure encryption currently, but it requires the use of large keys.
Data Encryption Standard (DES) is less secure than AES and also requires the use of larger keys.
Triple DES is more secure than DES, less secure than AES, and again, requires the use of larger keys.
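The key-size advantage can be made concrete with the approximate strength equivalences from NIST SP 800-57; the figures come from that guidance, not from the card itself:

```python
# Approximate comparable key sizes in bits (per NIST SP 800-57 guidance):
# security level -> (symmetric key, RSA/DH modulus, ECC key)
comparable_bits = {
    112: (112, 2048, 224),
    128: (128, 3072, 256),
    192: (192, 7680, 384),
    256: (256, 15360, 512),
}

# At 128-bit security, the ECC key is one-twelfth the RSA modulus size,
# which is why ECC suits CPU-constrained smartphones and tablets.
sym, rsa, ecc = comparable_bits[128]
assert rsa // ecc == 12
```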
In which step of the SDLC are the business requirements of the software determined?
A. Defining
B. Design
C. Testing
D. Development
A. Defining
Explanation:
In the Defining step of the Software Development Life Cycle (SDLC), the business requirements of the software are determined, which will drive the design step.
In the Testing step, activities such as initial penetration testing and vulnerability scanning against the application are performed.
In the Design step, the possible user operations are defined and the interface is envisioned.
The Development step is where the programmers are most involved and is when code is written.
According to ISO/IEC 27034, which two components are used to build the ASMP? (Choose two.)
A. DAST
B. ANF
C. ONF
D. SAST
E. RASP
B. ANF
C. ONF
Explanation:
The organizational normative framework (ONF) and the application normative framework (ANF) are used to build the application security management process (ASMP). The ONF defines the organizational security best practices for all application development and includes sections that cover business context, regulatory context, technical context, specifications, roles, processes, and the application security control (ASC) library. The ANF applies the relevant portions of the ONF to a specific application to achieve the needed security requirements or the target trust level.
The ONF and ANF have a one-to-many relationship, where one ONF is used as the basis to create multiple ANFs. An ASMP manages and maintains each ANF.
The steps in the ASMP process are as follows:
(1) Specify the application requirements and environment; (2) assess application security risks; (3) create and maintain the ANF; (4) provision and operate the application; (5) audit application security.
All of the other options are security testing methods.
Static application security testing (SAST) is considered to be a white-box test. With SAST, an analysis of the application source code, byte code, and binaries is performed without executing the application code. It is used to detect coding errors. SAST is most often used during the development of the application and is more comprehensive than dynamic application security testing (DAST).
Dynamic application security testing (DAST) is usually considered a black-box test. DAST is used against applications that are running, rather than prior to their deployment like SAST.
Runtime application self-protection (RASP) prevents issues in applications by deploying self-protection capabilities in the runtime environment.
Your company wants to migrate to a cloud solution for personnel access to Windows and Linux. Which cloud service category should you research?
A. PaaS
B. NaaS
C. CompaaS
D. SaaS
E. DSaaS
F. IaaS
A. PaaS
Explanation:
The Platform as a Service (PaaS) cloud service category allows access to operating systems, such as Windows, Linux, Unix, and Mac OS.
The three main cloud service categories are PaaS, Software as a Service (SaaS), and Infrastructure as a Service (IaaS). SaaS provides access to applications, including email. IaaS provides access to hardware, blades, connectivity, and utilities.
Other cloud service categories that you need to understand for the CCSP exam include:
Compliance as a Service (CompaaS or CaaS) - includes a variety of compliance services such as data encryption, disaster recovery, reporting, and vulnerability scanning.
Networking as a Service (NaaS) - includes network services from third parties to customers that do not want to build their own networking infrastructure.
Data Science as a Service (DSaaS) - involves an outside company providing advanced analytics applications (gathered using data science) to corporate clients for their business use.
Some publications will tell you that CompaaS, NaaS, and DSaaS will not be on the exam. However, these three categories are specifically listed in the CCSP Client Information Bulletin (CIB) from (ISC)2.
What is the primary coverage area in ISO/IEC 28000:2007?
A. Information security management systems
B. Security management systems for the supply chain
C. Risk management
D. Security techniques for PII in public clouds
B. Security management systems for the supply chain
Explanation:
ISO/IEC 28000:2007 covers security management systems for the supply chain.
ISO 31000 covers risk management. ISO/IEC 27001 covers information security management systems. ISO/IEC 27018:2014 covers security techniques for personally identifiable information (PII) in public clouds.