Pocket Prep 11 Flashcards
Pabla has been working with their corporation to understand the impact that particular threats can have on their Infrastructure as a Service (IaaS) implementation. The information gathered through this process will be used to determine the correct solutions and procedures that will be built to ensure survival through many different incidents and disasters. To perform a quantitative assessment, they must determine their Single Loss Expectancy (SLE) for the corporation’s Structured Query Language (SQL) database in the event that the data is encrypted through the use of ransomware.
Which of the following is the BEST definition of SLE?
A. SLE is the value of the event given a certain percentage loss of the asset
B. SLE is the value of the asset given the amount of time it will be offline in a given year
C. SLE is the value of the event given the value of the asset and the time it can be down
D. SLE is the value of the cost of the event multiplied times the asset value
A. SLE is the value of the event given a certain percentage loss of the asset
Explanation:
Correct answer: SLE is the value of the event given a certain percentage loss of the asset
SLE is calculated by taking the asset value times the exposure factor. Exposure factor is effectively the percentage of loss of the asset.
The Annual Rate of Occurrence (ARO) is the number of times that event is expected within a given year.
The SLE multiplied by the ARO gives the Annualized Loss Expectancy (ALE); a short worked example follows this explanation.
"SLE is the value of the cost of the event multiplied times the asset value" is incorrect because SLE is the percentage of the asset that is lost (the exposure factor) multiplied by the asset value, not the cost of the event multiplied by the asset value.
"SLE is the value of the event given the value of the asset and the time it can be down" is incorrect because the time a system can be offline is not a factor in SLE. That would be the Maximum Tolerable Downtime (MTD).
"SLE is the value of the asset given the amount of time it will be offline in a given year" is incorrect because SLE does not measure offline time. Annual availability is typically represented by nines (e.g., 99.999% uptime).
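As a quick worked example of the SLE and ALE formulas above (the asset value, exposure factor, and ARO below are hypothetical figures, not values given in the question), the calculation is simply:

```python
# Hypothetical figures for illustration only.
asset_value = 500_000      # value of the SQL database (AV), in dollars
exposure_factor = 0.40     # 40% of the asset's value lost if ransomware encrypts it (EF)

sle = asset_value * exposure_factor   # Single Loss Expectancy: AV x EF
aro = 2                               # Annual Rate of Occurrence: expected events per year
ale = sle * aro                       # Annualized Loss Expectancy: SLE x ARO

print(f"SLE = ${sle:,.0f}")   # SLE = $200,000
print(f"ALE = ${ale:,.0f}")   # ALE = $400,000
```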
Which of the following NIST controls for system and communication protection is MOST closely related to management of tasks such as encryption and logging configurations?
A. Cryptographic Key Establishment and Management
B. Boundary Protection
C. Security Function Isolation
D. Separation of System and User Functionality
C. Security Function Isolation
Explanation:
NIST SP 800-53, Security and Privacy Controls for Information Systems and Organizations, defines 51 security controls in the System and Communications Protection family. Among these are:
Policy and Procedures: Policies and procedures define requirements for system and communication protection and the roles, responsibilities, etc. needed to meet them.
Separation of System and User Functionality: Separating administrative duties from end-user use of a system reduces the risk of a user accidentally or intentionally misconfiguring security settings.
Security Function Isolation: Separating roles related to security (such as configuring encryption and logging) from other roles also implements separation of duties and helps to prevent errors.
Denial-of-Service Prevention: Cloud resources are Internet-accessible, making them a prime target for DoS attacks. These resources should have protections in place to mitigate these attacks as well as allocate sufficient bandwidth and compute resources for various systems.
Boundary Protection: Monitoring and filtering inbound and outbound traffic can help to block inbound threats and stop data exfiltration. Firewalls, routers, and gateways can also be used to isolate and protect critical systems.
Cryptographic Key Establishment and Management: Cryptographic keys are used for various purposes, such as ensuring confidentiality, integrity, authentication, and non-repudiation. They must be securely generated and secured against unauthorized access.
Donie is working for a corporation that is about to undergo an audit. The auditor knows that the corporation is subject to the Federal Information Security Management Act (FISMA). Which type of corporation is Donie employed by?
A. Retail
B. Government agency
C. Healthcare
D. Banking
B. Government agency
Explanation:
The organization is a government agency. Government agencies are affected by the Federal Information Security Management Act (FISMA). The law defines a comprehensive framework to protect government information, operations, and assets against natural or human-made threats. It requires that government agencies conduct vulnerability scans.
None of the other organizations are affected by FISMA.
Healthcare in the U.S. must be in compliance with the Health Insurance Portability and Accountability Act (HIPAA).
Banking is subject to financial frameworks such as Basel III (an international standard implemented in Europe and elsewhere), not FISMA.
Retail must comply with the Payment Card Industry Data Security Standard (PCI DSS).
Rebekah has been working with software developers on mechanisms that they can implement to protect data at different times. There is a need to use data from a customer database in another piece of software. However, it is necessary to ensure that all personally identifiable elements are removed first.
The process of removing all identifiable characteristics from data is known as which of the following?
A. Anonymization
B. Obfuscation
C. Masking
D. De-identification
A. Anonymization
Explanation:
Anonymization is the removal of all personally identifiable pieces of information, both direct and indirect.
Data de-identification is the removal of all direct identifiers. It leaves the indirect ones in place.
Masking is to cover or hide information. This is commonly seen when a user types in their password, yet all that is seen on the screen are stars or dots.
Obfuscation means to obscure or confuse the data. Encryption is one form of obfuscation, but other techniques exist; for example, data could be encoded in Base64 instead of being transmitted in Base16 (hexadecimal).
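As an illustrative sketch only (the record fields and removal rules below are hypothetical, not from the question), the difference between de-identification and anonymization might look like this:

```python
# Hypothetical customer record used only to illustrate the concepts above.
record = {
    "name": "Jane Doe",          # direct identifier
    "ssn": "123-45-6789",        # direct identifier
    "zip_code": "30301",         # indirect (quasi-) identifier
    "birth_year": 1984,          # indirect (quasi-) identifier
    "purchase_total": 249.99,    # non-identifying business data
}

def de_identify(rec):
    """Remove only direct identifiers; indirect identifiers remain."""
    return {k: v for k, v in rec.items() if k not in {"name", "ssn"}}

def anonymize(rec):
    """Remove both direct and indirect identifiers."""
    return {k: v for k, v in rec.items()
            if k not in {"name", "ssn", "zip_code", "birth_year"}}

print(de_identify(record))  # zip_code and birth_year remain
print(anonymize(record))    # only purchase_total remains
```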
Containerization is an example of which of the following?
A. Serverless
B. Microservices
C. Application virtualization
D. Sandboxing
C. Application virtualization
Explanation:
Application virtualization creates a virtual interface between an application and the underlying operating system, making it possible to run the same app in various environments. One way to accomplish this is containerization, which combines an application and all of its dependencies into a container that can be run on an OS running the containerization software (Docker, etc.). Microservices and containerized applications commonly require orchestration solutions such as Kubernetes to manage resources and ensure that updates are properly applied.
Sandboxing is when applications are run in an isolated environment, often without access to the Internet or other external systems. Sandboxing can be used for testing application code without placing the rest of the environment at risk or evaluating whether a piece of software contains malicious functionality.
Serverless applications are hosted in a Platform as a Service (PaaS) cloud environment, where management of the underlying servers and infrastructure is the responsibility of the cloud provider, not the cloud customer.
Defining clear, measurable, and usable metrics is a core component of which of the following operational controls and standards?
A. Continual Service Improvement Management
B. Change Management
C. Continuity Management
D. Information Security Management
A. Continual Service Improvement Management
Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.
Your organization is in the process of migrating to the cloud. Mid-migration you come across details in an agreement that may leave you non-compliant with a particular law. Who would be the BEST contact to discuss your cloud-environment compliance with legal jurisdictions?
A. Regulator
B. Stakeholder
C. Partner
D. Consultant
A. Regulator
Explanation:
As a CCSP, you are responsible for ensuring that your organization’s cloud environment adheres to all applicable regulatory requirements. By staying current on regulatory communications surrounding cloud computing and maintaining contact with approved advisors and, most crucially, regulators, you should be able to assure compliance with legal jurisdictions.
A partner is a generic term that can be used to refer to many different companies. For example, an auditor can be considered a partner.
A stakeholder is anyone with an interest in, or who is affected by, the business and its decisions.
A consultant could assist with just about anything. It all depends on what their skills are. It is plausible that a consultant could help with legal issues. However, regulators definitely understand the laws, so that makes for the best answer.
Reference:
(ISC)² CCSP Certified Cloud Security Professional Official Study Guide, 3rd Edition. Pg 253.
Load and stress testing are examples of which type of testing?
A. Usability Testing
B. Functional Testing
C. Unit Testing
D. Non-Functional Testing
D. Non-Functional Testing
Explanation:
Functional testing is used to verify that software meets the requirements defined in the first phase of the SDLC. Examples of functional testing include:
Unit Testing: Unit tests verify that a single component (function, module, etc.) of the software works as intended.
Integration Testing: Integration testing verifies that the individual components of the software fit together correctly and that their interfaces work as designed.
Usability Testing: Usability testing verifies that the software meets users’ needs and provides a good user experience.
Regression Testing: Regression testing is performed after changes are made to the software and verifies that the changes haven’t introduced bugs or broken functionality.
Non-functional testing evaluates the quality attributes of the software and verifies properties that are not explicitly listed as functional requirements. Load and stress testing, as well as verifying that sensitive data is properly secured and encrypted, are examples of non-functional testing.
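As a minimal sketch of what a simple load test might look like (the target URL, request count, and concurrency level are hypothetical placeholders), the idea is to issue many concurrent requests and measure how the system responds:

```python
# Minimal load-test sketch using only the standard library.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://example.com/"   # hypothetical endpoint under test
REQUESTS = 100
CONCURRENCY = 10

def fetch(_):
    """Issue one request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(fetch, range(REQUESTS)))

print(f"requests: {len(latencies)}")
print(f"avg latency: {sum(latencies) / len(latencies):.3f}s")
print(f"max latency: {max(latencies):.3f}s")
```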
Which of the following cloud roles and responsibilities involves maintaining cloud infrastructure AND meeting SLAs?
A. Regulatory
B. Cloud Service Broker
C. Cloud Service Provider
D. Cloud Service Partner
C. Cloud Service Provider
Explanation:
Some of the important roles and responsibilities in cloud computing include:
Cloud Service Provider: The cloud service provider offers cloud services to a third party. They are responsible for operating their infrastructure and meeting service level agreements (SLAs).
Cloud Customer: The cloud customer uses cloud services. They are responsible for the portion of the cloud infrastructure stack under their control.
Cloud Service Partners: Cloud service partners are distinct from the cloud service provider but offer a related service. For example, a cloud service partner may offer add-on security services to secure an organization’s cloud infrastructure.
Cloud Service Brokers: A cloud service broker may combine services from several different cloud providers and customize them into packages that meet a customer’s needs and integrate with their environment.
Regulators: Regulators ensure that organizations — and their cloud infrastructures — are compliant with applicable laws and regulations. The global nature of the cloud can make regulatory and jurisdictional issues more complex.
Tristan is the cloud information security manager working for a pharmaceutical company. They have connected to the community cloud that was built by the government health agency to advance science, diagnosis, and patient care. They also have stored their own data with a public cloud provider in the format of both databases and data lakes.
What have they built?
A. Public cloud
B. Hybrid cloud
C. Storage area network
D. Private cloud
B. Hybrid cloud
Explanation:
A hybrid cloud deployment model is a combination of two or more of the other models: public, private, and community. It could be public and private, private and community, or public and community, as in the question. Amazon Web Services (AWS) is an example of a public cloud. A private cloud is built for a single company; fundamentally, it means that all the tenants on a single server are from the same company. A community cloud example is the one built by the National Institutes of Health (NIH) to advance science, diagnosis, and patient care.
A Storage Area Network (SAN) is the physical and virtual structure that holds data at rest. SAN protocols include Fibre Channel and iSCSI.
Which of the following risks associated with PaaS environments includes hypervisor attacks and VM escapes?
A. Virtualization
B. Persistent Backdoors
C. Interoperability Issues
D. Resource Sharing
A. Virtualization
Explanation:
Platform as a Service (PaaS) environments inherit all the risks associated with IaaS models, including personnel threats, external threats, and a lack of relevant expertise. Some additional risks added to the PaaS model include:
Interoperability Issues: With PaaS, the cloud customer develops and deploys software in an environment managed by the provider. This creates the potential that the customer’s software may not be compatible with the provider’s environment or that updates to the environment may break compatibility and functionality.
Persistent Backdoors: PaaS is commonly used for development purposes since it removes the need to manage the development environment. When software moves from development to production, security settings and tools designed to provide easy access during testing (i.e. backdoors) may remain enabled and leave the software vulnerable to attack in production.
Virtualization: PaaS environments use virtualized OSs to provide an operating environment for hosted applications. This creates virtualization-related security risks such as hypervisor attacks, information bleed, and VM escapes.
Resource Sharing: PaaS environments are multitenant environments where multiple customers may use the same provider-supplied resources. This creates the potential for side-channel attacks, breakouts, information bleed, and other issues with maintaining tenant separation.
Organizations like yours are looking for guidance on how to meet business objectives while also managing and minimizing the risks that come with implementing cloud computing solutions. Which of the following would be the most helpful?
A. Cloud Security Alliance (CSA)
B. Open Web Application Security Project (OWASP)
C. Internet Assigned Numbers Authority (IANA)
D. Institute of Electrical and Electronics Engineers (IEEE)
A. Cloud Security Alliance (CSA)
Explanation:
The Cloud Security Alliance (CSA) is an organization that offers guidance to organizations deploying a cloud environment. They provide support to cloud providers and customers to enable trust in the cloud. This includes the Cloud Controls Matrix (CCM) and the Enterprise Architecture [formerly the Trusted Cloud Initiative (TCI)].
OWASP is a group working to improve the security of applications.
IANA is a global organization that is responsible for the assignment of Internet Protocol (IP) addresses.
IEEE is an association for electrical and electronics engineers.
Client care representatives in your firm are now permitted to access and see customer accounts. For added protection, you’d like to build a feature that obscures a portion of the data when a customer support representative reviews a customer’s account. What type of data protection is your firm attempting to implement?
A. Obfuscation
B. Tokenization
C. Masking
D. Encryption
C. Masking
Explanation:
The organization is trying to deploy masking. Masking obscures data by displaying, for example, only the last four or five digits of a Social Security or credit card number. As a result, the data is incomplete without the blocked/removed content; the rest of the information appears to be present, but the user sees only asterisks or dots in place of the hidden characters.
Tokenization is the process of removing data and placing a token in its place. The question is about part of the data being available, so that does not work.
Encryption takes the data and makes it unreadable. It’s unusual to encrypt, for example, the first part of a credit card number. So, this does not work, either.
Obfuscation means to obscure or "confuse"; if data has been obfuscated, an attacker looking at it cannot readily interpret it. Encryption is one way to obfuscate data, and there are others. This answer does not fit because replacing characters with asterisks or dots so that only part of the data is visible is masking, not obfuscation.
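A minimal sketch of this kind of masking (the field, mask character, and number of visible digits are illustrative choices, not a specific product feature) might look like the following:

```python
def mask_card_number(card_number: str, visible: int = 4, mask_char: str = "*") -> str:
    """Show only the last few digits of a card number; mask everything else."""
    digits = card_number.replace(" ", "").replace("-", "")
    return mask_char * (len(digits) - visible) + digits[-visible:]

# Hypothetical card number for illustration.
print(mask_card_number("4111 1111 1111 1234"))  # ************1234
```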
For which of the following is data discovery the EASIEST?
A. Semi-structured data
B. Structured data
C. Mostly structured data
D. Unstructured data
B. Structured data
Explanation:
The complexity of data discovery depends on the type of data being analyzed. Data is commonly classified into one of three categories:
Structured: Structured data has a clear, consistent format. Data in a database is a classic example of structured data where all data is labeled using columns. Data discovery is easiest with structured data because the data discovery tool just needs to understand the structure of the database and the context to identify sensitive data.
Unstructured Data: Unstructured data is at the other extreme from structured data and includes data where no underlying structure exists. Documents, emails, photos, and similar files are examples of unstructured data. Data discovery in unstructured data is more complex because the tool needs to identify data of interest completely on its own.
Semi-Structured Data: Semi-structured data falls between structured and unstructured data, having some internal structure but not to the same degree as a database. HTML, XML, and JSON are examples of semi-structured data formats that use tags to define the function of a particular piece of data.
Mostly structured is not a common classification for data.
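As an illustration (the customer record below is hypothetical), the same information can appear in each of the three forms described above:

```python
import json

# Structured: a database-style row where every value sits in a labeled column.
structured_row = ("C-1001", "Jane Doe", "jane@example.com")   # (customer_id, name, email)

# Semi-structured: JSON uses keys/tags to describe the data but has no rigid database schema.
semi_structured = json.dumps({"customer_id": "C-1001",
                              "name": "Jane Doe",
                              "email": "jane@example.com"})

# Unstructured: free text with no inherent structure for a discovery tool to rely on.
unstructured = "Spoke with Jane Doe today; follow up at jane@example.com next week."

print(structured_row)
print(semi_structured)
print(unstructured)
```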
Which of the following cloud audit mechanisms is designed to identify anomalies or trends that could point to events of interest?
A. Correlation
B. Log Collection
C. Packet Capture
D. Access Control
A. Correlation
Explanation:
Three essential audit mechanisms in cloud environments include:
Log Collection: Log files contain useful information about events that can be used for auditing and threat detection. In cloud environments, it is important to identify useful log files and collect this information for analysis. However, data overload is a common issue with log management, so it is important to collect only what is necessary and useful.
Correlation: Individual log files provide a partial picture of what is going on in a system. Correlation looks at relationships between multiple log files and events to identify potential trends or anomalies that could point to a security incident.
Packet Capture: Packet capture tools collect the traffic flowing over a network. This is often only possible in the cloud in an IaaS environment or using a vendor-provided network mirroring capability.
Access controls are important but not one of the three core audit mechanisms in cloud environments.
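A minimal sketch of the correlation idea (the log entries and the alert threshold below are made up for illustration) can be as simple as joining events from different logs on a shared attribute, such as source IP:

```python
from collections import Counter

# Hypothetical, simplified log entries from two different sources.
firewall_log = [
    {"src_ip": "203.0.113.7", "action": "deny"},
    {"src_ip": "203.0.113.7", "action": "deny"},
    {"src_ip": "198.51.100.4", "action": "allow"},
]
auth_log = [
    {"src_ip": "203.0.113.7", "event": "failed_login"},
    {"src_ip": "203.0.113.7", "event": "failed_login"},
    {"src_ip": "198.51.100.4", "event": "successful_login"},
]

# Correlate: count suspicious events per source IP across both logs.
suspicious = Counter()
for entry in firewall_log:
    if entry["action"] == "deny":
        suspicious[entry["src_ip"]] += 1
for entry in auth_log:
    if entry["event"] == "failed_login":
        suspicious[entry["src_ip"]] += 1

# An IP both blocked by the firewall and failing logins is an anomaly worth reviewing.
for ip, count in suspicious.items():
    if count >= 3:
        print(f"possible event of interest: {ip} ({count} suspicious entries)")
```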
A cloud provider has assembled all the cloud resources together, from routers to servers and switches, as well as the Central Processing Unit (CPU), Random Access Memory (RAM), and storage within the servers. Then they made them available for allocation to their customers. Which term BEST describes this process?
A. Reversibility
B. Data portability
C. Resource pooling
D. On-demand self-service
C. Resource pooling
Explanation:
Cloud providers may choose to do resource pooling, which is the process of aggregating all the cloud resources together and allocating them to their cloud customers. Physical equipment is pooled into the data center, and within each server there is a pool of resources allocated to running virtual machines: the Central Processing Unit (CPU), the Random Access Memory (RAM), and the available network bandwidth.
Reversibility is the ability to retrieve all of the company's artifacts from the cloud provider and to have whatever remains on the provider's equipment appropriately deleted.
Portability is the ability to move data from one provider to another without having to reenter the data.
On-demand self-service is the ability for the customer/tenant to use a portal to purchase and provision cloud resources without having much, if any, interaction with the cloud provider.
Integrity protections like hash functions are important to demonstrate which necessary attribute of evidence?
A. Accurate
B. Complete
C. Admissible
D. Authentic
A. Accurate
Explanation:
Typically, digital forensics is performed as part of an investigation or to support a court case. The five attributes that define whether evidence is useful include:
Authentic: The evidence must be real and relevant to the incident being investigated.
Accurate: The evidence should be unquestionably truthful and not tampered with (integrity).
Complete: The evidence should be presented in its entirety without leaving out anything that is inconvenient or would harm the case.
Convincing: The evidence supports a particular fact or conclusion (e.g., that a user did something).
Admissible: The evidence should be admissible in court, which places restrictions on the types of evidence that can be used and how it can be collected (e.g., no illegally collected evidence).
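As a simple illustration of using a hash function to support the accuracy (integrity) of evidence (the file name below is a placeholder), a hash recorded at collection time can later be recomputed and compared:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical workflow: hash the evidence image when it is collected...
original_hash = sha256_of_file("disk_image.dd")   # placeholder file name

# ...and verify it again before analysis or presentation in court.
if sha256_of_file("disk_image.dd") == original_hash:
    print("Integrity verified: evidence has not been altered.")
else:
    print("Hash mismatch: evidence may have been tampered with.")
```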
Amelia works for a medium-sized company as their lead information security manager. She has been working with the development and operations teams on their new application that they are building. They are building an application that will interact with their customers through the use of an Application Programming Interface (API). Due to the nature of the application, it has been decided that they will use SOAP.
That means that the data must be formatted using which of the following?
A. eXtensible Markup Language (XML)
B. JavaScript Object Notation (JSON)
C. CoffeeScript Object Notation (CSON)
D. YAML (YAML Ain’t Markup Language)
A. eXtensible Markup Language (XML)
Explanation:
SOAP only permits the use of XML-formatted data, while REpresentational State Transfer (REST) allows for the use of a variety of data formats, including both XML and JSON. SOAP is most commonly used when the use of REST is not possible.
XML, JSON, YAML, and CSON are all data formats.
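To make the formats concrete (the customer payload below is a hypothetical example, not a real API), the same data serialized as a SOAP-style XML body and as a REST-style JSON body might look like this:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical customer payload used only to contrast the two formats.
customer = {"id": "1001", "name": "Amelia", "tier": "gold"}

# XML: the only format SOAP message bodies may use.
root = ET.Element("customer")
for key, value in customer.items():
    ET.SubElement(root, key).text = value
xml_body = ET.tostring(root, encoding="unicode")

# JSON: one of several formats a REST API could accept.
json_body = json.dumps(customer)

print(xml_body)   # <customer><id>1001</id><name>Amelia</name><tier>gold</tier></customer>
print(json_body)  # {"id": "1001", "name": "Amelia", "tier": "gold"}
```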
Which stage of the IAM process relies heavily on logging and similar processes?
A. Authentication
B. Identification
C. Accountability
D. Authorization
C. Accountability
Explanation:
Identity and Access Management (IAM) services have four main practices, including:
Identification: The user uniquely identifies themself using a username, ID number, etc. In the cloud, identification may be complicated by the need to connect on-prem and cloud IAM systems via federation or identity as a service (IDaaS) offering.
Authentication: The user proves their identity via passwords, biometrics, etc. Often, authentication is augmented using multi-factor authentication (MFA), which requires multiple types of authentication factors to log in.
Authorization: The user is granted access to resources based on assigned privileges and permissions. Authorization is complicated in the cloud by the need to define policies for multiple environments with different permissions models. A cloud access security broker (CASB) solution can help with this.
Accountability: Monitoring the user’s actions on corporate resources. This is accomplished in the cloud via logging, monitoring, and auditing.
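A minimal sketch of the accountability piece (the user name, actions, and log file name below are placeholders) is simply recording who did what, and when, so that actions can later be audited:

```python
import logging

# Configure an audit-style logger; in a real cloud environment these records
# would typically be shipped to a central logging/monitoring service.
logging.basicConfig(
    filename="audit.log",
    level=logging.INFO,
    format="%(asctime)s user=%(user)s action=%(message)s",
)

def record_action(user: str, action: str) -> None:
    """Write an accountability record for a user action."""
    logging.info(action, extra={"user": user})

# Hypothetical usage after the user has been identified, authenticated, and authorized.
record_action("jdoe", "downloaded customer_report.csv")
record_action("jdoe", "changed firewall rule FW-42")
```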
A financial organization has purchased an Infrastructure as a Service (IaaS) cloud service from their cloud provider. They are consolidating and migrating their on-prem data centers (DC) into the cloud. Once they are set up in the cloud, they will have their servers, routers, and switches configured as needed, along with all of the network-based security appliances such as firewalls and Intrusion Detection Systems (IDS).
What type of billing model should this organization expect to see?
A. Locked-in monthly payment that never changes
B. Metered usage that changes based upon resource utilization
C. One up-front cost to purchase cloud equipment
D. Up-front equipment purchase, then a locked-in monthly fee afterward
B. Metered usage that changes based upon resource utilization
Explanation:
In an IaaS environment (and Platform as a Service (PaaS) as well as Software as a Service (SaaS)), the customer can expect to only pay for the resources that they are using. This is far more cost effective and allows for greater scalability. However, this type of billing does mean that the price is not locked-in, and it could change as the need for resources either increases or decreases from month to month.
There is no equipment to purchase with cloud services (IaaS, PaaS, or SaaS). You could purchase equipment if you wanted to build a private cloud, but there is no mention of that in the question. The standard cloud definition excludes a "locked-in monthly payment"; a provider could offer that, but it falls outside of cloud computing as defined in NIST SP 800-145 and ISO/IEC 17788.
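As a simple illustration of metered billing (every rate and usage figure below is made up, not a real provider's pricing), the monthly charge is just resource usage multiplied by per-unit rates, so it varies as usage changes:

```python
# Hypothetical metered-usage bill; all quantities and rates are illustrative.
usage = {
    "vm_hours":   {"quantity": 720, "rate": 0.10},   # one VM running all month
    "storage_gb": {"quantity": 500, "rate": 0.02},   # GB-months of block storage
    "egress_gb":  {"quantity": 200, "rate": 0.05},   # outbound data transfer
}

total = sum(item["quantity"] * item["rate"] for item in usage.values())

for name, item in usage.items():
    cost = item["quantity"] * item["rate"]
    print(f"{name}: {item['quantity']} x ${item['rate']:.2f} = ${cost:.2f}")
print(f"monthly total: ${total:.2f}")   # changes month to month with utilization
```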
Yun is working with the application developers as they move through development and into operations with their new application. They are looking to add something to the application that can allow the application to protect itself.
Which of the following is a security mechanism that allows an application to protect itself by responding and reacting to ongoing events and threats?
A. Vulnerability scanning
B. Dynamic Application Security Testing (DAST)
C. Runtime Application Self-Protection (RASP)
D. Static Application Security Testing (SAST)
C. Runtime Application Self-Protection (RASP)
Explanation:
Runtime Application Self-Protection (RASP) is a security mechanism that runs on the server and starts when the application starts. RASP allows an application to protect itself by responding and reacting to ongoing events and threats in real time. RASP can monitor the application, continuously looking at its own behavior. This allows the application to detect malicious input or behavior and respond accordingly.
Dynamic Application Security Testing (DAST) is a type of security test that looks at the application in a dynamic or running state. This means that the tester can only use the application. They do not have the source code for the application. This can be used to test if the application behaves as needed or if it can be used maliciously by a bad actor.
Static Application Security Testing (SAST) is a type of test where the application is static or still. That means the application is not in a running state, so what the test has knowledge of and access to is the source code.
Vulnerability scanning is a test that is run on systems to ensure that they are properly hardened and that there are no known vulnerabilities in the system.
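As a highly simplified conceptual sketch only (real RASP products instrument the language runtime and frameworks themselves; the pattern check below is just a stand-in), the idea is that the application inspects its own inputs and behavior while running and reacts to anything malicious:

```python
import re

# Stand-in detection rule used purely for illustration; actual RASP solutions
# hook the runtime rather than relying on a simple regular expression.
SUSPICIOUS = re.compile(r"('|--|;|\bUNION\b|\bDROP\b)", re.IGNORECASE)

def self_protecting_handler(user_input: str) -> str:
    """Application handler that monitors and reacts to its own input at runtime."""
    if SUSPICIOUS.search(user_input):
        # React in real time: block the request and record the event.
        print(f"RASP-style block: suspicious input rejected: {user_input!r}")
        return "request blocked"
    return f"processing order for {user_input}"

print(self_protecting_handler("customer 42"))
print(self_protecting_handler("42; DROP TABLE orders--"))
```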
The software development team is working with the information security team through the Software Development Lifecycle (SDLC). The information security manager is concerned that the team is rushing through the phase of the lifecycle where the most technical mistakes could be made. Which phase is that?
A. Testing
B. Requirements
C. Development
D. Planning
C. Development
Explanation:
During the development or coding phase of the SDLC, the plans and requirements are turned into an executable programming language. As this is the phase where coding takes place, it is most likely the place where technical mistakes would be made.
Technical mistakes could be made in the planning or requirements phases, although problems there are more likely to be architectural.
Testing is technical and mistakes can be made during it, but the more likely problem is that testing is not as complete as needed.
A cloud service provider is building a new data center to provide options for companies that are looking for private cloud services. They are working to determine the size of the data center that they want to build. The Uptime Institute's Data Center Site Infrastructure Tier Standard: Topology defines several tiers of data centers. The cloud provider has a goal of reaching Tier III.
How is that characterized in general?
A. Basic Capacity
B. Redundant Capacity Components
C. Concurrently Maintainable
D. Fault Tolerance
C. Concurrently Maintainable
Explanation:
Correct answer: Concurrently Maintainable
The Uptime Institute publishes one of the most widely used standards on data center tiers and topologies. The standard is based on four tiers, which include:
Tier I: Basic Capacity
Tier II: Redundant Capacity Components
Tier III: Concurrently Maintainable
Tier IV: Fault Tolerance
Which of the following involves tracking known issues and having documented solutions or workarounds?
A. Problem Management
B. Continuity Management
C. Service Level Management
D. Availability Management
A. Problem Management
Explanation:
Standards such as the Information Technology Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000-1 define operational controls and standards, including:
Change Management: Change management defines a process for changes to software, processes, etc., reducing the risk that systems will break due to poorly managed changes. A formal change request should be submitted and approved or denied by a change control board after a cost-benefit analysis. If approved, the change will be implemented and tested. The team should also have a plan for how to roll back the change if something goes wrong.
Continuity Management: Continuity management involves managing events that disrupt availability. After a business impact assessment (BIA) is performed, the organization should develop and document processes for prioritizing the recovery of affected systems and maintaining operations throughout the incident.
Information Security Management: Information security management systems (ISMSs) define a consistent, company-wide method for managing cybersecurity risks and ensuring the confidentiality, integrity, and availability of corporate data and systems. Relevant frameworks include the ISO 27000 series, the NIST Risk Management Framework (RMF), and AICPA SOC 2.
Continual Service Improvement Management: Continual service improvement management involves monitoring and measuring an organization’s security and IT services. This practice should be focused on continuous improvement, and an important aspect is ensuring that metrics accurately reflect the current state and potential process.
Incident Management: Incident management refers to addressing unexpected events that have a harmful impact on the organization. Most incidents are managed by a corporate security team, which should have a defined and documented process in place for identifying and prioritizing incidents, notifying stakeholders, and remediating the incident.
Problem Management: Problems are the root causes of incidents, and problem management involves identifying and addressing these issues to prevent or reduce the impact of future incidents. The organization should track known incidents and have steps documented to fix them or workarounds to provide a temporary fix.
Release Management: Agile methodologies speed up the development cycle and leverage automated CI/CD pipelines to enable frequent releases. Release management processes ensure that software has passed required tests and manages the logistics of the release (scheduling, post-release testing, etc.).
Deployment Management: Deployment management involves managing the process from code being committed to a repository to it being deployed to users. In automated CI/CD pipelines, the focus is on automating testing, integration, and deployment processes. Otherwise, an organization may have processes in place to perform periodic, manual deployments.
Configuration Management: Configuration errors can render software insecure and place the organization at risk. Configuration management processes formalize the process of defining and updating the approved configuration to ensure that systems are configured to a secure state. Infrastructure as Code (IaC) provides a way to automate and standardize configuration management by building and configuring systems based on provided definition files.
Service Level Management: Service level management deals with IT’s ability to provide services and meet service level agreements (SLAs). For example, IT may have SLAs for availability, performance, number of concurrent users, customer support response times, etc.
Availability Management: Availability management ensures that services will be up and usable. Redundancy and resiliency are crucial to availability. Additionally, cloud customers will be partially responsible for the availability of their services (depending on the service model).
Capacity Management: Capacity management refers to ensuring that a service provider has the necessary resources available to meet demand. With resource pooling, a cloud provider will have fewer resources than all of its users will use but relies on them not using all of the resources at once. Often, capacity guarantees are mandated in SLAs.