Chapter 08: Responding to Vulnerabilities Flashcards

1
Q

ERM

A

Enterprise Risk Management

Organizations take a formal approach to risk analysis that begins with identifying risks, continues with determining the severity of each risk, and then results in adopting one or more risk management strategies to address each risk

Reasons We Use ERM
* Keep data confidential
* Avoid financial losses
* Avoid legal issues
* Maintain positive brand image
* Ensure COOP (continuity of operations plan—BCP/DRP)
* Establish trust and mitigate liability
* Meet stakeholders’ objectives

NIST 800-39
* Managing Information Security Risk
* Great starting point for applying processes to risk identification and assessment
* Frame, Assess, Respond, Monitor

Risk Identification
* Takes place by evaluating threats, identifying vulnerabilities, and assessing the probability / likelihood of an event affecting an asset or process

2
Q

Threats

A

Any possible events that might have an adverse impact on the CIA of our information or information systems

3
Q

Vulnerabilities

A

Weaknesses in our systems or controls that could be exploited by a threat

4
Q

Risks

A

The intersection of a vulnerability and a threat that might exploit that vulnerability

A threat without a corresponding vulnerability doesn’t pose a risk, nor does a vulnerability without a corresponding threat

Once you identify threats and vulnerabilities, through threat intel, vulnerability scans, etc., you can identify the risks that exist in your organization

5
Q

Risk Calculation

A

When evaluating risk, use two factors:
1. Probability: Likelihood that a risk will occur
2. Magnitude: Impact the risk will have on the org if it occurs

Risk Severity = Probability x Magnitude

This equation doesn’t always have to be interpreted literally—think of this conceptually as combining the two to determine the severity

NOTE: For any question on the exam that deals with risk, keep these two things in the back of your mind and ask what the probability and magnitude are
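The probability × magnitude idea above can be sketched in a few lines. This is a toy illustration, not a standard formula implementation; the scores and scales are invented:

```python
# Hypothetical illustration of Risk Severity = Probability x Magnitude.
# The probability scale (0-1) and dollar impact are invented for the example.
def risk_severity(probability: float, magnitude: float) -> float:
    """Combine likelihood (0-1) and impact (e.g., dollars) into a severity score."""
    return probability * magnitude

# A risk with a 20% annual likelihood and a $50,000 impact:
print(risk_severity(0.20, 50_000))  # 10000.0
```

In practice the "multiplication" is often conceptual, as the note says: a high-probability, high-magnitude risk simply ranks above a low-probability, low-magnitude one.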

6
Q

BIA

A

Business Impact Analysis

A formalized approach to risk prioritization that allows organizations to conduct their reviews in a structured manner, following one of two methodologies:
1. Quantitative Risk Assessments: Analysis that uses numeric data to provide very straightforward prioritization of risks
2. Qualitative Risk Assessments: Substitute subjective judgments and categories for strict numerical analysis, allowing the assessment of risks that are difficult to quantify

MTD (Maximum Tolerable Downtime)
* How long you can be down without going out of business
* Each business process can have its own MTD, such as a range of minutes to hours for critical functions, 24 hours for urgent functions, or up to 7 days for normal functions
* MTD sets upper limit on the recovery time that the system and asset owners need to resume operations

RTO (Recovery Time Objective)
* The length of time it takes after an event to resume normal business operations and activities
* Not a full recovery though, just to get to a point where you can provide services
* MTTR is full repair

WRT (Work Recovery Time)
* The length of time in addition to the RTO of individual systems to perform reintegration and testing of a restored or upgraded system following an event

RPO (Recovery Point Objective)
* Longest time you can tolerate lost data being unrecoverable

Page 297
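One relationship implied by these metrics is that recovery time (RTO) plus work recovery time (WRT) has to fit inside the MTD. A toy sanity check, with all figures invented:

```python
# Toy sanity check: recovery time plus work recovery time must fit inside MTD.
# All values are invented example figures in hours.
mtd = 24   # maximum tolerable downtime for an "urgent" function
rto = 8    # time to restore basic service
wrt = 12   # extra time to reintegrate and test the restored system

within_tolerance = (rto + wrt) <= mtd
print(within_tolerance)  # True
```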

7
Q

Quantitative Risk Assessment

A

Most quantitative risk assessment processes follow a similar methodology that includes the following steps:
1. Determine the AV (asset value) affected by risk: AV is expressed in dollars and may be determined using the cost to acquire the asset, cost to replace, or depreciated cost of the asset depending on the org’s preferences
2. Determine the likelihood the risk will occur: Risk analysts consult subject matter experts and determine the likelihood that a risk will occur in a given year. This is expressed as the ARO (annualized rate of occurrence)—a risk expected to occur twice a year has an ARO of 2.0 while a risk expected once every 100 years has an ARO of 0.01
3. Determine the amount of damage that will occur if the risk materializes: This is known as the EF (exposure factor) and is expressed as a percentage of the asset expected to be damaged—the EF of a risk that would completely destroy an asset is 100% while an EF that destroys half is 50%
4. Calculate the SLE: SLE (single loss expectancy) is the amount of financial damage expected each time a risk materializes—calculated by multiplying the AV by EF
5. Calculate the ALE: ALE (annualized loss expectancy) is the amount of damage expected from a risk each year—calculated by multiplying the SLE by the ARO

Repeat this process for each threat-vulnerability combination
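The five steps above reduce to two multiplications. A worked example, with all figures invented:

```python
# Worked example of the quantitative risk assessment steps; figures are invented.
asset_value = 200_000          # AV in dollars
exposure_factor = 0.50         # EF: half the asset is destroyed
annualized_rate = 0.10         # ARO: expected once every 10 years

sle = asset_value * exposure_factor   # single loss expectancy (AV x EF)
ale = sle * annualized_rate           # annualized loss expectancy (SLE x ARO)

print(sle)  # 100000.0
print(ale)  # 10000.0
```

The ALE of $10,000/year is the number you would compare against the annual cost of a control when deciding whether mitigation is worth it.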

8
Q

Qualitative Risk Assessment

A

Qualitative risk assessment techniques seek to overcome the limitations of quantitative techniques by substituting subjective judgment for objective data

They still use the same probability and magnitude factors to evaluate risk severity, but do so using subjective categories

It’s not possible to directly calculate financial impacts of risks that are assessed using qualitative techniques, but a risk assessment scale / risk matrix makes it possible to still prioritize risks:

Risk Matrix
* Impact along one axis, likelihood along the other
* 3 x 3: low, medium, high
* 4 x 4: low, medium, high, critical
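A minimal sketch of a 3 x 3 matrix lookup; the category labels and the boundaries between ratings are invented, since each org draws its own matrix:

```python
# A minimal 3x3 qualitative risk matrix; category boundaries are invented.
LEVELS = ("low", "medium", "high")

def matrix_rating(likelihood: str, impact: str) -> str:
    """Look up a combined rating from subjective likelihood/impact categories."""
    score = LEVELS.index(likelihood) + LEVELS.index(impact)
    if score >= 3:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

print(matrix_rating("high", "high"))   # high
print(matrix_rating("low", "medium"))  # low
```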

Why Qualitative Would Be Preferred
* Complexity of cybersecurity risks
* Unknowns
* Limited data
* Resource constraints
* Communication

Many orgs will combine quantitative and qualitative to get a well-rounded picture of their tangible and intangible risks

9
Q

Supply Chain Assessment

A

Risks can occur based on third-party relationships as well; don’t overlook these

Performing vendor due diligence is a crucial security responsibility—if they don’t have adequate security controls in place, your data is at risk

10
Q

Risk Management

A

After you’ve completed a risk assessment, you can focus on risk management, which is the process of systematically addressing the risks facing your org

The risk assessment serves two important roles in the risk management process:
1. Provides guidance in prioritizing risks so that the risks with the highest probability and magnitude are addressed first
2. Helps determine whether the potential impact of a risk justifies the costs incurred by adopting a risk management approach

Things to Know for Controls
* If control is required by a framework, best practice, or regulation
* Cost of control
* Amount of risk a control mitigates

11
Q

Risk Mitigation

A

The process of applying security controls to reduce the probability and/or magnitude of a risk

This is the most common risk management strategy, and the majority of work for security pros revolves around mitigating risks through the design, implementation, and management of security controls

When you choose to mitigate a risk, you can apply one control or a series of controls—each one should reduce the probability of the risk, magnitude of the risk, or both

EXAM NOTE: Adding controls

12
Q

Risk Avoidance

A

The changing of business practices to completely eliminate the potential that a risk will materialize

This may seem like a highly desirable approach, but there’s a major drawback—risk avoidance strategies often have a serious detrimental impact on the business

EXAM NOTE: Changing plans

13
Q

Risk Transference

A

Shifting some of the impact of a risk from the organization to another entity

The most common example is purchasing an insurance policy that covers a risk—the customer pays a premium to the insurance carrier who agrees to cover losses from risks specified in the policy

Many general business policies exclude all cyber risks, so purchase cyber insurance separately or as a rider on an existing business policy

EXAM NOTE: Insurance

14
Q

Risk Acceptance

A

Deliberately choosing to take no other risk management strategy and simply continuing operations as normal in the face of risk

This approach may be warranted if the cost of mitigating a risk is greater than the impact of the risk itself

This should not be taken as a default strategy—risk acceptance without an analysis isn’t accepted risk, it’s unmanaged risk (which is bad bad bad)

EXAM NOTE: Low risk

15
Q

Technical Controls

A

AKA: Logical controls

Security controls that are implemented as a system (hardware, software, or firmware) to enforce CIA in the digital space

* Firewall rules
* ACLs
* IPS
* Encryption

16
Q

Operational Controls

A

A category of security control that’s implemented primarily by people rather than systems—these are the processes we put in place to manage technology in a secure manner
* User access reviews
* Log monitoring
* Vulnerability management
* Security guards to ensure people don’t break into the building
* Train employees on how not to fall for phishing scam

17
Q

Managerial Controls

A

Security controls that provide oversight of the information system

These are procedural mechanisms that focus on the mechanics of the risk management process
* Periodic risk assessments
* Security planning exercises
* Incorporation of security into the org’s change management, service acquisition, and project management practices

18
Q

Preventive Controls

A

Security controls intended to stop a security issue before it occurs
* Firewalls
* Encryption

19
Q

Detective Controls

A

Security controls to identify security events that have already occurred
* IDS
* Logs

20
Q

Responsive Controls

A

Security controls that help orgs respond to an active security incident
* 24x7 SOC that can triage and direct first responders

21
Q

Corrective Controls

A

Security controls to help remediate security issues that have already occurred
* Restoring backups after a ransomware attack
* Patch management system

22
Q

Compensating Controls

A

Security controls designed to mitigate the risk associated with exceptions made to a security policy

23
Q

STRIDE

A

Microsoft’s STRIDE classification model is one method you can use to classify threats based on what they leverage:
* Spoofing of user identity
* Tampering
* Repudiation
* Information disclosure
* Denial of service
* Elevation of privilege

Other models include:
* PASTA (Process for Attack Simulation and Threat Analysis)
* LINDDUN
* CVSS

Classification tools provide two major benefits:
1. Allows you to use a common framework to describe threats, allowing others to contribute and manage threat information
2. Serves as a reminder of the types of threats that exist and can help analysts perform better threat analysis by giving a list of potential threat options

24
Q

Threat Modeling

A

Threat modeling takes many factors into account, but the common elements are:
* Assessing adversary capability, or the resources, intent, and ability of the likely threat actor or organization
* The total attack surface of the organization you’re assessing—this means any system, device, network, app, staff member, or other asset that a threat may target
* Listing possible attack vectors, the means by which attackers can gain access to their targets
* The impact if the attack was successful
* The likelihood of the attack or threat succeeding

25
Q

Threat Research

A

Once the threat model is complete, you conduct threat research

There are many different types of threat research you can conduct:

Threat Reputation
* Look at the reputation of a site, netblock, or actor to determine whether they have a history or habit of malicious behavior
* Often paired with IPs or domains, but file reputation services, data feeds, and other reputation-based tools also exist

Behavioral Assessments
* Used for insider threats because insider threat behavior is often difficult to distinguish from legitimate job- or role-related work
* Detecting internal threat behaviors relies heavily on the context of the actions performed, a broad view of the insider’s actions across all systems, apps, and networks, and the ability to observe behavior over time
* Many insider attacks rely on privileged account abuse, leveraging access to sensitive information and use of shared passwords
* They also occur outside of normal business hours or may require more time, making it possible to identify insider threats through differences in behavior

IOC
* IOCs are forensic evidence or data that can help identify an attack
* Unlike other assessment methods, IOCs are used exclusively after an attack has started, though the attack may still be ongoing
* Knowing which IOC are associated with a given threat actor, or common exploit path, can help defenders take appropriate steps to prevent further compromise or even identify the threat actor
* Can help limit the damage or stop the attack from progressing

Page 306

26
Q

Attack Surface Management

A

Your attack surface is the combination of all systems and services that have some exposure to attackers, and might allow those attackers to gain access to your environment

Managing the attack surface includes:
* Edge Discovery: Scanning that identifies any systems or devices with public exposure by scanning IPs belonging to the org
* Passive Discovery: Techniques that monitor inbound and outbound traffic to detect devices that didn’t appear during other discovery scans
* Security Controls Testing: Verifies that the org’s array of security controls are functioning properly
* Pentesting and Adversary Emulation: Seeks to emulate the actions of an adversary to discover flaws in the org’s security controls

Use the results of these discovery and testing techniques to make changes to the environment and improve security—this is called attack surface reduction

27
Q

Bug Bounty Programs

A

A formal process that allows orgs to open their systems to inspection by security researchers in a controlled environment that encourages attackers to report vulnerabilities in a responsible fashion

Security testers probe the systems for vulnerabilities, and if they find one they can choose to:
* Disclose publicly
* Exploit
* Disclose responsibly
* Take no action

Bug bounties incentivize testers to disclose responsibly, usually by offering cash

28
Q

Configuration Management

A

Tracks the way that specific endpoint devices are set up, both the OS settings and the inventory of software installed on a device

Configuration management should also create artifacts that may be used to help understand system configuration

Baselining
* An important component of configuration management
* It’s a snapshot of a system or application at a given point in time
* May be used to assess whether a system has changed outside of an approved change management process
* System admins may compare a running system to a baseline to identify all changes to the system and then compare those changes to a list of approved change requests
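The baseline comparison described above can be sketched as a dictionary diff. The setting names and values are hypothetical:

```python
# Sketch of baseline drift detection: compare a saved configuration snapshot
# against the current state. Setting names and values are hypothetical.
baseline = {"ssh_root_login": "no", "firewall": "enabled", "telnet": "disabled"}
current  = {"ssh_root_login": "yes", "firewall": "enabled", "telnet": "disabled"}

# Any key whose current value differs from the baseline is unapproved drift
# unless it maps to an approved change request.
drift = {k: (baseline[k], current[k])
         for k in baseline if baseline.get(k) != current.get(k)}
print(drift)  # {'ssh_root_login': ('no', 'yes')}
```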

Together, change and configuration management allow tech professionals to track the status of hardware, software, and firmware, ensuring that change occurs when desired—but in a controlled fashion that minimizes risk to an org

29
Q

Change Management

A

Change management programs provide orgs with a formal process for identifying, requesting, approving, and implementing changes to configurations

Version Control
* A critical component of change management programs, particularly in the areas of software and script development
* Versioning assigns each release of a piece of software an incrementing version number that may be used to identify any given copy

Together, change and configuration management allow tech professionals to track the status of hardware, software, and firmware, ensuring that change occurs when desired—but in a controlled fashion that minimizes risk to an org

Dion’s Notes
* Change management is the process where changes to the configuration of information systems are monitored and controlled as part of the organization’s overall configuration management efforts
* It ensures that all changes are planned and controlled to minimize risk of a service disruption
* Each individual component should have a separate document or database record that describes its initial state and subsequent changes—configuration information, patches installed, backup records, incident reports or issues
* Changes are categorized according to their potential impact and level of risk—major, significant, minor, normal

30
Q

Maintenance Windows

A

Changes can be disruptive to an org, which is why the timing must be carefully coordinated

Many orgs will consolidate multiple changes into a single period of time: the maintenance window

Typically occur on weekends or during other times when business activity is low

Windows are scheduled far in advance and coordinated by a change manager who publishes a list of planned changes, and then monitors the process of implementing, validating, and testing changes

31
Q

SDLC

A

Software Development Life Cycle

The SDLC describes the steps in a model for software development throughout its life

It maps software creation from an idea to requirements gathering and analysis to design, coding, testing, and rollout

Once software is in production, it also includes user training, maintenance, and decommissioning at the end of its useful life

The SDLC is useful for orgs and developers because it provides a consistent framework to structure workflow and to provide planning for the development process

And while software development doesn’t always follow a formal model, most enterprise development for major apps does follow one

Page 310

32
Q

SDLC Phases

A

1. Feasibility
* Where the initial investigations into whether the effort should occur are conducted
* It also looks at alternative solutions and high-level costs for each solution proposed
* Results in a recommendation with a plan to move forward

2. Analysis and Requirements Definition
* Customer input is sought to determine what the desired functionality is, what the system or app currently does or doesn’t do, and what improvements are desired
* Requirements may be ranked to determine which are most critical to the success of the project
* Security requirements definition is crucial here—it ensures that the app is designed to be secure and that secure coding practices are used

3. Design
* Includes design for functionality, architecture, integration points and techniques, dataflows, business processes, and any other elements that require design consideration

4. Development
* The actual coding of the app
* This phase might also involve unit testing—testing of small components individually to ensure they function properly
* Might also include code analysis

5. Testing and Integration
* Formal testing with customers or others outside of the dev team
* Individual units or software components are integrated and then tested to ensure proper functionality
* Connections to outside services, data sources, and other integration may occur
* UAT (user acceptance testing) occurs to ensure that the software users are satisfied with its functionality

6. Training and Transition
* Ensure that the end users are trained on the software
* Ensure the software has entered general use
* Sometimes called acceptance, installation, and deployment

7. Ongoing Operations and Maintenance
* The longest phase
* Includes patching, updating, minor modifications, and other work that goes into daily support

8. Disposition
* Occurs when a product or system reaches the end of its life
* Often ignored in the excitement of developing new products, but it’s critically important because…
* Shutting down old products can produce cost savings
* Replacing existing tools may require specific knowledge or additional effort
* Data and systems may need to be preserved or properly disposed of

33
Q

Dev Environments

A

Many orgs will use multiple environments for their development and testing, the common ones being:

Development
* Used for devs or builders to do their work
* Some workflows provide each dev with their own environment
* Others use a shared environment

Test
* Where software or systems can be tested and validated without impacting the production environment
* In some schemes, this is preproduction
* In others, a separate preproduction staging environment is used

Production
* The live system
* Software, patches, and other changes that have been tested are moved to production

Change management processes are typically followed to move through these environments, which allows teams to:
* Perform rollback and undo changes that had unintended consequences
* Restore the system to a prior state
* Provide accountability and oversight, which may be required for audit or compliance purposes

34
Q

SDLC Waterfall

A

A sequential model in which each phase of development is followed by the next

A typical 6 phase waterfall process looks like this:
1. Gather requirements
2. Analysis
3. Design
4. Implement
5. Testing
6. Deployment

Waterfall is seen as inflexible, but it’s used for complex systems

Typically recommended for dev efforts that involve a fixed scope and a known timeframe for delivery, and for those using a stable, well-understood tech platform

35
Q

SDLC Spiral

A

Takes the linear dev concepts from Waterfall and adds an iterative process that revisits four phases multiple times during the SDLC to:
* Gather more detailed requirements
* Design functionality guided by the requirements
* Build based on the design

It also puts a significant emphasis on risk assessment, reviewing the risks multiple times during the process

Spiral has four phases:
1. Identification, or requirements gathering, which initially gathers business reqs, system reqs, and more detailed reqs for subsystems or modules as the process continues
2. Design, conceptual, architectural, logical, and sometimes physical or final design
3. Build, which produces an initial proof of concept and then further development releases until the final production build is produced
4. Evaluation, which involves risk analysis for the dev project intended to monitor the feasibility of delivering the software from a technical and managerial view—this phase also involves customer testing and feedback to ensure customer acceptance

Spiral provides greater flexibility to handle changes in requirements as well as external influences such as availability of customer feedback and development staff

Also allows the SDLC to start earlier in the process than Waterfall

Because Spiral revisits processes, it’s possible for this model to result in rework or to identify design requirements later in the process that require a significant design change due to more detailed requirements coming to light

36
Q

Agile

A

Unlike the linear Waterfall and Spiral, Agile is an iterative and incremental process rooted in the Manifesto for Agile Software Development:
* Individuals and interactions are more important than processes and tools
* Working software is preferable to comprehensive documentation
* Customer collaboration replaces contract negotiation
* Responding to change is key, rather than following a plan

Agile is a major departure from Waterfall and Spiral because it:
* Breaks up work into smaller units, allowing work to be done more quickly with less upfront planning
* Focuses on adapting to needs rather than predicting them
* Work is broken up into short working sessions called sprints, which can last days to a few weeks

The Agile methodology is based on 12 principles:
1. Ensure customer satisfaction via early and continuous delivery of the software
2. Welcome changing requests, even late in the dev process
3. Deliver working software frequently (in weeks rather than months)
4. Ensure daily cooperation between devs and businesspeople
5. Projects should be built around motivated individuals who get the support, trust, and environment they need to succeed
6. Face-to-face conversations are the most efficient way to convey information inside the dev team
7. Progress is measured by having working software
8. Dev should be done at a sustainable pace that can be maintained on an ongoing basis
9. Pay continuous attention to technical excellence and good design
10. Simplicity—the art of maximizing the amount of work not done—is essential
11. The best architectures, requirements, and designs emerge from self-organizing teams
12. Teams should reflect on how to become more effective and then implement that behavior at regular intervals

37
Q

Specialized Terms in Agile

A

Backlogs
* Lists of features or tasks that are required to complete a project

Planning Poker
* A tool for estimation and planning used in Agile dev processes
* Estimators are given cards with values for the amount of work required for a task
* Estimators are asked to estimate and each reveals their bid on the task
* This is done until agreement is reached, with the goal to have estimators reach the same estimate through discussion

Timeboxing
* Timeboxes are a previously agreed on time that a person or team uses to work on a specific goal
* This limits the time to work on a goal to the timeboxed time rather than allowing work until completion
* Once a timebox is over, the completed work is assessed to determine what needs to occur next

User Stories
* Collected to describe high-level user requirements
* A user story might be “Users can change their password via the mobile app” which provides direction for estimation and planning for an Agile work session

Velocity Tracking
* Conducted by adding up the estimates for the current sprint’s effort and then comparing that to what was completed
* This tells the team whether they’re on track, faster, or slower than expected
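A toy version of the velocity calculation described above; the story-point numbers are invented:

```python
# Toy velocity check: sum the sprint's completed estimates and compare them
# to what was planned. Story-point numbers are invented.
estimated = [3, 5, 8, 2]   # points planned for the sprint
completed = [3, 5, 5]      # points actually finished

velocity = sum(completed)
on_track = velocity >= sum(estimated)
print(velocity, on_track)  # 13 False -> the team is running slower than planned
```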

38
Q

RAD

A

Rapid Application Development is an iterative process that relies on building prototypes

Unlike other methods, there’s no planning phase—planning is done as the software is written

RAD relies on functional components of the code being developed in parallel and then integrated to produce the finished product

Like Agile, RAD can provide a highly responsive dev environment

RAD has five phases:
1. Business Modeling: Focuses on the business model, including what information is important, how it’s processed, and what the business process should involve
2. Data Modeling: Includes gathering and analyzing all datasets and objects needed for the effort and defining their attributes and relationships
3. Process Modeling: For dataflows, based on the business model, as well as process descriptions for how data is handled
4. Application Generation: Coding and use of automated tools to convert data and process models into prototypes
5. Testing and Turnover: Focuses on the dataflow and interfaces between components since prototypes are tested at each iteration for functionality

39
Q

DevOps

A

Combines software development and IT operations with the goal of optimizing the SDLC

It’s done by using collections of tools called toolchains to improve SDLC elements like:
* Coding
* Building and test
* Packaging
* Release
* Configuration and configuration management
* Monitoring of elements

40
Q

DevSecOps

A

Security as part of the DevOps model

Here, security is a shared responsibility that’s part of the entire development and operations cycle

That means integrating security into the design, development, testing, and operational work done to produce apps and services

The role of security practitioners in DevSecOps includes:
* Threat analysis and communications
* Planning
* Testing
* Providing feedback
* Ongoing improvement and awareness responsibilities

41
Q

CI

A

Continuous Integration

A development practice that checks code into a shared repository on a consistent ongoing basis, ranging from a few times a day to a very frequent process of check-ins and automated builds

NOTE: Requires building automated security testing into the pipeline testing process
* It can result in new vulnerabilities being deployed into production
* Could allow an untrusted or rogue developer to insert flaws into deployed code
* Logging, reporting, and monitoring must be designed to fit CI/CD process

42
Q

CD

A

Continuous Deployment (Delivery)

Rolls out the tested changes into production automatically, as soon as they’ve been tested

NOTE: Requires building automated security testing into the pipeline testing process
* It can result in new vulnerabilities being deployed into production
* Could allow an untrusted or rogue developer to insert flaws into deployed code
* Logging, reporting, and monitoring must be designed to fit CI/CD process

43
Q

Designing and Coding for Security

A

It’s important to participate in the SDLC because it gives security pros the opportunity to improve the security of an app from step zero

* Requirements Gathering and Design: First chance to help with software security—it can be built in as part of the requirements and then designed in based on them
* Development: Secure coding techniques, code review, and testing can improve the quality and security of the code being developed
* Testing: Fully integrated software can be tested using tools like web app security scanners or pentesting techniques—provides foundation for ongoing security operations by building the baseline for future scans and regression testing during patching and updates

Page 319

44
Q

Improper Error Handling

A

Error messages that shouldn’t be exposed outside of a secure environment are made accessible to attackers or the general public

Since they often contain detailed information about what’s going on at the moment the error occurs, attackers can use that to learn about the app, databases, or even get stack trace information that provides significant detail they can leverage in future attacks

Even errors that don’t appear to provide detailed information can still allow attackers to learn more about the app, as different responses provide clues about how successful their efforts are

Always pay careful attention to app vulnerability reports that show accessible error messages, as well as the content of those messages
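One common fix for this: keep the detail in internal logs and return a generic message to the caller. A minimal sketch; the function and data are hypothetical:

```python
# Sketch of safe error handling: log detail internally, show a generic message.
# The lookup function and data are hypothetical.
import logging

logger = logging.getLogger("app")

def lookup_user(db: dict, user_id):
    try:
        return db[user_id]
    except KeyError:
        # Full detail (stack trace, key) stays in internal logs,
        # never in the user-facing response.
        logger.exception("lookup failed for id=%r", user_id)
        return {"error": "An internal error occurred"}

print(lookup_user({}, 42))  # {'error': 'An internal error occurred'}
```

Note that even the generic response should be identical for "not found" and "server error" cases where practical, so attackers can't use response differences as clues.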

45
Q

Dereferencing

A

Often due to null pointer dereferences

This means that a pointer with a value of NULL (aka, one that isn’t set), is used as though it contains an expected value

This type of error almost always leads to a crash unless caught by an error handler

Race conditions are also a common place to find a dereferencing issue

Dion’s Notes
* Software vulnerability that occurs when the code attempts to remove the relationship between a pointer and the thing it points to
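Python has no raw pointers, but its closest analogue is using a `None` value as though it were a real object. This minimal sketch (the `Account` class and function names are hypothetical) shows both the crash and the guarded alternative:

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

def find_account(accounts, name):
    # dict.get returns None when the key is missing -- Python's
    # closest analogue to an unset (NULL) pointer
    return accounts.get(name)

def unsafe_balance(accounts, name):
    # Dereferences the result without checking: raises AttributeError
    # (the Python equivalent of a null pointer crash) when missing
    return find_account(accounts, name).balance

def safe_balance(accounts, name):
    account = find_account(accounts, name)
    if account is None:   # guard against the "null" case
        return 0
    return account.balance
```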

46
Q

Insecure Object References

A

These occur when apps expose information about internal objects, allowing attackers to see how the object is identified and stored in a backend storage system

Once an attacker knows that, they might be able to leverage the information to gain further access or to make assumptions about other data objects that they can’t view in this way

Dion’s Notes
* Coding vulnerability where unvalidated input is used to select a resource object like a file or database
* To prevent this, implement access control techniques in apps to verify a user is authorized to access a specific object
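The access control fix can be sketched with an invented in-memory document store: a request must pass an ownership check, not merely supply a valid object ID. All names and data below are hypothetical:

```python
# Hypothetical document store keyed by internal object ID
DOCUMENTS = {
    101: {"owner": "alice", "body": "alice's notes"},
    102: {"owner": "bob", "body": "bob's notes"},
}

def get_document(doc_id, requesting_user):
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        raise KeyError("no such document")
    # Authorization check: knowing (or guessing) the ID alone
    # is not enough to read the object
    if doc["owner"] != requesting_user:
        raise PermissionError("not authorized for this object")
    return doc["body"]
```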

47
Q

Race Conditions

A

These rely on timing—an app that needs to take action on an object may be sensitive to what’s occurring, or has occurred, to that object

Although race conditions aren’t always reliable, they can be very powerful, and repeated attacks against a race condition can result in a successful attack

Dion’s Notes
* Software vulnerability that occurs when the resulting outcome from execution processes is directly dependent on the order and timing of certain events
* Difficult to detect and mitigate
* Dirty COW is an example that impacted the Linux kernel, causing a local privilege escalation bug
* TOCTTOU (time-of-check to time-of-use): Potential vulnerability that occurs when there’s a change between when an app checked a resource and when the app used the resource

Race Condition and TOCTTOU Defense
1. Develop apps to not process things sequentially if possible
2. Implement a locking mechanism to provide app with exclusive access
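The check-then-use gap and the exclusive-access fix can be sketched in Python; the lockfile function names are hypothetical, but `os.O_CREAT | os.O_EXCL` is the standard way to make creation a single atomic operation that closes the TOCTTOU window:

```python
import os

def create_lockfile_racy(path):
    # TOCTTOU pattern: another process could create the file in the
    # gap between the exists() check and the open() use
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write("locked")
        return True
    return False

def create_lockfile_atomic(path):
    # O_CREAT | O_EXCL makes creation atomic: it fails with
    # FileExistsError if the file already exists, with no check/use gap
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write("locked")
    return True
```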

48
Q

Broken Authentication

A

Exactly what it sounds like

Improperly implemented authentication may allow attackers who aren’t logged in, or who aren’t logged in as a user with the correct rights, access to resources

Implementing a strong and reliable authentication and authorization system is crucial when coding an app

49
Q

Sensitive Data Exposure

A

This may occur when any number of software flaws are exploited

The simplest version of this is when the app doesn’t properly protect sensitive data and allows attackers to access it

50
Q

Insecure Components

A

This includes a broad range of issues introduced when a component of an app or service is vulnerable, and thus it introduces that vulnerability into the application itself

Understanding all the components and modules that make up an app is critical to determining whether it may have known vulnerabilities that exist due to those components

Dion’s Notes
Any code that’s used or invoked outside the main program development process
* Code reuse
* Third party libraries
* SDKs

51
Q

Insufficient Logging and Monitoring

A

This will result in being unable to determine what occurred when something goes wrong

Part of a strong security design is determining what should be logged and monitored, ensuring that it’s appropriately captured

Then, build processes and systems to handle those logs and events so that the right thing happens when they occur

Dion’s Notes
* Any program that doesn’t properly record or log detailed enough information for an analyst to perform their job
* Logging and monitoring must support your use case and answer who, what, when, where, and how

52
Q

Weak or Default Configurations

A

These are common when apps and services aren’t properly set up, or when the default settings are used

One common example of this is using a default password for a service or database connection

Many app vulnerability scanners look for these default configurations, making it even easier for attackers to find them

Dion’s Notes
* Any program that uses ineffective credentials or configurations, or one in which the defaults have not been changed for security
* Many apps will choose to run as root or as a local admin, but do they need to be?
* Permissions may be too permissive on files or directories due to weak configurations
* If you’re going to install a new program, there should be a security configuration template or a scripted installation that actually goes through and makes it more secure

53
Q

Use of Insecure Functions

A

These can make it harder to secure code

Functions like strcpy, which don’t have critical security features built in, can result in code that’s easier for attackers to target

strcpy allows data to be copied without caring whether the source is bigger than the destination—if that occurs, attackers can place arbitrary data in memory locations past the original destination, possibly allowing a buffer overflow attack to succeed

54
Q

Secure Coding Best Practices

A

These will vary depending on the application, its infrastructure, backend design, and what framework or language it’s written in

Still, many of the same development, implementation, and design best practices apply to most apps:

Input Validation
* Any technique used to ensure the data entered into a field or variable in an application is handled appropriately by that application
* Helps prevent a wide range of problems, from XSS to SQL injection attacks

Output Encoding
* Translates special characters into an equivalent but safe version before a target app or interpreter reads it
* Helps prevent XSS attacks by preventing special characters from being inserted that cause the target app to perform an action
* Any coding method to sanitize output by converting untrusted input into a safe form where the input is displayed as data to the user without executing as code in the browser
* Mitigates against code injection and XSS attacks that attempt to use input to run a script

Secure Session Management
* Ensures that attackers can’t hijack user sessions, or that session issues don’t cause confusion among users

Authentication
* Limits access to apps to only authenticated users or systems
* Use MFA to help limit the impact of credential compromises

Data Protection
* Techniques, like encryption, keep data protected against eavesdropping and other confidentiality violations while stored or in transit over a network

Parameterized Queries
* Prevent SQL injection attacks by precompiling SQL queries so that new code can’t be inserted when the query is executed

Input Normalization
* A string is stripped of illegal characters or substrings and converted to the accepted character set

EXAM: Any time you take input, you want to do input validation
EXAM: Any time you’re outputting data that came from a user back to the screen, you want to use output encoding
EXAM: Any time you want to connect to a SQL database, use parameterized queries
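Several of these practices can be sketched together in Python; the table layout and function names are invented, but `sqlite3`’s `?` placeholders and `html.escape` are the standard library’s parameterized-query and output-encoding tools:

```python
import html
import re
import sqlite3

def add_user(conn, username):
    # Input validation: allow-list the accepted character set
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", username):
        raise ValueError("invalid username")
    # Parameterized query: the ? placeholder is bound by the driver,
    # so the value can never be parsed as SQL
    conn.execute("INSERT INTO users (name) VALUES (?)", (username,))

def render_comment(comment):
    # Output encoding: special characters become safe HTML entities
    # before the browser ever sees them
    return "<p>" + html.escape(comment) + "</p>"
```

Note that validation, parameterization, and encoding are layered defenses: even input that slips past the allow-list cannot change the query’s structure or execute in the browser.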

55
Q

Static Code Analysis

A

Also called source code analysis, this is conducted by reviewing the code for an app

Since static analysis uses the source code for an app, it can be seen as a type of white-box testing with full visibility to the testers

This can allow testers to find problems that other tests might miss, either because the logic isn’t exposed to other testing methods or because of internal business logic problems

Unlike other testing methods, static code analysis doesn’t run the program—it focuses on understanding how the program is written and what the code is intended to do

It can be conducted with automated tools or manually, which is a process known as code understanding
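As a toy illustration of the automated side, the sketch below uses Python’s `ast` module to inspect source code without executing it; the banned-function list is an invented example of a rule a real analyzer might apply:

```python
import ast

def find_insecure_calls(source, banned=("eval", "exec")):
    """Toy static analyzer: walk the parsed AST (the code is never
    run) and report calls to functions on a banned list."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in banned:
                findings.append((node.func.id, node.lineno))
    return findings
```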

56
Q

Dynamic Code Analysis

A

This relies on execution of the code while providing it with input to test the software

Much like static code analysis, dynamic may be done with automated tools or manually, but there’s a strong preference for automated testing due to the volume of tests that need to be conducted in most dynamic code testing processes

57
Q

Fuzzing

A

Involves sending invalid or random data to an app to test its ability to handle unexpected data

The app is monitored to determine if it crashes, fails, or responds in an incorrect manner

Because of the large amount of data that a fuzz test involves, it’s typically automated and is particularly useful for detecting input validation problems, logic issues, memory leaks, and error handling flaws

However, fuzzing only identifies simple problems and doesn’t account for complex logic or business process issues—so it might not provide complete code coverage if its progress isn’t monitored
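A minimal fuzzing harness might look like the sketch below; `parse_record` is a hypothetical target, and the harness treats only unexpected exception types as findings while letting handled failures pass:

```python
import random

def parse_record(data: bytes):
    """Hypothetical parser under test: expects b'key=value'."""
    text = data.decode("utf-8")      # may raise UnicodeDecodeError
    key, value = text.split("=", 1)  # may raise ValueError
    return key, value

def fuzz(target, runs=500, seed=0):
    # Feed random byte strings to the target and record inputs that
    # make it fail in an unexpected way, rather than failing cleanly
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except (UnicodeDecodeError, ValueError):
            pass                      # expected, handled failures
        except Exception as exc:      # anything else is a finding
            crashes.append((data, exc))
    return crashes
```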

58
Q

Fault Injection

A

Directly inserts faults into error handling paths, particularly handling mechanisms that are rarely used or might otherwise be missed during normal testing

It can be done one of three ways:
1. Compile-Time Injection: Inserts faults by modifying the source code of the app
2. Protocol Software Injection: Uses fuzzing techniques to send unexpected or protocol noncompliant data to an app or service that expects protocol-compliant input
3. Runtime Injection: Injecting data into the running program, either inserting into the memory of the program or injecting the faults in a way that causes the program to deal with them

Fault injection is typically done using automated tools due to the potential for human error in the injection process
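Runtime injection can be sketched with `unittest.mock` by forcing a dependency to fail so that a rarely used error path actually executes; `read_config`, the filename, and the fault are invented for illustration:

```python
from unittest import mock

def read_config(opener=open):
    """Hypothetical loader with a recovery branch that normal
    testing rarely exercises."""
    try:
        with opener("app.conf") as f:
            return f.read()
    except OSError:
        return "defaults"

def test_injected_fault():
    # Runtime fault injection: make the I/O call raise so the
    # OSError recovery branch runs and can be verified
    failing_open = mock.Mock(side_effect=OSError("disk error"))
    return read_config(opener=failing_open)
```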

59
Q

Mutation Testing

A

Related to fuzzing and fault injection, but instead of changing the inputs of the program or introducing faults, mutation testing makes small modifications to the program itself

The altered versions, mutants, are then tested and rejected if they cause failures

The mutations themselves are guided by rules that are intended to create common errors as well as to replicate the types of errors that developers might introduce during their normal programming process

Mutation testing helps identify issues with code that’s infrequently used, but it can also help identify problems with test data and scripts by finding places where the scripts don’t fully test for possible issues

60
Q

Stress and Load Testing

A

Used to simulate a full app load, or to stress the app and go beyond the normal level of load to see how it responds when pushed to the breaking point

Stress testing can also be conducted against individual components of an app to ensure that it’s capable of handling load conditions

During integration and component testing, fault injection may also be used to ensure that problems during heavy load are properly handled by the app

61
Q

Security Regression Testing

A

Focuses on testing to ensure that changes that have been made do not create new issues

From a security perspective, this often comes into play when patches are installed or when new updates are applied to a system or app

Security regression testing is performed to ensure that no new vulnerabilities, misconfigurations, or other issues have been introduced

Automated testing tools like web app vulnerability scanners and other vulnerability scanning tools are often used as part of an automated or semiautomated regression testing process

62
Q

UAT

A

User Acceptance Testing

Once all of the functional and security testing is completed for an app or program, users are asked to validate whether it meets the business needs and usability requirements

Since devs rarely know or perform all of the business functions that the apps they write will perform, this is important to validate that things work as expected

63
Q

Debuggers

A

These tools support devs in troubleshooting their work, and they also allow testers to perform dynamic analysis of executable files

For the exam, there are two common ones to know:
1. Immunity Debugger: Designed specifically to support pentesting and the reverse engineering of malware
2. GNU Debugger (GDB): Open source debugger for Linux that works with a variety of languages

64
Q

Information Security Policy Framework

A

Contains a series of documents designed to describe the org’s cybersecurity program

While the scope and complexity of these docs vary widely, depending on the nature of the org and its information resources, there are four types of docs usually included:
1. Policies
2. Standards
3. Procedures
4. Guidelines

Page 325

65
Q

Policies

A

High-level statements of management intent

Compliance with policies is mandatory

An infosec policy will generally contain broad statements about cybersecurity objectives, like:
* A statement of the importance of cybersecurity to the org
* Requirements that all staff and contractors take measures to protect the CIA of info and info systems
* Statement on the ownership of info created and/or possessed by the org
* Designation of the CISO or other individual as the executive responsible for cyber issues
* Delegation of authority granting CISO the ability to create standards, procedures, and guidelines that implement the policy

EX on page 326

66
Q

Common Documents In an InfoSec Policy Library

A

Information Security Policy
* Provides high-level authority and guidance for the security program

AUP
* Provides network and system users with clear direction on permissible uses of information resources

Data ownership policy
* Clearly states the ownership of information created or used by the org

Data classification policy
* Describes the classification structure used by the org and the process used to properly assign classifications to data

Data retention policy
* Outlines what information the org will maintain and the length of time different categories of work product will be retained prior to destruction

Account management policy
* Describes the account lifecycle from provisioning through active use and decommissioning

Password Policy
* Sets forth requirements for password length, complexity, reuse, and similar issues

Continuous Monitoring Policy
* Describes the org’s approach to monitoring and informs employees that their activity is subject to monitoring in the workplace

Code of Conduct/Ethics
* Describes expected behavior of employees and affiliates and serves as a backstop for situations not specifically addressed in policy

67
Q

Standards

A

Standards provide mandatory requirements describing how an org will carry out its infosec policies, and may include:
* Specific configuration settings used for a common OS
* Controls that must be put in place for highly sensitive information
* Any other security objective

Standards are often approved at a lower level than policies and might change more often

EX on page 328

68
Q

Procedures

A

Detailed, step-by-step processes that individuals or orgs must follow in specific circumstances

Similar to checklists, procedures ensure a consistent process for achieving a security objective

Orgs may create procedures for building new systems, releasing code to production environments, responding to security incidents, etc

Compliance with procedures is mandatory

EX on page 329

69
Q

Common Procedures in Policy Framework

A

Monitoring Procedures
* Describe how the org will perform security monitoring activities, including the possible use of continuous monitoring tech

Evidence Production Procedures
* Describe how the org will respond to subpoenas, court orders, and other legitimate requests to produce digital evidence

Patching Procedures
* Describe the frequency and process of applying patches to apps and systems under the org’s care

70
Q

Guidelines

A

Guidelines provide best practices and recommendations related to a given concept, tech, or task

Compliance with guidelines is not mandatory, and guidelines are offered in the spirit of providing helpful advice

That said, the optionality of guidelines may vary significantly depending on the org’s culture

71
Q

Exceptions to Policy

A

When adopting new security policies, standards, and procedures, orgs should also provide mechanisms for exceptions to those rules

Inevitably, unforeseen circumstances will arise that require deviations from the requirements

The policy framework should lay out the specific requirements for receiving an exception and the individual or committee with the authority to approve exceptions

72
Q

Compensating Controls

A

Use these to mitigate the risk associated with exceptions to security standards

The PCI DSS includes one of the most formal compensating control processes in use today:
1. The control must meet the intent and rigor of the original requirement
2. The control must provide a similar level of defense as the original requirement, such that the compensating control sufficiently offsets the risk that the original PCI DSS requirement was designed to defend against
3. The control must be above and beyond other PCI DSS requirements

73
Q

IAC

A

Infrastructure as Code
* A provisioning architecture in which deployment of resources is performed by scripted automation and orchestration
* Allows for the use of scripted approaches to provisioning infrastructure in the cloud
* Comes down to three key areas: scripts, templates, and policies
* Robust orchestration can lower overall IT costs, speed up deployments, and increase security

Snowflake Systems
* Any system that’s different in its configuration compared to a standard template within an IAC architecture
* Lack of consistency leads to security issues and inefficiencies in support

Idempotence
* A property of IAC that an automation or orchestration action always produces the same result, regardless of the component’s previous state
* Every time you give this input, you should get this output
* IAC uses carefully developed and tested scripts and orchestration runbooks to generate consistent builds

NOTE: Eliminate the special snowflakes
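Idempotence can be sketched as a provisioning step that converges to the same state no matter how many times it runs; `ensure_directory` and the permission mode are invented for illustration:

```python
import os

def ensure_directory(path, mode=0o750):
    """Idempotent provisioning step: running it once or many times
    leaves the system in the same state (no error on reruns)."""
    if not os.path.isdir(path):
        os.makedirs(path)
    os.chmod(path, mode)  # converge permissions on every run
    return path
```

A non-idempotent version (for example, one that always calls `os.makedirs` unconditionally) would fail on the second run, which is exactly the drift and fragility IAC tooling is designed to avoid.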