CH13 Security Testing Flashcards

1
Q

Four Bounds of Security Testing

A

Dynamic Testing
Static Testing
Automatic Testing
Manual Testing

2
Q

Manual Dynamic Approach

A

Penetration Testing

3
Q

Automatic Dynamic Approaches

A

DAST, IAST, Vulnerability Scanner

4
Q

Automatic Static Approach

A

SAST

5
Q

Manual Static Approach

A

Manual Code Review

6
Q

Security Testing light definition

A

a systematic process for revealing flaws in information systems

7
Q

Dynamic Testing: The Core Idea

A

The running application is tested

Unusual or malformed inputs are fed to it

The outputs and the program behaviour are observed and reported

Dynamic Testing operates similarly to black-box testing, in that you compare outputs against inputs and do not necessarily assess the code itself.
- this means that hard-coded aspects of the program may easily be overlooked

8
Q

Static Security Testing Core Idea

A

Parse all source code and config files
Analyze (an abstract representation of) the parsed files
Report any problems found

Static Testing is similar in execution to program compilation
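
A deliberately simplified sketch of this parse/analyze/report loop (my own illustration, not from the chapter): instead of building a real abstract representation, the hypothetical checker below merely scans the lines of a Java source file with regular expressions and reports hard-coded secrets and calls to Runtime.exec().

Java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;

public class ToyStaticChecker {
    // Hypothetical rules: a hard-coded secret and a potentially dangerous API call.
    private static final Pattern HARDCODED_SECRET =
        Pattern.compile("(?i)(password|secret)\\s*=\\s*\"[^\"]+\"");
    private static final Pattern DANGEROUS_CALL =
        Pattern.compile("Runtime\\.getRuntime\\(\\)\\.exec\\(");

    public static void main(String[] args) throws IOException {
        // Usage: java ToyStaticChecker SomeFile.java
        // 1. "Parse" the source file (here: simply read it line by line).
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        // 2. Analyze every line against the rules.
        for (int i = 0; i < lines.size(); i++) {
            // 3. Report any problems found, with file name and line number.
            if (HARDCODED_SECRET.matcher(lines.get(i)).find()) {
                System.out.println(args[0] + ":" + (i + 1) + ": hard-coded secret");
            }
            if (DANGEROUS_CALL.matcher(lines.get(i)).find()) {
                System.out.println(args[0] + ":" + (i + 1) + ": call to Runtime.exec()");
            }
        }
    }
}

Real SAST tools work on a parsed, abstract representation of the whole program rather than on raw text, which is what makes techniques such as data-flow analysis possible.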

9
Q

Vulnerability existence vs. report spectrum

A

Weakness exists and reported = True Positive
Weakness exists and not reported = False Negative (DANGEROUS)
Weakness doesn’t exist and reported = False Positive (ANNOYING TIME WASTE)
Weakness doesn’t exist and not reported = True Negative

10
Q

For a fully automated security testing tool, we must make compromises regarding False Negatives and False Positives:

A

In case of doubt, report a potential vulnerability:
- we might “annoy” users with many findings that are not real issues
- we risk the “boy who cried wolf” phenomenon

In case of doubt, we stay silent:
- we might miss severe issues

11
Q

Reasons and Recommendations for False Negatives

A

Fundamental: under-approximation by the tool
- missing language features (might break the data-flow analysis)
- missing support for the complete syntax (parsing errors)
Therefore: report to the tool vendor

Configuration: lacking knowledge of insecure frameworks
- insecure sinks (output) and sources (input)
Therefore: improve the configuration

Unknown security threats
- XML verb tampering
Therefore: develop a new analysis for the tool (might require support from the tool vendor)

Security expert: “I want a tool with 0 false negatives! False negatives increase the overall security risk”

12
Q

Reasons and Recommendations for False Positives

A

Fundamental: over-approximation by the tool, e.g.,
- pointer analysis
- call stack
- control-flow analysis
Therefore: report to the tool vendor

Configuration: lacking knowledge of the security framework, e.g.,
- sanitization functions
- secure APIs
Therefore: improve the configuration

Mitigated by the attack surface: strictly speaking a true finding, e.g.,
- no external communication due to a firewall
- SQL injections in a database admin tool
Therefore: should be fixed; in practice, often mitigated during an audit or via local analysis configuration

Developer: “I want a tool with 0 false positives!” False positives create unnecessary effort

13
Q

Prioritization of Findings

A

A pragmatic solution when there are too many findings

Classification with clear instructions for each vulnerability has proven to be the easiest to understand.

One can clearly see:
- What needs to be audited
- What needs to be fixed
  - as a security issue
  - as a quality issue
- Different rules for
  - old code
  - new code

14
Q

Mainly two patterns that cause security vulnerabilities

A

Local issues
- insecure functions
- secrets stored in the source code
Data-flow related issues
- XSS
- SQL injection
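
For illustration (hypothetical servlet-style code, my own, not from the chapter; it assumes the javax Servlet API on the classpath), the snippet below contains both patterns: the constant is a local issue visible on a single line, while the XSS only becomes apparent by following the data flow from the request parameter (source) to the response writer (sink).

Java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GreetingServlet extends HttpServlet {
    // Local issue: a secret stored in the source code (one line is enough to spot it).
    private static final String API_KEY = "sk-live-1234567890";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String name = req.getParameter("name");      // source: attacker-controlled input
        PrintWriter out = resp.getWriter();
        out.println("<p>Hello, " + name + "</p>");   // sink: reflected XSS (data-flow issue)
    }
}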

15
Q

generic defects visible in the code

A

Static analysis sweet spot: built-in rules make it easy for tools to find these without programmer guidance

e.g. buffer overflows

16
Q

generic defects visible only in the design

A

Most likely to be found through architectural analysis.

e.g. the program executes code downloaded as an email attachment

17
Q

context specific defects visible in the code

A

Possible to find with static analysis, but customisation may be required
e.g. mishandling of credit card information

18
Q

context specific defects visible only in design

A

Requires both understanding of general security principles along with domain-specific expertise.

e.g. cryptographic keys kept in use for an unsafe duration.

19
Q

Static Application Security Testing (SAST): pragmatic static analysis is based on

A

successful developments from the research community
- type checking
- property checking (model-checking, SMT solving, etc.)
- abstract interpretation
- …

techniques from the software engineering community
- style checking
- program comprehension
- security reviews
- …

20
Q

Type checkers are useful, but

A

may suffer from false positives/negatives
identifying which computations are harmful is undecidable

21
Q

why will the java compiler flag this as an error?

short s = 0;
int i = s;
short r = i;

A

Java is a statically typed, compiled language, so types are checked at compile time. The first two assignments are fine (short to int is a widening conversion), but short r = i narrows an int back into a short and may lose precision, so the compiler rejects it without an explicit cast, even though the value of i (0) would actually fit. The type checker thus reports a problem that cannot occur here, an example of a false positive.

22
Q

will the Java compiler flag this as an error?

Object[] objs = new String[1];
objs[0] = new Object();

A

No. Java arrays are covariant, so assigning a String[] to an Object[] variable type-checks and the code compiles. The defect only surfaces at run time, when storing an Object into what is actually a String[] throws an ArrayStoreException. The type checker misses the problem, an example of a false negative.

23
Q

Style Checkers

A

Enforce more superficial rules than type checkers
- like x == 0 vs 0 == x
Style checkers are often extensible
- PMD
- JSHint
Simple, but very successful in practice
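
A small Java example (my own, not from the chapter) of the kind of superficial but useful rule style checkers enforce: writing the string literal first in a comparison avoids a possible NullPointerException.

Java
public class StyleExample {
    static boolean isAdmin(String role) {
        // Typically flagged by style checkers: throws NullPointerException when role is null.
        // return role.equals("admin");

        // Preferred form: the literal can never be null.
        return "admin".equals(role);
    }

    public static void main(String[] args) {
        System.out.println(isAdmin(null));   // prints false instead of crashing
    }
}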

24
Q

Program Understanding

A

Tools can help with
- understanding large code bases
- reverse engineering abstractions
- finding declarations and uses
- analysing dependencies
- …
Useful for manual code/architectural reviews

25
Q

Fuzzing (DAST) core idea

A

Create large random strings
Pipe input into Unix utilities
Check if they crash

e.g.
Bash
$ fuzz 100000 -o outfile | deqn

Started in 1988 by Barton Miller at the University of Wisconsin

26
Q

Industrial Case Study: Fuzzing Chrome
Chrome’s Fuzzing Infrastructure

A

Automatically grab the most current Chrome LKGR (Last Known Good Revision)

Fuzz and test Chrome, starting with millions of test cases

Thousands of Chrome instances on hundreds of virtual machines

27
Q

AddressSanitizer

A

Compiler pass that performs the instrumentation

Run-time library that replaces malloc(), free(), etc.

The custom malloc() allocates more bytes than requested and “poisons” the redzones around the region returned to the caller

Detects:
- heap buffer overrun/underrun (out-of-bounds access)
- use after free
- stack buffer overrun/underrun

28
Q

AddressSanitizer: Result

A

After months of testing the tool with Chromium (May 2011):

300 previously unknown bugs in the Chromium code and in third-party libraries

29
Q

SyzyASAN

A

AddressSanitizer works only on Linux and Mac

SyzyASAN uses a different instrumentation (using MS Visual Studio)

30
Q

ThreadSanitizer

A

Runtime data race detector based on binary translation

Also supports compile-time instrumentation
- Greater speed and accuracy

Data races in C++ and Go code

Synchronization issues

31
Q

libFuzzer

A

Engine for in-process, coverage-guided, whitebox fuzzing (typically used together with sanitizers such as AddressSanitizer)

32
Q

Cluster Fuzzing: ClusterFuzz

A

A fuzzing infrastructure that combines libFuzzer-based fuzzers with the following memory-debugging tools:
- AddressSanitizer (ASan): 500 GCE VMs
- MemorySanitizer (MSan): 100 GCE VMs
- UndefinedBehaviorSanitizer (UBSan): 100 GCE VMs

33
Q

Fuzzing Challenges

A

Detecting the input channel
Input generation
Deciding if the response is a bug or not
How to get a test setup that is safe to use?
When have we tested enough? (coverage)

34
Q

Random Fuzzing

A

All input data is randomly generated (see the sketch below)

Very simple
Inefficient
- random input is often rejected, as a specific format is required
- the probability of causing a crash is very low
Randomly generated HTML documents, for example, are unlikely to trigger interesting edge cases
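
A minimal random-fuzzing sketch (my own illustration; the JDK's built-in XML parser merely stands in for the system under test, and the 10,000 runs are an arbitrary choice):

Java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Random;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class RandomFuzzer {
    public static void main(String[] args) throws Exception {
        // The JDK's XML parser acts as the system under test in this sketch.
        DocumentBuilder parser = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        parser.setErrorHandler(new DefaultHandler());   // silence the default "[Fatal Error]" console output
        Random rnd = new Random(42);
        int accepted = 0;
        for (int run = 0; run < 10_000; run++) {
            // Completely random input: no knowledge of the expected format.
            int len = 1 + rnd.nextInt(200);
            StringBuilder sb = new StringBuilder(len);
            for (int i = 0; i < len; i++) {
                sb.append((char) (32 + rnd.nextInt(95)));   // printable ASCII
            }
            try {
                parser.parse(new ByteArrayInputStream(sb.toString().getBytes(StandardCharsets.UTF_8)));
                accepted++;                                  // practically never reached
            } catch (SAXException rejected) {
                // Expected: random data almost never satisfies the required format.
            } catch (Exception e) {
                System.out.println("run " + run + ": unexpected " + e);
            }
        }
        System.out.println(accepted + " of 10000 random inputs were even well-formed XML");
    }
}

Almost every input is rejected as malformed, which illustrates why purely random fuzzing rarely reaches deeper program logic.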

35
Q

Mutation-based Fuzzing

A

mutate existing data samples to create new test data (see the sketch after the list below)

  • little or no knowledge of the structure of the inputs is assumed
  • anomalies are added to existing valid inputs
  • anomalies may be completely random or follow some heuristics
  • requires little to no set up time
  • dependent on the inputs being modified
  • may fail for protocols with checksums, those which depend on challenge response, etc.
  • examples include Taof, Peach Fuzzer, ProxyFuzz
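
A minimal mutation-based sketch (my own illustration, reusing the JDK XML parser as a stand-in target; the seed document and mutation count are arbitrary assumptions): a valid seed input is copied and a few random bytes are flipped per test case, so most of the input structure is preserved.

Java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Random;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class MutationFuzzer {
    public static void main(String[] args) throws Exception {
        // Valid seed input; the fuzzer needs no further knowledge of the XML format.
        byte[] seed = "<order><item id=\"1\">book</item></order>".getBytes(StandardCharsets.UTF_8);
        DocumentBuilder parser = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        parser.setErrorHandler(new DefaultHandler());   // silence the default "[Fatal Error]" console output
        Random rnd = new Random(42);
        for (int run = 0; run < 10_000; run++) {
            byte[] mutated = seed.clone();
            // Add a few random anomalies to an otherwise valid input.
            int flips = 1 + rnd.nextInt(4);
            for (int i = 0; i < flips; i++) {
                mutated[rnd.nextInt(mutated.length)] = (byte) rnd.nextInt(256);
            }
            try {
                parser.parse(new ByteArrayInputStream(mutated));
            } catch (SAXException rejected) {
                // Malformed mutants being rejected is normal; only other failures are interesting.
            } catch (Exception e) {
                System.out.println("run " + run + ": unexpected " + e);
            }
        }
    }
}
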
36
Q

Mutation-Based Fuzzing pros and cons

A

+easy to set up and implement
+requires little to no knowledge of the input format/protocol

-effectiveness limited by selection of initial data set
-has problems with file formats/protocols that require valid checksums

37
Q

Generation Based Fuzzing

A

define new tests based on models of the input format

Generate random inputs with the input specification in mind (RFC, documentation, etc.)

Add anomalies to each possible spot

Knowledge of the input format allows pruning inputs that would be rejected by the application

The input can be specified as a grammar in a grammar-based fuzzing tool

examples include Peach Fuzzer and SPIKE
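
A minimal grammar-based generation sketch (my own illustration; the tiny element grammar is an assumption, not a real specification): because the generator knows a model of the format, every produced test case is at least well-formed and is not rejected outright by the parser. Anomalies would then be injected at chosen spots.

Java
import java.util.Random;

public class GenerationFuzzer {
    private static final Random RND = new Random(42);
    private static final String[] TAGS = {"order", "item", "note"};

    // element ::= "<" tag ">" (text | element*) "</" tag ">"
    static String element(int depth) {
        String tag = TAGS[RND.nextInt(TAGS.length)];
        StringBuilder sb = new StringBuilder("<" + tag + ">");
        if (depth == 0 || RND.nextBoolean()) {
            sb.append("text").append(RND.nextInt(1000));   // anomalies could be injected here
        } else {
            int children = 1 + RND.nextInt(3);
            for (int i = 0; i < children; i++) {
                sb.append(element(depth - 1));
            }
        }
        return sb.append("</").append(tag).append(">").toString();
    }

    public static void main(String[] args) {
        for (int run = 0; run < 5; run++) {
            System.out.println(element(3));   // feed these to the system under test instead of printing
        }
    }
}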

38
Q

Generation-Based Fuzzing Pros and Cons

A

+ completeness (you can measure how much of the specification has been covered)
+ can handle complex inputs (e.g., inputs that require matching checksums)

- building a generator can be a complex problem
- the specification needs to be available

Related approaches:

Greybox fuzzing (concolic testing)
- uses symbolic execution to trigger unexplored paths
- invented by Microsoft and used for fuzzing file input routines

Autodafe
- fuzzing by weighting attacks with markers
- open source