Final Flashcards

1
Q

Error

A

Mistake that introduces a fault (making a typo or conceptual misunderstanding)

2
Q

Fault

A

Instance of incorrect code that can lead to a failure (a “bug” in the code)

A fault is something “wrong” with the code that leads to the software behaving in unexpected ways (aka failures).

3
Q

Failure

A

Deviation from the expected behavior

4
Q

Why do failures occur?

A

Failures occur because there exists a fault (bug) in the code.

5
Q

What introduces a fault into a program?

A

A fault (bug) is introduced to the program when a programmer makes an error (mistake).

6
Q

When you make a typo it’s a _____, but the typo in the code itself is a ____.

A

Making a typo is an error, but the typo in the code itself is a fault.

7
Q

How are failures discovered?

A

Failures are not discovered by looking at the code, but by observing that the “output” for a given “input” is not what we expected.

8
Q

Functional testing

A

Used to verify the software meets the requirement specifications when it comes to functionality: does it do what it is expected to do?

9
Q

Types of functional testing

A

Unit testing
Integration testing
Regression testing
Acceptance testing

10
Q

Non-functional testing

A

Used to verify software performs at the required levels (performance, usability, reliability, and robustness)

11
Q

Types of non-functional testing

A

Performance testing
Scalability testing
Usability testing
Acceptance testing

12
Q

Pros and cons of manual testing

A

Pros: intuitive, no upfront cost
Cons: time-consuming, human mistakes could miss software failures, not easily repeatable.

13
Q

Pros and Cons of Automated Testing

A

Pros: easy to repeat, fewer mistakes, very efficient
Cons: high upfront cost/time, not suited for everything (like UI testing), test maintenance

14
Q

5 categories of software development process

A

Requirements, design, implementation, verification, maintenance

15
Q

SDP: requirements

A

determine what the software must do

16
Q

SDP: design

A

planning how to bring requirements to life

17
Q

SDP: implementation

A

coding

18
Q

SDP: verification

A

ensure the implementation meets the requirements

19
Q

SDP: maintenance

A

bug fixes, adding features, addressing non-functional requirements

20
Q

What is software testing?

A

trying to generate a failure state in the software, with the ultimate goal of being unable to do so.

21
Q

Testing Framework

A

Used for automated testing; provides the following functionality: 1) test fixture, 2) test case, 3) test suite, 4) test runner

22
Q

TF: test fixture

A

a way to set up the elements required for a test and then roll back the setup when the test is complete

23
Q

TF: test case

A

a way to test a particular unit of the software with a specific input, checking for an expected response

24
Q

TF: Test suite

A

a collection of test cases

25
Q

TF: Test runner

A

a way to execute the tests and report the results

26
Q

Testing Framework for this class

A

unittest in Python
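
A minimal sketch of how unittest provides the four pieces above (the add() function here is a made-up unit under test, used only for illustration): the TestCase subclass with setUp() acts as the fixture, each test_* method is a test case, unittest collects the methods into a suite, and unittest.main() is the runner.

import unittest

def add(a, b):                         # hypothetical unit under test
    return a + b

class TestAdd(unittest.TestCase):
    def setUp(self):                   # test fixture: runs before each test
        self.nums = (2, 3)

    def test_add_positive(self):       # a single test case
        self.assertEqual(add(*self.nums), 5)

if __name__ == "__main__":
    unittest.main()                    # test runner: runs the suite and reports results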

27
Q

When is black box testing useful?

A

When you want to validate the functionality of the software based on specifications without needing to examine the internal code structure

28
Q

Advantages of black box testing

A

Focuses on the input domain of the software; no need for the actual code (non-devs can write tests, and tests can be written before the code - TDD); can catch logic errors that other types of testing can't; can be used at all levels of testing (unit, integration, etc.)

29
Q

Disadvantages of black box testing

A

It isn't possible to test every input, so tests may miss logic branches/program paths; there is no way to know why a failure occurs, only that it indicates a fault; poorly written specifications can lead to inaccurate tests

30
Q

Why use partition testing?

A

identifies sub-domains that allow for more intelligent testing with fewer test cases while still covering the entire input domain

31
Q

Six steps to partition (equivalence testing)

A

1) identify independently testable features, 2) identify categories, 3) partition categories into choices, 4) identify constraints among choices, 5) produce/evaluate test case specifications, 6) generate test cases from test case specifications

32
Q

Black box testing techniques

A

Random testing, boundary testing, partition testing

33
Q

Random testing

A

a form of black box testing where varying inputs are generated randomly across the input domain

34
Q

Boundary testing

A

test values around the boundaries of the input domain
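
A small illustration, assuming a hypothetical is_valid_age() whose valid range is 0-120: the tests sit on and just outside each boundary.

import unittest

def is_valid_age(age):                   # hypothetical function: valid ages are 0..120
    return 0 <= age <= 120

class TestAgeBoundaries(unittest.TestCase):
    def test_boundaries(self):
        self.assertFalse(is_valid_age(-1))   # just below the lower boundary
        self.assertTrue(is_valid_age(0))     # lower boundary
        self.assertTrue(is_valid_age(120))   # upper boundary
        self.assertFalse(is_valid_age(121))  # just above the upper boundary

if __name__ == "__main__":
    unittest.main()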

35
Q

Partition testing

A

Identify subdomains that can allow for more intelligent testing with fewer cases to cover entire input domain (inputs within each subdomain share an equivalence)
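
A sketch of the idea with a made-up sign() function: each equivalence class (negative, zero, positive) gets one representative input, which covers the whole domain with only three tests.

import unittest

def sign(n):                   # hypothetical unit under test
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

class TestSignPartitions(unittest.TestCase):
    # one representative per equivalence class covers the whole input domain
    def test_negative_partition(self):
        self.assertEqual(sign(-7), "negative")

    def test_zero_partition(self):
        self.assertEqual(sign(0), "zero")

    def test_positive_partition(self):
        self.assertEqual(sign(42), "positive")

if __name__ == "__main__":
    unittest.main()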

36
Q

Advantages of random testing

A

quick to write, can cover large portions of the input domain with very little code, tests have no bias, can potentially generate an input nobody considered

37
Q

Disadvantages of random testing

A

many inputs can fall under the same “test case,” making them redundant; may not test tricky parts of the input domain (e.g., edge cases); can easily miss errors (without any targeting); some runs may pass while others fail due to the random nature of the testing

38
Q

Advantages of white box testing

A

based on the code, so test quality can be measured objectively; can be used to compare test suites by measuring their quality; can directly test the coded behavior

39
Q

Disadvantages of white box testing

A

Can’t discover errors from missing paths (unimplemented specifications), large software systems make it difficult to test every part of code, tests must be written by developers

40
Q

Subsumption hierarchy

A

100% branch and condition coverage guarantees 100% branch coverage and 100% condition coverage; 100% branch coverage guarantees 100% statement coverage

41
Q

When should white box testing be used?

A

Measuring code quality (code coverage - how much is tested?), identifying uncovered code (ensure all branches and conditions are tested), testing internal logic, comparing test suites

42
Q

How does white box testing complement black box testing?

A

White box testing doesn't care about the specifications; it just tries to get as much of the program as possible to execute, so it exercises code paths that specification-based (black box) tests may miss

43
Q

Types of coverage

A

Code coverage, statement coverage, branch (decision) coverage, condition (predicate) coverage, branch and condition (decision/condition) coverage, path coverage

44
Q

Code coverage

A

the extent to which a given test suite executes the source code of the software

45
Q

Statement coverage

A

way of measuring the quality of a testing suite based on the number of statements the tests execute in the program

46
Q

Branch (decision) coverage

A

way of measuring the quality of a testing suite based on the number of branches that are covered; ensure each conditional is tested as evaluating as T and F

47
Q

Condition (predicate) coverage

A

way of measuring the quality of a testing suite based on the number of conditions (predicates) that are covered; concerned with each condition within the conditionals; requires we evaluate each condition/predicate of each conditional as T or F; truth tables help!

48
Q

Branch and Condition (Decision/Condition) coverage

A

way of measuring the quality of a testing suite based on the number of branches and conditions (predicates) that are covered; attempts to have 100% branch and 100% condition coverage; can result in a large number of tests

49
Q

Path coverage

A

tests strive to evaluate every path through the code (path - unique series of branches)
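
To contrast the coverage criteria above, here is a made-up function with one decision containing two conditions, annotated with inputs that would satisfy each criterion (an illustration, not a prescribed method):

def discount(member, total):
    rate = 0.0
    if member and total > 100:    # one decision containing two conditions
        rate = 0.1
    return total * (1 - rate)

# Statement coverage: (True, 200) alone executes every line.
# Branch coverage: (True, 200) and (False, 200) make the decision evaluate True and False.
# Condition coverage: (True, 50) and (False, 200) give each condition both values,
#   yet the decision is never True - conditions can be covered without covering branches.
# Branch and condition coverage: combine the cases above.
# Path coverage: with a single decision there are only two paths, so it matches branch
#   coverage here; each additional decision multiplies the number of paths to cover.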

50
Q

Oracle

A

part of the random testing system that monitors for error states in the software and saves the random inputs that generated those states for later inspection; can be as simple as displaying the input on the screen, or can be a formal piece of software that generates formal bug reports

51
Q

How can we make our random testers “smarter”?

A

Unguided random testing, guided random testing, heuristic

52
Q

Unguided random testing

A

inputs are generated relatively evenly through the input domain

53
Q

Guided random testing

A

inputs are generated following a heuristic that informs “smarter” input choices

54
Q

Heuristic

A

a cognitive tool to help make decisions - used to make random input generation smarter; guides tests to cluster around boundary values and values suggested by error-guessing knowledge

55
Q

Advantages of random testing for large input domains

A

automates the test-writing process, quick to write, can cover large portions of the input domain with very little code, tests have no bias, can generate an input no one considered

56
Q

Disadvantages of random testing for large input domains

A

Many random inputs could fall under the same “test case” (redundant); might not test tricky parts of the code (edge cases); could easily miss glaring errors; some runs might pass while others fail; the logic in the random tests itself needs to be tested

57
Q

Input domain

A

the pool of all possible inputs that a unit/program can take

58
Q

When is it good to use random testing?

A

Broad input domain coverage, finding unexpected bugs, supplementary testing, overnight testing, system testing

59
Q

Parts of random testing system

A

Random case generator, software under test, oracle
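
A toy version of those three parts (safe_divide() is a hypothetical unit under test with a deliberate fault):

import random

def safe_divide(a, b):                   # software under test; fault: no guard for b == 0
    return a / b

def oracle(a, b):
    """Monitor for error states and report the inputs that triggered them."""
    try:
        safe_divide(a, b)
        return None
    except Exception as exc:
        return (a, b, exc)

failures = []
for _ in range(10_000):                  # random case generator: unguided inputs
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    result = oracle(a, b)
    if result is not None:
        failures.append(result)          # save failing inputs for later inspection

print(failures[:5])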

60
Q

What software design approach is most directly associated with TDD?

A

Agile process

61
Q

Agile process

A

employs an iterative approach with a focus on getting a minimum viable product out the door as soon as possible

62
Q

What does it mean to write the “bare minimum” of code?

A

writing just enough code to make a failing test pass; in the context of TDD, this means implementing the simplest possible solution that fulfills the requirements of the test, without adding any additional functionality or complexity

63
Q

How do you know you’re done with TDD?

A

We are done when we have added enough tests to cover all the requirements specified and all tests pass without triggering any new failures; ensures that the code meets the specifications and requirements without any unnecessary additions.

64
Q

Test Driven Development

A

the approach where one only writes new code if there exists at least one failing unit test; not primarily a verification process, it’s a way of approaching implementation

65
Q

Steps of TDD

A

1) Write a test. 2) Run all currently written tests - if they all pass, go to step 1; if any fail, go to step 3. 3) Write the bare minimum of code to make the test pass. 4) Run all the currently written tests - if they all pass, go to step 1; if any fail, go to step 3. 5) Occasionally evaluate whether the code can be refactored to reduce duplication or eliminate no-longer-used parts of the code. 6) Eventually stop development after adding “enough” tests without triggering a new failure.
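
One red/green cycle of those steps, as a rough sketch (multiply() is a hypothetical feature, not an example from the course):

import unittest

# Step 1: write a test for behavior that does not exist yet (it fails at first).
class TestMultiply(unittest.TestCase):
    def test_multiply(self):
        self.assertEqual(multiply(3, 4), 12)

# Step 3: write the bare minimum of code to make the failing test pass.
def multiply(a, b):
    return a * b

# Steps 4-6: rerun all tests; when they pass, start the next cycle,
# refactoring occasionally and stopping once the requirements are covered.
if __name__ == "__main__":
    unittest.main()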

66
Q

Why use TDD?

A

Forces you to think about requirements during implementation; sets a good indicator of “done” by writing tests to the specifications and then writing just enough code to pass the tests; reduces duplicate code - if a new feature or functionality is needed, code should only be written if the new tests fail.

67
Q

What software development approach is most commonly associated with continuous integration?

A

Agile, specifically extreme programming

68
Q

Extreme programming

A

A form of agile development that stresses frequent releases that can be shown to customers to gather feedback informing the next phase of development; changes to the codebase had to be checked in multiple times per day

69
Q

What role does code review play in continuous integration?

A

Ensures changes to code base are sound and don’t introduce new issues; when a dev makes changes, they create a pull request which triggers an automated build and test suite; if tests pass, the changes are then reviewed by other devs to ensure that they meet the project’s standards and do not introduce new problems; helps to catch errors, enforce coding standards, and share knowledge among team members.

70
Q

Continuous integration

A

A set of guiding principles on how to manage a team working on a shared codebase.

71
Q

Continuous integration Principles

A

1) use a VCS to maintain central codebase, 2) building the software should be automated and easily triggered, 3) Once built, software should be able to test itself against a provided test suite. 4) Everyone needs to commit work to the shared codebase at least once a day, 5) Every commit to main should be built and tested, 6) Mandatory code review when requesting changes be merged into the shared codebase

72
Q

Why is continuous integration helpful in a team environment?

A

Early detection of errors, Maintaining code quality (mandatory code reviews and automated tests), Facilitating collaboration (regular communication, code reviews), Streamlining deployment (automated build process)

73
Q

Fagan Inspection

A

Formal code review; very structured code review process with 6 steps and defined roles; time-consuming and resource-intensive, most companies have moved to less formal code review processes

74
Q

Steps of Fagan Inspection

A

Planning (gather participants and resources needed for the inspection process), Overview (meeting to discuss important aspects of project), Preparation (review material before meeting), Inspection meeting (code is inspected), Rework (defects fixed), Follow-up (follow up with dev to ensure reworks done correctly)

75
Q

Fagan Inspection Participant Roles

A

Moderator (leader, schedules meetings, contacts participants, follow-ups), Author (code dev), Reader (reads code during inspection), Recorder (makes notes during inspection), Inspector

76
Q

Lightweight code review methods

A

Pair-programming, Over-the-shoulder, Change-based, Meetings

77
Q

Code Review Pair-programming

A

All coding is done in teams of two; the members take turns writing code while the other sits next to them and provides feedback; a very collaborative process; the other dev acts as a real-time code reviewer, catching errors and giving advice (con - the pair are usually at a similar skill level, so not much additional knowledge is brought in)

78
Q

Code Review Over-the-shoulder

A

a dev will complete some task and then ask the reviewer to come over to their desk to provide feedback as the developer describes the code

79
Q

Code Review Change Based

A

developers submit changes to the codebase that are then inspected later by the reviewer; once the review request is made, the developer is able to continue to work on another part of the task while they wait for feedback. When review is complete, the initial dev will be notified of any feedback and required changes (con - might be slow to get feedback)

80
Q

Code Review Meetings

A

round table discussion of code being reviewed, everyone acting as inspector, but prep beforehand is limited

81
Q

setUp()

A

method in unittest framework; creates an artificial testing environment before each test method in a test case class

82
Q

tearDown()

A

method in unittest framework; cleans up/destroys testing environment after each test method in a test case class

83
Q

setUpClass()

A

class method in unittest framework; sets up the testing environment once before all tests run

84
Q

tearDownClass()

A

class method in unittest framework; cleans up/destroys testing environment once after all tests run
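
A sketch showing where each of the four methods runs (the class and test are invented for illustration):

import unittest

class TestExample(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once, before any test in the class
        cls.shared = []

    @classmethod
    def tearDownClass(cls):
        # runs once, after every test in the class has finished
        cls.shared.clear()

    def setUp(self):
        # runs before each individual test method
        self.value = 41

    def tearDown(self):
        # runs after each individual test method
        del self.value

    def test_increment(self):
        self.assertEqual(self.value + 1, 42)

if __name__ == "__main__":
    unittest.main()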

85
Q

Basic idea of mocking?

A

Hijack calls to dependencies and simulate responses and behaviors

86
Q

Stub:

A

contains predefined data that is returned when called, but does not imitate behavior (like a parrot mimicking speech). The focus is on verification of data handling: a canned response is defined to mimic a dependency. A stub is unaware of what it is passed; it usually mimics function calls, not full objects

87
Q

Mock:

A

simulates the behavior of a service, and its actions can be verified (“like Siri mimics human understanding”). Mocks are more complex simulations that allow for testing object behavior, often simulating entire objects or interfaces; they are self-aware and can tell you how many times they have been called and with what arguments

88
Q

What does unittest’s patch do?

A

used for replacing objects in a test with mock objects. Allows you to mock dependencies or external resources used by the code under test; can be used as a decorator or a context manager to temporarily replace the specified object with a mock during the execution of the test
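
A hedged sketch of patch used as a decorator (fetch_temperature() and report() are hypothetical names, not part of any real library); the mock argument also demonstrates the self-aware call tracking described above.

import unittest
from unittest.mock import patch

def fetch_temperature():                 # hypothetical slow/external dependency
    raise RuntimeError("would hit an external service")

def report():                            # hypothetical code under test
    return f"Current temperature: {fetch_temperature()}"

class TestReport(unittest.TestCase):
    @patch(f"{__name__}.fetch_temperature", return_value=72)
    def test_report_uses_mocked_dependency(self, mock_fetch):
        self.assertEqual(report(), "Current temperature: 72")
        mock_fetch.assert_called_once()  # the mock records how it was called

if __name__ == "__main__":
    unittest.main()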

89
Q

When to use mocks

A

Best suited when you want to simulate the behavior of dependencies or external systems without actually invoking them during testing. When you need to isolate the code under test from its dependencies to focus on testing specific behaviors or interactions. Useful when testing complex interactions or when the dependencies are slow, unreliable, or difficult to set up in a testing environment

90
Q

Modified Condition/Decision Coverage (MC/DC):

A

the purpose is to test only the important condition combinations to limit the number of test cases required; include only test cases showing that each condition independently affects the outcome of the decision (done by writing out the truth table for all conditions, then marking pairs of rows that are identical except for one condition and have different outcomes - those are the cases to test)
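
A worked example for a hypothetical decision a and (b or c): MC/DC needs only 4 of the 8 truth-table rows, because each chosen pair differs in exactly one condition and flips the outcome.

def grant_access(a, b, c):               # hypothetical decision: a and (b or c)
    return "allow" if a and (b or c) else "deny"

# Independence pairs (only one condition changes, and the outcome changes with it):
#   a: (T, T, F) -> allow   vs  (F, T, F) -> deny
#   b: (T, T, F) -> allow   vs  (T, F, F) -> deny
#   c: (T, F, T) -> allow   vs  (T, F, F) -> deny
mcdc_cases = [(True, True, False), (False, True, False),
              (True, False, False), (True, False, True)]
for case in mcdc_cases:
    print(case, grant_access(*case))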

91
Q

When to use MC/DC?

A

When 100% branch coverage is not good enough, e.g., in safety-critical industries with high “costs” of failure and complex systems where full branch-and-condition coverage is impractical

92
Q

Which type of testing requires the input of the client/user?

A

Acceptance testing

93
Q

Acceptance testing:

A

requires the input of the client/user and any other stakeholders

94
Q

Mutation-Based Fuzzer:

A

starts by selecting a valid input, then mutates it in some way and throws it at the software under test. Mutation does not have to be completely random; the tester can configure the fuzzer to modify valid inputs only in specific ways or to make only a certain number of mutations. Very helpful when the software performs input validation before accepting input.
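
A rough sketch of the idea (parse_record() and the "name,age" input format are made up for illustration):

import random

def mutate(seed: bytes, n_mutations: int = 3) -> bytes:
    """Start from a valid input and randomly change a few bytes."""
    data = bytearray(seed)
    for _ in range(n_mutations):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def parse_record(data: bytes):           # hypothetical software under test
    name, age = data.decode("ascii").split(",")
    return name, int(age)

seed = b"alice,30"                       # known-valid input to start from
for _ in range(1000):
    fuzz_input = mutate(seed)
    try:
        parse_record(fuzz_input)
    except (UnicodeDecodeError, ValueError):
        pass                             # expected rejections of malformed input
    except Exception as exc:             # oracle: anything else hints at a fault
        print("crash input:", fuzz_input, exc)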

95
Q

Generation-Based Fuzzer:

A

uses some “knowledge” of input domain to create random inputs (Similar to how “rules” generate random inputs)

96
Q

Load testing

A

software is tested for performance under expected operating conditions (ex. For a word processor - how well can it handle being switched back and forth between applications?)

97
Q

Stress testing

A

software is tested for performance under extreme operating conditions (ex. Black Friday)

98
Q

Load vs Stress

A

Load testing is done under normal operating conditions during the software development process; stress testing uses extreme conditions and is reserved for a more complete piece of software

99
Q

How MC/DC relates to Statement Coverage

A

Statement coverage ensures that each line of code is executed at least once. MC/DC subsumes statement coverage because to achieve MC/DC, every possible decision in the code must be evaluated for both T and F outcomes, inherently executing each statement.

100
Q

How MC/DC relates to Branch Coverage

A

Branch coverage ensures that every possible branch (T/F) of each decision point is executed. MC/DC also subsumes branch coverage. In the process of ensuring that each condition within a decision independently affects the outcome, all branches will necessarily be evaluated.

101
Q

How MC/DC relates to Condition Coverage

A

Condition coverage ensures that each condition in a decision is evaluated to both T and F. MC/DC goes beyond conditional coverage by not only ensuring each condition is tested for T and F but also ensuring that each condition’s effect on the decision’s outcome is independently tested.

102
Q

Integration Testing

A

next step up from unit testing, takes a broader look at how units and modules interact. (ex. Testing clicking send in an email client). Uses both black box and white box testing. Must be done by devs familiar with the software

103
Q

System testing:

A

next step up from integration testing, attempts to verify the entire program is working together. (test all pieces of app together). Form of black box testing. Can be conducted by non-devs. Teams will use quality assurance testers to try and “break” the software.

104
Q

Acceptance testing:

A

devs of the software present a version to the customer/client/end user for the stakeholders to “sign off” on if the software meets their expectations.

105
Q

Fuzzing vs random testing

A

Fuzzing is typically used for system testing; random testing is typically used for unit testing

106
Q

Types of performance testing

A

Load and Stress

107
Q

When is load testing conducted in the SDP?

A

During the software development process (it tests performance under expected operating conditions)

108
Q

When is stress testing conducted in the SDP?

A

Usually at the end

109
Q

Parts of a fuzzer

A

test case generator, software under test, oracle to detect crashes/exceptions, ways to save the test case and machine state at the time of the crash

110
Q

What are we fuzzing for?

A

To find nearly any type of software bug. Try to trigger crashes to test stability. Find memory leaks. Security exploits (penetration testing)

111
Q

Common approaches for stress testing

A

Spike testing and Soak/Endurance testing

112
Q

Spike testing:

A

run a spike of data/users/etc. through software

113
Q

Soak/Endurance testing:

A

slowly add more data/users/etc. to software until it crashes to find its breaking point.