Foundation Level Flashcards

1
Q

Testing vs Debugging

A

Executing tests can show failures that are caused by defects in the software.

Debugging is the development activity that finds, analyzes, and fixes such defects.

2
Q

Test Activities and Tasks

A
  1. Test planning
  2. Test monitoring and control
  3. Test analysis
  4. Test design
  5. Test implementation
  6. Test execution
  7. Test completion
3
Q

Confirmation Testing

A

After a defect is fixed, the software may be tested with all test cases that failed due to the defect, which should be re-executed on the new software version. The software may also be tested with new tests to cover changes needed to fix the defect. At the very least, the steps to reproduce the failure(s) caused by the defect must be re-executed on the new software version. The purpose of a confirmation test is to confirm whether the original defect has been successfully fixed.

4
Q

Regression testing

A

Looking for unintended changes in behavior resulting from software or environment changes

It is possible that a change made in one part of the code, whether a fix or another type of change, may accidentally affect the behavior of other parts of the code, whether within the same component, in other components of the same system, or even in other systems. Changes may include changes to the environment, such as a new version of an operating system or database management system. Such unintended side-effects are called regressions. Regression testing involves running tests to detect such unintended side-effects.

5
Q

Test type

A

a group of test activities aimed at testing specific characteristics of a software system, or a part of a system, based on specific test objectives

6
Q

Functional quality characteristics

A

completeness, correctness, and appropriateness

7
Q

Non-functional quality characteristics

A
  1. reliability
  2. performance efficiency
  3. security
  4. compatibility
  5. usability
8
Q

Functional coverage

A

the extent to which some functionality has been exercised by tests

9
Q

Non-functional testing

A

evaluates characteristics of systems and software such as usability, performance efficiency or security

11
Q

Non-functional testing: details

A
  1. can and often should be performed at all test levels, and done as early as possible
  2. boundary value analysis can be used to define the stress conditions for performance tests.
12
Q

Non-functional coverage

A

Non-functional coverage is the extent to which some type of non-functional element has been exercised by tests.

It is expressed as a percentage of the type(s) of element being covered.
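For example (an illustrative calculation, not from the syllabus): if the tests have exercised three of the four device types a mobile app must support, compatibility coverage is 3 / 4 = 75%, leaving a coverage gap of one device type.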

13
Q

White-box Testing

A

derives tests based on the system’s internal structure or implementation (code, architecture, work flows, and/or data flows within the system)

14
Q

Structural coverage

A

Structural coverage is the extent to which some type of structural element has been exercised by tests, and is expressed as a percentage of the type of element being covered.

15
Q

Change-related Testing

A
  1. Confirmation testing
  2. Regression testing

16
Q

Test Levels

A

Def: groups of test activities that are organized and managed together

Misc: for every test level, a suitable test environment is required

17
Q

Test Levels Attributes

A
  1. Specific objectives
  2. Test basis, referenced to derive test cases
  3. Test object (i.e., what is being tested)
  4. Typical defects and failures
  5. Specific approaches and responsibilities
18
Q

Test Levels List

A
  1. Component testing
  2. Integration testing
  3. System testing
  4. Acceptance testing
19
Q

Black Box Techniques (List)

A
  1. equivalence partitioning
  2. boundary value analysis
  3. decision table testing
  4. state transition testing

(each used to derive test cases from given requirements; the first two are sketched below)
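A minimal sketch of equivalence partitioning and boundary value analysis, assuming a hypothetical Python/pytest setup (the is_eligible function and the 18-65 age range are invented for illustration):

```python
import pytest

def is_eligible(age: int) -> bool:
    """Hypothetical system under test: valid ages form the partition 18-65."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
# Boundary value analysis: values at and adjacent to each boundary.
@pytest.mark.parametrize("age, expected", [
    (10, False),  # invalid partition: below the valid range
    (40, True),   # valid partition: representative value
    (80, False),  # invalid partition: above the valid range
    (17, False),  # boundary: just below the lower bound
    (18, True),   # boundary: lower bound
    (65, True),   # boundary: upper bound
    (66, False),  # boundary: just above the upper bound
])
def test_age_eligibility(age, expected):
    assert is_eligible(age) == expected
```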
20
Q

White-box Test Techniques (List)

A
  1. Statement coverage
  2. Decision coverage

21
Q

Experience-based Test Techniques (List)

A
  1. Error guessing
  2. Exploratory testing
  3. Checklist-based testing
22
Q

The purpose of a test technique

A

help in identifying test conditions, test cases, and test data.

23
Q

The choice of which test techniques to use depends on

A
  1. Component or system complexity
  2. Regulatory standards
  3. Customer or contractual requirements
  4. Risk levels and types
  5. Available documentation
  6. Tester knowledge and skills
  7. Available tools
  8. Time and budget
  9. Software development lifecycle model
  10. The types of defects expected in the component or system
24
Q

Seven Testing Principles

A
  1. Testing shows the presence of defects, not their absence
  2. Exhaustive testing is impossible
  3. Early testing saves time and money
  4. Defects cluster together
  5. Beware of the pesticide paradox
  6. Testing is context dependent
  7. Absence-of-errors is a fallacy
25
Q

Root cause of a defect

A

The earliest actions or conditions that contributed to creating the defects

26
Q

Software testing

A

is a way to assess the quality of the software and to reduce the risk of software failure in operation.

27
Q

Dynamic testing

A

testing that involves the execution of the component or system being tested

28
Q

Static testing

A

Does not involve the execution of the component or system being tested

29
Q

Misperceptions about software testing

A
  1. it only consists of running tests, i.e., executing the software and checking the results
  2. it focuses entirely on verification of requirements, user stories, or other specifications => it also includes validation: i.e. checking whether the system will meet user and other stakeholder needs in its operational environment(s)
30
Q

Quality management

A
  1. includes all activities that direct and control an organization with regard to quality.
  2. includes both quality assurance and quality control
31
Q

Quality assurance

A
  1. typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved
  2. quality assurance supports proper testing
32
Q

Quality control

A

involves various activities, including test activities, that support the achievement of appropriate levels of quality.

33
Q

Test planning

A

involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks, and formulating a test schedule for meeting a deadline).

34
Q

Test monitoring and control

A
  1. on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan
  2. supported by the evaluation of exit criteria
35
Q

Test analysis

A

The test basis is analyzed to identify testable features and define associated test conditions. Test analysis determines "what to test" in terms of measurable coverage criteria.

36
Q

Activities in test analysis

A
  1. Analyzing the test basis appropriate to the test level being considered
  2. Evaluating the test basis and test items to identify defects of various types
37
Q

Test design

A

The test conditions are elaborated into high-level test cases, sets of high-level test cases, and other testware. Test design determines "how to test".

38
Q

Activities of test design

A
  1. Designing and prioritizing test cases and sets of test cases
  2. Identifying necessary test data to support test conditions and test cases
  3. Designing the test environment and identifying any required infrastructure and tools
  4. Capturing bi-directional traceability between the test basis, test conditions, and test cases
39
Q

Test implementation

A

The testware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures.

40
Q

Test completion

A

collect data from completed test activities to consolidate experience, testware, and any other relevant information.

41
Q

Activities of test implementation

A
  1. Developing and prioritizing test procedures, and, potentially, creating automated test scripts
  2. Creating test suites from the test procedures and (if any) automated test scripts (see the sketch after this list)
  3. Arranging the test suites within a test execution schedule in a way that results in efficient test execution
  4. Building the test environment (including, potentially, test harnesses, service virtualization, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly
  5. Preparing test data and ensuring it is properly loaded in the test environment
  6. Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites
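As a concrete sketch of items 1-3, assuming Python's unittest (an illustrative choice; the test names and placeholder assertions are invented), test cases can be sequenced into a procedure and grouped into a suite for scheduled execution:

```python
import unittest

class LoginTests(unittest.TestCase):
    # A test procedure: cases listed in their intended execution order.
    def test_valid_credentials_accepted(self):
        self.assertTrue(True)  # placeholder for the real check

    def test_invalid_password_rejected(self):
        self.assertTrue(True)  # placeholder for the real check

def build_suite() -> unittest.TestSuite:
    # A test suite: procedures arranged for efficient, ordered execution.
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_valid_credentials_accepted"))
    suite.addTest(LoginTests("test_invalid_password_rejected"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(build_suite())
```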
42
Q

Activities of test execution

A
  1. Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware
  2. Comparing actual results with expected results
  3. Analyzing anomalies to establish their likely causes
  4. Reporting defects based on the failures observed
  5. Logging the outcome of test execution
  6. Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing
  7. Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results.
43
Q

Activities of test completion

A
  1. Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
  2. Creating a test summary report to be communicated to stakeholders
  3. Finalizing and archiving the test environment, the test data, the test infrastructure, and other testware for later reuse
  4. Handing over the testware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use
  5. Analyzing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects
  6. Using the information gathered to improve test process maturity
44
Q

Test plan

A

The test plan includes information about the test basis, to which the other test work products will be related via traceability information, as well as exit criteria (or definition of done), which will be used during test monitoring and control.

45
Q

Traceability between the Test Basis and Test Work Products

A

To support effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between each element of the test basis and the various test work products associated with that element.

46
Q

Good traceability supports:

A
  1. Analyzing the impact of changes
  2. Making testing auditable
  3. Meeting IT governance criteria
  4. Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis
  5. Relating the technical aspects of testing to stakeholders in terms that they can understand
  6. Providing information to assess product quality, process capability, and project progress against business goals
47
Q

Test completion work products

A
  1. test summary reports
  2. action items for improvement of subsequent projects or iterations
  3. change requests or product backlog items
  4. finalized testware.
48
Q

Test execution work products

A
  1. Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)
  2. Defect reports
  3. Documentation about which test item(s), test object(s), test tools, and testware were involved in the testing
49
Q

Test monitoring and control work products

A
  1. test progress reports
  2. test summary reports

should also address project management concerns, such as task completion, resource allocation and usage, and effort.

50
Q

Test analysis work products

A
  1. defined and prioritized test conditions (bi-directionally traceable to the test basis)
  2. creation of test charters
  3. may also result in the discovery and reporting of defects in the test basis
51
Q

Test design work products

A
  1. test cases and sets of test cases to exercise the test conditions defined in test analysis.
  2. the design and/or identification of the necessary test data
  3. the design of the test environment
  4. the identification of infrastructure and tools
52
Q

high-level test case

A
  1. without concrete values for input data and expected results
  2. reusable across multiple test cycles with different concrete data, while still adequately documenting the scope of the test case
  3. ideally bi-directionally traceable to the test conditions it covers
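For example (an illustrative case, not from the syllabus): a high-level test case might say "log in with a valid username and password", while a corresponding concrete (low-level) test case supplies actual values, such as a specific test account and its password.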
53
Q

Test implementation work products

A
  1. Test procedures and the sequencing of those test procedures
  2. Test suites
  3. A test execution schedule
54
Q

Maintenance Testing

A
  1. Maintenance is also needed to preserve or improve non-functional quality characteristics of the component or system
  2. When any changes are made as part of maintenance, maintenance testing should be performed
    a) to evaluate the success with which the changes were made
    b) to check for possible side-effects
55
Q

The scope of maintenance testing depends on:

A
  1. The degree of risk of the change
  2. The size of the existing system
  3. The size of the change
56
Q

Triggers for Maintenance

A
  1. Modification, such as planned enhancements (e.g., release-based), corrective and emergency changes, changes of the operational environment (such as planned operating system or database upgrades), upgrades of COTS software, and patches for defects and vulnerabilities
  2. Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained
57
Q

Impact Analysis for Maintenance

A

evaluates the changes that were made for a maintenance release to identify the intended consequences as well as expected and possible side effects of a change, and to identify the areas in the system that will be affected by the change. Impact analysis can also help to identify the impact of a change on existing tests.

58
Q

Impact analysis can be difficult if:

A
  1. Specifications (e.g., business requirements, user stories, architecture) are out of date or missing
  2. Test cases are not documented or are out of date
  3. Bi-directional traceability between tests and the test basis has not been maintained
  4. Tool support is weak or non-existent
  5. The people involved do not have domain and/or system knowledge
  6. Insufficient attention has been paid to the software’s maintainability during development
59
Q

Success Factors for Reviews

A
  1. Each review has clear objectives, defined during review planning, and used as measurable exit criteria
  2. Review types are applied which are suitable to achieve the objectives and are appropriate to the type and level of software work products and participants
  3. Any review techniques used are suitable for effective defect identification in the work product to be reviewed
  4. Any checklists used address the main risks and are up to date
  5. Large documents are written and reviewed in small chunks, so that quality control is exercised by providing authors early and frequent feedback on defects
  6. Participants have adequate time to prepare
  7. Reviews are scheduled with adequate notice
  8. Management supports the review process
  9. Reviews are integrated in the company’s quality and/or test policies.
60
Q

Objectives of component testing

A
  1. Reducing risk
  2. Verifying whether the functional and non-functional behaviors of the component are as designed and specified
  3. Building confidence in the component’s quality
  4. Finding defects in the component
  5. Preventing defects from escaping to higher test levels
61
Q

Test basis component testing

A
  1. Detailed design
  2. Code
  3. Data model
  4. Component specifications
62
Q

Test objects of component testing

A
  1. Components, units or modules
  2. Code and data structures
  3. Classes
  4. Database modules
63
Q

Typical defects and failures of component testing

A
  1. Incorrect functionality (e.g., not as described in design specifications)
  2. Data flow problems
  3. Incorrect code and logic
64
Q

White-box Test Techniques

A
  1. Statement Testing and Coverage
  2. Decision Testing and Coverage

65
Q

The Value of Statement and Decision Testing

A
  1. When 100% statement coverage is achieved, all executable statements in the code have been tested at least once, but this does not ensure that all decision logic has been tested
  2. Statement testing may provide less coverage than decision testing; achieving 100% decision coverage guarantees 100% statement coverage, but not vice versa (illustrated below)
  3. When 100% decision coverage is achieved, all decision outcomes in the code have been executed
  4. Statement coverage helps to find defects in code that was not exercised by other tests
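A minimal sketch of the point above, using a hypothetical Python function and pytest-style tests (the discount logic is invented for illustration):

```python
def apply_discount(total: float, is_member: bool) -> float:
    # One decision with two outcomes: is_member True or False.
    if is_member:
        total *= 0.9
    return total

def test_member_discount():
    # Executes every statement: 100% statement coverage on its own,
    # but covers only 1 of 2 decision outcomes (50% decision coverage).
    assert apply_discount(100.0, True) == 90.0

def test_non_member_full_price():
    # Adds the False outcome: decision coverage reaches 100%,
    # which also guarantees 100% statement coverage.
    assert apply_discount(100.0, False) == 100.0
```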
66
Q

Statement Testing and Coverage

A
  1. exercises the potential executable statements in the code.
  2. Coverage is measured as the number of statements executed by the tests divided by the total number of executable statements in the test object
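For example (an illustrative calculation): if the tests execute 45 of the 60 executable statements in the test object, statement coverage is 45 / 60 = 75%.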
67
Q

Decision Testing and Coverage

A
  1. exercises the decisions in the code and tests the code that is executed based on the decision outcomes
  2. Coverage is measured as the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the test object
68
Q

Experience-based Test Techniques Intro

A
  1. the test cases are derived from the tester’s skill and intuition, and their experience with similar applications and technologies
  2. these techniques may achieve widely varying degrees of coverage and effectiveness.
  3. coverage may not be measurable with these techniques.
69
Q

Experience-based Test Techniques

A
  1. Error Guessing
  2. Exploratory Testing
  3. Checklist-based Testing
70
Q

Error Guessing

A

a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester's knowledge, including:
  1. How the application has worked in the past
  2. What kinds of errors tend to be made
  3. Failures that have occurred in other applications
71
Q

Work Product Examined by Static Testing

A
  1. Specifications, including business requirements, functional requirements, and security requirements
  2. Epics, user stories, and acceptance criteria
  3. Architecture and design specifications
  4. Testware, including test plans, test cases, test procedures, and automated test scripts
  5. User guides
  6. Web pages
  7. Contracts, project plans, schedules, and budget planning
  8. Configuration setup and infrastructure setup
  9. Models, such as activity diagrams, which may be used for Model-Based testing
72
Q

Static analysis

A
  1. example of tool-driven evaluation of the code or other work products
  2. Static analysis can be applied efficiently to any work product with a formal structure (typically code or models) for which an appropriate static analysis tool exists.
73
Q

Benefits of Static Testing

A
  1. Detecting and correcting defects more efficiently, and prior to dynamic test execution
    => Using static testing techniques to find defects and then fixing those defects promptly is almost always much cheaper for the organization than using dynamic testing to find defects in the test object and then fixing them
  2. Identifying defects which are not easily found by dynamic testing
  3. Preventing defects in design or coding by uncovering inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies in requirements
  4. Increasing development productivity
  5. Reducing development cost and time
  6. Improving communication between team members in the course of participating in reviews
74
Q

Differences between Static and Dynamic Testing

A
  1. Static and dynamic testing complement each other by finding different types of defects.
  2. static testing can be used to improve the consistency and internal quality of work products, while dynamic testing typically focuses on externally visible behaviors.
75
Q

typical defects that are easier and cheaper to find and fix through static testing

A
  1. Requirement defects (e.g., inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies)
  2. Design defects (e.g., inefficient algorithms or database structures, high coupling, low cohesion)
  3. Coding defects (e.g., variables with undefined values, variables that are declared but never used, unreachable code, duplicate code; see the snippet after this list)
  4. Deviations from standards (e.g., lack of adherence to coding standards)
  5. Incorrect interface specifications (e.g., different units of measurement used by the calling system than by the called system)
  6. Security vulnerabilities (e.g., susceptibility to buffer overflows)
  7. Gaps or inaccuracies in test basis traceability or coverage (e.g., missing tests for an acceptance criterion)
  8. Maintainability defects (e.g., improper modularization, poor reusability of components, code that is difficult to analyze and modify without introducing new defects)
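A hypothetical Python snippet (invented for illustration) containing two of the coding defects above, both of which a typical static analysis tool can flag without executing the code:

```python
def order_total(prices):
    discount = 0.1        # defect: variable declared but never used
    total = sum(prices)
    return total
    total *= 1.2          # defect: unreachable code after the return
```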
76
Q

Informal Review

A
  1. do not follow a defined process
  2. do not have a defined documentation output

77
Q

Formal Review

A
  1. team participation
  2. documented results of the review
  3. documented procedures for conducting the review
78
Q

Formality of a review is related to

A
  1. software development lifecycle model
  2. the maturity of the development process
  3. the complexity of the work product to be reviewed
  4. any legal or regulatory requirements
  5. need for an audit trail
79
Q

Activities in the review process

A
  1. Planning
  2. Initiate Review
  3. Individual Review (preparation)
  4. Issue communication and analysis
  5. Fixing and reporting
80
Q

Roles and Responsibilities in a Formal Review

A
  1. Author
  2. Management
  3. Facilitator
  4. Review Leader
  5. Reviewers
  6. Scribe
81
Q

Most Common Review Types

A
  1. Informal review
  2. Walkthrough
  3. Technical review
  4. Inspection
82
Q

Individual Review Techniques

A
  1. Ad hoc
  2. Checklist-based
  3. Scenarios and dry runs
  4. Perspective-based
  5. Role-based
83
Q

Ad hoc Review

A
  1. little or no guidance on how this task should be performed.
  2. needing little preparation
  3. highly dependent on reviewer skills
  4. may lead to many duplicate issues being reported by different reviewers
84
Q

Checklist-based Review

A
  1. A review checklist consists of a set of questions based on potential defects, which may be derived from experience
  2. Checklists should be maintained regularly to cover issue types missed in previous reviews
  3. Yields systematic coverage of typical defect types
  4. Take care not to overlook issues outside the checklist
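For example, a requirements review checklist might include illustrative questions (not from the syllabus) such as "Is each requirement testable?" and "Are error conditions and their handling specified?"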
85
Q

Perspective-based Review

A
  1. reviewers take on different stakeholder viewpoints in individual reviewing
  2. typical stakeholder viewpoints include end user, marketing, designer, tester, or operations
  3. empirical studies have shown perspective-based reading to be the most effective general technique for reviewing requirements and technical work products
  4. a key success factor is including and weighing different stakeholder viewpoints appropriately, based on risks