Chapter 1 Fundamentals Of Testing Flashcards

1
Q

What is test coverage?

A

The degree to which specified coverage items are exercised by a test suite, expressed as a percentage.

Synonyms: coverage.
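A quick worked example of the percentage calculation (a minimal Python sketch; the coverage item counts are invented for illustration):

```python
# Coverage = (exercised coverage items / total coverage items) * 100%
total_items = 8        # e.g. 8 branches identified as coverage items
exercised_items = 6    # branches actually exercised by the test suite

coverage = exercised_items / total_items * 100
print(f"branch coverage: {coverage:.0f}%")  # -> branch coverage: 75%
```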

2
Q

What is debugging?

A

The process of finding, analyzing, and removing the causes of failures in a component or system.

3
Q

What is a defect?

A

An imperfection or deficiency in a work product where it does not meet its requirements or specifications.

Synonyms: bug, fault.

4
Q

What is an error?

A

A human action that produces an incorrect result.

Synonyms: mistake.

5
Q

What is a failure?

A

An event in which a component or system does not perform a required function within specified limits.

6
Q

What is quality?

A

The degree to which a work product satisfies stated and implied needs of its stakeholders.

7
Q

What is quality assurance?

A

Activities focused on providing confidence that quality requirements will be fulfilled.

Abbreviation: QA.

8
Q

What is a root cause?

A

A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.

References: CMMI.

9
Q

What is test analysis?

A

The activity that identifies test conditions by analyzing the test basis.

10
Q

What is a test basis?

A

The body of knowledge used as the basis for test analysis and design.

11
Q

What is a test case?

A

A set of preconditions, inputs, actions (where applicable), expected results, and postconditions, developed based on test conditions.
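As an illustration, the parts of the definition map onto code roughly like this (a minimal pytest sketch; the shopping-cart example is hypothetical, not from the syllabus):

```python
import pytest

@pytest.fixture
def cart():
    cart = []     # precondition: an empty cart exists
    yield cart
    cart.clear()  # postcondition: the cart is left empty again

def test_add_item_increases_cart_size(cart):
    cart.append("book")    # action with the input "book"
    assert len(cart) == 1  # expected result
```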

12
Q

What is test completion?

A

The activity that makes testware available for later use, leaves test environments in a satisfactory condition, and communicates the results of testing to relevant stakeholders.

13
Q

What is a test condition?

A

A testable aspect of a component or system identified as a basis for deriving test cases and test data.

An initial idea for testing.

14
Q

What is test control?

A

The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned.

15
Q

What is test data?

A

Data needed for test execution.

Synonyms: test dataset.

16
Q

What is test design?

A

The activity that derives and specifies test cases from test conditions.

17
Q

What is test execution?

A

The activity that runs a test on a component or system, producing actual results.

18
Q

What is test implementation?

A

The activity that prepares the testware needed for test execution based on test analysis and design.

19
Q

What is test monitoring?

A

The activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders.

20
Q

What is a test object?

A

The work product to be tested.

21
Q

What is test planning?

A

The activity of establishing or updating a test plan.

22
Q

What is a test procedure?

A

A sequence of test cases in execution order and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution.

References: ISO 29119-1.
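A minimal sketch of a test procedure in Python (the setup, test cases, and wrap-up steps are hypothetical placeholders):

```python
# A test procedure: setup, test cases in execution order, wrap-up.
def set_up_preconditions():
    print("start the application, load test data")

def tc_01_login():
    print("TC-01: log in with valid credentials")

def tc_02_place_order():
    print("TC-02: place an order")

def wrap_up():
    print("log out, restore the database")

if __name__ == "__main__":
    set_up_preconditions()
    for test_case in (tc_01_login, tc_02_place_order):  # order matters
        test_case()
    wrap_up()
```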

23
Q

What is a test result?

A

The consequence/outcome of the execution of a test.

Synonyms: outcome, test outcome, result.

24
Q

What is testing?

A

The process within the software development life cycle that evaluates the quality of a component or system and related work products.

25
Q

What is testware?

A

Work products produced during the test process for use in planning, designing, executing, evaluating, and reporting on testing.

After ISO 29119-1.

26
Q

What is validation?

A

Confirmation by examination that a work product matches a stakeholder’s needs.

After IREB.

27
Q

What is verification?

A

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

References: ISO 9000.

28
Q

7 testing activities

A
  1. Test planning
  2. Test monitoring and test control
  3. Test analysis
  4. Test design
  5. Test implementation
  6. Test execution
  7. Test completion
29
Q

Software development life cycle (SDLC)

A

Process of planning, creating, testing, and deploying an information system

30
Q

Stages of the SDLC

A
  1. Planning
  2. Analysis
  3. Design
  4. Implementation
  5. Testing and Integration
  6. Maintenance
31
Q

Testing work products

A
  • Requirements
  • User stories
  • Source code
32
Q

3 steps of debugging

A
  1. Failure reproduction
  2. Diagnosis
  3. Fixing the code
33
Q

What are the key components of successful testing contributions?

A

Product quality, process quality, project goals, and people skills.

34
Q

What is the focus of Quality Assurance?

A

Quality assurance focuses on establishing, implementing, monitoring, improving, and adhering to quality-related processes.

35
Q

What is the relationship between quality control and quality assurance?

A

Proper conduct of QC, especially testing activities, is important for quality assurance, and quality assurance supports proper testing.

36
Q

What is the general description of quality assurance?

A

Implementing processes, methodologies, and standards to ensure that the developed product meets the required quality standards.

37
Q

What is the general description of quality control?

A

Performing activities to verify that the developed product meets the required quality standards.

38
Q

What is the target of quality assurance?

A

Improving the manufacturing process.

39
Q

What is the type of process for quality assurance?

A

Preventive (defect prevention), proactive.

40
Q

What are examples of activities in quality assurance?

A

Implementation of processes, e.g., defect management, change management, software release; quality audits; process and product measurements; verification of correct implementation and execution of processes; training of team members; selection of tools.

41
Q

What is the target of quality control?

A

Product improvement through failure and defect detection.

42
Q

What is the type of process for quality control?

A

Control (defect detection), reactive.

43
Q

What are examples of activities in quality control?

A

Static analysis of project documentation; code reviews; analysis, design, implementation of test cases; dynamic testing; writing and executing test scripts; defect reporting; using tools to support testing.

44
Q

7 Basic Principles of testing

A
  1. Testing shows the presence, not the absence, of defects
  2. Exhaustive testing is impossible
  3. Early testing saves time and money
  4. Defects cluster together
  5. Tests wear out
  6. Testing is context dependent
  7. Absence-of-defects fallacy
45
Q

7 parts of the test process

A
  1. Test planning
  2. Test monitoring and test control
  3. Test analysis
  4. Test design
  5. Test implementation
  6. Test execution
  7. Test completion
46
Q

What affects the selection of an organization’s test process?

A
  1. Type of SDLC and project methodologies
  2. Test levels and test types considered
  3. Product risks and project risks
  4. Business domain
  5. Contractual and regulatory requirements
  6. Operational limits: budget & schedules
  7. Complexity of domain
  8. Test policy in organization
  9. Required internal and external norms/standards
47
Q

TEST PLANNING - activities

A
  • defining test objectives
  • identifying the test activities needed to fulfill the project’s mission and meet the test objectives
  • defining an approach to achieving the test objectives within the limits set by the context
  • determining appropriate test techniques and test tasks
  • formulating a test execution schedule
  • defining metrics
48
Q

TEST MONITORING

A

Continuous comparison of actual and planned test progress using metrics specifically defined for the purpose in the test plan

49
Q

TEST CONTROL

A

Proactively taking the actions necessary to achieve the objectives set in the test plan (taking into account possible updates to the plan).

50
Q

Evaluation of exit criteria from the test plan
(DoD - Definition of Done)
Can include:

A
  • checking the test results and test logs against specified coverage criteria
  • estimating the quality level of a component or a system, based on test results and test logs
  • determining whether further tests are necessary
  • informing stakeholders about progress against the test plan
  • writing test progress reports
51
Q

TEST ANALYSIS

A
  • Looking at the test basis
  • analysing it to identify testable features
  • defining the associated test conditions
  • determining "what to test" (in terms of measurable coverage criteria)
  • general test objectives are transformed into specific test conditions
52
Q

TEST BASIS

A
  • any documentation or information that describes how the software should work
  • serves as the foundation or reference for designing and executing test cases
53
Q

COMMON EXAMPLES OF TEST BASIS

A
  • requirements specification
  • design specification
  • use cases
  • user stories
  • source code
  • business rules
54
Q

TEST ANALYSIS VERIFIES THAT REQUIREMENTS ARE:

A
  • consistent
  • correctly expressed
  • complete
  • testable
  • ready for starting developing the software (DoR - Definition of Ready)
  • don’t need further grooming, can be used as a source for estimation
  • properly reflect the needs of the customers, users, and other stakeholders
55
Q

TYPICAL TEST ANALYSIS ACTIVITIES:

A
  1. Familiarising with the test basis - definition of desired functional and non-functional behaviour of a component or system
  2. Analysis of design and implementation information (e.g. diagrams/documents describing the system or software architecture, control flow diagrams, UML diagrams, entity-relationship diagrams, interface specifications) - artifacts that define the structure of the system
  3. Analysis of the implementation of the component or a system itself: code, metadata, database queries, and interfaces
  4. Analysis of risk analysis reports (functional, nonfunctional and structural aspects of a component or system)
  5. Assessing testability of the test basis to identify common types of defects: ambiguities, omissions, inconsistencies, contradictions, redundant instructions
  6. Identifying the features and feature sets to be tested
  7. Defining test conditions for individual features and prioritizing them based on test basis analysis, taking into account different parameters (functional, nonfunctional, structural), business, technical and risk factors
  8. Creating bidirectional traceability between test basis elements and their associated test conditions
56
Q

TEST MODELS

A

Test conditions expressed as formal models.
E.g. state transition diagrams, decision tables, control flow diagrams.
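For example, a decision table model can be turned directly into data-driven checks (a minimal sketch; the discount rules are invented for the example):

```python
def discount(is_member, order_over_100):
    # hypothetical implementation under test
    if is_member and order_over_100:
        return 15
    if is_member:
        return 10
    return 5 if order_over_100 else 0

# Decision table: conditions (is_member, order_over_100) -> expected discount %.
decision_table = {
    (True, True): 15,
    (True, False): 10,
    (False, True): 5,
    (False, False): 0,
}

for conditions, expected in decision_table.items():
    assert discount(*conditions) == expected, conditions
print("all decision table rules pass")
```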

57
Q

TEST DESIGN

A

Transformation of test conditions into test cases, collections of test cases, and other testware.
„How to test”
- identifying test coverage items and applying test techniques
- creating guidelines (based on the TCIs) for determining test case inputs
- defining test data - e.g. identifying values for boundary value analysis (see the sketch after this list)
- sometimes done along with test implementation
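For instance, boundary value analysis picks values on and next to each boundary (a minimal sketch; the 18-65 age rule is invented):

```python
def is_valid_age(age):
    # hypothetical rule under test: age must be between 18 and 65
    return 18 <= age <= 65

# Two-value BVA: each boundary plus its closest invalid neighbour.
boundary_cases = {17: False, 18: True, 65: True, 66: False}
for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, value
print("boundary values behave as specified")
```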

58
Q

TEST DESIGN ACTIVITIES

A
  • designing (sets of) high-level test cases and prioritizing them
  • identifying necessary test data
  • identifying any necessary tools and infrastructure elements
  • creating bidirectional traceability between test basis, test conditions, test cases, test procedures (expanding the traceability matrix)
  • identifying defects in the test basis
59
Q

TEST IMPLEMENTATION

A

Tester creates/finalizes the testware necessary for test execution (transforming high-level test cases into low-level/concrete test cases, assembling test cases into TEST PROCEDURES, creating automated test scripts, acquiring test data, implementing the test environment).

„DO WE HAVE EVERYTHING WE NEED TO RUN THE TESTS?”

60
Q

TEST IMPLEMENTATION ACTIVITIES:

A
  1. If needed - making high-level test cases more concrete by specifying detailed data
  2. Developing test procedures and prioritizing them
  3. Creating test suites (based on test procedures) and automated test scripts (if automation is used) - see the sketch after this list
  4. Organizing test suites into a test execution schedule to ensure that the entire process runs efficiently
  5. Building a test environment, including - if necessary - mock objects, service virtualization, simulators, and other infrastructure elements, and verifying that it has been configured correctly
  6. Preparing test data and verifying that it has been correctly loaded into the test environment
  7. Verifying and updating traceability between the test basis, test conditions, test cases, test procedures, and test suites
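A minimal sketch of step 3 using Python's unittest (the test classes are hypothetical): test cases are assembled into a suite in a chosen execution order.

```python
import unittest

class LoginTests(unittest.TestCase):      # hypothetical test cases
    def test_valid_login(self):
        self.assertTrue(True)

class OrderTests(unittest.TestCase):
    def test_place_order(self):
        self.assertTrue(True)

# Assemble a suite: log in before ordering (the execution order).
suite = unittest.TestSuite()
suite.addTest(LoginTests("test_valid_login"))
suite.addTest(OrderTests("test_place_order"))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)
```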
61
Q

TEST EXECUTION - ACTIVITIES

A
  1. Registering the identification and version data of test items, test objects, test tools, and other testware
  2. Performing tests manually or with tools, including smoke tests (simple tests to check the correct implementation of basic functionality - see the sketch after this list) or sanity tests
  3. Comparing actual test results with expected ones
  4. Analyzing anomalies to determine their likely causes (defects in the code, false positives)
  5. Reporting defects, based on observed failures
  6. Logging the test execution results (passed, failed, blocked)
  7. Repeating the necessary testing activities (confirmation testing, execution of a revised test, regression testing)
  8. Verifying and updating bidirectional traceability between the test basis and all the testware used
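The smoke test mentioned in step 2 could look like this (a hedged sketch; the base URL and health endpoint are assumptions, not part of the syllabus):

```python
# Hypothetical smoke test: a fast check that the core of the
# application is up before deeper test execution starts.
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed application under test

def smoke_test():
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
        assert resp.status == 200, "application is not responding"
    print("smoke test passed: application responds")

if __name__ == "__main__":
    smoke_test()
```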
62
Q

TEST COMPLETION ACTIVITIES

A
  1. Handing over the software system for operation
  2. Completion/cancelling of the project
  3. Completion of an iteration of an agile project
  4. Completion of a test level
  5. Completion of work on the maintenance release
  6. Checking that all defect reports are closed and creating change requests or Product Backlog items for any unresolved defects
  7. Identifying and archiving any test cases that may be useful in the future
  8. Handing over testware to the operations team, other project teams, etc.
  9. Bringing the test environment to an agreed state
  10. Analyzing completed test activities to identify lessons learned and identify improvements for future iterations, releases or projects
  11. Creating a report on the completion of testing and distributing it to stakeholders
63
Q

CONTEXTUAL FACTORS IN TESTING

A
  1. Stakeholders (needs, expectations, requirements, business ones, willingness to cooperate with the test team)
  2. Team members (skills, knowledge, level of experience, availability, training needs, atmosphere)
  3. Business domain (identified product risks, market needs, specific legal conditions)
  4. Technical factors (project architecture, technology used)
  5. Project constraints (project scope, time, available budget, resources, project risks)
  6. Organisational factors (organizational structure, existing policies, test policies, practices used)
  7. Software development life cycle (engineering practices, development methods)
  8. Tools (availability, usability, compliance)
  9. Policies (data, privacy, cookies)
64
Q

WHAT CONTEXTUAL FACTORS CAN INFLUENCE

A
  1. Test strategy
  2. Test techniques
  3. Degree of automation
  4. Required coverage level for the requirements and identified risks
  5. Level of detail and type of test documentation to be developed
  6. Level of detail of test progress reporting
  7. Level of detail of defect reporting
65
Q

TESTWARE

A

Work products associated with testing. Sometimes managed using test management tools and defect management tools

66
Q

TEST PLANNING WORK PRODUCTS

A
  1. Test plan
    info based on the test basis; all other test work products will be linked to it via bidirectional traceability info. Contains the definition of the EXIT CRITERIA (DoD); everything can be verified at any level (during monitoring and controlling)
  2. Risk register
    risks identified by the team, with info about their probability and impact and how they can be mitigated
  3. Entry criteria and exit criteria
67
Q

TEST MONITORING AND TEST CONTROL WORK PRODUCTS

A
  1. Test progress reports
    created on an ongoing basis or at regular intervals; contain info about project management issues, completed tasks, and resource allocation and consumption
  2. Documentation of control directives
  3. Risk information
68
Q

TEST ANALYSIS WORK PRODUCTS

A
  1. Defined (and prioritized) test conditions
  2. Acceptance criteria
  3. Defect reports on defects in the test basis (if they are not fixed directly)
69
Q

TEST DESIGN WORK PRODUCTS

A
  1. HIGH-LEVEL (LOGICAL) TEST CASES
    test cases that do not include specific input data values and expected results; they can be reused many times with different data while still documenting the scope of the test case; there should be bidirectional traceability between a test case and the test condition it covers (see the sketch after this list)
  2. COVERAGE ITEMS
  3. TEST DATA REQUIREMENTS
  4. TEST ENVIRONMENT DESIGN
  4. TEST ENVIRONMENT DESIGN
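The high-level vs. low-level distinction from item 1, sketched as data (all values hypothetical):

```python
# High-level (logical) test case: no concrete values yet.
high_level = {
    "condition": "a transfer must not exceed the account balance",
    "input": "<amount greater than balance>",
    "expected": "<transfer rejected with an error>",
}

# Low-level (concrete) test case, produced later during test implementation.
concrete = {
    "precondition": "account balance is 100.00",
    "input": 150.00,
    "expected": "error: insufficient funds",
}
print(high_level, concrete, sep="\n")
```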
70
Q

TEST IMPLEMENTATION WORK PRODUCTS

A
  1. Low level (concrete) test cases
  2. Test procedures (including the order in which they are executed)
  3. Automated test scripts
  4. Test sets
  5. Test data
    assigning concrete values to the test data requirements, along with guidelines on how to use them
  6. Test execution schedule
  7. Elements of the test environment
    7a. MOCK OBJECTS (e.g. stubs, drivers) - see the sketch after this list
    7b. Simulators
    7c. Service virtualisation
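A minimal sketch of item 7a - a stub standing in for a real dependency so tests can run without the real infrastructure (the payment gateway is hypothetical):

```python
class PaymentGatewayStub:
    """Stub replacing a real payment gateway during testing."""
    def charge(self, amount):
        # canned response instead of a real network call
        return {"status": "approved", "amount": amount}

def checkout(gateway, amount):
    # code under test, written against the gateway interface
    return gateway.charge(amount)["status"] == "approved"

assert checkout(PaymentGatewayStub(), 42.00)
print("checkout works against the stubbed gateway")
```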
71
Q

TEST EXECUTION WORK PRODUCTS

A
  1. TEST LOGS
  2. DOCUMENTATION OF THE STATUS OF TEST PROCEDURES
  3. DEFECT REPORTS
  4. DOCUMENTATION INDICATING WHAT WAS USED IN TESTING (e.g. test objects, test tools and testware)
72
Q

TEST COMPLETION WORK PRODUCTS

A
  1. TEST COMPLETION REPORT
    detailed info on the progress of the testing process to date, a summary of the results of test execution, and info on deviations from the plan and corrective actions
  2. ACTION ITEMS TO IMPROVE SUBSEQUENT PROJECTS OR ITERATIONS (e.g. retrospective action items transformed into Product Backlog items for future iterations)
  3. CHANGE REQUESTS (e.g. as elements of a product backlog)
73
Q

TRACEABILITY BETWEEN THE TEST BASIS AND TESTING WORK PRODUCTS - IT ENABLES

A
  1. Evaluation of test coverage
  2. Analyzing the impact of change
  3. Conducting test audits
  4. Meeting IT governance criteria
  5. Creating easy-to-understand test status reports and summary test completion reports
  6. Presenting the status of test basis elements (e.g. requirements for which tests have passed, failed, or are waiting to be executed)
  7. Providing stakeholders with info about technical issues
  8. Providing the information needed to assess product quality, process capabilities, and project progress against business objectives
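A toy traceability matrix (hypothetical requirement and test case IDs) showing how coverage evaluation and impact analysis fall out of the mapping:

```python
# Bidirectional traceability: requirements <-> test cases.
req_to_tests = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # not covered yet
}

# 1. Evaluate requirement coverage.
covered = [req for req, tests in req_to_tests.items() if tests]
print(f"requirements covered: {len(covered)}/{len(req_to_tests)}")

# 2. Impact analysis: which tests to rerun if REQ-1 changes?
print("rerun after a REQ-1 change:", req_to_tests["REQ-1"])
```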
74
Q

What are the two fundamental roles in testing?

A
  1. A test management role (managerial)
  2. A testing role (technical)
75
Q

Test management role

A

Responsible for implementing the test process, organizing the work of the test team, directing test activities.
Activities related to:
- test planning
- test monitoring and control
- test completion

76
Q

Testing role

A

The engineering aspect of testing.
Mainly focuses on:
- test analysis
- test design
- test implementation
- test execution

77
Q

TASKS OF TEST MANAGER

A
  • developing or reviewing test strategies and test policies
  • CONTEXT-SENSITIVE TEST PROJECT PLANNING:
    • creating and updating test plans
    • iteration and release planning in agile projects
    • choosing the test approach
    • defining entry criteria and exit criteria
    • introducing appropriate metrics for measuring test progress and assessing the quality of testing and of the product
    • defining test levels and test cycles
    • estimating the time, effort and cost of testing
    • test prioritization
  • risk management
  • monitoring test results, checking the status of exit criteria (DoD), performing test control (e.g. adjusting plans according to test results and progress)
  • PROGRESS SUPERVISION
    • configuration management
    • defect management
  • resource acquisition
  • coordinating test strategy and test plan with project managers and stakeholders
  • presenting the testers’ point of view
  • initiating the processes of test analysis, test design, test implementation, and test execution
  • reporting test progress, creating test completion reports
  • supporting the team in the use of tools to implement the testing process (e.g. funds for tools, purchasing licenses, controlling implementation of the tool)
  • deciding on the implementation of test environments
  • promoting the testers and the test team
  • developing testing skills, performance evaluation, coaching
78
Q

TASKS OF A TESTER

A
  1. Reviewing test plans and participating in their development
  2. Co-authoring the requirements (user stories) while performing collaborative user story writing
  3. Deriving testable acceptance criteria for each Product Backlog item
  4. Analysing, reviewing, and evaluating the test basis (i.e. requirements, user stories, acceptance criteria, specifications, and models) for testability
  5. Identifying and documenting test conditions and recording the relationship between test cases, test conditions, and test basis
  6. Designing, configuring, and verifying test environments
  7. Designing and implementing test cases, test procedures, and test scripts
  8. Preparing and acquiring test data
  9. Co-creating the test execution schedule
  10. Performing tests, evaluating results, and documenting deviations from expected results
  11. Using appropriate tools to streamline the testing process (e.g. test automation tools)
  12. Evaluating and measuring nonfunctional characteristics of the software
  13. Collaborating with the team
  14. Using - if necessary - tools for test management
  15. Test automation
79
Q

Who can perform testing tasks depending on the level

A
  1. At the component and component integration test levels - usually developers
  2. At the system testing level - testers, members of an independent test team
  3. At the acceptance test level - business experts and users
  4. At the operational acceptance testing level - usually system operators and system administrators
80
Q

Characteristics of a good tester

A
  1. TESTING KNOWLEDGE (increases the effectiveness of testing, e.g. using test techniques)
  2. THOROUGHNESS, CAREFULNESS, CURIOSITY, ATTENTION TO DETAIL, BEING METHODICAL (to identify different types of defects)
  3. GOOD COMMUNICATION SKILLS, ACTIVE LISTENING, BEING A TEAM PLAYER (effective interaction with all stakeholders, communicating info to others, be understood, be able to report and discuss defects)
  4. ANALYTICAL THINKING, CRITICAL THINKING, CREATIVITY
  5. TECHNICAL KNOWLEDGE
  6. DOMAIN KNOWLEDGE (to understand and communicate with end users and business representatives)
81
Q

PSYCHOLOGICAL ASPECTS IN TESTING

A
  • UNWILLINGNESS OF DEVELOPERS TO LISTEN, FEAR OF CRITICISM
  • CONFIRMATION BIAS
    hard to accept information that contradicts beliefs
    selective memory, selective hypotheses testing
  • POOR COMMUNICATION AND OVERCONFIDENCE - ON BOTH SIDES, TESTERS AND DEVELOPERS