CHAPTER 5 MANAGING TEST ACTIVITIES Flashcards

1
Q

DEFECT MANAGEMENT

A

The process of:
- RECOGNIZING
- RECORDING
- CLASSIFYING
- RESOLVING
- DISPOSING OF DEFECTS

2
Q

DEFECT REPORT

A

Documentation of the occurrence, nature, and status of a defect

3
Q

ENTRY CRITERIA

A

The set of conditions for officially starting a defined task

4
Q

EXIT CRITERIA

A

Set of conditions for officially completing a defined task
Synonyms: TEST COMPLETION CRITERIA, COMPLETION CRITERIA

5
Q

PRODUCT RISK

A

A risk impacting the quality of a product

6
Q

PROJECT RISK

A

A risk that impacts project success

7
Q

RISK ANALYSIS

A

OVERALL PROCESS OF RISK IDENTIFICATION AND RISK ASSESSMENT

8
Q

RISK ASSESSMENT

A

Process to examine identified risks and determine the risk level

9
Q

RISK CONTROL

A

THE OVERALL PROCESS OF RISK MITIGATION AND MONITORING

10
Q

RISK LEVEL

A

The measure of a risk defined by impact and likelihood
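A common quantitative sketch of this definition (the 1-5 rating scales and the multiplication rule are assumptions for illustration; ISTQB does not prescribe a formula):

```python
# Hypothetical example: risk level as likelihood x impact, each rated
# on an assumed 1-5 scale. A higher score means a more severe risk.
def risk_level(likelihood: int, impact: int) -> int:
    """Return a simple risk score combining likelihood and impact."""
    return likelihood * impact

print(risk_level(4, 5))  # likely and severe -> 20
print(risk_level(1, 2))  # rare and minor -> 2
```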

11
Q

RISK MITIGATION

A

PROCESS THROUGH WHICH DECISIONS ARE REACHED AND PROTECTIVE MEASURES ARE IMPLEMENTED FOR REDUCING OR MAINTAINING RISKS TO SPECIFIED LEVELS

12
Q

RISK MONITORING

A

The activity that checks and reports the status of known risks to stakeholders

13
Q

RISK-BASED TESTING

A

Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels

14
Q

TEST COMPLETION REPORT

A

A type of report produced at completion milestones that provides an evaluation of corresponding test items against exit criteria

Synonym: TEST SUMMARY REPORT

15
Q

TEST CONTROL

A

The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned

16
Q

TEST MONITORING

A

The activity that checks the status of testing activities, identifies any variances from what was planned or expected, and reports status to stakeholders

17
Q

TEST PLAN

A

Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities

• usually a single test plan, but also:
- MASTER TEST PLAN AND CORRESPONDING PLANS
- or plans according to the test levels defined in the project
• LEVEL TEST PLAN
  e.g. component integration test plan, system test plan, acceptance test plan
• PHASE TEST PLAN
• TYPE TEST PLAN
  e.g. performance test plan
• DEFINING ENTRY AND EXIT CRITERIA
• CHOOSING THE TEST APPROACH

18
Q

TEST PROGRESS REPORT

A

Periodic test report that includes progress against:
- a baseline
- RISKS
- ALTERNATIVES REQUIRING A DECISION

19
Q

TEST PYRAMID

A
  • graphical model representing the relationship of the amount of testing per level
    • more at the bottom than at the top
20
Q

TESTING QUADRANTS

A
  • classification model of test types/test levels in 4 quadrants
    2 dimensions of test objectives:
    • supporting the product team
    VS critiquing the product
    • technology facing approach
    VS business facing
21
Q

TEST APPROACH

A

• DESCRIBED IN THE TEST PLAN
- helps make sure that test activities can be started

22
Q

TEST PLANNING = FACTORS THAT SHOULD BE CONSIDERED

A

• TEST POLICY AND TEST STRATEGY
• SDLC
• SCOPE OF TESTING
• OBJECTIVES
• RISKS
• LIMITATIONS
• CRITICALITY
• TESTABILITY
• RESOURCE AVAILABILITY

23
Q

TEST PLAN - WHAT IT (USUALLY) INCLUDES

A
  1. CONTEXT OF TESTING
    scope, objectives, limitations, and test basis info
  2. ASSUMPTIONS AND LIMITATIONS OF THE TEST PROJECT
  3. STAKEHOLDERS
    roles, responsibilities, influence on the testing process (e.g. power vs interest), and hiring and training needs
  4. COMMUNICATION
    types and frequency of communication and templates of documents used
  5. LIST OF RISKS
    product and project risks
  6. APPROACHES TO TESTING
    test levels - test types - test techniques - level of test independence - definition of test metrics used - test data requirements - test environment requirements - deviations from organisational best practices (with justification)
  7. SCHEDULE

CONSTANTLY UPDATED DEPENDING ON THE RESULTS OF TESTS

24
Q

STEPS OF TEST PLANNING

A
  1. Defining the SCOPE and OBJECTIVES of testing and the ASSOCIATED RISKS
  2. Determining the overall APPROACH TO TESTING
  3. INTEGRATING AND COORDINATING the test activities within SDLC activities
  4. Deciding WHAT to test, with WHICH personnel, WITH WHAT resources, and HOW
  5. PLANNING ACTIVITIES OF TEST ANALYSIS, DESIGN, IMPLEMENTATION, EXECUTION AND EVALUATION
    • deadlines (sequential approach)
    • placing them in context of individual iterations
  6. Selecting MEASURES FOR TEST MONITORING AND TEST CONTROL
  7. BUDGETING FOR THE COLLECTION AND EVALUATION OF METRICS
  8. DETERMINING THE LEVEL OF DETAIL AND STRUCTURE OF DOCUMENTATION
25
Q

TESTING STRATEGIES

A
  1. ANALYTICAL
    analysis of a specific factor (e.g. requirements or risk) - e.g. risk-based testing - that's the starting point for design and prioritization
  2. MODEL-BASED
    basis of a model of a specific required aspect of the product - e.g. function, business process, internal structure, non-functional characteristics (reliability)
  3. METHODICAL STRATEGY
    systematic application of a predetermined set of tests/checklists
    fault attacks, lists of typical defects, quality characteristics
  4. PROCESS-COMPLIANT (standard compliant)
    test cases based on external rules and standards imposed on org
  5. DIRECTED (consultative) STRATEGY
    advice and guidance from stakeholders, technical experts etc.
  6. REGRESSION-AVERSE
    motivated by desire to avoid regression, extensive automation, use of standard test suites
  7. REACTIVE STRATEGY
    tests geared towards events, rather than predetermined plan
    tests are designed and can be immediately executed based on knowledge from results of previous tests
26
Q

BASICS OF TEST STRATEGY SELECTION

A

1) RISKS
- of project failure
- to the product
- danger to people, the environment etc.
- lack of skills and experience
2) REGULATIONS (external and internal) ON THE DEVELOPMENT PROCESS
3) PURPOSE OF THE TESTING VENTURE
4) MISSION OF THE TEST TEAM
5) TYPE AND SPECIFICS OF THE PRODUCT

27
Q

TESTER’S CONTRIBUTION TO ITERATION AND RELEASE PLANNING

A
  • RELEASE PLANNING
    • defining and refining the product backlog
    • refining large user stories into smaller ones
    • providing the basis for the test approach and test coverage
    • identification of risks
    • estimation of effort
      TESTER CONTRIBUTION:
      • defining testable user stories and acceptance criteria
      • participating in risk analysis
      • estimating the testing effort
      • defining test levels
      • planning the testing for the release
  • ITERATION PLANNING
    • team selects user stories from the prioritized product backlog, clarifies them, slices them, performs risk analysis, and estimates the work needed
28
Q

TEAM VELOCITY

A

• empirically determined amount of work a team is able to perform during single iteration
• expressed in terms of USER STORY POINTS
• size of each story is also estimated in this unit
• used to avoid overburdening or underutilizing the team
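As a sketch (the iteration history below is made up for illustration), velocity is often taken as the average number of story points completed over recent iterations:

```python
# Hypothetical example: empirically determined team velocity,
# computed as the mean of story points completed per iteration.
def velocity(completed_points: list[int]) -> float:
    """Average story points delivered per iteration."""
    return sum(completed_points) / len(completed_points)

history = [21, 18, 24]    # assumed points finished in the last 3 sprints
print(velocity(history))  # -> 21.0, a guide for planning the next sprint
```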

29
Q

TESTERS INPUT IN ITERATION PLANNING

A

• participating in detailed risk analysis of user stories
• determining the testability of user stories
• co-creating acceptance tests
• splitting user stories into tasks
• estimating the testing effort
• identifying functional and non-functional testing aspects of the system
• supporting and participating in test automation

30
Q

ENTRY CRITERIA (close to Definition of Ready in agile)

A

Preconditions that must be met before a test activity can begin, like availability of resources or testware:
1. Testable requirements, user stories, and/or models
2. Testable items that meet the exit criteria of earlier test levels (in a waterfall approach)
3. Test environment
4. Necessary test tools
5. Test data and other necessary resources
6. Initial quality level of the test object (e.g. all smoke tests pass)

31
Q

EXIT CRITERIA (Definition of Done)

A

Conditions that must be met in order for the execution of a test level or set of tests to be considered completed
defined for each test level and test type
TYPICAL EXIT CRITERIA:
- completion of the execution of scheduled tests
- achieving the required level of coverage
- not exceeding the agreed limit of unresolved defects
- obtaining a sufficiently low estimated defect density
- achieving sufficiently high reliability rates
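A minimal sketch of checking such exit criteria programmatically; the 90% coverage threshold and the 5-defect limit are invented for illustration:

```python
# Hypothetical exit-criteria check combining two of the typical
# criteria above: achieved coverage and the open-defect limit.
def exit_criteria_met(coverage: float, open_defects: int,
                      min_coverage: float = 0.9,
                      max_open_defects: int = 5) -> bool:
    """True when both assumed thresholds are satisfied."""
    return coverage >= min_coverage and open_defects <= max_open_defects

print(exit_criteria_met(0.95, 3))   # both criteria satisfied -> True
print(exit_criteria_met(0.95, 12))  # too many open defects -> False
```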

32
Q

TESTS MAY BE SHORTENED DUE TO:

A
  • USE OF THE ENTIRE BUDGET
  • PASSAGE OF SCHEDULED TIME
  • PRESSURE OF BRINGING A PRODUCT TO THE MARKET
33
Q

ESTIMATION TECHNIQUES

A

TEST EFFORT estimated as:
- LABOUR INTENSITY, A PRODUCT MEASURE (TIME * RESOURCES)
- product measures do not scale
- 12 person-days of labour intensity does not necessarily mean 2 people for 6 days
• using an estimation range

34
Q

WORK BREAKDOWN STRUCTURE (WBS)

A
  • estimating a complex task
  • decomposition technique
    • main complex task or activity is hierarchically decomposed into smaller subtasks
      GOAL - break the main task down into smaller, more easily estimable but still executable components
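The decomposition idea can be sketched as summing the leaf estimates of a nested task structure (the dict representation and the numbers are assumptions for illustration):

```python
# Hypothetical WBS: composite tasks are dicts of subtasks, leaves are
# effort estimates in person-days; the total is the sum of all leaves.
def wbs_estimate(task) -> float:
    if isinstance(task, dict):                     # composite task
        return sum(wbs_estimate(t) for t in task.values())
    return task                                    # leaf estimate

release_testing = {
    "test analysis": 3,
    "test design": {"cases": 4, "data": 2},
    "test execution": 6,
}
print(wbs_estimate(release_testing))               # -> 15
```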
35
Q

2 GROUPS OF ESTIMATING TECHNIQUES

A
  1. METRIC-BASED TECHNIQUES
    METRICS OF PREVIOUS SIMILAR PROJECTS, HISTORICAL DATA FROM THE CURRENT ONE, OR TYPICAL VALUES
  2. EXPERT-BASED
    EXPERIENCE OF TESTER
36
Q

4 FREQUENTLY USED ESTIMATION TECHNIQUES (in syllabus)

A
  1. ESTIMATION BASED ON RATIO
  2. EXTRAPOLATION
  3. WIDEBAND DELPHI
  4. THREE-POINT ESTIMATION
37
Q

ESTIMATION BASED ON RATIOS

A

• data from previous projects - DERIVATION OF STANDARD RATIOS of various indicators for similar projects

E.g. in previous similar projects the implementation-to-test effort ratio was 3:2; the current development effort is estimated at 600 person-days, so the test effort is estimated as 400 person-days
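Checking the arithmetic of a 3:2 development-to-test ratio: 600 person-days of development implies 600 × 2/3 = 400 person-days of testing:

```python
# Ratio-based estimation: scale the development effort by the
# historical test-to-development ratio (2/3 from the 3:2 example).
dev_effort = 600                  # person-days of planned development
test_effort = dev_effort * 2 / 3  # apply the historical 3:2 dev:test ratio
print(test_effort)                # -> 400.0 person-days
```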

38
Q

EXTRAPOLATION

A
  • measurements are taken as early as possible to collect real, historical data from current project
    – with enough DATA POINTS (observations) - extrapolation is conducted
  • in iterative software development models - how it worked in previous iterations
39
Q

WIDEBAND DELPHI

A

– estimation based on experts’ experience
- each expert in isolation estimates the workload - results are discussed - then a new estimate is made based on this info
-> process repeated till they reach consensus
WISDOM OF THE CROWD
PLANNING POKER - version of wideband Delphi
• cards and Fibonacci sequence
• t-shirt sizes
• successive powers of 2
- transform the scale into a unit of effort

40
Q

THREE POINT ESTIMATION

A

DIVIDING ESTIMATIONS INTO:
1. Most optimistic (O)
2. Most likely (L)
3. Most pessimistic (P)
E - final estimate
E = (O + 4L + P) / 6
SD - measurement error
SD = (P - O) / 6

Derived from - PROGRAM EVALUATION AND REVIEW TECHNIQUE (PERT)
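The PERT formulas above as code; the sample values (O=6, L=9, P=18 person-hours) are illustrative, not from this deck:

```python
# Three-point (PERT) estimation: weighted mean of optimistic, most
# likely, and pessimistic estimates, plus the standard deviation.
def three_point(o: float, m: float, p: float) -> tuple[float, float]:
    e = (o + 4 * m + p) / 6   # final estimate E
    sd = (p - o) / 6          # measurement error SD
    return e, sd

e, sd = three_point(6, 9, 18)
print(f"{e} +/- {sd}")        # -> 10.0 +/- 2.0
```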

41
Q

FACTORS AFFECTING THE TEST EFFORT

A
  • PRODUCT CHARACTERISTICS - (e.g. product risks, quality of specifications (i.e. test basis), product size, complexity of domain, requirements for quality characteristics (e.g. safety and reliability), level of detail in documentation, regulatory compliance requirements)
  • CHARACTERISTICS OF THE SOFTWARE DEVELOPMENT PROCESS ( stability and maturity of the org, SDLC, test approach, tools used, test process)
  • HUMAN FACTORS
  • TEST RESULTS (e.g. number, type, and significance of defects, number of corrections required, tests failed)
42
Q

TEST CASE PRIORITIZATION

A

• TEST CASES AND TEST PROCEDURES ORGANIZED INTO TEST SUITES
• test suites - INTO TEST EXECUTION SCHEDULE
How to organize - what factors are considered:
- PRIORITIES
• DEPENDENCIES
• NEED FOR CONFIRMATION AND REGRESSION TESTING
• MOST EFFECTIVE ORDER

CREATING SCHEDULE - FACTORS:
- dates of the core activities
- schedule within the timeframe of the project schedule
- milestones: start and end of each stage

43
Q

GANTT CHART

A

Common graphical way of presenting the schedule
Tasks are represented as rectangles expressing their duration
Arrows - define different types of relationships between tasks

THE FINAL TESTING PERIOD USUALLY REQUIRES THE MAXIMUM NUMBER OF RESOURCES

44
Q

3 MOST COMMON TEST CASE PRIORITIZATION STRATEGIES

A
  1. RISK- BASED
    order - based on risk analysis
    test cases covering the most important risks executed first
  2. COVERAGE-BASED
    based on coverage (e.g. code coverage, requirements coverage)
    tests achieving the highest coverage are executed first
  3. REQUIREMENTS-BASED
    based on requirements priorities
    that are linked to corresponding test cases
    most important first
    requirements created by stakeholders

BUT ALWAYS TAKING INTO ACCOUNT DEPENDENCIES - CONTEXT - REGULATIONS
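A minimal sketch of risk-based ordering (the field names and risk scores are assumptions); dependencies would still need to be honored on top of this sort:

```python
# Risk-based prioritization: execute test cases covering the most
# important risks first, i.e. sort by descending risk level.
test_cases = [
    {"id": "TC-3", "risk": 6},    # assumed risk scores
    {"id": "TC-1", "risk": 20},
    {"id": "TC-2", "risk": 12},
]
ordered = sorted(test_cases, key=lambda tc: tc["risk"], reverse=True)
print([tc["id"] for tc in ordered])  # -> ['TC-1', 'TC-2', 'TC-3']
```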

45
Q

TEST PYRAMID

A
  • model showing that different tests can have different granularity
  • supports test automation and test effort allocation
    LAYERS OF THE PYRAMID - group tests
    THE HIGHER THE LAYER - THE LOWER THE TEST GRANULARITY, ISOLATION, AND EXECUTION SPEED (and typically the higher the development and maintenance cost)

BOTTOM LAYER - tests are small, isolated, fast, check small pieces of functionality - you need a lot to get reasonable coverage
TOP LAYER - large, high-level end-to-end tests; slower, check a large chunk of functionality; fewer of these tests are needed

Example layers (top to bottom):

UI tests
service tests
unit/component tests

or:

end-to-end tests
integration tests
unit/component tests
46
Q

TESTING QUADRANTS GRAPH

A

                 BUSINESS FACING
   automated & manual   |   manual
          Q2            |     Q3
SUPPORTING THE TEAM ----+---- CRITIQUING THE PRODUCT
          Q1            |     Q4
       automated        |    tools
                TECHNOLOGY FACING
Q1: unit testing and component testing
Q2: functional testing, examples, user story testing, prototypes, simulations
Q3: exploratory testing, scenarios, usability testing, user acceptance testing, alpha/beta testing
Q4: performance testing, security, scalability, interoperability, data migration

47
Q

TESTING QUADRANTS

A
  • aligning test levels with relevant test types, activities, and work products in agile
  • all test types and test levels are included in software development process, understanding that some test types are more related to certain test levels
48
Q

QUADRANT Q1

A
  • component test level
  • TECHNOLOGY facing
  • supporting the TEAM
  • AUTOMATED
  • integrated into CI
49
Q

QUADRANT Q2

A
  • SYSTEM TESTING
  • BUSINESS ORIENTED - TEAM SUPPORTING
  • functional testing, examples, user story testing, prototypes and simulation
  • ACCEPTANCE CRITERIA
  • MANUAL AND AUTOMATED
  • test created during development of user stories, improving their quality
  • useful when creating automated regression test suites
50
Q

QUADRANT Q3

A
  • BUSINESS ORIENTED ACCEPTANCE TESTING + PRODUCT CRITIQUE
  • exploratory testing, scenario-based testing, process flow testing, usability testing, user acceptance testing, alpha and beta testing
  • manual + USER ORIENTED
51
Q

QUADRANT Q4

A

TECHNOLOGY-ORIENTED ACCEPTANCE
PRODUCT CRITIQUE
- NON-FUNCTIONAL TESTING, except usability
Automated