CHAPTER 5 PART 2 RISK MANAGEMENT Flashcards
RISK MANAGEMENT - GOALS
- increase likelihood of achieving goals
- improve product quality
- increase stakeholder confidence and trust
RISK ANALYSIS =
RISK IDENTIFICATION + RISK ASSESSMENT
RISK CONTROL =
RISK MITIGATION + RISK MONITORING
RISK-BASED TESTING - BENEFITS
Contributes to reducing the product risk level and provides comprehensive info to help decide if the product is ready for release
The relationship between product risks and testing is maintained through a bi-directional traceability mechanism
MAIN BENEFITS OF RISK-BASED TESTING
1. Increasing the probability of discovering defects in order of their priority - by performing tests in the order given by risk prioritization
2. Minimising residual product risk after release - by allocating test effort according to risk
3. Reporting residual risk - by measuring test results in terms of the levels of the related risks
4. Counteracting the effects of time pressure on testing - allows the testing period to be shortened with the least possible increase in product risk after release
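Benefit 1 can be sketched as sorting tests by the level of the risk each one covers; the test names, likelihoods and impacts below are made up for illustration:

```python
# Sketch: run tests in descending order of the risk they mitigate.
# All names and numbers are hypothetical, not from the syllabus.

def risk_level(likelihood, impact):
    """Quantitative risk level = likelihood (0..1) * impact (e.g. cost)."""
    return likelihood * impact

# Hypothetical test cases mapped to the product risk each one addresses.
tests = [
    ("login_test",   0.1, 9000),
    ("payment_test", 0.4, 8000),
    ("report_test",  0.3, 1000),
]

# Highest-risk areas are tested first.
ordered = sorted(tests, key=lambda t: risk_level(t[1], t[2]), reverse=True)
for name, likelihood, impact in ordered:
    print(name, risk_level(likelihood, impact))
```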
RISK-BASED TESTING - DISCIPLINED APPROACH TO:
- analysing (and regularly reassessing) what can go wrong
- deciding which risks are important enough to address
- implementing measures to mitigate these risks
- designing contingency plans to deal with risks when they occur
- identifying new risks
- deciding which risks should be mitigated
- reducing risk-related uncertainty
RISK AND RISK LEVEL
Factor that may result in negative consequences in the future
Described by IMPACT AND LIKELIHOOD
RISK LEVEL = RISK LIKELIHOOD * RISK IMPACT
RISK LIKELIHOOD - probability in the interval (0,1)
RISK IMPACT - cost of the consequences, usually expressed in money
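A minimal worked example of the formula, with made-up numbers:

```python
# risk level = risk likelihood (probability in (0,1)) * risk impact (money)
likelihood = 0.3      # 30% chance the failure occurs (illustrative)
impact = 10_000       # cost in dollars if it does (illustrative)
risk_level = likelihood * impact
print(risk_level)     # 3000.0
```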
PROJECT RISK
Related to PROJECT MANAGEMENT AND CONTROL
What affects the success of a project (achieving its objectives)
Examples:
- ORGANIZATIONAL ISSUES (delays in delivering work products, poorly implemented or managed processes, etc.)
- HUMAN ISSUES (insufficient skills, staff shortages etc)
- TECHNICAL ISSUES (poor tool support, inaccurate estimation, migration issue, accumulation of defects etc.)
- SUPPLIER-RELATED ISSUES (bankruptcy of the supporting company)
PRODUCT RISKS
Quality characteristics of a product - functionality, reliability, performance, usability
When a work product fails to meet the needs of users/stakeholders
Areas of possible failure in the product under test, as they threaten the quality
EXAMPLES
- missing or inadequate functionality
- incorrect calculations
- failures during the operation of the software
- poor architecture
- inefficient algorithms
- inadequate response time
- bad user experience
- security vulnerabilities
PRODUCT RISKS NEGATIVE CONSEQUENCES
- END USER DISSATISFACTION
- LOSS OF REVENUE
- DAMAGE CAUSED TO THIRD PARTIES
- HIGH MAINTENANCE COST
- OVERLOADING HELP DESK
- LOSS OF IMAGE
- LOSS OF CONFIDENCE IN PRODUCT
- CRIMINAL PENALTIES
RISK ANALYSIS
To provide awareness of product risk in order to focus the testing effort so as to minimize the product's residual risk level
- ideally it starts early in the development cycle
Info from risk analysis used in:
- TEST PLANNING
- SPECIFICATION, PREPARATION, EXECUTION OF TEST CASES
- TEST MONITORING AND CONTROL
BENEFITS OF EARLY RISK ANALYSIS
- identifying the specific test levels and test types to be performed
- defining the scope of tests
- prioritising testing
- selecting appropriate test techniques
- setting coverage targets
- detecting defects associated with the identified risks
- estimating the test time and effort
- planning risk mitigation
- establishing risk-mitigation activities outside of testing
RISK IDENTIFICATION - TECHNIQUES
GOAL - to create a comprehensive list of risks
TOOLS AND TECHNIQUES:
1. BRAINSTORMING
2. RISK WORKSHOPS
3. DELPHI METHOD AND EXPERT ASSESSMENT
4. INTERVIEWS
5. CHECKLISTS
6. DATABASES OF PAST PROJECTS, RETROSPECTIVES, LESSONS LEARNED
7. CAUSE-AND-EFFECT DIAGRAMS (discover root cause)
8. RISK TEMPLATES
RISK ASSESSMENT
- categorising the identified risks
- determining the likelihood, impact and risk level of each identified risk
- prioritizing the risks
- proposing ways of dealing with them
QUANTITATIVE APPROACH -
risk level is calculated as the product of likelihood and impact
QUALITATIVE APPROACH
risk levels calculated using risk matrix
RISK MATRIX - a table (like a multiplication table) where the risk level is defined by combining categories of likelihood and impact
impact = low, medium, high
likelihood = low, medium, high, very high
The cell at the intersection gives the risk category
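A sketch of such a matrix as a lookup table; the category scales and cell values below are illustrative, since each project defines its own:

```python
# Qualitative risk matrix: (likelihood, impact) -> risk category.
# Cell values are illustrative, not a standard.
RISK_MATRIX = {
    ("low", "low"): "low",
    ("low", "medium"): "low",
    ("low", "high"): "medium",
    ("medium", "low"): "low",
    ("medium", "medium"): "medium",
    ("medium", "high"): "high",
    ("high", "low"): "medium",
    ("high", "medium"): "high",
    ("high", "high"): "high",
    ("very high", "low"): "medium",
    ("very high", "medium"): "high",
    ("very high", "high"): "critical",
}

def risk_category(likelihood, impact):
    """Read the risk level off the matrix at (likelihood, impact)."""
    return RISK_MATRIX[(likelihood, impact)]

print(risk_category("very high", "high"))  # critical
```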
FAILURE MODE AND EFFECT ANALYSIS
(FMEA) - a risk assessment method
- both likelihood and impact on a numerical scale
Risk level calculated by multiplying those numbers
MIXED APPROACH IN RISK ASSESSMENT
- expressed in intervals
[a,b] * [c,d] = [ac, bd]
[0.2,0.4]*[$1000, $3000] = [$200,$1200]
Interval arithmetic also allows combining the levels of many risks (e.g. by summing the interval endpoints)
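A minimal sketch of the interval calculation above; the second risk added for aggregation is made up:

```python
# Mixed approach: likelihood and impact given as intervals,
# [a, b] * [c, d] = [a*c, b*d] (valid since all endpoints are >= 0).
def interval_product(likelihood, impact):
    a, b = likelihood
    c, d = impact
    return (a * c, b * d)

# The worked example from the card: [0.2, 0.4] * [$1000, $3000]
print(interval_product((0.2, 0.4), (1000, 3000)))  # ~ ($200, $1200)

# Several risks combined by summing the interval endpoints
# (the second risk is illustrative).
risks = [((0.2, 0.4), (1000, 3000)), ((0.1, 0.2), (500, 1000))]
low = sum(interval_product(lik, imp)[0] for lik, imp in risks)
high = sum(interval_product(lik, imp)[1] for lik, imp in risks)
print((low, high))  # ~ ($250, $1400)
```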
PRODUCT RISK CONTROL
- measures taken to mitigate and monitor risk throughout the development cycle
RISK MITIGATION + RISK MONITORING
RISK MITIGATION - POSSIBLE RESPONSES
1. RISK ACCEPTANCE
- for low-level risks
2. RISK TRANSFER - e.g. taking out an insurance policy
- for low-level risks
3. CONTINGENCY PLAN
- for certain risks, creating a plan for how to deal with them if they materialize
4. RISK MITIGATION BY TESTING
- checking whether the risk is actually present; if the tests pass, the risk was a false alarm
ACTIONS TO BE TAKEN BY TESTERS TO MITIGATE THE RISKS
- Selecting testers with the right level of experience
- Applying appropriate level of independence of testing
- Conducting reviews
- Performing static analysis
- Selecting appropriate test design techniques and test coverage
- Prioritizing tests based on risk level
- Determining the appropriate scope of regression testing
ADVANTAGES OF THE WHOLE-TEAM APPROACH IN RISK MITIGATION - WHAT TYPES OF RISK MAY BE REDUCED
- Customer dissatisfaction - constant feedback from all stakeholders
- Risk of not completing all functionalities - high business priorities delivered first
- Inadequate estimation and planning
- Not completing the development cycle - reduced by daily tracking of progress
- Taking on too much work and changing expectations
RISK MONITORING
- checking the status of product risks
- measuring the implementation of risk mitigation techniques
BENEFITS:
- knowing the current and exact status of each product risk
- reporting progress in reducing residual product risk
- focusing attention on the identified risks and the results of their mitigation
- capturing new risks as they emerge
- tracking factors that affect risk management costs
TEST MONITORING
Primary objective
- collecting and sharing info to gain insight into testing activities and to visualize the testing process
- info collected manually or automatically (e.g. from test logs)
RESULTS OBTAINED ARE USED FOR:
- measurement of fulfilment of exit criteria (e.g. assumed product risks coverage, requirements, acceptance criteria)
- assessing progress of work against the schedule and budget
TEST CONTROL - ACTIVITIES
- making decision based on info from test monitoring
- re-prioritising tests when identified risks materialize
- making changes to the test execution schedule
- assessing the availability (or unavailability) of the test environment or other resources
METRICS TO COLLECT DURING AND AFTER A GIVEN TEST LEVEL TO ESTIMATE
- progress against the planned schedule and budget
- current quality level of test object
- appropriateness of the chosen test approach
- effectiveness of the test activities - point of view of achieving the objective
TYPICAL TEST METRICS
- PROJECT METRICS
e.g. task completion, resource utilisation, milestone dates
- TEST CASE METRICS
e.g. test case implementation progress, test environment preparation progress, number of tests passed/failed, test execution time
- PRODUCT QUALITY METRICS
e.g. availability, response time, mean time to failure
- DEFECT METRICS
e.g. number found/repaired, priorities, defect density, percentage of defects detected
- RISK METRICS
e.g. residual risks, risk priority
- COVERAGE METRICS
e.g. user story coverage, requirements coverage, code coverage
- COST METRICS
e.g. cost of testing, average cost of defect repair
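Two of the defect metrics above can be illustrated with a tiny calculation; all numbers are made up for the example:

```python
# Illustrative defect metrics (hypothetical numbers).
defects_found = 46
defects_repaired = 40
code_size_kloc = 12.5   # size of the test object, thousands of lines of code

defect_density = defects_found / code_size_kloc            # defects per KLOC
percent_repaired = 100 * defects_repaired / defects_found  # share of found defects fixed

print(defect_density)
print(percent_repaired)
```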
TEST REPORTS
Summarize and provide info on test activities both during and after their execution
TEST PROGRESS REPORT - during execution
TEST COMPLETION REPORT - after completion
TEST REPORT SHOULD INCLUDE
- summary of the tests performed
- description of what happened during the testing period
- info on deviations from the plan
- info on the status of testing and product quality, including info on meeting exit criteria (or DoD)
- info on factors blocking the tests
- measures related to defects, test coverage, work progress, resource utilisation
- residual risk info
- info on work products for reuse
TEST PROGRESS REPORT ALSO
* status of testing activities and progress against the test plan
* tests scheduled for the next reporting period
TAILORED FOR BOTH TECHNICAL AUDIENCE AND BUSINESS STAKEHOLDERS
WAYS OF COMMUNICATING THE STATUS OF TESTING
- VERBAL - conversation with team members and other stakeholders
- DASHBOARDS - CI/CD dashboards, task boards, burn-down charts
- ELECTRONIC COMMUNICATION (email, chats)
- ONLINE DOCUMENTATION
- FORMAL TEST REPORTS
CONFIGURATION MANAGEMENT
- ensure and maintain the integrity of the component/system and testware and the interrelationships between them throughout project and product lifecycle
TO ENSURE:
- all configuration items, including test objects, test items, testware have been IDENTIFIED, VERSION CONTROLLED, TRACKED FOR CHANGES, LINKED TO EACH OTHER - to maintain traceability
- all identified documentation and software items are referenced explicitly in the test documentation
DEFECT MANAGEMENT
- all defects should be logged
- how a defect is reported depends on the context of the component or system under test, the test level, and the chosen SDLC model
Varying from formal to informal
Minimise the number of false positives reported as defects
OBJECTIVES OF DEFECT REPORT
- COMMUNICATION
- provide developers with info to identify, isolate and repair the defect if necessary
- provide test management with current info about quality
- ENABLING THE COLLECTION OF INFO ABOUT THE STATUS OF THE PRODUCT UNDER TEST
- as a basis for decision making
- to identify risk early
- basis for post-project analysis and improvement of development procedures
INCIDENT REPORT
DEFECT REPORT FOR DYNAMIC TESTING SHOULD INCLUDE:
• unique identifier
• title and brief summary
• date of the report (the date the anomaly was discovered)
• info about the defect report's author
• identification of the item under test
• phase of the software development lifecycle in which the anomaly was observed
• description of the anomaly that enables its reproduction and removal - logs, database dumps, screenshots, recordings
• actual and expected results
• defect removal priority
• status (e.g. open, deferred, duplicate, reopened)
• description of the nonconformity to help determine the problem's cause
• urgency of the solution
• identification of the software or system configuration items
• conclusions and recommendations
• history of changes
• references to other elements - e.g. the test case through which the problem was revealed
DEFECT LIFECYCLE AND DEFECT STATUSES
LIFECYCLE (possible resolutions of a defect)
OPEN - REJECTED - CHANGED - POSTPONED
STATUSES
OPENED - ASSIGNED - FIXED - RETEST - CLOSED
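The statuses above can be sketched as a simple state machine; the allowed transitions are an assumption, since real workflows vary by tool and process:

```python
# Defect statuses as a state machine. The transition table is illustrative.
TRANSITIONS = {
    "opened":   {"assigned", "rejected", "postponed"},
    "assigned": {"fixed", "postponed"},
    "fixed":    {"retest"},
    "retest":   {"closed", "reopened"},
    "reopened": {"assigned"},
}

def advance(status, new_status):
    """Move a defect to new_status, enforcing the allowed transitions."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

status = "opened"
for step in ("assigned", "fixed", "retest", "closed"):
    status = advance(status, step)
print(status)  # closed
```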