Chapter 1 Fundamentals Of Testing Flashcards
What is test coverage?
The degree to which specified coverage items are exercised by a test suite expressed as a percentage.
Synonyms: coverage.
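As a sketch, the definition above can be computed directly. The coverage items here are modelled as branch IDs; all names and numbers are invented for illustration:

```python
# Sketch of the coverage formula above. "Coverage items" are modelled as
# branch IDs; the IDs and numbers are invented for illustration.
def coverage_percent(exercised: set, specified: set) -> float:
    """Coverage = exercised coverage items / specified coverage items * 100."""
    if not specified:
        return 100.0  # nothing was specified to cover
    return 100.0 * len(exercised & specified) / len(specified)

branches = {"B1", "B2", "B3", "B4"}   # specified coverage items
exercised = {"B1", "B3", "B4"}        # items hit by the test suite
print(coverage_percent(exercised, branches))  # 75.0
```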
What is debugging?
The process of finding, analyzing, and removing the causes of failures in a component or system.
What is a defect?
An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
Synonyms: bug, fault.
What is an error?
A human action that produces an incorrect result.
Synonyms: mistake.
What is a failure?
An event in which a component or system does not perform a required function within specified limits.
What is quality?
The degree to which a work product satisfies stated and implied needs of its stakeholders.
What is quality assurance?
Activities focused on providing confidence that quality requirements will be fulfilled.
Abbreviation: QA.
What is a root cause?
A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.
References: CMMI.
What is test analysis?
The activity that identifies test conditions by analyzing the test basis.
What is a test basis?
The body of knowledge used as the basis for test analysis and design.
What is a test case?
A set of preconditions, inputs, actions (where applicable), expected results, and postconditions, developed based on test conditions.
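The parts of a test case listed above can be sketched as a plain data structure; all field names and values below are invented for illustration:

```python
# Illustrative sketch only: the parts of a test case (preconditions,
# inputs, expected results, postconditions) as a data structure.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    preconditions: list
    inputs: dict
    expected_result: str
    postconditions: list = field(default_factory=list)

tc = TestCase(
    name="TC-01 valid login",
    preconditions=["user account exists", "user is logged out"],
    inputs={"username": "alice", "password": "correct-horse"},
    expected_result="dashboard page is shown",
    postconditions=["session token is stored"],
)
```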
What is test completion?
The activity that makes testware available for later use, leaves test environments in a satisfactory condition, and communicates the results of testing to relevant stakeholders.
What is a test condition?
A testable aspect of a component or system identified as a basis for deriving test cases and test data.
An initial idea for testing.
What is test control?
The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned.
What is test data?
Data needed for test execution.
Synonyms: test dataset.
What is test design?
The activity that derives and specifies test cases from test conditions.
What is test execution?
The activity that runs a test on a component or system producing actual results.
What is test implementation?
The activity that prepares the testware needed for test execution based on test analysis and design.
What is test monitoring?
The activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders.
What is a test object?
The work product to be tested.
What is test planning?
The activity of establishing or updating a test plan.
What is a test procedure?
A sequence of test cases in execution order and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution.
References: ISO 29119-1.
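A minimal sketch of a test procedure as defined above: setup establishes the initial preconditions, the test cases run in order, and wrap-up activities follow. All functions are invented stand-ins:

```python
# Invented stand-ins sketching a test procedure: setup, ordered
# execution of test cases, then wrap-up.
def run_procedure(setup, test_cases, teardown):
    setup()                                 # establish initial preconditions
    results = [tc() for tc in test_cases]   # execute cases in the given order
    teardown()                              # wrap-up activities post execution
    return results

log = []
results = run_procedure(
    setup=lambda: log.append("env ready"),
    test_cases=[lambda: "passed", lambda: "failed"],
    teardown=lambda: log.append("env cleaned"),
)
print(results)  # ['passed', 'failed']
```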
What is a test result?
The consequence/outcome of the execution of a test.
Synonyms: outcome, test outcome, result.
What is testing?
The process within the software development life cycle that evaluates the quality of a component or system and related work products.
What is testware?
Work products produced during the test process for use in planning, designing, executing, evaluating, and reporting on testing.
After ISO 29119-1.
What is validation?
Confirmation by examination that a work product matches a stakeholder’s needs.
After IREB.
What is verification?
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
References: ISO 9000.
7 testing activities
- Test planning
- Test monitoring and test control
- Test analysis
- Test design
- Test implementation
- Test execution
- Test completion
Software development life cycle (SDLC)
Process of planning, creating, testing, and deploying an information system
7 Stages of the SDLC
- Planning
- Analysis
- Design
- Implementation
- Testing and Integration
- Deployment
- Maintenance
Testing work products
- Requirements
- User stories
- source code
3 steps of debugging
- Failure reproduction
- Diagnosis
- Fixing the code
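The three steps above can be walked through on a toy bug, an invented off-by-one error:

```python
# Toy illustration of the three debugging steps on an invented bug.

def buggy_sum_to(n):          # defective version
    return sum(range(n))      # defect: range(n) excludes n itself

# 1. Failure reproduction: a check that exposes the failure
assert buggy_sum_to(3) != 6   # expected 1+2+3 = 6, but got 3

# 2. Diagnosis: range(n) stops at n-1, so n is never added (the cause)

# 3. Fixing the code
def fixed_sum_to(n):
    return sum(range(n + 1))  # include n

assert fixed_sum_to(3) == 6
```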
What are the key components of successful testing contributions?
Product quality, process quality, project goals, and people skills.
What is the focus of Quality Assurance?
Quality assurance focuses on establishing, implementing, monitoring, improving, and adhering to quality-related processes.
What is the relationship between quality control and quality assurance?
Proper conduct of QC, especially testing activities, is important for quality assurance, and quality assurance supports proper testing.
What is the general description of quality assurance?
Implementing processes, methodologies, and standards to ensure that the developed product meets the required quality standards.
What is the general description of quality control?
Performing activities to verify that the developed product meets the required quality standards.
What is the target of quality assurance?
Improving the manufacturing process.
What is the type of process for quality assurance?
Preventive (defect prevention), proactive.
What are examples of activities in quality assurance?
Implementation of processes, e.g., defect management, change management, software release; quality audits; process and product measurements; verification of correct implementation and execution of processes; training of team members; selection of tools.
What is the target of quality control?
Product improvement through failure and defect detection.
What is the type of process for quality control?
Control (defect detection), reactive.
What are examples of activities in quality control?
Static analysis of project documentation; code reviews; analysis, design, implementation of test cases; dynamic testing; writing and executing test scripts; defect reporting; using tools to support testing.
7 Basic Principles of testing
- Testing shows the presence not the absence of defects
- Exhaustive testing is impossible
- Early testing saves time and money
- Defects cluster together
- Tests wear out
- Testing is context dependent
- Absence-of-defects fallacy
7 parts of testing process
- Test planning
- Test monitoring and test control
- Test analysis
- Test design
- Test implementation
- Test execution
- Test completion
What affects the selection of an organization’s test process
- Type of SDLC and project methodologies
- Test levels and test types considered
- Product risks and project risks
- Business domain
- Contractual and regulatory requirements
- Operational limits: budget & schedules
- Complexity of domain
- Test policy in organization
- Required internal and external norms/standards
TEST PLANNING - activities
- defining test objectives
- identifying the test activities needed to fulfill the project’s mission and meet the test objectives
- Defining an approach to achieving test objectives within the limits set by the context
- Determining appropriate test techniques and test tasks
- Formulating a test execution schedule
- Defining metrics
TEST MONITORING
Continuous comparison of actual and planned test progress using metrics specifically defined for the purpose in the test plan
TEST CONTROL
Proactively taking the actions necessary to achieve the objectives set in the test plan (taking into account its possible updates)
Evaluation of exit criteria from the test plan
(DoD - Definition of Done)
Can include:
- checking the test results and test logs against specified coverage criteria
- estimating the quality level of a component or a system, based on test results and test logs
- determining whether further tests are necessary
Informing stakeholders about the progress of the test plan - writing test progress reports
TEST ANALYSIS
- Looking at the test basis
- analysing it to identify testable features
- defining the associated test conditions
- determining "what to test"
General test objectives are transformed into specific test conditions (in terms of measurable coverage criteria)
TEST BASIS
- any documentation or info that describes how the software should work
- serves as foundation or reference for designing and executing test cases
COMMON EXAMPLES OF TEST BASIS
- requirements specification
- design specification
- use cases
- user stories
- source code
- business rules
TEST ANALYSIS VERIFIES THAT REQUIREMENTS ARE:
- consistent
- correctly expressed
- complete
- testable
- ready for starting developing the software (DoR - Definition of Ready)
- don’t need further grooming, can be used as a source for estimation
- properly reflect the need of the customers, users, and other stakeholders
TYPICAL TEST ANALYSIS ACTIVITIES:
- Familiarising with the test basis - definition of desired functional and non-functional behaviour of a component or system
- Analysis of design and implementation information (e.g. diagrams/documents describing the system or software architecture, control flow diagrams, UML diagrams, entity relationship diagrams, interface specifications) - artifacts that define the structure of the system
- Analysis of the implementation of the component or a system itself: code, metadata, database queries, and interfaces
- Analysis of risk analysis reports (functional, nonfunctional and structural aspects of a component or system)
- Assessing testability of the test basis to identify common types of defects: ambiguities, omissions, inconsistencies, contradictions, redundant instructions
- Identifying the features and feature sets to be tested
- Defining test conditions for individual features and prioritizing them based on test basis analysis, taking into account different parameters (functional, nonfunctional, structural) as well as business, technical and risk factors
- Creating bidirectional traceability between test basis elements and their associated test conditions
TEST MODELS
Test conditions expressed as formal models
E.g. state transition diagrams, decision tables, control flow diagrams
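One such formal model, a decision table, can be sketched in code. The discount rule and its conditions below are invented for illustration:

```python
# Sketch of a decision table as a test model. The discount rule and its
# conditions (membership, order size) are invented for illustration.
# Each row: (is_member, order_over_100) -> discount percent
decision_table = {
    (True,  True):  15,
    (True,  False): 10,
    (False, True):  5,
    (False, False): 0,
}

def discount(is_member: bool, order_over_100: bool) -> int:
    # each combination of conditions is a test condition to cover
    return decision_table[(is_member, order_over_100)]

print(discount(True, False))  # 10
```

Deriving one test case per table row gives full coverage of the modelled condition combinations.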
TEST DESIGN
Transformation of test conditions into test cases, collections of test cases, and other testware
"How to test"
- identifying test coverage items and using testing techniques
- creating guidelines (based on test coverage items, TCI) for determining test case inputs
- defining test data - e.g. identifying values for boundary value analysis
- sometimes done along with test implementation
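The boundary value analysis mentioned above can be sketched as follows, assuming a hypothetical accepted input range of [1, 100] and the two-value variant of the technique:

```python
# Illustrative boundary value analysis: for an input accepted in the
# range [low, high], the two-value variant picks each boundary and its
# nearest invalid neighbour. The range [1, 100] is an assumption.
def boundary_values(low: int, high: int) -> list:
    return [low - 1, low, high, high + 1]

print(boundary_values(1, 100))  # [0, 1, 100, 101]
```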
TEST DESIGN ACTIVITIES
- designing (sets of) high-level test cases and prioritizing them
- identifying necessary test data
- identifying any necessary tools and infrastructure elements
- creating bidirectional traceability between test basis, test conditions, test cases, test procedures (expanding the traceability matrix)
- identifying defects in the test basis
TEST IMPLEMENTATION
Tester creates/finalizes the testware necessary for test execution (transforming high-level test cases into low-level/concrete test cases, assembling test cases into TEST PROCEDURES, creating automated test scripts, acquiring test data, implementing the test environment)
„DO WE HAVE EVERYTHING WE NEED TO RUN THE TESTS?”
TEST IMPLEMENTATION ACTIVITIES:
- If needed, making high-level test cases more concrete by specifying detailed data
- Developing test procedures and prioritizing them
- Creating test suites (based on test procedures) and automated test scripts (if automation is used)
- Organizing test sets into a test execution schedule to ensure that the entire process runs efficiently
- Building a test environment, including - if necessary - mock objects, service virtualization, simulators, and other infrastructure elements, and verifying that it has been configured correctly
- Preparing test data and verifying that it has been correctly loaded into the test environment
- Verifying and updating traceability between the test basis, test conditions, test cases, test procedures, and test sets
TEST EXECUTION - ACTIVITIES
- Registering the identification and version data of test items, test objects, test tools, and other testware
- Performing tests manually or with tools including smoke (simple test to check the correct implementation of basic functionality) or sanity tests
- Comparing actual test results with expected ones
- Analyzing anomalies to determine their likely causes (defects in the code, false positives)
- Reporting defects, based on observed failures
- Logging the test execution results (passed, failed, blocked)
- Repeating the necessary testing activities (confirmation testing, execution of a revised test, regression testing)
- Verifying and updating bidirectional traceability between the test basis and all the testware used
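A minimal sketch of the smoke test mentioned above; the application under test is faked here with a trivial function:

```python
# Minimal sketch of a smoke test: a simple check that basic
# functionality works before deeper testing begins. The application
# under test is faked with a trivial stand-in function.
def app_start():            # stand-in for launching the real application
    return {"status": "ok", "version": "1.2.3"}

def smoke_test():
    state = app_start()
    # only the most basic behaviour is checked at this stage
    assert state["status"] == "ok", "application failed to start"
    return "passed"

print(smoke_test())  # passed
```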
TEST COMPLETION ACTIVITIES
- Handing over the software system for operation
- Completion/cancelling of the project
- Completion of an iteration of an agile project
- Completion of a test level
- Completion of work on the maintenance release
- Checking that all defect reports are closed and creating change requests or Product Backlog items for any unresolved defects
- Identifying and archiving any test cases that may be useful in the future
- Handing over testware to the operation team, other project teams etc
- Bringing the test environment to an agreed state
- Analyzing completed test activities to identify lessons learned and identify improvements for future iterations, releases or projects
- Creating a report on the completion of testing and distributing it to stakeholders
CONTEXTUAL FACTORS IN TESTING
- Stakeholders (needs, expectations, requirements, business ones, willingness to cooperate with the test team)
- Team members (skills, knowledge, level of experience, availability, training needs, atmosphere)
- Business domain (identified product risks, market needs, specific legal conditions)
- Technical factors (project architecture, technology used)
- Project constraints (project scope, time, available budget, resources, project risks)
- Organisational factors (organizational structure, existing policies, test policies, practices used)
- Software development life cycle (engineering practices, development methods)
- Tools (availability, usability, compliance)
- Policies (data, privacy, cookies)
WHAT CONTEXTUAL FACTORS CAN INFLUENCE
- Test strategy
- Test techniques
- Degree of automation
- Required coverage level for the requirements and identified risks
- Level of detail and type of test documentation to be developed
- Level of detail of test progress reporting
- Level of detail of defect reporting
TESTWARE
Work products associated with testing. Sometimes managed using test management tools and defect management tools
TEST PLANNING WORK PRODUCTS
- Test plan
info based on the test basis; all other test work products will be linked to it via bidirectional traceability; contains the definition of EXIT CRITERIA (DoD); everything can be verified at any level (during monitoring and control)
- Risk register
risks identified by the team, with info about probability, impact, and how they can be mitigated
- Entry criteria and exit criteria
TEST MONITORING AND TEST CONTROL WORK PRODUCTS
- Test progress reports
created on an ongoing basis or at regular intervals; info about project management issues, completed tasks, resource allocation and consumption
- Documentation of control directives
- Risk information
TEST ANALYSIS WORK PRODUCTS
- Defined (prioritized) test conditions
- Acceptance criteria
- Defect Reports in the test basis (if not fixed directly)
TEST DESIGN WORK PRODUCTS
- HIGH-LEVEL (LOGICAL) TEST CASES
those that do not include specific input data values and expected results
they can be reused many times with different data, documenting the scope of the test case
there should be bidirectional traceability between a test case and the test condition it covers
- COVERAGE ITEMS
- TEST DATA REQUIREMENTS
- TEST ENVIRONMENT DESIGN
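The relationship between one high-level (logical) test case and its concrete instantiations can be sketched as follows; the transfer rule and all values are invented:

```python
# Illustrative only: one high-level (logical) test case reused with
# different concrete data. The transfer rule and values are invented.
high_level = "a transfer within the daily limit is accepted, above it rejected"

def make_concrete(amount, expected):
    # a low-level (concrete) test case binds specific inputs and
    # expected results to the logical test case
    return {"case": high_level, "input": amount, "expected": expected}

concrete_cases = [
    make_concrete(50, "accepted"),     # well inside the assumed limit
    make_concrete(5000, "rejected"),   # above the assumed daily limit
]
print(len(concrete_cases))  # 2
```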
TEST IMPLEMENTATION WORK PRODUCTS
- Low level (concrete) test cases
- Test procedures (including the order in which they are executed)
- Automated test scripts
- Test sets
- Test data
assigning specific values to the test data, along with guidelines on how to use it
- Test execution schedule
- Elements of the test environment, e.g.:
a. mock objects (e.g. stubs, drivers)
b. simulators
c. service virtualisation
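A hedged example of a mock object using Python's standard library: a made-up payment service is replaced so the unit under test can run without the real dependency:

```python
# Mock object example via the standard library. The payment service and
# the checkout function are invented for illustration.
from unittest.mock import Mock

def checkout(cart_total, payment_service):
    # unit under test: depends on an external payment service
    return "confirmed" if payment_service.charge(cart_total) else "declined"

payment_stub = Mock()
payment_stub.charge.return_value = True   # canned answer replacing the real call

print(checkout(42.0, payment_stub))       # confirmed
payment_stub.charge.assert_called_once_with(42.0)  # verify the interaction
```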
TEST EXECUTION WORK PRODUCTS
- TEST LOGS
- DOCUMENTATION OF THE STATUS OF TEST PROCEDURES
- DEFECT REPORTS
- DOCUMENTATION INDICATING WHAT WAS USED IN TESTING (e.g. test objects, test tools and testware)
TEST COMPLETION WORK PRODUCTS
- TEST COMPLETION REPORT
detailed info on the progress of the testing process to date, a summary of the results of test execution, info on deviations from the plan and corrective actions
- ACTION ITEMS TO IMPROVE SUBSEQUENT PROJECTS OR ITERATIONS (e.g. retrospective action items transformed into Product Backlog items for future iterations)
- CHANGE REQUESTS (e.g. as elements of a product backlog)
TRACEABILITY BETWEEN THE TEST BASIS AND TESTING WORK PRODUCTS - IT ENABLES
- Evaluation of test coverage
- Analyzing the impact of change
- Conducting test audits
- Meeting criteria related to IT management
- Creating easy-to-understand test status reports and summary test completion reports
- Presenting the status of test basis elements (requirements for which tests have been passed, failed, or are waiting to be executed)
- Providing stakeholders with info about technical issues
- Providing the information needed to assess product quality, process capabilities, and project progress against business objectives
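Bidirectional traceability between the test basis and test cases can be sketched as two mappings; the requirement and test case IDs below are made up:

```python
# Illustrative bidirectional traceability between requirements (test
# basis) and test cases. All IDs are made up.
req_to_tests = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],               # not yet covered
}

# derive the reverse direction so traceability works both ways
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

uncovered = [r for r, t in req_to_tests.items() if not t]
print(uncovered)             # ['REQ-3']  -> gap in test coverage
print(test_to_reqs["TC-1"])  # ['REQ-1']  -> impact analysis for a change
```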
What are the two fundamental roles in testing
- A test management role (managerial)
- A testing role (technical)
Test management role
Responsible for implementing the test process, organizing the work of the test team, directing test activities.
Activities related to:
- test planning
- test monitoring and control
- test completion
Testing role
Engineering aspect of testing
Mainly focus on:
- test analysis
- test design
- test implementation
- test execution
TASKS OF TEST MANAGER
- developing or reviewing test strategies and test policies
- CONTEXT-SENSITIVE TEST PROJECT PLANNING:
- creating and updating test plans
- iteration and release planning in agile projects
- choosing test approach
- defining entry criteria and exit criteria
- introducing appropriate metrics for measuring the test progress and assessing the quality of testing and the product
- defining test levels and test cycles
- estimating the time, effort and cost of testing
- test prioritization
- risk management
- monitoring test results, checking the status of exit criteria (DoD), performing test control (e.g. adjusting plans according to test results and progress)
- PROGRESS SUPERVISION
- configuration management
- defect management
- resource acquisition
- coordinating test strategy and test plan with project managers and stakeholders
- presenting the testers’ point of view
- initiating processes for test analysis, test design, test implementation, test execution
- reporting test progress, creating test completion reports
- supporting the team in the use of tools to implement the testing process (e.g. securing funds for tools, purchasing licenses, controlling implementation of the tool)
- deciding on the implementation of test environments
- promoting testers and test team
- developing testing skills, performance evaluation, coaching
TASKS OF A TESTER
- Reviewing test plans and participating in their development
- Co-authoring the requirements (user stories) while performing collaborative user story writing
- Deriving testable acceptance criteria for each Product Backlog item
- Analysing, reviewing, and evaluating the test basis (i.e. requirements, user stories, acceptance criteria, specifications, and models) for testability
- Identifying and documenting test conditions and recording the relationship between test cases, test conditions, and test basis
- Designing, configuring, and verifying test environments
- Designing and implementing test cases, test procedures, and test scripts
- Preparing and acquiring test data
- Co-creating the test execution schedule
- Performing tests, evaluating results, and documenting deviations from expected results
- Using appropriate tools to streamline the testing process (e.g. test automation tools)
- Evaluating and measuring nonfunctional characteristics of the software
- Collaborating with the team
- Using - if necessary - tools for test management
- Test automation
Who can perform testing tasks depending on the level
- At the component and component integration test levels - usually developers
- At the system testing level - testers, members of an independent test team
- At the acceptance test level - business experts and users
- At the operational acceptance testing level - usually system operators and system administrators
Characteristics of a good tester
- TESTING KNOWLEDGE (increase effectiveness of testing, e.g using testing techniques)
- THOROUGHNESS, CAREFULNESS, CURIOSITY, ATTENTION TO DETAIL, BEING METHODICAL (to identify different types of defects)
- GOOD COMMUNICATION SKILLS, ACTIVE LISTENING, BEING A TEAM PLAYER (effective interaction with all stakeholders, communicating info to others, be understood, be able to report and discuss defects)
- ANALYTICAL THINKING, CRITICAL THINKING, CREATIVITY
- TECHNICAL KNOWLEDGE
- DOMAIN KNOWLEDGE (to understand and communicate with end users and business representatives)
PSYCHOLOGICAL ASPECTS IN TESTING
- UNWILLINGNESS OF DEVELOPERS TO LISTEN, FEAR OF CRITICISM
- CONFIRMATION BIAS
hard to accept information that contradicts one’s beliefs
selective memory, selective hypothesis testing
- POOR COMMUNICATION AND OVERCONFIDENCE - ON THE PART OF BOTH TESTERS AND DEVELOPERS