Pdf.3 Flashcards
IMPLEMENT TESTS
Goal: To automate test procedures as much as possible.
Running test cases can be very tedious and time consuming!
– There are many possible input values and system states to test.
A _________ is a program that automates one or several test procedures or parts of them.
There are tools available to help write test components that:
– ________ the actions for a test case as the user performs the actions.
– _______ the recorded script to accept a variety of input values.
☞ _____________ can be used to store the required input data and the results of each test.
test component
record
parameterize
Spreadsheets and/or database applications
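The recorded-and-parameterized idea above can be sketched as a small data-driven test. This is a minimal illustration, not a specific tool: `clamp` is a hypothetical component under test, and `TEST_DATA` stands in for rows that would normally live in a spreadsheet or database table.

```python
import csv
import io

# Hypothetical component under test (an assumption for illustration):
# clamp a value into the closed range [lo, hi].
def clamp(value, lo, hi):
    return max(lo, min(value, hi))

# Input data and expected results that would normally be stored in a
# spreadsheet or database table, one row per recorded test case.
TEST_DATA = """value,lo,hi,expected
5,0,10,5
-3,0,10,0
42,0,10,10
"""

def run_data_driven_tests(data):
    """Replay every recorded, parameterized test case; return the failures."""
    failures = []
    for row in csv.DictReader(io.StringIO(data)):
        actual = clamp(int(row["value"]), int(row["lo"]), int(row["hi"]))
        if actual != int(row["expected"]):
            failures.append((row, actual))
    return failures

print(run_data_driven_tests(TEST_DATA))  # → [] when all cases pass
```

Adding a test case is then just adding a row of data, which is why spreadsheets and databases work well as test-case stores.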
_________________
– system requirements specification
– system analysis specification
– system design specification
– source code
_____________
– test cases
– test procedures
– testing tools
– expected result
Software configuration
Test configuration
PERFORM TESTS: TESTING STRATEGY
A ___________ specifies which testing techniques (white
box, black box, etc.) are appropriate at which point in time.
A testing strategy integrates test cases into a well-planned series of
steps that test a component/subsystem by specifying:
– ____ are the steps that need to be conducted to test a component?
– _______ are the steps planned and undertaken?
– _________ effort, time and resources will be required to do the steps?
Testing often occurs when deadline pressures are _________.
Progress must be _______ and problems identified as ______ as possible.
testing strategy
What
When
How much
most severe
measurable
early
We test the system from the __________
We develop the system from the ________
inside out.
outside in.
A TESTING STRATEGY (DEVELOPERS)
___________ (using White Box and Black Box test cases)
– Verifies that each component/subsystem functions correctly.
Done by software engineer who develops the code.
____________ (using White Box and Black Box test cases)
– Verifies that the components/subsystems interact correctly.
Done by software engineer and/or independent test group
(integration/system tester).
_________ (using Black Box test cases)
– Verifies that the system functions correctly as a whole.
Done by independent test group (integration/system tester).
Unit Testing
Integration Testing
System Testing
A TESTING STRATEGY (USERS)
Acceptance Testing (using Black Box test cases)
– Validates the software against its requirements.
Done by client/user
UNIT TESTING
Main emphasis
is on ___________
_________ a component that calls the component to be tested
________ a component called by the component to be tested
white box techniques.
driver
stub
Types of test cases:
interface (input/output)
independent paths
boundary conditions
local data structures
error-handling paths
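The driver/stub pair defined above can be sketched concretely. All names here are illustrative assumptions: a hypothetical `shipping_total` is the component under test, a stub replaces the not-yet-available component it calls, and a driver plays the role of its not-yet-written caller.

```python
# Component under test: computes a shipping total. It calls a lower-level
# component (tax lookup) that we replace with a stub, and it is invoked by
# a driver, since no real caller exists yet. All names are illustrative.

def stub_tax_rate(region):
    """Stub: stands in for the not-yet-integrated tax component."""
    return 0.10  # fixed, predictable answer

def shipping_total(subtotal, region, tax_rate=stub_tax_rate):
    """Component under test."""
    return round(subtotal * (1 + tax_rate(region)), 2)

def driver():
    """Driver: exercises the component under test with chosen inputs."""
    cases = [(100.0, "EU", 110.0), (0.0, "US", 0.0)]
    for subtotal, region, expected in cases:
        assert shipping_total(subtotal, region) == expected
    return "all unit tests passed"

print(driver())  # prints "all unit tests passed"
```

Because the stub returns a fixed, predictable value, any failure points at the component under test rather than at its collaborators.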
UNIT TESTING: OBJECT-ORIENTED TESTING
What to unit test?
– A unit test has to be at least a _______ (i.e., we need to check an
object’s behaviour), but object state makes testing difficult.
☞A class must be tested in ______ it can ever enter
(i.e., use state-based testing).
How to deal with inheritance and polymorphism?
– If a subclass overrides methods of an already tested superclass,
what needs to be tested — only the overridden methods?
☞No! ___ of a subclass’s methods need to be tested again
due to dynamic binding and substitutability.
class
every state
All
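State-based testing can be sketched with a minimal example. The `Connection` class and its CLOSED/OPEN states are assumptions for illustration; the point is that the test drives the object into every state it can enter and checks its behaviour in each one.

```python
# Minimal sketch of state-based testing on a hypothetical Connection class
# with states CLOSED -> OPEN -> CLOSED.

class Connection:
    def __init__(self):
        self.state = "CLOSED"

    def open(self):
        if self.state != "CLOSED":
            raise RuntimeError("already open")
        self.state = "OPEN"

    def close(self):
        if self.state != "OPEN":
            raise RuntimeError("not open")
        self.state = "CLOSED"

def test_every_state():
    c = Connection()
    assert c.state == "CLOSED"        # initial state
    c.open()
    assert c.state == "OPEN"          # after a valid transition
    try:
        c.open()                      # invalid transition while OPEN
        assert False, "expected RuntimeError"
    except RuntimeError:
        pass
    c.close()
    assert c.state == "CLOSED"        # back to the initial state

test_every_state()
```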
UNIT TESTING: OBJECT-ORIENTED TESTING
How to deal with encapsulation?
– ________ hides what is inside an object → hard to know its
state.
☞We need to provide a method, for testing only,
that reports all of an object’s state.
Example: Testing a stack (push, pop, peek)
– Popping an empty stack should fail in some detectable way.
– Peeking at an empty stack should show nothing.
– Push an element onto the stack, then peek at the stack, expecting the same element you pushed.
– Push an element onto the stack, then pop it, expecting the same element you pushed; then peek at the stack, expecting it to be empty.
Encapsulation
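The stack scenario above can be written as a test, including a testing-only method that reports the object's hidden state. This is a minimal sketch; the `Stack` class itself is illustrative.

```python
# The stack scenario, with a test-only inspection method that works
# around encapsulation by reporting the object's full hidden state.

class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        return self._items[-1] if self._items else None

    def _state_for_testing(self):
        """Testing-only: report all of the object's state."""
        return list(self._items)

def test_stack():
    s = Stack()
    try:
        s.pop()                            # popping an empty stack must fail
        assert False, "expected IndexError"
    except IndexError:
        pass
    assert s.peek() is None                # peeking an empty stack shows nothing
    s.push(42)
    assert s.peek() == 42                  # peek shows what was pushed
    assert s.pop() == 42                   # pop returns what was pushed...
    assert s._state_for_testing() == []    # ...and the stack is empty again

test_stack()
```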
INTEGRATION TESTING
Interaction errors cannot be uncovered by _______
unit testing
INTEGRATION TESTING: ___________
1. The top subsystem is tested with stubs.
2. The stubs are replaced one at a
time “depth-first” or “breadth-first”.
3. As new subsystems are integrated, some subset
of previous tests is re-run (regression testing).
Pro: Early testing and error detection of user interface components;
can demonstrate a complete function of the system early.
Con: Cannot do significant low-level processing until late in the
testing; need to write and test stubs.
TOP DOWN
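The top-down steps can be sketched in miniature. The subsystems here (a report generator on top, an order-fetching layer below) are assumptions for illustration: step 1 tests the top with a stub, step 2 swaps in the real lower subsystem and re-runs the test as a regression check.

```python
# Top-down sketch: the top subsystem is tested first with its lower-level
# subsystem replaced by a stub; the stub is then swapped for the real
# component and the tests are re-run. Names are illustrative.

def stub_fetch_orders(user):
    return [("book", 10.0)]          # canned data instead of the real data layer

def real_fetch_orders(user):
    # the real lower-level subsystem, integrated later
    orders = {"alice": [("book", 10.0), ("pen", 2.0)]}
    return orders.get(user, [])

def report(user, fetch_orders):
    """Top subsystem: formats a total; the lower layer is injected."""
    total = sum(price for _, price in fetch_orders(user))
    return f"{user}: {total:.2f}"

# Step 1: test the top subsystem with a stub.
assert report("alice", stub_fetch_orders) == "alice: 10.00"
# Step 2: replace the stub with the real subsystem and re-run (regression).
assert report("alice", real_fetch_orders) == "alice: 12.00"
print("top-down integration steps passed")
```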
INTEGRATION TESTING: ____________
– Subsystems are grouped into builds and integrated.
– Drivers are replaced one at a time “depth-first”.
Pro: Interaction faults are more easily found; easier test case
design and no need for stubs.
Con: User interface components are tested last.
BOTTOM UP
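The complementary bottom-up direction can be sketched the same way. The subsystems here are illustrative assumptions: a driver exercises the lowest-level component first; once it passes, that component is integrated into the next build up, with no stubs needed.

```python
# Bottom-up sketch: the lowest-level subsystem is tested first via a
# driver; once it passes, it is integrated into the next build up.

def parse_price(text):
    """Low-level subsystem: parse '12.50 EUR' into a float."""
    return float(text.split()[0])

def price_driver():
    """Driver: temporary caller exercising the low-level subsystem."""
    assert parse_price("12.50 EUR") == 12.5
    assert parse_price("0 EUR") == 0.0
    return True

def invoice_total(lines):
    """Next build up: integrates the already-tested low-level subsystem."""
    return sum(parse_price(line) for line in lines)

assert price_driver()                                      # step 1: driver-tested
assert invoice_total(["12.50 EUR", "2.50 EUR"]) == 15.0    # step 2: integrated
```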
INTEGRATION TESTING: ___________
– Top-level subsystems are tested with stubs.
– Lower-level subsystems are grouped into builds and tested with drivers.
Pro: Can do parallel testing; can significantly shorten total testing time.
Con: Need many drivers and stubs.
SANDWICH
INTEGRATION TESTING: CRITICAL SUBSYSTEMS
Critical subsystems should be tested as _________!
Critical subsystems are those that:
1. have ________.
2. address several __________ (i.e., they implement
several use cases).
3. have a high ________.
4. are _______ or error prone → high cyclomatic complexity.
5. have ___________.
Regression testing is required for critical subsystems!
early as possible
high risk
software requirements
level of control
complex
specific performance requirements
SYSTEM TESTING
__________ is testing of the entire system to be sure the system functions properly when integrated.
System testing
SYSTEM TESTING
Some specific types of system tests:
__________ → The developers verify that all user functions work as
specified in the system requirements specification.
_________ → The developers verify that the design goals
(nonfunctional requirements) are met.
_________ → A selected group of end users verifies common
functionality in the target environment.
___________ → The client/user verifies usability and validates
functional and nonfunctional requirements against the
system requirements specification.
___________ → The client/user verifies usability and validates
functional and nonfunctional requirements in real use.
Functional
Performance
Pilot
Acceptance
Installation
SYSTEM TESTING: PERFORMANCE TESTING
__________ Verify that the system can continue functioning when
confronted with many simultaneous requests.
☞ How high can we go? Do we fail-soft or collapse?
__________ Verify that the system can handle large amounts of
data, high complexity algorithms, or high disk
fragmentation.
_________ Verify that access protection mechanisms work.
☞ Make penetration cost more than value of entry.
__________ Verify that the system meets timing constraints.
☞ Usually for real-time and embedded systems.
_________ Verify that the system can recover when forced to fail in
various ways.
☞ Database recovery is particularly important.
stress
volume
security
timing
recovery
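A timing test of the kind described above can be sketched as follows. The operation and the one-second budget are assumptions for illustration; a real timing test would run against the actual system in its target environment.

```python
import time

# Minimal timing-test sketch: verify that a hypothetical operation
# completes within a response-time budget.

def register_course(student, course):
    # stand-in for the real operation under test
    return (student, course)

def meets_timing_constraint(op, budget_seconds):
    """Run the operation once and check it finished within the budget."""
    start = time.perf_counter()
    op()
    return (time.perf_counter() - start) <= budget_seconds

assert meets_timing_constraint(lambda: register_course("s1", "cs101"), 1.0)
```

A single run is only a sketch; real timing tests repeat the operation many times and check the worst case, since one fast run does not establish that the constraint always holds.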
SYSTEM TESTING: PILOT TESTING
________: A test in a controlled environment so that developers can observe users.
_________: A test in a real environment so that bugs are uncovered from regular usage patterns.
Alpha test
Beta test
SYSTEM TESTING: ACCEPTANCE TESTING
An __________ demonstrates to the client that a function or constraint of the system is fully operational.
acceptance test
SYSTEM TESTING: ACCEPTANCE TESTING
____________ – Does the system provide the required functionality?
☞ ASU: Show that a professor can select a course offering to teach.
____________ – Do interfaces perform desired functions (accept
desired input/provide desired output) and follow
required design standards?
☞ ASU: Show that all data for course registration can be input.
___________ – Are the stored data correct (i.e., in the required
format and obey the required constraints)?
☞ ASU: Show that all information of a student’s course schedule is correct.
☞ ASU: Show that a student cannot register for more than four courses.
____________ – Does the system meet specified performance criteria?
☞ ASU: Show the response time to register for a course is less than 1 second.
Functional validity
Interface validity
Information content
Performance
SYSTEM TESTING:
DERIVING ACCEPTANCE TESTS
________ written requirements in a concise, precise and testable
way by:
– grouping related requirements.
– removing any requirements that cannot be tested.
________ any additional requirements gathered from users by:
– looking at use cases for functional and interface requirements.
– looking at domain model for information content requirements.
– looking at nonfunctional requirements for performance requirements.
_________, for each requirement, an evaluation scenario that
will demonstrate to the client that the requirement is met.
(Since most evaluation scenarios depend on the user interface,
they cannot be completed until the user interface is designed.)
Restate
Add
Construct
EVALUATE TESTS
The ________ needs to evaluate the results of the testing by:
– comparing the results with the goals outlined in the test plan.
– preparing metrics to determine the current quality of the software.
☞ How do we know when to stop testing?
We can consider the system’s:
1. _____________: the % of test cases that have
been run and the % of code that has been tested.
2. _________ : based on trends in testing error rate when compared to
previous projects.
test engineer
testing completeness/coverage
reliability
EVALUATE TESTS: TESTING ERROR RATE
Possible outcomes
* perform additional tests to find more ________
* relax the test criteria, if they were set too _________
* deliver acceptable parts of the system;
continue revising and testing unacceptable parts
Past testing history can be used to ___________.
We compare this with the actual failure rate for this project.
defects
high
plot expected failure rate
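The comparison of expected and actual failure rates can be sketched numerically. The rates below are invented illustrative numbers, not data from any real project; the point is simply flagging the periods where the project is doing worse than history predicts.

```python
# Sketch: compare this project's actual testing error rate against an
# expected rate derived from past projects. Numbers are illustrative.

expected_rate = [9, 7, 5, 3, 2]   # expected defects found per test week (history)
actual_rate   = [10, 8, 7, 6, 6]  # defects found per test week, this project

def weeks_above_expected(expected, actual):
    """Flag the weeks where the actual failure rate exceeds the expectation."""
    return [week for week, (e, a) in enumerate(zip(expected, actual), 1) if a > e]

print(weeks_above_expected(expected_rate, actual_rate))  # → [1, 2, 3, 4, 5]
```

A persistently higher-than-expected rate suggests more testing (or revision) is needed before delivery; a rate at or below the expectation supports stopping.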