Exam Question Flashcards
Explain the static examination type “Review”
A review is a process or meeting (+1) during which a work product or set of work products is presented (0.5) to project personnel, managers, users, customers, or other interested parties (+1) for comment or approval
Control flow based coverage
● Statement coverage
● Branch coverage
● Decision coverage
● Path coverage
Connection: Error
● Somebody makes a mistake.
● As a consequence, the software contains a defect or fault.
● When executing the program containing the defect, an error may occur.
● An error may cause the failure of the whole system.
Kinds of Error
● Functional Errors
● Configuration Errors
● Interface Errors
● Documentation Errors
Kinds of metric
● Product metric
● Process metric
● Project metric
General Model Theory - Properties
- Mapping property
- Reduction property
- Pragmatic property
Definition: Metric
A metric is the objective attribution of a number (or symbol) to an entity to characterize a specific attribute.
Sorting the scales
● Nominal Scale
Pure classification of values
● Ordinal Scale
Values are totally ordered
● Interval Scale
Captures information about the size of the intervals that separate the classes
● Ratio Scale
Values are totally ordered and additive
● Absolute Scale
Values are absolute entities
Kinds of Static Examination of Software
● Measuring
● Conformance Check
● Review
● Inspection
Metric: Granularity
● Base metric (simple, direct metric)
- Easy to measure, but hard to interpret
● Derived metric (pseudo, indirect metric)
- There is no direct metric, or direct measurement is very expensive
Classification: Measurement Procedure of a Metric
● Objective (precise) metric
- Measurement is done according to an exact procedure
● Subjective (estimated) metric
- The value is estimated, because it cannot be calculated.
● Calculated metric
- Values are determined based on a formula
Classification: Purpose of a Metric
● Prescriptive metric
- Characterizes the expected state
● Descriptive metric
- Characterizes the actual state
● Prognostic metric
- Creates a forecast of a value that can only be determined later
● Diagnostic metric
- Characterizes the actual state (e.g., the cyclomatic number)
Classification: Source of a Metric
● Model-based metric
- Combines directly measurable or estimated attributes according to a given formula
● Empirical metric
- Based on observations in the real world, without an underlying model.
Classification: Manipulation of a Metric
● Robust metric
- Returns results that cannot be manipulated even if the metric is known
● Undermineable metric
- Returns authentic results only if the application of the metric is not known
Quality Properties of Metrics (e.g., of a readability metric)
● Plausibility
● Relevance
● Profitability
● Comparability & Precision
● Availability
Goal-Question-Metric (GQM) - Advantages:
● Only relevant attributes are measured
● Interpretations are defined in the context of the respective goals
When is Mean Time Between Failures useful?
● errors are not corrected
● the kind and intensity of the usage remain the same
● the users neither provoke nor avoid the errors
Quality Model
● The quality model is the cornerstone of a product or process quality evaluation system.
● Defines which qualities are important and which are not important
● often forming a tree or graph
State machine
● A behavior model, consisting of a finite number of states
Definition: State
Condition or situation in the life of a system during which it:
● satisfies some condition
● performs some activity, or
● waits for some event
Events
Specification of a significant stimulus that has a location in time and space and can trigger a state transition:
● Signal event
● Call event
● Change Event
● Time event
Transition
Relationship between two states
● indicating that the system in the source state will enter the target state when the triggering event occurs (see the sketch below)
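A minimal sketch (hypothetical states and events, not from the lecture material) of a state machine written as a transition table: states, events, and the transitions between them.

```python
# A tiny finite state machine for a hypothetical order:
# the transition relation maps (source state, event) -> target state.
TRANSITIONS = {
    ("Created", "pay"):    "Paid",       # an event triggers the transition
    ("Paid",    "ship"):   "Shipped",
    ("Created", "cancel"): "Cancelled",
}

def step(state: str, event: str) -> str:
    """Return the target state, or stay in the source state if no transition fires."""
    return TRANSITIONS.get((state, event), state)

state = "Created"
for event in ["pay", "ship"]:
    state = step(state, event)
print(state)  # Shipped
```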
Interaction Rule
Most failures are induced by single parameter faults or by interaction of two parameters, with progressively fewer failures induced by interactions between three or more parameters
Combinatorial Testing
A black-box test selection technique in which test cases are generated to execute specific combinations of values of several input parameters
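A small illustration with hypothetical parameters of the combinations such a technique targets. The sketch only enumerates the value pairs a pairwise (2-wise) suite must cover; a real tool would then build a small covering array over these pairs instead of testing all combinations.

```python
# Enumerate the value pairs a pairwise test suite has to cover
# for three hypothetical parameters.
from itertools import combinations, product

parameters = {
    "browser": ["Firefox", "Chrome"],
    "os":      ["Linux", "Windows", "macOS"],
    "locale":  ["de", "en"],
}

required_pairs = []
for (p1, values1), (p2, values2) in combinations(parameters.items(), 2):
    required_pairs.extend(product(values1, values2))

# 16 pairs must each appear in at least one test case; 6 carefully chosen test
# cases cover them all here, versus 2 * 3 * 2 = 12 exhaustive combinations.
print(len(required_pairs))
```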
Conflict Handling Strategies
A conflict is an irrelevant interaction of two or more parameter values.
Conflict handling strategies to exclude irrelevant interactions:
● Abstract Parameter
● Sub-Models
● Replacement
● Avoidance
Test First Development
- Write a test for a new feature
- Run all tests and see the new one fail
- Write code for the new feature
- Run the tests
- Refactor code
TDD = Test First Development + Refactoring
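A minimal test-first sketch (hypothetical feature and names, using Python's standard unittest module): the test class is written first and fails until the function above it is implemented.

```python
import unittest

# Step 3 of the cycle: the feature code, written only after the test existed.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1 of the cycle: the test for the new feature, written before fizzbuzz().
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three_and_five(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()  # steps 2 and 4: run the tests (red, then green), then refactor
```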
Test First Development - TDD Benefits
● Instant feedback
● Better development practices
● Improve quality assurance
Test First Development - TDD Limitation
● Hard to apply
● TDD needs an upfront design
● Difficult to write test cases for hard-to-test code
Path coverage in the activity graph
● Define (at least) one test case for each path through the activity graph
Benefits of Use Cases
● Use cases are considered to systematically design test cases
● Formalization (transformation to a state/ activity diagram) discovers missing information and errors early
● Drawbacks of natural language specifications are weakened
Test Driven Development
Test-driven development (TDD) is a software development approach in which a test is written before writing the code. Once the new code passes the test, it is refactored to an acceptable standard.
Refactoring (TDD)
Refactoring consists of improving the internal structure of an existing program’s source code, while preserving its external behavior.
Different types of steps provided by Gherkin
● Given: describe the initial context of the system
● When: describe an event or an action
● Then: describe an expected outcome or result
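An illustrative, hypothetical Gherkin scenario showing the three step types; it is wrapped in a Python string so the snippet stays in the same language as the other sketches.

```python
# Hypothetical feature file content illustrating Given / When / Then.
WITHDRAWAL_FEATURE = """
Feature: Cash withdrawal

  Scenario: Withdraw within the balance
    Given an account with a balance of 100 EUR
    When the user withdraws 30 EUR
    Then the remaining balance is 70 EUR
"""
# Given -> initial context, When -> event or action, Then -> expected outcome.
print(WITHDRAWAL_FEATURE)
```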
Explain why “Exhaustive Testing” is only feasible in rare cases
● argue that the number of possible test cases is far too large
● give an example or an in-depth argument for why there are too many test cases
Explain the concept of “Test Case Selection Criteria” - TCSC
A TCSC defines, for a given program P implementing its specification, a set of conditions that the set of test cases has to meet
Practical criterion of Path Coverage
● limiting the number of traversals of loops
● avoids the path explosion caused by loops
Test case design of Boundary Interior Coverage
● No loop execution
● One loop execution (boundary)
● At least two loop execution (interior)
Control-Flow-Based Coverage
● Statement Coverage
- weak criterion but not easy to achieve
● Branch coverage
- minimal criterion in unit testing - inadequate for compound decisions
● Decision coverage
- expensive but intensive
● Path coverage
- more a theoretical option
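A small illustration (hypothetical function and test inputs) of how the criteria differ in strength: one input reaches full statement coverage, two inputs are needed for branch coverage, and four for path coverage.

```python
# Hypothetical function with two consecutive decisions -> 2 * 2 = 4 paths.
def classify(x: int, y: int) -> str:
    label = ""
    if x > 0:          # decision 1
        label += "pos"
    if y % 2 == 0:     # decision 2
        label += "-even"
    return label

statement_cov = [(1, 2)]                             # one test executes every statement
branch_cov    = [(1, 2), (-1, 3)]                    # each decision is true and false once
path_cov      = [(1, 2), (1, 3), (-1, 2), (-1, 3)]   # all combinations of branch outcomes
for suite in (statement_cov, branch_cov, path_cov):
    print([classify(x, y) for (x, y) in suite])
```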
Variants of Condition Coverage
● Basic Condition Coverage
● Branch and Condition Coverage
● Modified Condition Decision Coverage
● Compound Condition Coverage
Examples of “linguistic effects”
● incompletely specified verbs, incomplete comparisons, or nominalizations
● requirements that are of lesser value
NLP Classes
● Deletion
● Generalization
● Distortion
Requirements Examination Process
- Examine individual words of a requirement sentence!
- Examine each requirement sentence as a whole!
- Examine the integration of a requirement in the overall specification!
Kinds of Quality Costs
- Defect Prevention Costs
- Examination and Inspection Costs
- Error Costs
When do we stop testing?
1. We stop testing when the planned time is over.
2. We stop testing when the defined test exit criteria are met.
Test Exit Criteria
● Test Case Set
● Relative Defect Cost
Effectiveness of an Examination Strategy - Is influenced by
● effectiveness of the examination activities
● assumptions about when defects can be detected
● the number of examined artifacts
Assessment of the impact of TDD
● Defect Rate
● Productivity
● Test Frequency
● Design
Range Error
● A range error occurs in a certain part q of the input space; there are plenty of test cases (within that range) that discover the error.
● The application of n randomly selected test cases discovers the error with probability p = 1 - (1 - q)^n (see the worked example below).
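A short worked example with illustrative numbers, using the standard detection-probability formula for independent random tests hitting a failure region of relative size q:

```latex
% Probability of detecting a range error that occupies the fraction q of the
% input space with n independent, uniformly random test cases:
\[
  p = 1 - (1 - q)^{n}
\]
% Illustrative numbers: for q = 0.01 and n = 230 random test cases,
% p = 1 - 0.99^{230} \approx 0.90, i.e. roughly a 90% chance of detection.
```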
Why do we measure?
• Define measurable goals: Is the product maintainable and reliable?
• Analyze important cost drivers in detail: Which part of the costs is caused by the system test?
• Quantify important qualitative properties of products or processes: How high is the probability of failure?
• Assess the efficiency of new technologies.
Key steps of Formal Measurements
- Identify relevant attribute for real-world entities
- Identify empirical relations for this attribute
- Identify corresponding numerical relations
- Define a mapping from real-world entities to numbers
- Check that the representational condition holds
What can be measured?
Product qualities
• robustness
• usability
• testability
• efficiency
Process qualities
• duration
• effort
• planning precision
GQM Steps
- Define goals that are relevant for the project
- Derive questions for every goal
- Define metrics to answer the questions
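A small, invented example of the three GQM levels written down as a plain data structure (the goal, questions, and metrics are all hypothetical):

```python
# Goal -> Questions -> Metrics, for one hypothetical measurement goal.
gqm_plan = {
    "goal": "Improve the reliability of the payment module from the user's viewpoint",
    "questions": {
        "How often does the module fail in production?": [
            "failures per week", "mean time between failures"],
        "How quickly are failures fixed?": [
            "mean time to repair", "open failure reports older than 7 days"],
    },
}
for question, metrics in gqm_plan["questions"].items():
    print(question, "->", ", ".join(metrics))
```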
Types of quality models
- Quality prediction model: forecasts quality based on actual or historic measurement values
- Quality definition model: describes the quality characteristics of a product or process
- Quality assessment model: evaluates the quality of a given product or process based on metrics
Used quality models:
- McCall’s Quality Model
- Boehm’s Quality Model
- ISO 25010 (2011)
McCall’s Quality Model – 1977
- Quality perspectives
• the user’s point of view
- Quality factors
• represents a behavioral characteristic of software (manager POV)
- Quality criteria
• attributes of a quality factor related to software development
Boehm’s Quality Model
- Attempts to define software quality by giving a set of attributes
- Also presents a hierarchical quality model structured in high-intermediate-basic level characteristics
• High level attributes:
o As-is Utility: how easy, reliable and efficient is the software
o Maintainability: How easy it is to understand, modify and retest
o Portability: Can I still use it if I change environment?
ISO 9126
• Former standard for the evaluation of the quality of software
• Defines 6 quality factors with sub-factors that overlap minimally:
o Portability, Maintainability, Usability, Reliability, Functionality, Efficiency
• How to measure a quality factor?
o A quality factor has sub factors, which have metrics
o The individual measured metrics are evaluated: if a value is in the acceptable range, it scores x points; if not, 0 points.
o The sum of points for each sub-factor gives the grade for it.
ISO 25010
• Defines 8 quality characteristics with sub-characteristics:
o Portability, Maintainability, Usability, Reliability, Functional suitability, Performance efficiency, Compatibility, Security
• This model is more comprehensive and complete
Maturity + Maturity model
Maturity
• Measurement of the ability of an organization for continuous improvement in a particular discipline
• The higher the maturity, the higher are the chances that errors are avoided
Maturity model
• Assesses qualitatively culture, processes/structures and technology
• represents the maturity in a hierarchical structure
Quality Benchmark Levels
- Complete
- Advanced
- Extended
- Basic
- Rudimentary
Testing
Process of executing a program with the intent of finding errors
Testing Principles
- Tests must be planned!
- Start testing very early!
- Testing is a creative and challenging task!
- Testing requires independence!
- Exhaustive testing is impossible!
- Every test case needs a defined expected result!
- A successful test case detects an unknown error!
- The quality of a test depends on the quality of its test cases
- Avoid throw-away test cases, except if you are developing a throw-away program!
Properties of Test Cases
● representative
● error sensitive
● low redundancy
● economical
Black-Box vs. Glass-Box Test
● Black-box test (functional test)
- based on the specified characteristics
● Glass-box test (structural test)
- based on the internal structure
Black Box
- Specification is the source for defining test cases
- Comprehensive test of specified functionality.
- Concrete implementation is not adequately considered.
- Minimal requirements of glass-box test are not fulfilled by black-box test
Coverage
• Function coverage
- Every specified function is executed at least once.
• Input coverage
- Every input value is used in at least one test case
• Output coverage
- Every output situation is created at least once
Glass Box
• Test cases are selected based on the internal structure
Coverage
• Statement coverage
• Branch coverage
• Path coverage
WMC - Weighted Methods per Class
Total sum of the complexities of the methods of a class (see the sketch below):
WMC(c) = comp(m1) + comp(m2) + ... + comp(mn)
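A worked example with invented complexity values, assuming comp(m) is the cyclomatic complexity of method m (a common choice for this metric):

```python
# Hypothetical class with three methods and their cyclomatic complexities.
method_complexities = {"deposit": 2, "withdraw": 4, "get_balance": 1}
wmc = sum(method_complexities.values())
print(wmc)  # 7 -> WMC of this hypothetical class
```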
QA Measures
- Organizational Measures
- Constructive Measures
- Analytical Measures
Defect seeding vs Defect pooling
Seeding:
● Defects are explicitly seeded.
● The ratio between detected seeded defects and detected not-seeded defects is used to determine the original number of defects
Pooling:
● Detected defects are collected in at least two pools
● Scenario:
- two independent testing teams
- each has its own defect pool
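An illustrative calculation with invented numbers; the pooling estimate uses the common capture-recapture formula, which the card above only implies:

```python
# Illustrative numbers only - a sketch of the two estimation ideas.

# Defect seeding: if testers find the same fraction of seeded and real defects,
# the ratio estimates the total number of real defects.
seeded_total, seeded_found, real_found = 20, 10, 25
estimated_real_defects = real_found * seeded_total / seeded_found   # 50.0

# Defect pooling (capture-recapture): two independent teams, overlapping pools.
found_team1, found_team2, found_by_both = 30, 25, 15
estimated_total_defects = found_team1 * found_team2 / found_by_both  # 50.0
print(estimated_real_defects, estimated_total_defects)
```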
Problems: Defect Seeding
● Defects must be seeded homogeneously.
● Seeded defects must cover all defect types.
● Additional costs are to be expected.
Weak Robust Equivalence Class Test
Each representative value of all equivalence classes needs to be included in at least one test case.
- For valid inputs, we use one value from each valid class
- For invalid inputs, a test case has one invalid value and the remaining values will all be valid.
Strong Robust Equivalence Classes Test
A test case is created for every element of the Cartesian product of the equivalence classes, both valid and invalid
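A small sketch (hypothetical parameters and equivalence classes) contrasting the strong robust and weak robust variants described above:

```python
# Representative value per equivalence class (valid and invalid) of two inputs.
from itertools import product

age     = {"valid_adult": 30, "invalid_low": 10, "invalid_high": 90}
country = {"valid_de": "DE", "invalid_unknown": "XX"}

# Strong robust: Cartesian product of ALL classes, valid and invalid.
strong_robust = list(product(age.values(), country.values()))   # 3 * 2 = 6 test cases

# Weak robust: the valid combination, plus one test per invalid class where the
# other parameter stays valid.
weak_robust = [(age["valid_adult"], country["valid_de"]),
               (age["invalid_low"], country["valid_de"]),
               (age["invalid_high"], country["valid_de"]),
               (age["valid_adult"], country["invalid_unknown"])]
print(len(strong_robust), len(weak_robust))  # 6 4
```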
Replacement Conflict Handling
● Resolve conflicts after test generation
- Clone test cases with conflicts
- Conflicting values are changed arbitrarily
● Clones must preserve coverage criterion!
● Resolve one conflict at a time to ensure termination
Constructive Metric Development Steps
- Identify a relevant quality aspect
- Model the relevant quality aspect
- Determine a scale
- Develop the metric calculation
- Develop measurement instructions
- Apply and improve the metric
Four steps of developing test cases
1. Identify Input Equivalence Classes
2. Identify Output Equivalence Classes
3. Check Function Coverage
4. Define Boundary Values
SCENT steps
1 → transform the use case description into a UML state diagram/activity diagram,
2 → transform the success scenario,
3 → transform the deviations,
4 → apply state transition testing/interpret the action graph
According to ISO 9126, maintainability is influenced by:
analyzability, changeability, stability, testability
Quality Benchmark Levels are determined by assessing
empirical comparison data
Higher maturity means
- Less chance of errors,
- Predictability of dates and costs,
- Increased quality
ISO 25010 consists of how many main quality characteristics?
8
How many levels do exist in CMMI?
5
How many levels exist in QBL?
5
How many rules are there in Cognitive Complexity?
3
R. Martin‘s Package Metrics:
- Has 7 metrics,
- Is used to spot those Software packages which are hard to maintain
The Coupling metric/s used are:
- Efferent Coupling (outgoing dependencies),
- Afferent Coupling (ingoing dependencies)
Metric Suite by Chidamber & Kemerer:
- States the quality of object-oriented programs,
- Is based on six metrics
Weighted Methods per Class (WMC) is used to determine
- Total complexity of the methods of a class,
- The difficulty to understand and maintain a class
A low LCOM value indicates…
- A high cohesion of the methods within a class,
- The methods in the class largely work on the same data,
- The class cannot be meaningfully split up
Regression testing
- Is applied after making changes,
- Compares the results of the modified program with old results
For glass-box testing, test cases are selected on the basis of the internal structure, which is analyzed through:
- Data Flow,
- Control Flow
In BV testing, the number of boundary values selected per input variable is
5
Worst Case BVT is useful, if :
- the interaction among two or more input variables is of concern,
- a failure of the program is extremely costly
Give the number of test cases generated for two input variables, for Worst Case BVT:
25
BVT requires the input variables to be
- Independent,
- Representing bounded properties
Give the number of test cases generated for two input variables, for normal BVT:
9
Give the number of test cases generated for two input variables, for Robustness BVT:
13
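A short sketch of where the numbers 5, 9, 13, and 25 in the cards above come from, using the standard boundary value test case formulas for n input variables:

```python
# Why the counts are 9 / 13 / 25 for n = 2 input variables.
n = 2
normal     = 4 * n + 1   # 5 values per variable (min, min+, nom, max-, max),
                         # varied one variable at a time, rest stay nominal
robustness = 6 * n + 1   # additionally uses min- and max+ for each variable
worst_case = 5 ** n      # Cartesian product of the 5 boundary values per variable
print(normal, robustness, worst_case)  # 9 13 25
```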
Test case classification for Boundary Value Analysis
- specification-based selection,
- black-box testing, functional testing
There are three partitions: A with 6 elements, B with 4 elements and C with 3 elements. How many test cases are needed in weak normal equivalence class testing?
6 (each equivalence class must be covered at least once, and a single test case can cover one class from each partition, so max(6, 4, 3) = 6 test cases suffice)
What happens during the step “Check Function Coverage”
- A Matrix is created mapping test cases to their tested functions,
- Test cases are added, so that every function is tested by at least one test case
For Boundary Interior Coverage we design test cases with :
- no loop execution,
- one loop execution,
- at least two loop executions
Which roles participate in reviews?
- Moderator,
- Minute Taker,
- Author,
- Reviewer
Reviews: Which rules exist?
- The review meeting is limited to two hours.
- The moderator can cancel the meeting.
- Every reviewer must present his/her findings.
- No solutions are discussed.
What is the “third hour” during reviews used for?
- to talk without rules/protocol,
- to talk about possible solutions
Static Examinations?
- Reviews,
- Inspections,
- Measuring,
- Conformance Check
Static Examinations are used to:
- Increase the quality of the examinee,
- identify weaknesses in the development process,
- check conformance to standards/guidelines