Unit 11 Flashcards
Suggest three expectations that a customer might have of a software product without, perhaps, being aware of them.
Three possibilities are:
- the product will not conflict with other software that they use;
- the product will boost productivity;
- the product will be simple to use.
Explain how increasing integrity within a system could affect efficiency. Identify one other pair of SQFs which are not independent.
software quality factors (SQFs)
Increasing integrity within a system means making the system more secure. This might involve the use of passwords to access certain data and an authentication server to check a user’s identity, or it might mean that network traffic needs to be encrypted and decrypted. Each of these factors adds overhead to processing, so efficiency is likely to be reduced. Another pair of SQFs that are not independent is efficiency and portability: efficient code often exploits machine-specific features, which makes it less portable.
Log entries in a web server indicate that, over the previous month, 113 attacks were made on the web server, but none were successful. Is the web server necessarily secure?
The web server repelled all attacks, so security is 1. Substituting into the formula

    integrity_attack = 1 – threat_attack × (1 – security_attack)

the term (1 – security_attack) becomes 0, so integrity_attack = 1 (perfect), whatever the value of threat_attack.
This would seem to indicate that the web server is entirely secure. However, all we really know is that the web server was secure against these particular attacks. If the culprits were to use some other kind of attack, or change the parameters on the attack they used, they might well succeed. It is also possible that successful attacks have been made using some other method, but there were no log entries of sufficiently suspicious activity to attract our attention, perhaps because the attacker deleted them.
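As a sketch of the arithmetic behind the formula above (the class and method names are invented for illustration, not taken from the unit):

    // Illustrative calculation of integrity for one kind of attack.
    // threat   = probability that an attack of this kind is attempted (0 to 1)
    // security = probability that such an attack is repelled (0 to 1)
    public class IntegrityExample {

        static double integrity(double threat, double security) {
            return 1 - threat * (1 - security);
        }

        public static void main(String[] args) {
            // All 113 logged attacks were repelled, so the measured security is 1.0
            // and the threat value has no effect on the result.
            System.out.println(integrity(0.8, 1.0)); // prints 1.0
            // If only 90% of attacks were repelled, integrity would fall below 1.
            System.out.println(integrity(0.8, 0.9)); // prints roughly 0.92
        }
    }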
Why is it important for the customer’s requirements statement to be self-consistent?
If the customer’s requirements statement lacks self-consistency, then either the resulting system will be inconsistent or it will not satisfy the customer’s requirements. The system builders can decide (implicitly or explicitly) how to resolve the inconsistencies, or, if the inconsistent requirements affect different parts of the system, the developers could inadvertently build the inconsistencies into the product.
Give an example of a product you know where the user manual or documentation is not complete.
Most user manuals are not complete, in the sense that they do not describe all the features of a product. A word processor, for example, might contain many features that are not fully described anywhere, and two of its features might interact in unexpected ways that the documentation does not cover.
Continue the trend of Figure 1 by extending the curve that appears there. Can you make any predictions about the number of errors that were originally present in the system?
The curve you have drawn may become nearly vertical at around 140–145 errors. If this is the case, it indicates that the time between failures becomes very large in that range, meaning there are few errors left to be found. Consequently, the number of errors that were originally present was somewhere around 140–145.
What do you think is the relationship between system and acceptance testing?
In general, the same tests will be carried out during acceptance testing as during system testing.
System testing is an in-house activity and a customer need never know how system testing went; any bugs can be dealt with before the customer sees them.
Acceptance testing, on the other hand, is conducted with much more at stake; the customer can accept or reject a system based on its performance at acceptance testing.
A getter is a method that returns the value of one of an object’s attributes. A setter sets the value of one of an object’s attributes. What use could getters and setters be to unit testing?
Getters and setters allow the state of an object to be interrogated. Once the getters and setters have themselves been tested (a process that may be as simple as verifying that a getter returns what was just set, or much more complex if the getters and setters perform calculations), they can be used in more complex tests to set an object’s initial state and to verify its final state. Being able to set the state of a unit before testing it, and to interrogate that state afterwards, is indispensable in testing.
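As an illustrative sketch (the Account class and the use of JUnit 5 are assumptions made for this example, not part of the unit):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical class under test.
    class Account {
        private int balance;
        public int getBalance() { return balance; }                      // getter
        public void setBalance(int balance) { this.balance = balance; }  // setter
        public void deposit(int amount) { balance += amount; }           // behaviour under test
    }

    class AccountTest {
        @Test
        void depositAddsToBalance() {
            Account account = new Account();
            account.setBalance(100);                 // setter establishes the initial state
            account.deposit(25);
            assertEquals(125, account.getBalance()); // getter interrogates the final state
        }
    }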
Given n classes A1, …, An, each of which uses the method foo implemented in m classes C1, …, Cm all of which are derived from parent class C, calculate how many tests will be required using the following approaches:
(a) the safe approach (as illustrated in Figure 3);
(b) the minimal approach (as illustrated in Figure 4);
(c) the balanced approach (as illustrated in Figure 5) to integration testing. What does this imply for the testing load?
(a) m × n;
(b) max(m, n);
(c) m + n - 1.
m + n - 1 is not much bigger than max(m, n), so the balanced approach might as well be used in preference to the minimal approach. However, m × n is, in general, much bigger than both m + n - 1 and max(m, n). So, if the safe approach were used in preference to the balanced approach, the testing load could increase dramatically.
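For instance, with m = 5 implementing classes and n = 4 client classes (figures chosen purely for illustration), the safe approach needs 5 × 4 = 20 tests, the minimal approach max(5, 4) = 5 tests, and the balanced approach 5 + 4 - 1 = 8 tests.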
Are there any situations in which system testing should be carried out by the implementers of a system?
Probably the only situation where this is appropriate is when the project team is very small. In small teams, one person might play multiple roles, perhaps all of requirements engineer, designer, implementer, tester and maintenance engineer.
Why should regression testing be necessary, since, after all, the customer has accepted the product after acceptance testing?
Acceptance testing is the process of showing that the software meets the customer’s requirements, not that the code is free of bugs.
In fact, once a system is put into use, bugs that require fixing are almost certain to be found after acceptance testing.
In addition, the system will be maintained, adding and changing functionality as needed. Regression testing is therefore necessary.
From Table 4 you can see that complexity is related to each of the SQFs reliability, maintainability, flexibility and testability. Suggest why this should be the case.
software quality factors (SQFs)
Reliability is directly related to the number of bugs in a system, and the number of bugs in a system, as we have seen, is related to its complexity.
Modifying complex code is likely to be more difficult than modifying simple code; maintainability and flexibility are therefore related to complexity.
Determining test data for a complex system is likely to be more difficult than for a simple system; testability is therefore also related to complexity.
Compare the complexities of the following two pieces of code using the LOC and cyclomatic-complexity metrics. What conclusions can you draw about the relative complexity of the code?
Lines Of Code (LOC)
Each piece of code has 7 lines, so the complexity according to the LOC metric is the same.
Code A has cyclomatic complexity 3, whereas code B has cyclomatic complexity 1, which suggests that code A is the more complex of the two. This supports an intuitive view that the structure of code A appears more complex than that of code B.
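The original code fragments are not reproduced on this card, but the kind of difference involved can be sketched with two hypothetical Java methods of equal length:

    // Code A (hypothetical): two decision points, so cyclomatic complexity 3
    static int sign(int n) {
        if (n > 0) {
            return 1;
        } else if (n < 0) {
            return -1;
        }
        return 0;
    }

    // Code B (hypothetical): straight-line code, so cyclomatic complexity 1
    static int sumOfThree(int a, int b, int c) {
        int total = 0;
        total = total + a;
        total = total + b;
        total = total + c;
        System.out.println(total);
        return total;
    }

Both methods have the same LOC, but the branching in the first gives it the higher cyclomatic complexity, mirroring the comparison above.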
Look again at the definition of the DIT metric. Do you think that, in calculating the metric, classes from the Java API should be included? Do you think that API classes should be counted in Chidamber and Kemerer’s other metrics?
Depth-of-inheritance-tree (DIT) metric:
for a given class, DIT is defined as the largest number of hops through an object’s superclasses, where the starting class is numbered 0. For a single-inheritance programming language like Java, this means that the DIT is simply the number of ancestors of the class, but this is not true in languages like C++ that support multiple inheritance, where different paths to the root may have different lengths. A method in the given class can be defined in that class or in any of its ancestors. DIT is, therefore, a measure of the number of classes essential to the understanding of the methods of the given class. The deeper a class is in an inheritance tree, and so the higher the value of this metric, the more difficult the class may be to understand.
Chidamber and Kemerer’s metrics measure complexity, and API classes add to complexity. Hence they should be counted in all metric calculations. For this reason, the Java documentation will be very useful when calculating these metrics.
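As a small illustration of how counting API classes changes the picture (the classes below are invented):

    import java.util.ArrayList;

    class Vehicle { }
    class Car extends Vehicle { }             // chain: Car -> Vehicle -> java.lang.Object
    class CarList extends ArrayList<Car> { }  // chain: CarList -> ArrayList -> AbstractList
                                              //        -> AbstractCollection -> java.lang.Object

Ignoring API classes, Car has only the superclass Vehicle that we wrote ourselves; counting them, its chain also includes java.lang.Object, and CarList inherits through several ArrayList ancestors, so its DIT is considerably larger. Understanding CarList’s methods genuinely requires knowledge of those API classes, which is why counting them (with the Java documentation to hand) makes sense.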
The WMPC metric measures the complexity of a class in terms of the sum of the cyclomatic complexities of its methods. For a class with 10 methods, suggest a WMPC value above which the behaviour of the class should be considered too complex.
Weighted-methods-per-class (WMPC) metric:
given a class, WMPC measures its complexity of behaviour. It is defined as the sum of the cyclomatic complexities of each method of the class. The relationship between this metric and complexity is that a class with many simple methods may be as difficult to understand as a class with a few complicated methods.
For individual methods, a cyclomatic complexity of greater than 10 should be regarded as a hint that the method is too complex. For a class with 10 methods, therefore, a value for the WMPC metric of greater than 10 × 10 = 100 should be regarded as indicating that the behaviour of the class is too complex.
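As a small sketch (the class and its methods are invented), WMPC is simply the sum of the methods’ cyclomatic complexities:

    // Hypothetical class with three methods of cyclomatic complexity 1, 2 and 3,
    // so WMPC = 1 + 2 + 3 = 6, well below the suggested threshold of 100.
    class Thermostat {
        // cyclomatic complexity 1: no decision points
        double toFahrenheit(double celsius) {
            return celsius * 9.0 / 5.0 + 32.0;
        }

        // cyclomatic complexity 2: one decision point
        boolean isTooCold(double celsius) {
            if (celsius < 18.0) {
                return true;
            }
            return false;
        }

        // cyclomatic complexity 3: two decision points
        String describe(double celsius) {
            if (celsius < 18.0) {
                return "cold";
            }
            if (celsius > 24.0) {
                return "hot";
            }
            return "comfortable";
        }
    }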