Unit 11 Flashcards

1
Q

Give two examples of executable documentation.

A

Unit tests and pre- and post-conditions
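For instance, a minimal Python sketch (all names invented for illustration): assertions act as executable pre- and post-conditions, and a unit test documents intended use:

```python
# Sketch of executable documentation (all names are illustrative).

def reserve_rooms(available, requested):
    """Reserve rooms; the assertions are executable pre-/post-conditions."""
    assert requested > 0, "pre-condition: must request at least one room"
    assert requested <= available, "pre-condition: cannot over-book"
    remaining = available - requested
    assert remaining >= 0, "post-condition: availability never goes negative"
    return remaining

# A unit test also documents how the code is meant to be used
# and what result it should produce.
def test_reserve_rooms():
    assert reserve_rooms(10, 3) == 7

test_reserve_rooms()
```

Both forms are checked automatically whenever the code or its tests run, unlike ordinary comments.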

2
Q

What is meant by ‘executable documentation’?

A

Executable documentation is documentation that allows code to be automatically checked for validity at run time. It should also demonstrate how the code should be used and indicate what the code does.

3
Q

Suggest three expectations that a customer might have of a software product without perhaps being aware of them.

A

Three possibilities are:

  • the product will not conflict with other software that they use
  • the product will boost productivity
  • the product will be simple to use
4
Q

Explain how increasing integrity within a system could affect efficiency.

A

Increasing integrity within a system means strengthening measures to ensure that modification or deletion of data by unauthorised persons, or by any other unintended means, does not occur. This might involve the use of passwords to access certain data and an authentication server to check a user’s identity, or it might mean that network traffic needs to be encrypted and decrypted. Each of these factors adds an overhead to processing, so efficiency is likely to be reduced.

5
Q

Identify a pair of software quality factors (SQFs) that are not independent.

A

Usability and portability. For example, many of the features of the Apple Macintosh that contribute to its reputation for usability are built into its operating system. Applications that take advantage of these features are less portable to other systems, such as Windows or Linux.

6
Q

What are the four SQFs identified as being of primary importance for everyday software products?

A

Correctness, integrity, maintainability and usability.

7
Q

How could you assess correctness?

A

A popular measure for assessing correctness is defects per thousand lines of code (defects per KLOC), where a defect may be defined as a verified lack of conformance to requirements.
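A minimal sketch of the calculation, with invented figures:

```python
# Illustrative defects-per-KLOC calculation (figures are made up).
defects = 12          # verified non-conformances to requirements
lines_of_code = 8000  # total size of the product

defects_per_kloc = defects / (lines_of_code / 1000)
print(defects_per_kloc)  # 1.5 defects per thousand lines of code
```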

8
Q

How could you assess integrity?

A

Integrity can be assessed by considering the proportion of uses of a product that are ‘attacks’ as opposed to bona fide uses.

9
Q

How could you assess maintainability?

A

Unfortunately there is no way to measure maintainability directly, and so we must measure it indirectly. A simple measure is mean time to change (MTTC), which is the average of the times it takes to analyse a bug report, design an appropriate modification, implement the change, test it and distribute the change to all users. In general, the lower the MTTC (for equivalent types of changes), the more maintainable the software product is.
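A minimal sketch of the calculation, with invented figures:

```python
# Sketch: mean time to change (MTTC) over a set of comparable changes.
# Each figure (in hours, invented for illustration) covers analysing the
# bug report, designing the modification, implementing the change,
# testing it and distributing it to all users.
change_times_hours = [4.0, 6.5, 3.5, 10.0]

mttc = sum(change_times_hours) / len(change_times_hours)
print(mttc)  # 6.0 hours; a lower MTTC suggests more maintainable software
```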

10
Q

How could you assess usability?

A

Any system with a user interface and that will be used by people other than the developers should be usability tested. Usability testing involves users systematically trying out the user interface and the system behind it – although for some purposes the system may be simulated. There are also forms of evaluation such as heuristic review that can be used to make substantial improvements to user interfaces without involving users.

11
Q

What is the purpose of verification?

A

Verification tests the extent to which the product conforms with the various, often evolving, system descriptions designed to help produce it.

12
Q

What is the purpose of validation?

A

Validation focuses on ensuring that outputs meet the needs, including implicit needs and expectations, of customers and stakeholders.

13
Q

What tasks does verification involve?

A
  • ensuring that all system descriptions are self-consistent
  • ensuring that all system descriptions are consistent and complete with respect to those from which they were derived

14
Q

What task does validation involve?

A
  • ensuring that all system descriptions are consistent with the customer’s requirements, including implicit requirements
15
Q

Give a simple example of two system descriptions that might contradict each other.

A

One example would be if a structural model for a hotel reservation system indicated that a reservation could be made for more than one room but the implementation only allowed one room per reservation.

16
Q

Why is it important for the customer’s requirements statement to be self-consistent?

A

If the customer’s requirements statement lacks self-consistency, then either the resulting system will be inconsistent or it will not satisfy the customer’s requirements. The system builders can decide (implicitly or explicitly) how to resolve the inconsistencies or, if the inconsistent requirements affect different parts of the system and are not picked up by the developers, the developers could inadvertently build the inconsistencies into the product.

17
Q

Describe three benefits of test-driven development.

A
  • Test coverage: Test coverage is in some respects comprehensive, with virtually all code having associated tests. These tests should all have been run successfully throughout development. Finished code therefore already has an extensive test suite.
  • Regression testing and early discovery of errors: Many kinds of error are discovered and corrected at the earliest opportunity. Changes that break tests can be quickly identified and rectified.
  • Executable documentation: The tests both show how the code should be used and indicate what it should do by means of test cases.
18
Q

Describe three limitations of test-driven development.

A
  • User interface: TDD does not readily apply to user interface testing, for which it is better to apply techniques such as usability testing.
  • Testing of applications integrated with databases: TDD alone is not adequate for the comprehensive testing of databases.
  • Multithreaded systems: TDD is not generally suitable for the testing of multithreaded systems, as results may depend on the vagaries of timing.
19
Q

Give two reasons why it is useful to run a unit test before the relevant code increment has been written.

A
  • If the test unexpectedly already passes at this point, this demonstrates that it is not a good test of the next increment.
  • If it fails in an unexpected way, this demonstrates a faulty or incomplete understanding of the test, which needs to be addressed in order to have a firm grasp of both the code and the test.
20
Q

Describe the four distinct categories of testing.

A
  • Requirements-based testing draws on previously gathered or formulated testable requirements to check that a system meets the customer’s requirements. The final stage in this form of testing is acceptance testing.
  • Usability testing refers to testing of the user interface.
  • Developmental testing is a term that refers to all of the testing carried out by the team developing the software. It is useful to distinguish between developmental testing at three different levels of scope – unit testing, integration or component testing and system testing.
  • Regression testing is any form of testing during development or system maintenance that systematically checks that fixing one bug has not introduced others.
21
Q

Are there any situations in which system testing should be carried out by the implementers of a system?

A

Probably the only situation where this is appropriate is when the project team is small. In small teams, one person might play the part of requirements engineer, designer, implementer, tester and maintenance engineer.

22
Q

What do you think is the relationship between system testing and acceptance testing?

A

In general, the same tests will be carried out during acceptance testing and system testing. System testing is an in-house activity and a customer need never know how system testing went – any bugs can be dealt with before the customer sees them. Acceptance testing, on the other hand, is conducted with much more at stake – the customer can accept or reject a system based on its performance at acceptance testing.

23
Q

Why should regression testing be necessary even after the customer has accepted the product after acceptance testing?

A

Acceptance testing is the process of showing that the software meets the customer’s requirements, not that the code is free of bugs. Once a system is put into use, bugs that require fixing are almost certain to be found after acceptance testing. In addition, the system will be maintained, with functionality added and changed, leading to a continuing need for regression testing.

24
Q

Use the following phrases, which describe four kinds of testing, to fill the gaps in the following three sentences.

usability testing; requirements testing; security testing; regression testing

TDD and DbC are valuable but not comprehensive tools for _________________.
TDD has ________________ built into it.
DbC and TDD cannot substitute for thorough _______________ or _________.

A

TDD and DbC are valuable but not comprehensive tools for requirements testing.

TDD has regression testing built into it.

DbC and TDD cannot substitute for thorough usability testing or security testing.

25
Q

Which of unit, integration, system and acceptance testing are parts of validation and which are parts of verification?

A

Unit and integration testing concentrate on whether parts of the system perform according to their specifications, answering the verification question (have we built the system correctly?).

System and thus acceptance testing focus on showing that the customer’s requirements have been met, answering the validation question (have we built the right system?).

Note however that there are no hard and fast distinctions. Unit testing can be used to demonstrate that a component satisfies a customer’s requirements – thus viewed as validation – and system testing can be used to demonstrate that a system operates according to specification – viewed as verification.

26
Q

What is black-box testing?

A

Black-box testing is used to test that each aspect of the customer’s requirements is handled correctly by an implementation. Black-box testing ‘sees’ a system through its specification.

27
Q

What is white-box testing?

A

White-box testing is used to check that the details of the implementation are correct. White-box testing ignores the ‘big picture’ of the requirements and instead looks to detailed designs to check that the system does what it is supposed to do correctly, ideally representatively testing all paths. Of course, as well as checking coverage of paths, white-box testing must check that outputs are correct!

28
Q

Give an example where black-box testing will test something that white-box testing would miss, and one where white-box testing will test something that black-box testing would miss.

A

Because black-box testing takes its test cases from the specification, it is likely to pick up the following sorts of errors that white-box testing would miss:

  • operations required by the specification but not provided for by the implementation
  • errors between the interfaces of two classes
  • errors in the transformations between internal states of a class, if these affect input/output behaviour
  • performance errors, in which system performance is found to be wanting
  • system initialisation and termination errors.

On the other hand, in looking inside the implementation, white-box testing will pick up the following sorts of errors that black-box testing would miss:

  • the sequences of method calls in the body of a method that fail to perform the correct function
  • boolean conditions in if statements, while loops, etc. incorrectly expressed
  • loops that terminate incorrectly
  • relationships between methods and attributes of classes that are incorrect.
29
Q

Should the cyclomatic-complexity metric be used to measure the complexity of an object-oriented software system?

A

Because the cyclomatic-complexity metric is based on decision points, which are present only in methods, it is ‘blind’ to the class-structuring mechanisms that are available in object-oriented system descriptions. As much of the complexity of an object-oriented system is held in the class structure, applying the cyclomatic-complexity metric to a whole system would not therefore be appropriate.
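For reference, a rough sketch of how the metric is computed at method level (counting only if/for/while as decision points here; real tools also count boolean operators, exception handlers and so on):

```python
import ast

# Rough sketch: cyclomatic complexity of a single method, approximated
# as 1 plus the number of decision points (if/for/while only, for brevity).
def cyclomatic_complexity(source):
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While))
                    for node in ast.walk(tree))
    return 1 + decisions

method = """
def classify(n):
    if n < 0:
        return 'negative'
    if n == 0:
        return 'zero'
    return 'positive'
"""
print(cyclomatic_complexity(method))  # 3: two if statements plus one
```

Note that this operates on a single method body, which is exactly why the metric says nothing about class structure.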

30
Q

What is the depth-of-inheritance-tree (DIT) metric?

A

For a given class, DIT is defined as the maximum number of hops we can make, starting at the class in question and moving up the inheritance tree, until reaching a class or interface from which we can go no further. We start with 0 and add 1 per hop.
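A sketch in Python, using the method resolution order and counting `object` as the top of the tree (the hierarchy is invented for illustration):

```python
# Sketch: DIT for Python classes via the method resolution order,
# treating `object` as the root of the inheritance tree.
# The class hierarchy is invented for illustration.
class Vehicle:
    pass

class Car(Vehicle):
    pass

class SportsCar(Car):
    pass

def dit(cls):
    # hops from cls up to the root: 0 for `object` itself,
    # plus 1 per superclass hop
    return len(cls.__mro__) - 1

print(dit(Vehicle))    # 1: Vehicle -> object
print(dit(SportsCar))  # 3: SportsCar -> Car -> Vehicle -> object
```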

31
Q

What is the coupling-between-objects (CBO) metric?

A

For a given class, CBO is defined as the number of relationships the class has with other classes. The more relationships a class has – and so the higher the value of this metric – the more difficult it is to understand the use of the given class.

32
Q

What is the number-of-children (NOC) metric?

A

For a given class, NOC is defined as the number of immediate children for that class. This metric is a measure of the number of classes that will be affected by changes to a given parent class.
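In Python this can be sketched with `__subclasses__()` (the hierarchy is invented for illustration):

```python
# Sketch: NOC as the number of immediate subclasses, using Python's
# built-in __subclasses__() (classes invented for illustration).
class Shape:
    pass

class Circle(Shape):
    pass

class Square(Shape):
    pass

def noc(cls):
    return len(cls.__subclasses__())

print(noc(Shape))   # 2: Circle and Square
print(noc(Circle))  # 0: no children, so changes ripple no further
```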

33
Q

What is the response-for-a-class (RFC) metric?

A

Precise definitions can vary, but one of the most common definitions is that, for a given class, RFC is defined as the size of the response set for the class, which consists of all the methods and constructors of this class (including methods inherited from superclasses) together with all the methods and constructors that are invoked within this class on objects of other classes.

34
Q

What is the lack-of-cohesion-in-methods (LCOM) metric?

A

For a given class, LCOM measures its cohesiveness. Ideally, a class represents a single domain concept, which is reflected in its methods making use of closely related attributes. LCOM is defined as the number of pairs of methods (comparing each method with every other method) that do not reference any of the same attributes, minus the number of pairs of methods that do. If this number is negative, LCOM is taken to be zero. In highly cohesive classes, methods will manipulate the same attributes.
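The counting rule can be sketched from the attribute sets each method references (the class and its attribute usage are invented for illustration):

```python
# Sketch of LCOM computed from per-method attribute usage
# (the hypothetical class and its attributes are invented).
from itertools import combinations

# attributes referenced by each method of a hypothetical account class
methods = {
    'deposit':  {'balance'},
    'withdraw': {'balance'},
    'rename':   {'owner'},
}

disjoint = shared = 0
for a, b in combinations(methods.values(), 2):
    if a & b:
        shared += 1    # the pair refers to at least one common attribute
    else:
        disjoint += 1  # the pair has no attribute in common

lcom = max(disjoint - shared, 0)  # a negative result is clamped to zero
print(lcom)  # 1: two disjoint pairs minus one sharing pair
```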

35
Q

What is the weighted-methods-per-class (WMPC) metric?

A

Given a class, WMPC measures the complexity of its behaviour. It is defined as the sum of the cyclomatic complexities of the class’s methods. This measure stands in contrast with a simple count of the number of methods in each class. In effect, in WMPC the cyclomatic complexity of each method acts as a weighting that indicates the method’s complexity. The rationale is that a class with many simple methods may be as difficult to understand as a class with a few complicated methods.
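A minimal sketch of the sum, with invented per-method complexities:

```python
# Sketch: WMPC as the sum of per-method cyclomatic complexities
# (the figures for this hypothetical class are invented).
method_complexities = {
    'open':    1,
    'close':   1,
    'process': 7,
    'report':  3,
}

wmpc = sum(method_complexities.values())
print(wmpc)  # 12 for this class of four methods
```

A plain method count would give 4 for this class, hiding the fact that most of its complexity sits in one method.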

36
Q

When calculating the DIT metric do you think that classes from the Java API should be included? Do you think that API classes should be counted in Chidamber and Kemerer’s other metrics?

A

Chidamber and Kemerer’s metrics measure complexity, and API classes add to complexity. Hence they should be counted in all metric calculations; in consequence, the Java API documentation is very useful when calculating these metrics.

37
Q

The WMPC metric measures the complexity of a class in terms of the sum of the cyclomatic complexities of its methods.

a. Consider a class with ten methods, whose WMPC value is 40. How confident can you be that this class is not too complex?
b. If the WMPC value of a class with ten methods is over 100, how certain can you be that this class is too complex?

A

For individual methods, a cyclomatic complexity of 10 or more should be regarded as a hint that the method is too complex. In the case of classes, a little more thought is needed.

a. For a class with ten methods, a value for the WMPC metric of 40 would typically suggest acceptably low class complexity. However, although nearly all of the ten methods might be acceptably simple, one or two might be unacceptably complex.
b. By contrast, a complexity of greater than 10 × 10 = 100 is a fair indication that the behaviour of the class is too complex.