Module 4: Testing Software Quality Characteristics Flashcards

1
Q

Quality Characteristics - Intro (4.1)

While the previous chapter described specific techniques available to the tester, this chapter considers the…

A

Application of those techniques in evaluating the characteristics used to describe the quality of software applications or systems.

2
Q

Quality Characteristics - Intro (4.1)

This syllabus discusses the quality characteristics which may be…

The attributes to be evaluated by the Technical Test Analyst are…

A

Evaluated by a Test Analyst.

Considered in the Advanced Technical Test Analyst syllabus [CTAL-TTA].

3
Q

Quality Characteristics - Intro (4.1)

The description of product quality characteristics provided in ISO 25010 [ISO25010] is used as a…

A

Guide to describe the characteristics. The ISO software quality model divides product quality into different product quality characteristics, each of which may have sub-characteristics.

4
Q

Quality Characteristics - Intro (4.1)

Functional suitability

A

Functional correctness
functional appropriateness
functional completeness

Test Analyst

5
Q

Quality Characteristics - Intro (4.1)

Reliability

A

Maturity
fault-tolerance
recoverability
availability

Technical Test Analyst

6
Q

Quality Characteristics - Intro (4.1)

Usability

A

Appropriateness recognizability
learnability
operability
user interface aesthetics
user error protection
accessibility

Test Analyst

7
Q

Quality Characteristics - Intro (4.1)

Performance efficiency

A

Time behavior
resource utilization
capacity

Technical Test Analyst

8
Q

Quality Characteristics - Intro (4.1)

Maintainability

A

Analyzability
modifiability
testability
modularity
reusability

Technical Test Analyst

9
Q

Quality Characteristics - Intro (4.1)

Portability

A

Adaptability
installability
replaceability

Test Analyst
Technical Test Analyst

10
Q

Quality Characteristics - Intro (4.1)

Security

A

Confidentiality
integrity
non-repudiation
accountability
authenticity

Technical Test Analyst

11
Q

Quality Characteristics - Intro (4.1)

Compatibility

A

Co-existence - Technical Test Analyst
Interoperability - Test Analyst

12
Q

Quality Characteristics - Intro (4.1)

For all of the quality characteristics and sub-characteristics discussed in this section…

Quality characteristic testing requires…

Without a strategy to deal with…

Some of this testing, e.g., usability testing, can require…

A

the typical risks must be recognized so that an appropriate test strategy can be formed and documented.

Particular attention to SDLC timing, required tools, software and documentation availability, and technical expertise.

Each characteristic and its unique testing needs, the tester may not have adequate planning, ramp-up and test execution time built into the schedule [Bath14].

Allocation of special human resources, extensive planning, dedicated labs, specific tools, specialized testing skills and, in most cases, a significant amount of time. In some cases, usability testing may be performed by a separate group of usability or user experience experts.

13
Q

Quality Characteristics - Intro (4.1)

While the Test Analyst may not be responsible for the quality characteristics that require a more technical approach, it is important that…

For example, a test object that fails performance testing…

Similarly, a test object with interoperability issues…

A

The Test Analyst is aware of the other characteristics and understands the overlapping areas for testing.

Will likely also fail in usability testing if it is too slow for the user to use effectively.

With some components is probably not ready for portability testing as that will tend to obscure the more basic problems when the environment is changed.

14
Q

Quality Characteristics for Business Domain Testing (4.2)

Functional suitability testing is a primary focus for the Test Analyst. Functional suitability testing is focused on…

The test basis for functional suitability testing is generally…

Functional suitability tests vary according to…

At the system test level, functional suitability tests include…

For systems of systems, functional suitability testing will focus primarily on…

A wide variety of test techniques are…

A

“what” the test object does.

Requirements, a specification, specific domain expertise or implied need.

The test level in which they are conducted and can also be influenced by the SDLC. For example, a functional suitability test conducted during integration testing will test the functional suitability of interfacing components which implement a single defined function.

Testing the functional suitability of the system as a whole.

End-to-end testing across the integrated systems.

Employed during functional suitability testing (see Chapter 3).

15
Q

Quality Characteristics for Business Domain Testing (4.2)

In Agile software development, functional suitability testing usually includes the following:

A
  • Testing the specific functionality (e.g., user stories) planned for implementation in the particular iteration
  • Regression testing for all unchanged functionality
16
Q

Quality Characteristics for Business Domain Testing (4.2)

In addition to the functional suitability testing covered in this section, there are also…

A

Certain quality characteristics that are part of the Test Analyst’s area of responsibility that are considered to be non-functional testing areas (focused on “how” the test object delivers the functionality).

17
Q

Functional Correctness Testing (4.2.1)

Functional correctness involves verifying…

Functional correctness testing employs…

Functional correctness testing can be conducted at…

A

The application’s adherence to the specified or implied requirements and may also include computational accuracy.

Many of the test techniques explained in Chapter 3 and often uses the specification or a legacy system as the test oracle.

Any test level and is targeted on incorrect handling of data or situations.

18
Q

Functional Appropriateness Testing (4.2.2)

Functional appropriateness testing involves evaluating and validating the…

This testing can be based on…

Functional appropriateness testing is usually conducted during…

Defects discovered in this testing are indications that

A

Appropriateness of a set of functions for its intended specified tasks.

The functional design (e.g., use cases and/or user stories).

System testing, but may also be conducted during the later stages of integration testing.

The system will not be able to meet the needs of the user in a way that will be considered acceptable.

19
Q

Functional Completeness Testing (4.2.3)

Functional completeness testing is performed to determine the coverage of…

Traceability between specification items…

Measuring functional completeness may vary according to…

For example, functional completeness for Agile software development may be…

Functional completeness for system integration testing may focus on…

A

Specified tasks and user objectives by the implemented functionality.

(e.g., requirements, user stories, use cases) and the implemented functionality (e.g., function, component, workflow) is essential to enable required functional completeness to be determined.

The particular test level and/or the SDLC used.

Based on implemented user stories and features.

The coverage of high-level business processes.

20
Q

Functional Completeness Testing (4.2.3)

Determining functional completeness is generally supported by…

Lower than expected levels of functional completeness are indications that…

A

Test management tools if the Test Analyst is maintaining the traceability between the test cases and the functional specification items.

The system has not been fully implemented.
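
The traceability-based measurement described here can be sketched as a simple mapping from specification items to covering test cases; functional completeness is then the share of items with at least one test. The requirement and test case IDs below are hypothetical illustrations, not the output of any particular test management tool.

```python
# Hypothetical traceability matrix: specification item -> covering test cases.
# In practice this mapping is maintained in a test management tool.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no covering test yet -> completeness gap
    "REQ-004": ["TC-04"],
}

def functional_completeness(matrix):
    """Percentage of specification items covered by at least one test case."""
    covered = sum(1 for tests in matrix.values() if tests)
    return 100.0 * covered / len(matrix)

uncovered = [item for item, tests in traceability.items() if not tests]
print(f"Completeness: {functional_completeness(traceability):.0f}%")  # 75%
print("Uncovered items:", uncovered)  # ['REQ-003']
```

Lower than expected completeness then shows up directly as uncovered specification items to follow up on.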

21
Q

Interoperability Testing (4.2.4)

Interoperability testing verifies the…

Tests focus on…

Testing should cover all…

In reality, this may only be feasible for…

Specifying tests for interoperability requires that…

These environments are then tested…

A

Exchange of information between two or more systems or components.

The ability to exchange information and subsequently use the information that has been exchanged.

The intended target environments (including variations in the hardware, software, middleware, operating system, etc.) to ensure the data exchange will work properly.

A relatively small number of environments. In that case interoperability testing may be limited to a selected representative group of environments.

Combinations of the intended target environments are identified, configured and available to the test team.

Using a selection of functional suitability test cases which exercise the various data exchange points present in the environment.

22
Q

Interoperability Testing (4.2.4)

Interoperability relates to how…

Software with good interoperability characteristics can…

The number of changes and the effort required to implement and test those changes may be…

A

Different components and software systems interact with each other.

Be integrated with a number of other systems without requiring major changes or significant impact on non-functional behavior.

Used as a measure of interoperability.

23
Q

Interoperability Testing (4.2.4)

Testing for software interoperability may, for example, focus on the following design features:

A
  • Use of industry-wide communications standards, such as XML
  • Ability to automatically detect the communication needs of the systems it interacts with and adjust accordingly
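
As a minimal illustration of the first bullet, a receiving system’s ability to parse and subsequently use an exchanged XML message can be checked directly. The message structure and field names below are invented for the example; real interoperability tests would follow the agreed interface specification.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML order message as exchanged between two systems.
message = """<order id="4711">
  <customer>ACME Corp</customer>
  <amount currency="EUR">199.50</amount>
</order>"""

def check_order_message(xml_text):
    """Verify the exchanged data can be parsed AND used by the receiver."""
    root = ET.fromstring(xml_text)
    assert root.tag == "order" and root.get("id"), "missing order id"
    assert root.findtext("customer"), "missing customer"
    amount = root.find("amount")
    assert amount is not None and amount.get("currency"), "missing currency"
    return float(amount.text)  # data must be usable, not just parseable

print(check_order_message(message))  # 199.5
```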
24
Q

Interoperability Testing (4.2.4)

Interoperability testing may be particularly significant for the following:

A
  • Commercial off-the-shelf software products and tools
  • Applications based on a system of systems
  • Systems based on the Internet of Things
  • Web services with connectivity to other systems
25
Q

Interoperability Testing (4.2.4)

Interoperability Testing is performed during…

At the system integration level, this type of testing is conducted to determine…

Because systems may interoperate on multiple levels, the Test Analyst must…

For example, if two systems will exchange data, the Test Analyst must…

It is important to remember that all interactions may not…

Techniques such as…

Typical defects found include…

A

Component integration and system integration testing.

How well the fully developed system interacts with other systems.

Understand these interactions and be able to create the conditions that will exercise the various interactions.

Be able to create the necessary data and the transactions required to perform the data exchange.

Be clearly specified in the requirements documents. Instead, many of these interactions will be defined only in the system architecture and design documents. The Test Analyst must be able and prepared to examine these documents to determine the points of information exchange between systems and between the system and its environment to ensure all are tested.

Equivalence partitioning, boundary value analysis, decision tables, state transition diagrams, use cases and pairwise testing are all applicable to interoperability testing.

Incorrect data exchange between interacting components.
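
Boundary value analysis, one of the techniques listed above, can be mechanized for a data exchange field with a numeric range. The 1–100 quantity range below is a hypothetical interface constraint chosen for illustration.

```python
def boundary_values(low, high):
    """Three-point boundary values for a closed integer range [low, high]."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# Hypothetical interface field: order quantity must be within 1..100.
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

Sending each of these values across the interface exercises exactly the points where incorrect data exchange defects tend to appear.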

26
Q

Usability Evaluation (4.2.5)

Test Analysts are often in the position to coordinate and support the evaluation of usability. This may include…

To do this effectively, a Test Analyst must…

It is important to understand why users…

To gain this understanding it is first necessary to appreciate…

A

specifying usability tests or acting as a moderator working with the users to conduct tests.

Understand the principal aspects, goals and approaches involved in these types of testing.

Might have difficulty using the system or do not have a positive user experience (UX) (e.g., with using software for entertainment).

That the term “user” may apply to a wide range of different types of personas, ranging from IT experts to children to people with disabilities.

27
Q

Usability Evaluation - Usability Aspects (4.2.5.1)

The following are the three aspects considered in usability testing:

A
  • Usability as defined in the ISO 25010 standard
  • User experience (UX) as a generalization of usability
  • Accessibility as a sub-characteristic of usability
28
Q

Usability Evaluation - Usability Aspects - Usability (4.2.5.1)

Usability testing targets software…

Such defects may affect…

Usability problems can lead to…

A

Defects that impact a user’s ability to perform tasks via the user interface.

The user’s ability to achieve their goals effectively, or efficiently, or with satisfaction.

Confusion, error, delay or outright failure to complete some task on the part of the user.

29
Q

Usability Evaluation - Usability Aspects - Usability (4.2.5.1)

The following are the sub-characteristics of usability:

A
  • Appropriateness recognizability (i.e., understandability)
  • Learnability
  • Operability
  • User interface aesthetics (i.e., attractiveness)
  • User error protection
  • Accessibility (see below)
30
Q

Usability Evaluation - Usability Aspects - User Experience (4.2.5.1)

User experience evaluation addresses…

This is of particular importance for test objects where…

A

The whole user experience with the test object, not just the direct interaction.

Factors such as enjoyment and user satisfaction are critical for business success.

31
Q

Usability Evaluation - Usability Aspects - User Experience (4.2.5.1)

Typical factors which influence user experience include the following:

A
  • Brand image (i.e., the user’s trust in the manufacturer)
  • Interactive behavior
  • The helpfulness of the test object, including help system, support and training
32
Q

Usability Evaluation - Usability Aspects - Accessibility (4.2.5.1)

It is important to consider the accessibility to software for those with particular needs or restrictions for its use. This includes those with disabilities. Accessibility testing should consider…

Accessibility, similar to usability, must be considered when…

Testing often occurs during the…

Defects are usually determined when the software…

A

The relevant standards, such as the Web Content Accessibility Guidelines (WCAG), and legislation, such as the Disability Discrimination Acts (Northern Ireland, Australia), Equality Act 2010 (England, Scotland, Wales) and Section 508 (US).

Conducting design activities.

Integration levels and continues through system testing and into the acceptance testing levels.

Fails to meet the designated regulations or standards defined for the software.

33
Q

Usability Evaluation - Usability Aspects - Accessibility (4.2.5.1)

Typical measures to improve accessibility focus on the opportunities provided for users with disabilities to interact with the application. These include the following:

A
  • Voice recognition for inputs
  • Ensuring that non-text content that is presented to the user has an equivalent text alternative
  • Enabling text to be resized without loss of content or functionality
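
The second measure above (text alternatives for non-text content) lends itself to tool support. The sketch below flags `<img>` elements without a non-empty `alt` attribute using only the Python standard library; the page fragment is invented test input, not from any real application.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags without a non-empty alt attribute (cf. WCAG 1.1.1)."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not attributes.get("alt"):
                self.missing.append(attributes.get("src", "<unknown>"))

# Hypothetical page fragment used as test input.
html = '<p><img src="logo.png" alt="Company logo"><img src="chart.png"></p>'
checker = AltTextChecker()
checker.feed(html)
print(checker.missing)  # ['chart.png']
```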
34
Q

Usability Evaluation - Usability Aspects - Accessibility (4.2.5.1)

Accessibility guidelines support the Test Analyst by…

In addition, tools and browser plugins are available to help testers…

A

Providing a source of information and checklists which can be used for testing (examples of accessibility guidelines are given in [ISTQB_UT_SYL]).

Identify accessibility issues, such as poor color choice in web pages that violate guidelines for color blindness.

35
Q

Usability Evaluation - Usability Evaluation Approaches (4.2.5.2)

Usability, user experience and accessibility may be tested by one or more of the following approaches:

A
  • Usability testing
  • Usability reviews
  • User surveys and questionnaires
36
Q

Usability Evaluation - Usability Evaluation Approaches - Usability Testing (4.2.5.2)

Usability testing evaluates the ease by which users can use or learn to use the system to reach a specified goal in a specific context. Usability testing is directed at measuring the following:

A
  • Effectiveness - capability of the test object to enable users to achieve specified goals with accuracy and completeness in a specified context of use
  • Efficiency - capability of the test object to enable users to expend appropriate amounts of resources in relation to the effectiveness achieved in a specified context of use
  • Satisfaction - capability of the test object to satisfy users in a specified context of use
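
The first two of these measures can be quantified from observed test sessions. A minimal sketch, assuming hypothetical session data and the common completion-rate and time-based-efficiency definitions (satisfaction is normally gathered via questionnaires such as SUMI instead):

```python
# Hypothetical observations from a moderated usability test: for each user
# attempt at the same task, whether it was completed and the time taken (s).
attempts = [(True, 40), (True, 55), (False, 120), (True, 35)]

# Effectiveness: share of attempts that achieved the goal completely.
effectiveness = sum(1 for done, _ in attempts if done) / len(attempts)

# Time-based efficiency: mean of (goal achieved / time spent) per attempt.
efficiency = sum((1 if done else 0) / t for done, t in attempts) / len(attempts)

print(f"Effectiveness: {effectiveness:.0%}")  # Effectiveness: 75%
print(f"Efficiency: {efficiency:.4f} goals per second")
```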
37
Q

Usability Evaluation - Usability Evaluation Approaches - Usability Testing (4.2.5.2)

It is important to note that designing and specifying usability tests is often…

A

Conducted by the Test Analyst in co-operation with testers who have special usability testing skills, and usability design engineers who understand the human-centered design process (see [ISTQB_UT_SYL] for details).

38
Q

Usability Evaluation - Usability Evaluation Approaches - Usability Reviews (4.2.5.2)

Inspections and reviews are a type of testing conducted from…

This can be cost effective by finding…

Heuristic evaluation (systematic inspection of a user interface design for usability) can be used to…

This involves having a small set of…

Reviews are more effective when the user interface is…

Visualization is important for…

A

A usability perspective, which helps to increase the user’s level of involvement.

Usability problems in requirements specifications and designs early in the SDLC.

Find the usability problems in the design so that they can be addressed as part of an iterative design process.

Evaluators examine the interface and judge its compliance with recognized usability principles (the “heuristics”).

More visible. For example, sample screen shots are usually easier to understand and interpret than just describing the functionality given by a particular screen.

An adequate usability review of the documentation.

39
Q

Usability Evaluation - Usability Evaluation Approaches - User Surveys and Questionnaires (4.2.5.2)

Survey and questionnaire techniques may be…

Standardized and publicly available surveys such as Software Usability Measurement Inventory (SUMI) and Website Analysis and MeasureMent Inventory (WAMMI) permit…

In addition, since SUMI provides…

A

Applied to gather observations and feedback regarding user behavior with the system.

Benchmarking against a database of previous usability measurements.

Tangible measurements of usability, this can provide a set of completion/acceptance criteria.

40
Q

Portability Testing (4.2.6)

Portability tests relate to the…

A

Degree to which a software component or system can be transferred into its intended environment, either as a new installation, or from an existing environment.

41
Q

Portability Testing (4.2.6)

The ISO 25010 classification of product quality characteristics includes the following sub-characteristics of portability:

A
  • Installability
  • Adaptability
  • Replaceability
42
Q

Portability Testing (4.2.6)

The task of identifying risks and designing tests for portability characteristics is…

A

shared between the Test Analyst and the Technical Test Analyst (see [ISTQB_ALTTA_SYL] Section 4.7).

43
Q

Portability Testing - Installability (4.2.6.1)

Installability testing is conducted on…

A

The software and the written procedures used to install and de-install the software on its target environment.

44
Q

Portability Testing - Installability (4.2.6.1)

The typical installability testing objectives that are the focus of the Test Analyst include:

A
  • Validating that different configurations of the software can be successfully installed. Where a large number of parameters may be configured, the Test Analyst may design tests using the pairwise technique to reduce the number of parameter combinations tested and focus on particular configurations of interest (e.g., those frequently used).
  • Testing the functional correctness of installation and de-installation procedures.
  • Performing functional suitability tests following an installation or de-installation to detect any defects which may have been introduced (e.g., incorrect configurations, functions not available).
  • Identifying usability issues in installation and de-installation procedures (e.g., to validate that users are provided with understandable instructions and feedback/error messages when executing the procedure).
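
The pairwise reduction mentioned in the first bullet can be checked mechanically: a small set of configurations covers every value pair of every parameter pair. The parameters and values below are hypothetical, and the pairwise set is hand-picked for the example rather than generated by a tool.

```python
from itertools import combinations, product

# Hypothetical installation parameters and their configurable values.
params = {"os": ["Win", "Linux"], "db": ["Postgres", "MySQL"], "lang": ["EN", "DE"]}

# Hand-picked pairwise set: 4 configurations instead of 2 * 2 * 2 = 8.
pairwise_set = [
    ("Win", "Postgres", "EN"),
    ("Win", "MySQL", "DE"),
    ("Linux", "Postgres", "DE"),
    ("Linux", "MySQL", "EN"),
]

def covers_all_pairs(configs, values):
    """True if every value pair of every parameter pair appears in a config."""
    names = list(values)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        needed = set(product(values[a], values[b]))
        seen = {(c[i], c[j]) for c in configs}
        if needed - seen:
            return False
    return True

print(covers_all_pairs(pairwise_set, params))  # True
print(len(pairwise_set), "of", len(list(product(*params.values()))), "configs")  # 4 of 8
```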
45
Q

Portability Testing - Adaptability (4.2.6.2)

Adaptability testing checks whether a given application can be…

The Test Analyst supports adaptability testing by…

The target environments are then tested using…

A

Adapted effectively and efficiently to function correctly in all intended target environments (hardware, software, middleware, operating system, cloud, etc.).

Identifying the intended target environments (e.g., versions of different mobile operating systems supported, different versions of browsers which may be used), and designing tests that cover combinations of these environments.

A selection of functional suitability test cases which exercise the various components present in the environment.
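
Identifying and combining the intended target environments, as described above, can start from a simple combination matrix from which infeasible combinations are excluded. The environment names below are illustrative assumptions, not a recommended support matrix:

```python
from itertools import product

# Hypothetical intended target environments for adaptability testing.
operating_systems = ["Android 13", "Android 14", "iOS 17"]
browsers = ["Chrome", "Safari", "Firefox"]

# Combinations that cannot occur in the field are excluded up front
# (Safari is not available on Android).
infeasible = {("Android 13", "Safari"), ("Android 14", "Safari")}

environments = [
    env for env in product(operating_systems, browsers) if env not in infeasible
]
print(len(environments), "target environments to configure")  # 7
```

The resulting list is what the selected functional suitability test cases are then run against; for larger matrices, the pairwise technique can reduce it further.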

46
Q

Portability Testing - Replaceability (4.2.6.3)

Replaceability testing focuses on…

This may be particularly relevant for system architectures based on…

For example, a hardware device used in a warehouse to register and control stock levels may be…

A

The ability of software components or versions within a system to be exchanged for others.

The Internet of Things, where the exchange of different hardware devices and/or software installations is a common occurrence.

Replaced by a more advanced hardware device (e.g., with a better scanner) or the installed software may be upgraded with a new version that enables stock replacement orders to be automatically issued to a supplier’s system.

47
Q

Portability Testing - Replaceability (4.2.6.3)

Replaceability tests may be…

A

Performed by the Test Analyst in parallel with functional integration tests where more than one alternative component is available for integration into the complete system.