Fundamentals of Testing Flashcards

1
Q

What are the typical test objectives?

A

Evaluating work products such as requirements, user stories, designs, and code
Triggering failures and finding defects
Ensuring required coverage of a test object
Reducing the level of risk of inadequate software quality
Verifying whether specified requirements have been fulfilled
Verifying that a test object complies with contractual, legal, and regulatory requirements
Providing information to stakeholders to allow them to make informed decisions
Building confidence in the quality of the test object
Validating whether the test object is complete and works as expected by the stakeholders

2
Q

Debugging is concerned with what?

A

Finding causes of failures (defects) and analyzing and eliminating them

3
Q

Testing can trigger what?

A

Failures that are caused by defects in the software; testing can also directly find defects in the test object.

4
Q

What is the typical debugging process?

A

Reproduction of a failure
Finding the root cause
Fixing the cause

5
Q

Why is testing necessary?

A

Provides a cost-effective means of detecting defects
Provides a means of directly evaluating the quality of a test object at various stages of the SDLC
Provides users with indirect representation on the development project
May also be required to comply with regulatory standards or meet contractual and legal obligations

6
Q

How are QA and testing different?

A

Testing is a form of Quality Control.
Quality Control is a product-oriented, corrective approach.
QA is a process-oriented, preventive approach that focuses on the implementation and improvement of processes.

7
Q

Who uses test results?

A

QA and QC

8
Q

Who is responsible for QA?

A

Everyone on a project

9
Q

How are test results used by Quality Control?

A

They are used to fix defects

10
Q

How are test results used by QA?

A

They provide feedback on how well the development and test processes are performing

11
Q

What is an error?

A

A mistake made by a human, caused by factors such as time pressure, complexity of work products, processes or infrastructure, interactions, lack of training, or tiredness.

12
Q

What is a defect?

A

A fault or bug caused by an error.

13
Q

What is a failure?

A

An event in which the system fails to do something it should do, or does something it should not do.

14
Q

What is a root cause?

A

The fundamental reason for the occurrence of a problem.

15
Q

Where can a defect be found?

A

In documentation, source code, or a supporting artifact

16
Q

How are root causes identified?

A

Through root cause analysis that is typically performed when a failure occurs or a defect is identified.

17
Q

What are the seven testing principles?

A

Testing shows the presence, not the absence of defects
Exhaustive testing is impossible
Early testing saves time and money
Defects cluster together
Tests wear out
Testing is context dependent
Absence-of-defects fallacy

18
Q

Explain the testing principle:
Testing shows the presence, not the absence of defects

A

Testing reduces the probability of defects remaining undiscovered but cannot prove test object correctness.

19
Q

Explain the testing principle:
Exhaustive testing is impossible

A

Except in trivial cases, testing everything is not feasible.
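The infeasibility is easy to see with a back-of-the-envelope calculation; the figures below are illustrative assumptions, not from any syllabus:

```python
# Illustrative assumption: a function taking two 32-bit integer inputs
# has 2**64 possible input combinations.
combinations = 2 ** 64

# Even at an (optimistic, hypothetical) rate of one million test
# executions per second, running them all would take millennia.
TESTS_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
years = combinations / (TESTS_PER_SECOND * SECONDS_PER_YEAR)

print(f"{combinations:,} combinations take about {years:,.0f} years to run")
```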

20
Q

How can testing efforts be focused to prevent exhaustive testing?

A

By using test techniques, test case prioritization, and risk-based testing
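As a small illustration of a test technique, here is an equivalence partitioning sketch in Python; the 18-to-65 age range and partition names are hypothetical:

```python
def age_partition(value):
    # Hypothetical test condition: the field accepts ages 18 to 65.
    # This splits the input domain into three equivalence partitions.
    if value < 18:
        return "invalid: too low"
    if value <= 65:
        return "valid"
    return "invalid: too high"

# One representative input per partition is enough to cover all three:
for representative in (10, 40, 90):
    print(representative, "->", age_partition(representative))
```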

21
Q

Explain the testing principle:
Defects cluster together

A

A small number of system components usually contain most of the defects discovered or are responsible for most failures.

22
Q

Explain the testing principle:
Tests wear out

A

If a test is repeated many times, it can become ineffective in detecting new defects

23
Q

How can you overcome the effect of tests wearing out?

A

Existing tests and test data may need to be modified and new tests written.

24
Q

Explain the testing principle:
Testing is context dependent

A

There is no universally applicable approach to testing.

25
Q

Explain the testing principle:
Absence-of-defects fallacy

A

The absence of defects does not guarantee that the system will be successful: it may still fail to meet the needs of the users, or may be inferior to competing systems.

26
Q

Define test planning

A

Defining the test objectives and then selecting an approach that best achieves those objectives within constraints imposed by the overall project.

27
Q

Define test monitoring and control

A

Test monitoring involves the ongoing checking of all test activities and the comparison of actual progress against planned progress.
Test control involves taking the actions necessary to meet the objectives of testing.

28
Q

Define test analysis

A

Analyzing the test basis to identify testable features and to define and prioritize associated test conditions together with the related risks and risk levels.
It answers what to test.

29
Q

Define test design

A

Elaborating test conditions into test cases and other testware. It often involves identifying coverage items to specify test case inputs, defining the test data requirements, and designing the test environment.
It answers how to test.
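For instance, a boundary value analysis helper can turn a test condition (an accepted integer range) into concrete test case inputs; the helper and the range below are hypothetical:

```python
def boundary_values(lo, hi):
    # 2-value boundary value analysis: each boundary of the accepted
    # range [lo, hi] plus its nearest out-of-range neighbour.
    return sorted({lo - 1, lo, hi, hi + 1})

# Hypothetical test condition: "age must be between 18 and 65"
print(boundary_values(18, 65))  # [17, 18, 65, 66]
```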

30
Q

Define test implementation

A

Creating or acquiring the testware necessary for test execution.
Test cases can be organized into test procedures and are often assembled into test suites.
Manual and automated test scripts are created
Test procedures are prioritized and arranged within a test execution schedule for efficient test execution
The test environment is built and verified to be set up correctly

31
Q

Define test execution

A

Running the tests in accordance with the test execution schedule.
May be manual or automated.
Can take many forms including continuous testing or pair testing sessions
Actual test results are compared with the expected results.
The test results are logged.
Anomalies are analyzed to identify their likely causes, allowing the anomalies to be reported based on the observed failures
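A minimal sketch of that execute-compare-log loop in Python; the suite, names, and test object are hypothetical:

```python
def execute_tests(test_cases, function_under_test):
    # Run each test case, compare actual with expected, and log the result.
    log = []
    for name, test_input, expected in test_cases:
        actual = function_under_test(test_input)
        status = "PASS" if actual == expected else "FAIL"
        log.append((name, status, expected, actual))
    return log

# Hypothetical test object: Python's built-in abs()
suite = [("zero", 0, 0), ("negative", -5, 5), ("positive", 3, 3)]
for entry in execute_tests(suite, abs):
    print(entry)
```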

32
Q

Define test completion

A

Usually occurs at project milestones (release, end of iteration, test level completion); change requests or backlog items are created for any unresolved defects.
Any testware that may be useful in the future is identified and either archived or handed over to the appropriate teams.
The test environment is shut down to an agreed state.
Test activities are analyzed to identify lessons learned and improvements for future iterations, releases, or projects.
A test completion report is created and communicated to the stakeholders.

33
Q

What are some contextual factors that will affect how testing is carried out?

A

Stakeholders (needs, expectations, requirements, willingness to cooperate)
Team members (knowledge, skills, level of experience, availability, training needs)
Business domain (criticality of the test object, identified risks, market needs, specific legal requirements)
Technical factors (type of software, product architecture, technology used)
Project constraints (scope, time, budget, resources)
Organizational factors (organizational structure, existing policies, practices used)
SDLC (engineering practices, development methods)
Tools (availability, usability, compliance)

34
Q

What are some ways the different contextual factors can impact test-related issues?

A

Test strategy
Test techniques used
Degree of test automation
Required level of coverage
Level of detail of test documentation
Reporting

35
Q

What is testware?

A

The output work products created from test activities. Organizations vary in how they produce, shape, name, organize, and manage these products.

36
Q

What do test planning testware products include?

A

Test plan
Test schedule
Risk register (list of risks together with risk likelihood, impact, and info on risk mitigation)
Entry and exit criteria

37
Q

What do test monitoring and control testware products include?

A

Test progress reports
Documentation of control directives
Risk information

38
Q

What do test analysis testware products include?

A

Prioritized test conditions (acceptance criteria)
Defect reports regarding defects in the test basis if not fixed directly

39
Q

What do test design testware products include?

A

Prioritized test cases
Test charters
Coverage items
Test data requirements
Test environment requirements

40
Q

What do test execution testware products include?

A

Test logs
Defect reports

41
Q

What do test completion testware products include?

A

Test completion report
Action items for improvement of subsequent projects or iterations
Documented lessons learned
Change requests

42
Q

Why is maintaining traceability valuable?

A

Accurate traceability supports coverage evaluation, and coverage criteria can be used to show to what extent the test objectives have been achieved.
It makes it possible to determine the impact of changes, facilitates test audits, and helps meet IT governance criteria.
It makes test progress and completion reports more easily understandable by including the status of test basis elements.
It allows for easier communication of the technical aspects of testing to stakeholders.
It provides information to assess product quality, process capability, and project progress against business goals.
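As a toy illustration (requirement and test case IDs are made up), a traceability matrix can be as simple as a mapping from test basis elements to the test cases that cover them:

```python
# Hypothetical mapping of requirements to covering test cases
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # no covering test case: a coverage gap
}

covered = [req for req, cases in traceability.items() if cases]
coverage = len(covered) / len(traceability)
print(f"Requirements coverage: {coverage:.0%}")
```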

43
Q

What are two principal roles in testing?

A

Test management role and testing role

44
Q

What responsibilities are included in the test management role?

A

Overall responsibility for the test process, test team, and leadership of the test activities.
Mainly focused on activities of test planning, test monitoring and control, and test completion

45
Q

What is the testing role responsible for?

A

Overall responsibility for the engineering aspect of testing.
Mainly focused on the activities of test analysis, test design, test implementation, and test execution

46
Q

What are some generic skills required for testing?

A

Testing knowledge
Thoroughness, carefulness, curiosity, attention to details, being methodical
Good communication skills, active listening, being a team player
Analytical thinking, critical thinking, creativity
Technical knowledge
Domain knowledge

47
Q

What is the Whole Team Approach?

A

Any team member with the necessary knowledge can perform any task and everyone is responsible for quality. This allows testers to work closely with other team members to ensure the desired quality levels are achieved.

48
Q

What are the advantages of the Whole Team Approach?

A

Improves team dynamics
Enhances communication and collaboration
Creates synergy by allowing the various skill sets of team members to be leveraged for the benefit of the project

49
Q

What are some benefits to independence of testing?

A

Independent testers are likely to recognize different kinds of failures and defects compared to developers because of different backgrounds, technical perspectives, and biases.
Independent testers can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system.

50
Q

What are some drawbacks to independence of testing?

A

Independent testers may be isolated from the development team, leading to a lack of collaboration, communication problems, or an adversarial relationship with the development teams.
Developers may lose a sense of responsibility for quality.
Independent testers may be seen as a bottleneck or be blamed for delays in release.