Foundation Level 1 Flashcards

1
Q

Software testing

A

a set of activities to discover defects and evaluate the quality of software artifacts.

2
Q

Test Objectives

A
  1. Evaluating work products such as requirements, user stories, designs, and code
  2. Ensuring required coverage of a test object
  3. Reducing the level of risk of inadequate software quality
  4. Verifying whether specified requirements have been fulfilled
  5. Verifying that a test object complies with contractual, legal, and regulatory requirements
  6. Providing information to stakeholders to allow them to make informed decisions
  7. Building confidence in the quality of the test object
  8. Validating whether the test object is complete and works as expected by the stakeholders
3
Q

Debugging

A

concerned with finding the causes of a failure (defects), analyzing these causes, and eliminating them

4
Q

Typical debugging process

A
  1. Reproduction of a failure
  2. Diagnosis (finding the root cause)
  3. Fixing the cause
5
Q

Quality control (QC)

A

a product-oriented, corrective approach that focuses on those activities supporting the achievement of appropriate levels of quality (Testing)

6
Q

Quality assurance (QA)

A

a process-oriented, preventive approach that focuses on the implementation and improvement of processes

7
Q

Test results in QA and QC

A
  1. In QC they are used to fix defects
  2. In QA they provide feedback on how well the development and test processes are performing
8
Q

A root cause

A

a fundamental reason for the occurrence of a problem (e.g., a situation that leads to an error). Root causes are identified through root cause analysis, which is typically performed when a failure occurs or a defect is identified.

9
Q

Errors, Defects, Failures

A

Human beings make errors (mistakes), which produce defects (faults, bugs), which in turn may result in failures
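This chain can be illustrated with a minimal, hypothetical code sketch (the `average` function is invented for this example): a human error while writing the code introduces a defect, and executing that defect produces an observable failure.

```python
def average(values):
    # Defect (fault, bug): dividing by a fixed 2 instead of
    # len(values) -- the result of a human error (mistake).
    return sum(values) / 2

# Failure: executing the defective code produces a wrong result.
print(average([10, 20, 30]))  # prints 30.0, but 20.0 was expected
```

Note that the defect only leads to a failure when the defective code is actually executed with inputs that expose it (here, any list whose length is not 2).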

10
Q

Testing principles

A
  1. Testing shows the presence, not the absence of defects
  2. Exhaustive testing is impossible
  3. Early testing saves time and money
  4. Defects cluster together
  5. Tests wear out
  6. Testing is context dependent.
  7. Absence-of-defects fallacy (In addition to verification, validation should also be carried out)
11
Q

Test Activities and Tasks

A

often implemented iteratively or in parallel; they need to be tailored to the system and the project

  1. Test planning
  2. Test monitoring and control
  3. Test analysis
  4. Test design
  5. Test implementation
  6. Test execution
  7. Test completion
12
Q

The way the testing is carried out will depend on

A
  1. Stakeholders (needs, expectations, requirements, willingness to cooperate, etc.)
  2. Team members (skills, knowledge, level of experience, availability, training needs, etc.)
  3. Business domain (criticality of the test object, identified risks, market needs, specific legal regulations, etc.)
  4. Technical factors (type of software, product architecture, technology used, etc.)
  5. Project constraints (scope, time, budget, resources, etc.)
  6. Organizational factors (organizational structure, existing policies, practices used, etc.)
  7. Software development lifecycle (engineering practices, development methods, etc.)
  8. Tools (availability, usability, compliance, etc.)
13
Q

Traceability

A
  1. Provides information to assess product quality, process capability, and project progress against business goals
  2. Maintained between the test basis elements, the testware associated with these elements (e.g., test conditions, risks, test cases), the test results, and the detected defects
  3. The coverage criteria can function as key performance indicators to drive the activities that show to what extent the test objectives have been achieved
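As a minimal sketch (the `REQ-*` and `TC-*` identifiers are invented for illustration), traceability can be pictured as a mapping from test basis elements to the test cases that cover them, from which a coverage figure can be derived:

```python
# Hypothetical traceability mapping: requirement -> covering test cases.
coverage = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],  # gap: no test case covers REQ-3
}

# A simple coverage metric derived from the traceability data.
covered = sum(1 for tcs in coverage.values() if tcs)
print(f"Requirement coverage: {covered}/{len(coverage)}")  # prints "Requirement coverage: 2/3"
```

In practice the mapping is bi-directional (test cases also point back to their requirements) and is usually maintained by a test management tool rather than by hand.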
14
Q

The test management role

A
  1. takes overall responsibility for the test process, test team and leadership of the test activities.
  2. mainly focused on the activities of test planning, test monitoring and control and test completion.

NB: depending on the context, the test management role can be performed by a team leader, a test manager, or a development manager

15
Q

The testing role

A

mainly focused on the activities of test analysis, test design, test implementation and test execution.

16
Q

Generic Skills Required for Testing

A
  1. Testing knowledge (techniques)
  2. Thoroughness, carefulness, curiosity, attention to details, being methodical
  3. Good communication skills, active listening, being a team player
  4. Analytical thinking, critical thinking, creativity
  5. Technical knowledge
  6. Domain knowledge (to be able to understand and to communicate with end users/business representatives)
17
Q

Independence of Testing

A
  1. A certain degree of independence makes the tester more effective at finding defects due to differences between the author’s and the tester’s cognitive biases
  2. Work products can be tested by their author (no independence), by the author’s peers from the same team (some independence), by testers from outside the author’s team but within the organization (high independence), or by testers from outside the organization (very high independence)
18
Q

The main benefit of independence of testing

A
  1. independent testers are likely to recognize different kinds of failures and defects compared to developers because of their different backgrounds, technical perspectives, and biases.
  2. an independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system.
19
Q

Drawbacks of independent testers

A
  1. Independent testers may be isolated from the development team, which may lead to a lack of collaboration, communication problems, or an adversarial relationship with the development team.
  2. Developers may lose a sense of responsibility for quality.
  3. Independent testers may be seen as a bottleneck or be blamed for delays in release.
20
Q

Test planning work products

A
  1. test plan
  2. test schedule
  3. risk register
  4. entry and exit criteria
21
Q

Risk register

A

a list of risks together with risk likelihood, risk impact and information about risk mitigation
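A hypothetical sketch of what one entry might look like as a data structure (the field names and values are invented for illustration, not prescribed by the syllabus):

```python
# One illustrative risk register entry: each risk carries a likelihood,
# an impact, and information about its mitigation.
risk_register = [
    {
        "risk": "Test environment unavailable before release",
        "likelihood": "medium",
        "impact": "high",
        "mitigation": "Provision a backup environment in week 1",
    },
]
```

Risks are typically prioritized by combining likelihood and impact, and the register is revisited during test monitoring and control.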

22
Q

Test monitoring and control work products

A
  1. test progress reports
  2. documentation of control directives
  3. risk information
23
Q

Test analysis work products

A
  1. (prioritized) test conditions
  2. defect reports regarding defects in the test basis
24
Q

Test design work products

A
  1. (prioritized) test cases
  2. test charters
  3. coverage items
  4. test data requirements
  5. test environment requirements
25
Q

Examples of test environment elements

A
  1. stubs
  2. drivers
  3. simulators
  4. service virtualizations
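To illustrate the first two elements with a minimal, hypothetical sketch (the `PaymentGatewayStub` and `checkout` names are invented for this example): a stub stands in for a dependency that the test object calls, while a driver is the test code that invokes the test object in place of its real caller.

```python
class PaymentGatewayStub:
    """Stub: replaces the real payment service during testing."""
    def charge(self, amount):
        # Always approve, so the component can be tested in isolation.
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    """Component under test: depends on a payment gateway."""
    response = gateway.charge(cart_total)
    return response["status"] == "approved"

# Driver: invokes the component under test in place of its real
# caller (e.g., the UI layer).
assert checkout(49.99, PaymentGatewayStub()) is True
```

Simulators and service virtualizations play a similar role at larger scale, imitating whole systems or services rather than a single interface.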
26
Q

Test implementation work products

A
  1. test procedures
  2. automated test scripts
  3. test suites
  4. test data
  5. test execution schedule
  6. test environment elements
27
Q

Test execution work products

A
  1. test logs
  2. defect reports
28
Q

Test completion work products

A
  1. test completion report
  2. action items for improvement of subsequent projects or iterations
  3. documented lessons learned
  4. change requests
29
Q

Test execution rate

A

Number of executed tests / total number of tests
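A quick worked example of the metric (the figures are invented): with 45 of 60 planned tests executed, the rate is 0.75, usually reported as a percentage.

```python
# Test execution rate: executed tests divided by total tests.
executed, total = 45, 60
execution_rate = executed / total
print(f"{execution_rate:.0%}")  # prints "75%"
```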

30
Q

Test planning

A

defining the test objectives and then selecting an approach that best achieves the objectives within the constraints imposed by the overall context.

31
Q

Test monitoring and control

A
  1. ongoing checking of all test activities
  2. the comparison of actual progress against the plan
32
Q

Test analysis

A
  1. analyzing the test basis to identify testable features and to define and prioritize associated test conditions, together with the related risks and risk levels
  2. The test basis and the test objects are also evaluated to identify defects they may contain and to assess their testability
  3. identify features and sets of features to be tested
  4. defining and prioritizing test conditions for each feature based on analysis of the test basis

NB: answers the question “what to test?” in terms of measurable coverage criteria.

33
Q

Test design

A
  1. Designing and prioritizing test cases
  2. Identifying test data to support test conditions and test cases
  3. Designing the test environment and identifying any other required infrastructure and tools
  4. Creating bi-directional traceability between the test basis and test cases

NB: answers the question “how to test?”

34
Q

Test implementation

A
  1. Developing and prioritizing test procedures and creating automated test scripts
  2. Creating test suites from the test procedures
  3. Arranging the test suites within the test execution schedule
  4. Building the test environment and verifying that everything is set up correctly
  5. Preparing test data and ensuring it is properly loaded in the test environment
  6. Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures and test suites
35
Q

Test execution

A
  1. Recording the IDs and versions of the test items, test objects, tools and testware
  2. Executing tests either manually or with a tool
  3. Comparing actual results with expected results
  4. Analyzing anomalies to establish their likely causes
  5. Reporting defects based on the failures observed
  6. Logging the outcome of test execution
  7. Repeating test activities either as a result of action taken for an anomaly or as part of planned testing
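Steps 3 and 6 above can be sketched as a minimal, hypothetical helper (the `run_test` name and the `TC-*` identifiers are invented for illustration): compare the actual result against the expected result and log the outcome.

```python
def run_test(test_id, actual, expected):
    # Comparing actual results with expected results,
    # then logging the outcome of test execution.
    outcome = "PASS" if actual == expected else "FAIL"
    return f"{test_id}: {outcome} (actual={actual!r}, expected={expected!r})"

print(run_test("TC-01", 20.0, 20.0))  # TC-01: PASS ...
print(run_test("TC-02", 30.0, 20.0))  # TC-02: FAIL ...
```

A FAIL outcome would then feed step 4 (anomaly analysis) and, if a defect is confirmed, step 5 (defect reporting).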
36
Q

Test completion

A
  1. Checking whether all defect reports are closed, and entering change requests or product backlog items for any defects that remain unresolved at the end of test execution
  2. Creating a test summary report to be communicated to stakeholders
  3. Finalizing and archiving the test environment, test data, test infrastructure and other testware
  4. Handing over testware to the maintenance team, other project teams and/or other stakeholders who could benefit from its use
  5. Analyzing lessons learned from the completed test activities to determine changes needed for future iterations, releases and projects
  6. Using the information gathered to improve test process maturity
37
Q

Test Basis

A

All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based.

If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.

38
Q

Test charter

A

A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing.

39
Q

Error

A

Human action that produces an incorrect result.