Ch5 Flashcards

1
Q

Benefits of Test Independence?

A

See other and different defects
Unbiased
Verify assumptions made during specification and implementation
Bring different experience and skills

2
Q

Drawbacks of Test Independence?

A

Isolation from development team
May be seen as bottleneck or blamed for delays in release
May not be familiar with business, project or systems
Developers may lose a sense of responsibility for quality

3
Q

Tasks of the Test Leader?

A

Write or review test policy and strategy.

Contribute the testing perspective to other project activities.

Plan tests (approach, estimates and resources).

Assess testing objectives and risks.

Schedule test activities and initiate the specification, preparation, implementation and execution of tests.

Monitor the test results and check exit criteria.

Adapt planning based on test results and progress, taking action necessary to compensate for problems.

Set up adequate configuration management of testware.

Introduce suitable metrics for measuring test progress and evaluating the quality of testing.

Consider automation and select tools to support testing.

Supervise the implementation of the test environment.

Write test summary reports for stakeholders.

4
Q

Tasks of the Tester?

A

Review and contribute to test plans

Review user requirements, specifications and models for testability

Create test specifications

Set up the test environment with appropriate technical support

Prepare and acquire test data

Execute and log tests

Evaluate results and record incidents

Use test tools as necessary and automate tests

Measure performance (if applicable)

Review tests developed by others

5
Q

Are you mad? Recite the IEEE Std 829-1998 test plan outline.

A
  1. Test plan identifier
  2. Introduction
  3. Test items
  4. Features to be tested
  5. Features not to be tested
  6. Approach
  7. Item pass/fail criteria
  8. Suspension criteria and resumption requirements
  9. Test deliverables
  10. Testing tasks
  11. Environmental needs
  12. Responsibilities
  13. Staffing and training needs
  14. Schedule
  15. Risks and contingencies
  16. Approvals
6
Q

State the order of the Levels of Planning!

A

Test Policy -> Test Strategy -> Master Test Plan -> Component Test Plan -> Integration Test Plan -> System Test Plan -> Acceptance Test Plan

7
Q

What is the definition of Entry Criteria?

A

Entry criteria define the conditions that must be met before testing can start, i.e. when tests are ready for execution.

8
Q

What are a few Entry Criteria?

A

Test environment available and ready
Test tool configured in the test environment
Testable code available
Test data available, including configuration data, logins, etc.
Test summary report available from previous testing, including quality measures
Third-party software delivered and software licences bought
Other project dependencies in place
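
For illustration only, a minimal Python sketch of how such an entry checklist might be automated before execution begins; the criterion names and boolean results here are invented, gathered in practice from CI, environment probes or manual sign-off:

```python
# Hypothetical entry-criteria checklist: each criterion maps to a boolean
# result gathered elsewhere (e.g. CI status, environment probe, sign-off).
entry_criteria = {
    "Test environment available and ready": True,
    "Test tool configured in the test environment": True,
    "Testable code available": True,
    "Test data available": False,
    "Third-party software delivered and licences bought": True,
}

# Collect every criterion that is not yet satisfied.
unmet = [name for name, met in entry_criteria.items() if not met]
if unmet:
    print("Cannot start testing; unmet entry criteria:")
    for name in unmet:
        print(f"  - {name}")
else:
    print("All entry criteria met; testing may begin.")
```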

9
Q

What are Exit Criteria?

A

Exit criteria are used to define when testing ends, typically once testing has achieved a specific goal.

10
Q

What are a few Exit Criteria?

A

Measures of testing thoroughness, i.e. coverage measures
Estimates of defect density or reliability
Cost
Residual risks such as number of defects outstanding or requirements not tested
Schedules such as those based on time to market
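
A minimal sketch of how such measures might be checked against agreed thresholds at the end of a test phase; the threshold values below are invented for illustration, not taken from any standard:

```python
# Hypothetical exit-criteria check: real thresholds come from the test plan.
def exit_criteria_met(coverage: float, open_critical_defects: int,
                      defect_density: float) -> bool:
    """Return True when all agreed exit criteria are satisfied."""
    return (
        coverage >= 0.90                # e.g. 90% requirement/code coverage
        and open_critical_defects == 0  # no outstanding critical defects
        and defect_density <= 0.5       # e.g. defects per KLOC below target
    )

print(exit_criteria_met(coverage=0.93, open_critical_defects=0, defect_density=0.4))  # True
print(exit_criteria_met(coverage=0.85, open_critical_defects=2, defect_density=0.4))  # False
```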

11
Q

What should you remember about Exit Criteria?

A

Exit criteria vary with test level:
Coverage of code for component testing
Coverage of requirements or risks for system testing
Non-functional measures such as usability in acceptance testing

12
Q

Define Test Approach.

A

The implementation of the test strategy, based on objectives and risks.

13
Q

What can the Test Approach be used for?

A

A starting point for test planning.

Selecting design techniques and test types.

Defining Entry/Exit Criteria

14
Q

What are the types of Test Approach?

A
Analytical
e.g. Risk-based
Model-based
e.g. Using statistics such as expected usage profiles
Methodical
e.g. Based on failures (error guessing), experience, checklist
Process- or standard-compliant 
e.g. Industry standards or agile methods
Dynamic/heuristic
e.g. Reactive, exploratory testing
Consultative
e.g. Based on advice from experts in technology or business
Regression-averse
e.g. Reuse and automation
15
Q

Test Estimation is?

A

A calculated approximation of the cost or effort required to complete a task.

16
Q

What are the approaches for Test Estimation?

A

Two Approaches:

The Metrics-based approach, based on metrics from former or similar projects, or on typical values.

The Expert-based approach, based on assessments by the owners of the tasks, or by domain experts.
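
As a rough worked example of the metrics-based approach (all figures invented): if a former, similar project's effort per test case is known, it can be scaled to the size of the new project:

```python
# Metrics-based estimation sketch: scale effort from a former, similar project.
# All figures are illustrative, not taken from any real project.
previous_test_cases = 400
previous_effort_hours = 800  # total test effort spent on the previous project
hours_per_test_case = previous_effort_hours / previous_test_cases  # 2.0 h/case

new_test_cases = 550
estimated_effort = new_test_cases * hours_per_test_case
print(f"Estimated effort: {estimated_effort:.0f} hours")  # Estimated effort: 1100 hours
```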

17
Q

What factors should you consider whilst Estimating?

A
Product -
Quality of the specification 
Size of the product
Complexity 
Requirements for reliability, security and documentation

Development process -
Stability of the organisation, tools used, test process, skills of the people involved, time pressure

Software quality -
Expected number of defects and the amount of rework required

18
Q

Why do we perform Test Progress Monitoring?

A

Provide feedback and visibility about testing
Assess progress against planned schedule and budget
Measure exit criteria such as coverage
Assess effectiveness of test approach with respect to objectives
Collect data for future project estimation

19
Q

True or false: Metrics can be collected manually or automatically?

A

True, Test tools (test management, execution tools, defect trackers) can record key data.

20
Q

State a few useful Metrics.

A

Percentage of work done in test case and environment preparation
Test case execution (e.g. number of test cases run/not run and test cases passed/failed)
Defect information (e.g. defect density, defects fixed, failure rate, retest results)
Coverage of requirements, risks or code
Dates of test milestones
Testing costs, including cost-benefit analysis of fixing defects
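
A small sketch of how two of these metrics might be derived from raw counts; the input numbers are made up:

```python
# Illustrative metric calculations from raw test-management counts.
tests_run, tests_passed = 180, 162
defects_found, kloc = 24, 12.0  # defect count and size in thousands of lines of code

pass_rate = tests_passed / tests_run * 100  # test case execution metric
defect_density = defects_found / kloc       # defect information metric

print(f"Pass rate: {pass_rate:.1f}%")                         # Pass rate: 90.0%
print(f"Defect density: {defect_density:.1f} defects/KLOC")   # 2.0 defects/KLOC
```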

21
Q

What should you consider when choosing Metrics?

A
Estimates (Time, Cost, etc.)
Exit criteria (e.g. coverage, risk and defect data)
Suspension criteria (e.g. quality, timescales)
22
Q

What actions should we consider in Test Control?

A

Re-prioritise tests when an identified risk occurs (e.g. software delivered late).
Change the test schedule due to availability of a test environment.
Set an entry criterion requiring fixes to have been retested by a developer before accepting them into a build.

23
Q

What are the objectives of Test Reporting?

A

To summarise information about test activities during a test phase:

What testing occurred?
Statistics on tests run/passed/failed, incidents raised/fixed

Were the exit criteria met?

To analyse data and metrics to support recommendations and decisions about future actions:
Assessment of defects remaining
Economic benefit of continued testing
Outstanding risks
Level of confidence in tested software
Effectiveness of objectives, approach and tests

24
Q

Impress me, outline the IEEE Std 829-1998 Test Summary Report:

A

Summary:

Software versions and hardware environment
Refer to test plan, logs and incident reports

Variances:

Changes from test plan, designs or procedures
Comprehensiveness assessment
Features not tested, with reasons

Summary of results:

Description of incidents, list of fixes and outstanding incidents

Evaluation:

Estimate of the software quality, reliability and failure risk

Summary of activities:

Effort and elapsed time categorised
Dates exit criteria were met

Approvals:

Provide a list and signature block for each approving authority

25
Q

What is Configuration Management?

A

The aim is to establish and maintain the integrity of the products of the software or system through the project and product life cycle.

26
Q

Configuration Management can support testing by ensuring that?

A
All items of testware are:
Uniquely identifiable
Version controlled
Tracked for changes
Related to each other 
Related to development items
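
As a sketch of what this could look like in practice (the record fields and identifiers are hypothetical, not part of any standard):

```python
from dataclasses import dataclass, field

# Hypothetical record for one configuration-managed testware item.
@dataclass
class TestwareItem:
    item_id: str                                         # uniquely identifiable
    version: str                                         # version controlled
    change_history: list = field(default_factory=list)   # tracked for changes
    related_items: list = field(default_factory=list)    # links to other testware
    dev_items: list = field(default_factory=list)        # links to development items

case = TestwareItem(
    item_id="TC-042",
    version="1.3",
    change_history=["1.0 created", "1.1 step 4 fixed", "1.3 data updated"],
    related_items=["TS-007"],            # the test specification it belongs to
    dev_items=["REQ-118", "build-2.4"],  # requirement and build under test
)
print(case.item_id, case.version)
```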
27
Q

What’s Risk?

A

“A factor that could result in future negative consequences; usually expressed as impact and likelihood”

ISTQB® Glossary

28
Q

What determines the level of Risk?

A

Financial, Legal, Safety, Image, Rework or Embarrassment.

29
Q

The main definition of Project Risk is?

A

Simply put, it's a risk to the project's ability to deliver its objectives. (Think people over software.)

30
Q

The main definition of Product Risk is?

A

Product risks are risks of failure in the software or system itself.

31
Q

What can affect Project Risk?

A

Technical issues
Problems in defining the right requirements
The extent that requirements can be met given existing constraints
Low quality of the design, code, test data and tests
Test environment not ready on time
Late data conversion or migration planning
Organisational factors
Skill, training and staff shortages
Personnel issues
Political issues, (e.g. communication problems)
Unrealistic expectations of testing
Supplier issues
Failure of a third party
Contractual issues

32
Q

What can affect Product Risk?

A

Failure-prone software delivered
Poor software characteristics
e.g. Reliability, usability, performance
Poor data integrity and quality
e.g. Data migration or conversion problems, violation of data standards
Software does not perform its intended functions
Potential for software to cause harm to an individual or company

33
Q

What’s Risk-based Testing?

A

Testing driven by the identification of risks: risk analysis guides test planning, prioritisation and effort.
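
A minimal sketch of the common scoring idea behind risk-based prioritisation, treating risk as likelihood times impact per the glossary definition above; the product areas and scores are invented:

```python
# Risk-based prioritisation sketch: score each product area by
# likelihood and impact (1 = low, 5 = high) and test the riskiest first.
areas = [
    ("Payment processing", 4, 5),  # (name, likelihood, impact)
    ("Report layout",      2, 2),
    ("User login",         3, 4),
]

# Sort by risk score (likelihood * impact), highest risk first.
ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: risk score {likelihood * impact}")
# Payment processing: 20, User login: 12, Report layout: 4
```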

34
Q

What is Incident Management?

A

Incident management is the process of recognising, investigating, taking action and disposing of incidents.

35
Q

What’s a Incident?

A

Discrepancies between actual and expected results are logged as incidents
They must be investigated and may turn out to be defects

36
Q

What could be a cause of an Incident?

A
Software defect
Requirement or specification defect
Environmental problem
e.g. Hardware, operating system, network
Test procedure or script fault
e.g. Incorrect, ambiguous or missing step
Incorrect test data
Incorrect expected results on test procedure
Tester error
Not following the procedure
37
Q

What are the objectives of an Incident Report?

A

Provide feedback to enable developers and other parties to identify, isolate and correct defects
Enable test leaders to track:
The quality of the system
The progress of the testing
Provide ideas for test process improvement
Identify defect clusters
Create a history of incidents and resolutions
Supply metrics for assessing exit criteria

38
Q

Can you outline the IEEE Std 829-1998 Test Incident Report?

A

Report Identifier
Unique reference for each incident

Summary
Of the circumstances in which the incident occurred, referring to software and revision level, test case and test log

Description
Of the incident, referring to inputs, expected results, actual results, anomalies, date and time, procedure step, environment, attempts to repeat, testers and observers

Impact
Of the incident on test plans, test case and procedure specifications, if known
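
For illustration, the four IEEE Std 829-1998 incident report sections could be captured in a simple record like this; the field names and sample values are my own mapping, not mandated by the standard:

```python
from dataclasses import dataclass

# Simple mapping of the IEEE Std 829-1998 test incident report sections.
@dataclass
class TestIncidentReport:
    report_identifier: str  # unique reference for each incident
    summary: str            # circumstances: software revision, test case, log
    description: str        # inputs, expected/actual results, environment, etc.
    impact: str             # effect on test plans and procedure specifications

incident = TestIncidentReport(
    report_identifier="IR-2024-001",
    summary="Failure during TC-042 on build 2.4; see test log TL-015",
    description="Expected total 100.00, actual 99.99; reproducible on retry",
    impact="Blocks remaining payment test cases until fixed",
)
print(incident.report_identifier)
```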

39
Q

What is the Test Policy?

A

A high-level document describing the principles, approach and major objectives of the organisation regarding testing.

40
Q

What is the Test Strategy?

A

Documentation that expresses the generic requirements for testing one or more projects run within an organisation, providing detail on how testing is to be performed, and is aligned with the test policy.

41
Q

What’s the definition of Test Control?

A

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported.