1. Fundamentals of Testing Flashcards

1
Q

Typical objectives of testing (7)

A
  • Prevent defects by evaluating work products (requirements, user stories, design, code)
  • Verify whether specified requirements are fulfilled.
  • Check the test object is complete and works as expected.
  • Build confidence in quality.
  • Find defects/failures to reduce risk of inadequate quality.
  • Provide enough information for stakeholders to make decisions (especially on quality).
  • Comply with or verify object’s compliance with contractual, legal or regulatory requirements.
2
Q

Differentiate testing from debugging

A

Testing = finding and showing failures caused by defects.

Debugging = dev activity finding, analysing and fixing the defects.

(Confirmation testing checks whether defects are resolved)
(Testers test and devs debug, except in agile and sometimes other methodologies, where testers may help debug)

3
Q

Give examples of why testing is necessary (4)

A
  • Detecting defects in requirements or stories - reduces risk of wrong/unstable features.
  • Testers working with system designers = better understanding of design and how to test it - reduces risk of fundamental design defects and enables early test identification.
  • Testers working with developers = better understanding of code and how to test it - reduces risk of defects in code and tests.
  • Testers verifying and validating software before release detects failures and supports debugging. Increases likelihood software meets stakeholder needs and requirements.
4
Q

Describe the relationship between testing and quality assurance

A

Quality management: umbrella containing QA and QC

Quality assurance: Prevents defects by ensuring processes used to manage/create deliverables work and are followed. Process oriented. Proactive.

Quality control: includes testing. Determines whether the end result is as expected - detects problems by inspecting and testing. Product oriented. Reactive.

QA aims for the whole process to be good, so it supports proper testing (which falls under QC).

5
Q

Distinguish between error, defect and failure

A
  • Error: human action that produces incorrect result. Caused by time pressure, inexperience etc.
  • Defect: Imperfection or deficiency in a work product, where it doesn’t meet requirements or specifications. Caused by errors in requirements, errors in programming etc.
  • Failure: Event where component or system does not perform required function within specified limits. Caused by defects in code, environmental factors etc.

(Not all unexpected test results are failures: false positives and false negatives also occur)
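The error → defect → failure chain can be sketched in code (hypothetical `is_valid_quantity` function):

```python
# Error: the programmer misreads "quantities up to and including 100" in the spec.
def is_valid_quantity(qty):
    return 0 < qty < 100        # defect: should be 0 < qty <= 100

# Failure: the defect only surfaces when the defective code is executed
# with data that triggers it -- here, the boundary value 100.
assert is_valid_quantity(50)            # defect present, but no failure observed
assert is_valid_quantity(100) is False  # failure: a valid input is rejected
```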

6
Q

Distinguish between root cause of a defect and its effects.

A

Root cause is the earliest actions or conditions that created a defect. The effect is the end result.

E.g.
Root cause: Story owner misunderstands something
Error: Story was not written correctly
Defect: Story is ambiguous
Defect: Code calculates improperly
Failure: Incorrect payment made to customer
Effect: Customer complains

7
Q

Explain the 7 testing principles

A

• Testing shows the presence of defects, not their absence.
(Can’t prove that there are no defects, can reduce probability of undiscovered defects remaining)

• Exhaustive testing is impossible.
(Impossible to test every possible combination - instead direct based on risk analysis, techniques and priorities)

• Early testing saves time and money.
(Static and dynamic testing ASAP in the dev life cycle, helps reduce/eliminate costly changes.)

• Defects cluster together.
(Few modules usually contain most defects. Predicted/observed clusters are factors in risk analysis and focussing test efforts.)

• Beware the pesticide paradox.
(Running the same tests over and over will no longer find new defects, like antibiotic resistance. Changing test data or adding new tests may detect new defects. In automated regression testing, though, the same tests finding fewer regression defects can be a good sign)

• Testing is context dependent.
(Different software (high safety medical vs. mobile game app) and different methodologies (agile vs. sequential) have different testing.)

• Absence of errors is a fallacy.
(Perfection is impossible. It is a fallacy to think that finding and fixing defects automatically makes software good. The software might still be inferior, badly designed etc.)
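The second principle can be illustrated with simple arithmetic (a sketch; the billion-tests-per-second figure is an assumed, optimistic rate):

```python
# A function with just three 32-bit integer parameters:
inputs_per_parameter = 2 ** 32
combinations = inputs_per_parameter ** 3        # 2**96 input combinations

# Even at an (assumed, optimistic) billion tests per second:
tests_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365
years_needed = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations:e} combinations; roughly {years_needed:.0e} years to run them all")
```

Hence testing effort is directed by risk analysis, test techniques and priorities instead.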

8
Q

Explain the impact of context on test practices.

A

Context affects an organisation’s test process. It shapes the coverage criteria, which in turn act as key performance indicators (KPIs) driving the demonstration that test objectives have been achieved.

Contextual factors can include: 
• Software development lifecycle
• Methodologies used
• Test levels/types being considered
• Product and project risks
• Business domain
• Operational constraints (budgets, resources, timescales, complexity, contractual/regulatory requirements)
• Organisational policies and practices
• Required internal and external standards.
9
Q

List the standard test activities (7)

A

TEST PLANNING

TEST MONITORING AND CONTROL

TEST ANALYSIS

TEST DESIGN

TEST IMPLEMENTATION

TEST EXECUTION

TEST COMPLETION

10
Q

Describe and list tasks within test planning

A

Define the test objectives, and decide on an approach for meeting those objectives within the constraints imposed by the context.

11
Q

Describe and list tasks within test monitoring and control

A

Monitoring - Ongoing comparison of actual progress against planned progress, using monitoring metrics defined in the plan.

Control - Taking the actions necessary to meet the objectives of the plan. (Supported by the evaluation of exit criteria; progress against the plan is communicated to stakeholders in progress reports.)

12
Q

Describe and list tasks within test analysis

A

(What to test)

Determines what to test in terms of measurable coverage criteria. The test basis is analysed to identify testable features and to define the associated test conditions.

Major activities:

• Analysing the test basis appropriate to the test level being considered.

  • Requirement specifications (business/functional req, stories, use cases, similar work products)
  • Design and implementation information (system/software architecture, design specs, call flow graphs)
  • Implementation of component or system itself (code, database metadata/queries, interfaces)
  • Risk analysis reports (considering functional, non-functional and structural aspects)

• Evaluating the test basis and items to identify defects.
- Such as ambiguities, omissions, inconsistencies, inaccuracies, contradictions and superfluous statements.

• Identifying features and sets of features to be tested.

• Defining and prioritising test conditions for each feature.
- Based on analysis of test basis, characteristics, business/technical factors, risks

• Capturing bi-directional traceability between each element of the test basis and associated test conditions.

13
Q

Describe and list tasks within test design

A

(How to test)

Test conditions are expanded into high level test cases, sets thereof, and other testware.

  • Designing and prioritising test cases and sets thereof.
  • Identifying necessary test data to support conditions and cases.
  • Designing test environment and identifying any needed infrastructure/tools.
  • Capturing bi-directional traceability between test basis, conditions, and cases.
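A high-level test case that is later bound to concrete data per test cycle can be sketched as a parameterised test (hypothetical `transfer_fee` function; plain Python rather than any particular framework):

```python
def transfer_fee(amount):
    """Hypothetical test object: 1% fee with a 2.00 minimum."""
    return max(round(amount * 0.01, 2), 2.00)

# High-level test case: "the fee is 1% of the amount, never below the minimum".
# The test condition stays fixed; the concrete data is bound per test cycle.
test_data = [
    (50.00, 2.00),     # below the minimum: the floor applies
    (200.00, 2.00),    # exactly at the minimum
    (1000.00, 10.00),  # 1% of the amount, above the minimum
]

for amount, expected in test_data:
    actual = transfer_fee(amount)
    assert actual == expected, f"{amount}: expected {expected}, got {actual}"
```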
14
Q

Describe and list tasks within test implementation

A

(Do we now have everything in place to run the tests?)

Testware necessary for execution is created/completed, including sequencing test cases into test procedures.

  • Developing and prioritising test procedures (and possibly creating automated test scripts).
  • Creating test suites out of procedures (and automated scripts if any)
  • Arranging test suites within an execution schedule to result in efficient test execution.
  • Building the test environment (inc. test harnesses, service virtualisation, simulators) and verifying that everything needed is set up correctly.
  • Preparing test data and ensuring it’s properly loaded into test environment.
  • Verifying and updating bi-directional traceability between test basis, conditions, cases, procedures and suites.
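The grouping of procedures into suites and an execution schedule can be sketched as follows (all suite and procedure names hypothetical):

```python
# Test procedures (possibly automated scripts) grouped into suites:
suites = {
    "smoke": ["login", "view_dashboard"],
    "regression": ["login", "transfer_funds", "logout"],
}

# Execution schedule: suites arranged for efficient execution,
# e.g. the quick, high-value smoke suite first.
schedule = ["smoke", "regression"]
execution_order = [proc for name in schedule for proc in suites[name]]
```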
15
Q

Describe and list tasks within test execution

A

Test suites are run in accordance with the execution schedule.

  • Recording IDs & versions of test item, object, tool, testware.
  • Executing tests (manually & with tools).
  • Comparing actual and expected results.
  • Analysing anomalies to establish likely causes (failures due to code defects, false positives)
  • Reporting defects based on failures observed.
  • Logging outcome of test execution.
  • Repeating test activities - either as a result of action taken for an anomaly or as part of planned testing (e.g. execution of a corrected test, confirmation/regression testing)
  • Verifying and updating bi-directional traceability between the test basis, conditions, cases, procedures and results.
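Several of these activities (executing tests, comparing actual vs expected results, logging outcomes, catching anomalies) can be sketched as a minimal runner (all names hypothetical):

```python
def run_suite(test_cases):
    """Execute each case, compare actual vs expected, and log the outcome."""
    log = []
    for name, func, args, expected in test_cases:
        try:
            actual = func(*args)
            status = "pass" if actual == expected else "fail"
        except Exception as exc:            # anomaly: analyse the likely cause
            actual, status = repr(exc), "error"
        log.append({"test": name, "status": status,
                    "expected": expected, "actual": actual})
    return log

suite = [
    ("adds", lambda a, b: a + b, (2, 3), 5),
    ("divides", lambda a, b: a / b, (1, 0), None),   # raises ZeroDivisionError
]
results = run_suite(suite)
```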
16
Q

Describe and list tasks within test completion

A

Collect data from completed activities to consolidate experience, testware, and other relevant information. Happens at project milestones (system release, project completion/cancellation, agile iteration finish, test level completion, maintenance release).

  • Checking whether all defect reports are closed, entering change requests or product backlog items for any that remain unresolved.
  • Creating a test summary report to be communicated to stakeholders.
  • Finalising and archiving test environment, data, infrastructure, and other testware for later reuse.
  • Handing over testware to maintenance teams, other project teams, and other stakeholders who might benefit from its use.
  • Analysing lessons learned from completed test activities to determine changes needed for future iterations, releases and projects.
  • Using information gathered to improve test process maturity.
17
Q

Test planning work products

A

• One or more test plans
◦ Info on test basis (linked to other products by traceability information)
◦ Exit criteria (used during monitoring and control)

18
Q

Test monitoring and control work products

A

• Several types of test reports (providing audience-relevant details about progress, including results where available, and addressing project-management concerns such as task completion, resource usage, and effort)
◦ Test progress reports (produced on ongoing/regular basis)
◦ Test summary reports (Produced at milestones)

19
Q

Test analysis work products

A
  • Defined and prioritised test conditions (each ideally bi-directionally traceable to the element(s) of the test basis it covers)
  • Test charters (for exploratory testing)
  • Reports of any defects discovered in the test basis.
20
Q

Test design work products

A

• Test cases and sets thereof to exercise conditions defined in analysis
◦ High level test cases without concrete data for input data and expected results (reusable across multiple test cycles with different data but still documents scope well)

• Design/identification of necessary test data
• Design of the test environment
• Identification of infrastructure and tools
21
Q

Test implementation work products

A
  • Test procedures and the sequencing of those procedures
  • Test suites
  • A test execution schedule
22
Q

Test execution work products

A
  • Documentation of the status of individual test cases or procedures (e.g. ready to run, pass, fail, blocked, deliberately skipped etc.)
  • Defect reports
  • Documentation about which test items, objects, tools and testware were involved in the testing.
23
Q

Test completion work products

A
  • Test summary reports
  • Action items for improvement of subsequent iterations/projects
  • Change requests or product backlog items
  • Finalised testware
24
Q

Explain the value of maintaining traceability between the test basis and the test work products

A

Traceability is vital for implementing test monitoring and control, and for evaluating test coverage.

Traceability supports:
• Analysing the impact of changes
• Making testing auditable
• Meeting IT governance criteria
• Improving the understandability of test progress reports and summary reports to include the status of elements of the test basis (eg. requirements that have passed, failed, or are pending their tests)
• Relating technical aspects of testing to stakeholders in terms they can understand
• Providing information to assess product quality, process capability, and project progress against business goals.
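Bi-directional traceability is essentially a mapping that can be queried in both directions; a minimal sketch with hypothetical requirement and test-case IDs:

```python
# Forward direction: test basis element -> test cases covering it.
requirement_to_tests = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-102"],
    "REQ-3": [],                        # gap: an uncovered requirement
}

# Backward direction, derived from the forward map.
test_to_requirements = {}
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements.setdefault(tc, []).append(req)

# Impact analysis: if REQ-2 changes, which tests must be revisited?
impacted = requirement_to_tests["REQ-2"]

# Coverage for progress reporting: which requirements have no tests yet?
uncovered = [req for req, tests in requirement_to_tests.items() if not tests]
```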

25
Q

Identify the psychological factors that influence the success of testing

A

Testing can be perceived as destructive due to:
• Confirmation bias – difficulty accepting information that doesn’t align with currently held beliefs (hard to accept there are defects in code you created and think is good)
• Other cognitive biases – difficulty accepting/understanding new information.
• Blaming the bearer of bad news – information from testers is often ‘bad news’

Conflict can be avoided by:
• Good interpersonal skills – communicating effectively in a neutral, constructive way.
• Aligning one’s behaviour with test objectives with minimal personal biases.

26
Q

Explain the difference between the mindsets required for test and development activities

A

Tester:
• Curiosity
• Professional pessimism
• Critical eye
• Attention to detail
• Motivation for good and positive communication and relationships
• Mindset that tends to grow and mature as a tester gains experience

Developer:
• Might include tester elements
• Often more focussed on designing and building solutions than evaluating the potential problems in the solution they have already come up with.
• Confirmation bias makes it hard to spot problems in one’s own solution.

With the right mindset, developers can test their own code but independent testers increase defect detection effectiveness (particularly important in large, complex, safety critical systems). Independent testers often bring a new perspective because they did not author/own/design the product and have different cognitive biases.