SQA Fundamentals Flashcards
So what happens to all the traditional software testing methods, types and artifacts? Do we throw them away?
Naaah! You will still need all those software testing methods, types and artifacts (but at varying degrees of priority and necessity). You will, however, need to completely throw away that traditional attitude and embrace the agile attitude.
Test case
A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.
The process of developing test cases can also help find problems in the requirements or design of an application.
Regression testing
Regression testing is a type of software testing that intends to ensure that changes (enhancements or defect fixes) to the software have not adversely affected it.
Any code change carries the risk of impacting functionality that is not directly associated with it, so it is essential to conduct regression testing to make sure that fixing one thing has not broken another.
During regression testing, new test cases are not created but previously created test cases are re-executed.
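To make this concrete, here is a minimal sketch (the function, file names, and tests are all invented) of a pytest suite whose previously created test cases are simply re-executed after every change:

# discount.py -- code under test (hypothetical)
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# test_discount.py -- previously created test cases, re-run as regression tests
from discount import apply_discount

def test_full_price():
    assert apply_discount(100.0, 0) == 100.0

def test_half_price():
    assert apply_discount(100.0, 50) == 50.0

Running pytest after every enhancement or defect fix re-executes these existing tests and flags anything the change has broken.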
Security
The extent of protection of software against unauthorized access, invasion of privacy, theft, loss of data, etc.
Ad hoc testing
Ad hoc Testing, also known as Random Testing or Monkey Testing, is a method of software testing without any planning and documentation. The tests are conducted informally and randomly without any formal expected results.
The tester improvises the steps and arbitrarily executes them (like a monkey typing while dancing). Though defects found using this method are more difficult to reproduce (since there are no written test cases), sometimes very interesting defects are found which would never have been found if written test cases existed and were strictly followed.
STLC
SOFTWARE TESTING LIFE CYCLE (STLC) defines the steps/stages/phases in the testing of software. However, there is no fixed standard STLC; it basically varies as per the following:
- Software Development Life Cycle
- Whims of the Management
Nevertheless, the Software Testing Life Cycle, in general, comprises the following phases:
- Requirements/design review
- Test planning
- Test designing
- Test environment setup
- Test execution
- Test reporting
Note that the STLC phases mentioned above do not necessarily have to be in the order listed; some phases can run in parallel (for instance, Test Designing and Test Execution). Interestingly, no matter how well-defined a Software Testing Life Cycle your project or organization has, there are chances that you will invariably witness the following widely popular cycle:
- Testing
- Cursing
Literal definition of regression
the act of going back to a previous place or state; return or reversion.
Disadvantages of functional testing
- It has the potential of missing logical errors in the software.
- It has a high possibility of redundant testing.
NOTE: Functional testing is more effective when the test conditions are created directly from user/business requirements. When test conditions are created from the system documentation (system requirements/ design documents), the defects in that documentation will not be detected through testing and this may be the cause of end-users’ wrath when they finally use the software.
At which level(s) is gray box testing applicable?
Though Gray Box Testing method may be used in other levels of testing, it is primarily useful in Integration Testing.
When is acceptance testing performed?
Acceptance Testing is the fourth and last level of software testing performed after System Testing and before making the system available for actual use.
System testing analogy
During the process of manufacturing a ballpoint pen, the cap, the body, the tail, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. When the complete pen is integrated, System Testing is performed.
How to report a defect effectively
It is essential that you report defects effectively so that time and effort is not unnecessarily wasted in trying to understand and reproduce the defect. Here are some guidelines:
- Be specific:
Specify the exact action: Do not say something like ‘Select ButtonB’. Do you mean ‘Click ButtonB’ or ‘Press ALT+B’ or ‘Focus on ButtonB and press ENTER’? Of course, if the defect can be arrived at by all three ways, it is okay to use a generic term like ‘Select’, but bear in mind that you might just get the fix for the ‘Click ButtonB’ scenario. [Note: This might be a highly unlikely example, but it is hoped that the message is clear.]
In case of multiple paths, mention the exact path you followed: Do not say something like “If you do ‘A and X’ or ‘B and Y’ or ‘C and Z’, you get D.” Understanding all the paths at once will be difficult. Instead, say “Do ‘A and X’ and you get D.” You can, of course, mention elsewhere in the report that “D can also be obtained by doing ‘B and Y’ or ‘C and Z’.”
Do not use vague pronouns: Do not say something like “In ApplicationA, open X, Y, and Z, and then close it.” What does the ‘it’ stand for: ‘Z’, ‘Y’, ‘X’, or ‘ApplicationA’?
- Be detailed:
Provide more information (not less). In other words, do not be lazy. Developers may or may not use all the information you provide but they sure do not want to beg you for any information you have missed.
- Be objective:
Do not make subjective statements like “This is a lousy application” or “You fixed it real bad.”
Stick to the facts and avoid the emotions.
- Reproduce the defect:
Do not be impatient and file a defect report as soon as you uncover a defect. Replicate it at least once more to be sure. (If you cannot replicate it again, try recalling the exact test condition and keep trying. If you still cannot replicate it after many attempts, finally submit the report for further investigation, stating that you are unable to reproduce the defect anymore and providing any evidence of the defect you have gathered.)
- Review the report:
Do not hit ‘Submit’ as soon as you write the report. Review it at least once. Remove any typos.
Definition of Quality (IEEE)
- The degree to which a system, component, or process meets specified requirements.
- The degree to which a system, component, or process meets customer or user needs or expectations.
Functionality
The ability of software to carry out the functions as specified or desired.
Portability
The ability of software to be transferred easily from one location to another.
White box testing example
A tester, usually a developer as well, studies the implementation code of a certain field on a webpage, determines all legal (valid and invalid) AND illegal inputs, and verifies the outputs against the expected outcomes, which are also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the car is not moving.
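A minimal sketch of this idea (the function and tests are invented): the tester reads the implementation and chooses inputs that exercise every branch of the code.

import pytest

# Implementation under test (hypothetical); the tester reads this code...
def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    elif age < 18:
        return "minor"
    return "adult"

# ...and designs one test per branch/path.
def test_negative_age_branch():
    with pytest.raises(ValueError):
        classify_age(-1)

def test_minor_branch():
    assert classify_age(17) == "minor"

def test_adult_branch():
    assert classify_age(18) == "adult"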
Bug classifications
- Severity / Impact (See Defect Severity)
- Probability / Visibility (See Defect Probability)
- Priority / Urgency (See Defect Priority)
- Related Dimension of Quality (See Dimensions of Quality)
- Related Module / Component
- Phase Detected
- Phase Injected
NOTE: We prefer the term ‘Defect’ over the term ‘Bug’ because ‘Defect’ is more comprehensive.
Cost of Quality
Cost of Quality (COQ) is a measure that quantifies the cost of control/conformance and the cost of failure of control/non-conformance. In other words, it sums up the costs related to prevention and detection of defects and the costs due to occurrences of defects.
Definition by ISTQB: cost of quality: The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal failure costs and external failure costs.
Definition by QAI: Money spent beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect free) product. The Cost of Quality includes prevention, appraisal, and correction or repair costs.
Integration testing approaches
- Big Bang is an approach to Integration Testing where all or most of the units are combined together and tested at one go. This approach is taken when the testing team receives the entire software in a bundle. So what is the difference between Big Bang Integration Testing and System Testing? Well, the former tests only the interactions between the units while the latter tests the entire system.
- Top Down is an approach to Integration Testing where top level units are tested first and lower level units are tested step by step after that. This approach is taken when top down development approach is followed. Test Stubs are needed to simulate lower level units which may not be available during the initial phases.
- Bottom Up is an approach to Integration Testing where bottom level units are tested first and upper level units step by step after that. This approach is taken when bottom up development approach is followed. Test Drivers are needed to simulate higher level units which may not be available during the initial phases.
- Sandwich/Hybrid is an approach to Integration Testing which is a combination of Top Down and Bottom Up approaches.
Defect detection efficiency
Defect Detection Efficiency (DDE) is the number of defects detected during a phase/stage that were injected during that same phase, divided by the total number of defects injected during that phase (usually expressed as a percentage).
The phase a defect is ‘injected’ in is identified by analyzing the defect. [For instance, a defect can be detected in the System Testing phase but be caused by a wrong design; hence, the injected phase for that defect is the Design phase.]
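A worked sketch of the formula with invented counts:

# Invented counts for the Design phase
injected_in_design = 40   # total defects injected during Design
detected_in_design = 30   # of those, detected during Design itself

dde = detected_in_design / injected_in_design * 100
print(f"DDE = {dde:.0f}%")  # DDE = 75%; the other 25% leak to later phases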
Security testing
Security Testing is a type of software testing that intends to uncover vulnerabilities of the system and determine that its data and resources are protected from possible intruders.
Bebugging
The process of intentionally injecting bugs into a software program to estimate test coverage by monitoring the detection of those bugs.
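A sketch of the estimation arithmetic (all counts are invented): the fraction of seeded bugs that the tests find estimates how effective the tests are, which can then be used to extrapolate the number of genuine defects.

# Invented counts
seeded = 25          # bugs intentionally injected
seeded_found = 20    # seeded bugs the test suite detected
real_found = 60      # genuine (non-seeded) defects detected by the same suite

detection_rate = seeded_found / seeded              # 0.8 -> tests catch ~80%
estimated_real_total = real_found / detection_rate  # ~75 genuine defects in total
print(detection_rate, round(estimated_real_total))  # 0.8 75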
Gray box testing example
An example of Gray Box Testing would be when the code for two units/modules is studied (White Box Testing method) for designing test cases and actual tests are conducted using the exposed interfaces (Black Box Testing method).
Who performs unit testing?
It is normally performed by software developers themselves or their peers. In rare cases it may also be performed by independent software testers.
White box testing
(also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. Programming know-how and implementation knowledge are essential. White box testing is testing beyond the user interface and into the nitty-gritty of a system.
This method is named so because, in the eyes of the tester, the software program is like a white/transparent box into which one can clearly see.
Testing based on an analysis of the internal structure of the component or system.
Defect age calculation
Defect Age in Time = Defect Fix Date (OR Current Date) – Defect Detection Date
Normally, average age of all defects is calculated.
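The same calculation as a sketch in Python (the dates are illustrative):

from datetime import date

detection_date = date(2024, 1, 10)   # illustrative
fix_date = date(2024, 1, 24)         # use date.today() if the defect is still open

defect_age = fix_date - detection_date
print(defect_age.days)  # 14 days; this is then averaged across all defects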
Uses of defect density
- For comparing the relative number of defects in various software components so that high-risk components can be identified and resources focused towards them.
- For comparing software/products so that quality of each software/product can be quantified and resources focused towards those with low quality.
Boundary value analysis
It is a software test design technique that involves determination of boundaries for input values and selecting values that are at the boundaries and just inside/ outside of the boundaries as test data.
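For example (a hypothetical field accepting integers from 1 to 100), the selected test data would be the values at and immediately around both boundaries:

# Hypothetical input field accepting integers in the range 1..100
lower, upper = 1, 100

test_data = [
    lower - 1,  # 0   -- just outside the lower boundary (invalid)
    lower,      # 1   -- on the lower boundary (valid)
    lower + 1,  # 2   -- just inside the lower boundary (valid)
    upper - 1,  # 99  -- just inside the upper boundary (valid)
    upper,      # 100 -- on the upper boundary (valid)
    upper + 1,  # 101 -- just outside the upper boundary (invalid)
]
print(test_data)  # [0, 1, 2, 99, 100, 101]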
Acceptance testing
ACCEPTANCE TESTING is a level of software testing where a system is tested for acceptability. The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
Performance testing
Performance Testing is a type of software testing that intends to determine how a system performs in terms of responsiveness and stability under a certain load.
System testing
SYSTEM TESTING is a level of software testing where a complete and integrated software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.
The process of testing an integrated system to verify that it meets specified requirements.
Defect age
Defect Age can be measured in terms of any of the following:
- Time
- Phases
DEFECT AGE (IN TIME)
Defect Age (in Time) is the difference in time between the date a defect is detected and the current date (if the defect is still open) or the date the defect was fixed (if the defect is already fixed).
Dropped defects are not counted.
‘fixed’ means that the defect is verified and closed; not just ‘completed’ by the developer.
Unit testing
UNIT TESTING is a level of software testing where individual units/components of a software are tested. The purpose is to validate that each unit of the software performs as designed.
A unit is the smallest testable part of software. It usually has one or a few inputs and usually a single output. In procedural programming, a unit may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class, or derived/child class. (Some treat a module of an application as a unit. This is to be discouraged, as there will probably be many individual units within that module.)
Unit testing frameworks, drivers, stubs, and mock/fake objects are used to assist in unit testing.
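A minimal sketch using Python's built-in unittest framework (the unit under test is invented):

import unittest

# The 'unit' under test: the smallest testable part (here, a single function)
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_positives(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_a_negative(self):
        self.assertEqual(add(2, -3), -1)

if __name__ == "__main__":
    unittest.main()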
Equivalence partitioning
It is a software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
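For example (a hypothetical field accepting integers from 1 to 100), the input space divides into three partitions, and one representative value is selected from each:

# Hypothetical field accepting integers in the range 1..100
test_data = {
    "invalid (below range)": -5,   # any value < 1 is assumed to behave the same
    "valid (in range)": 50,        # any value in 1..100 is assumed to behave the same
    "invalid (above range)": 150,  # any value > 100 is assumed to behave the same
}
for partition, value in test_data.items():
    print(partition, "->", value)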
Cost of Quality calculation
Cost of Quality (COQ) = Cost of Control + Cost of Failure of Control
where
Cost of Control = Prevention Cost + Appraisal Cost
and
Cost of Failure of Control = Internal Failure Cost + External Failure Cost
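A worked sketch with invented cost figures:

# Invented figures (e.g., in dollars)
prevention_cost = 10_000        # training, reviews, standards
appraisal_cost = 15_000         # testing, inspections, audits
internal_failure_cost = 8_000   # rework on defects found before release
external_failure_cost = 20_000  # defects found by customers, support, patches

cost_of_control = prevention_cost + appraisal_cost                          # 25,000
cost_of_failure_of_control = internal_failure_cost + external_failure_cost  # 28,000
print(cost_of_control + cost_of_failure_of_control)                         # 53000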
Test plan
- test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, amongst other items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
- master test plan: A test plan that typically addresses multiple test levels.
- phase test plan: A test plan that typically addresses one test phase.
Different levels of testing from most granular to broadest
Unit -> Integration -> System -> Acceptance
Maintainability
The ease with which software can be modified (adding features, enhancing features, fixing bugs, etc.)
Guidelines for implementing a defect life cycle
- Make sure the entire team understands what each defect status exactly means. Also, make sure the defect life cycle is documented.
- Ensure that each individual clearly understands his/her responsibility as regards each defect.
- Ensure that enough detail is entered in each status change. For example, do not simply DROP a defect but provide a reason for doing so.
- If a defect tracking tool is being used, avoid entertaining any ‘defect related requests’ without an appropriate change in the status of the defect in the tool. Do not let anybody take shortcuts. Or else, you will never be able to get up-to-date defect metrics for analysis.
Localizability
The ability of software to be used in different languages, time zones, etc.
Concurrency
The ability of software to service multiple requests to the same resources at the same time.
Types of standards for compliance testing
- Internal standards could be standards set by the company itself. For example, a web application development company might set the standard that all webpages must be responsive.
- External standards could be standards set outside of the company. For example, the Health Insurance Portability and Accountability Act (HIPAA) has set regulations for the healthcare industry.
- Compliance testing could also be done by an external organization. This normally results in some sort of compliance certification.
- The method and type of testing to be conducted during compliance testing depends on the specific regulation / standard being assessed.
- The depth of compliance testing could range from a high-level audit on a sampling basis to a detailed scrutiny of each specified standard.
Scalability
The measure of software’s ability to increase or decrease in performance in response to changes in its processing demands.
Defect life cycle
DEFECT LIFE CYCLE (Bug Life cycle) is the journey of a defect from its identification to its closure. The Life Cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the Defect tracking tool being used.
It could be something like: NEW - ASSIGNED - DEFERRED - DROPPED - COMPLETED - REASSIGNED - CLOSED
Note on DEFERRED: If it is decided that a valid NEW or ASSIGNED defect will be fixed in an upcoming release instead of the current release, it is DEFERRED. This defect is ASSIGNED again when the time comes.
Test case format
Normally a test management tool is used by companies and the format is determined by the tool used, but in general it should contain:
- Test suite ID
- Test case ID
- Test case summary
- Related requirement
- Prerequisites
- Test procedure (step by step process)
- Test data (or links to the data) to be used in the test
- Expected result
- Actual result
- Status (Pass or Fail. Other statuses can be ‘Not Executed’ if testing is not performed and ‘Blocked’ if testing is blocked.)
- Comments
- Created by
- Date of creation
- Executed by
- Date of execution
- Test environment (The environment (Hardware/Software/Network) in which the test was executed.)
Tasks involved in acceptance testing
Acceptance Test Plan
- Prepare
- Review
- Rework
- Baseline
Acceptance Test Cases/Checklist
- Prepare
- Review
- Rework
- Baseline
Acceptance Test
- Perform
Compliance testing
Compliance Testing [also known as conformance testing, regulation testing, standards testing] is a type of testing to determine the compliance of a system with internal or external standards.
Accessibility
The degree to which software can be used comfortably by a wide variety of people, including those who require assistive technologies like screen magnifiers or voice recognition.
Which level(s) of testing is regression testing applicable to?
Regression testing can be performed during any level of testing (Unit, Integration, System, or Acceptance), but it is mostly relevant during System Testing.
Defect (bug) report template
In most companies, a defect reporting tool is used and the elements of a report can vary. However, in general, a defect report can consist of the following elements:
- Unique defect ID
- Project name
- Product name
- Release version
- Module
- Detected build version
- Summary
- Description
- Steps to replicate
- Actual result
- Expected results
- Attachments (screenshots, logs)
- Remarks
- Defect severity
- Defect priority
- Reported by
- Assigned to
- Status
- Fixed build version
Types of load testing
- Load Testing is a type of performance testing conducted to evaluate the behavior of a system at increasing workload.
- Stress Testing is a type of performance testing conducted to evaluate the behavior of a system at or beyond the limits of its anticipated workload.
- Endurance Testing is a type of performance testing conducted to evaluate the behavior of a system when a significant workload is given continuously.
- Spike Testing is a type of performance testing conducted to evaluate the behavior of a system when the load is suddenly and substantially increased.
Methods for integration testing
Any of Black Box Testing, White Box Testing, and Gray Box Testing methods can be used. Normally, the method depends on your definition of ‘unit’.
Defect density formula
Defect density = Number of defects / Size
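A worked sketch with invented numbers, using KLOC (thousand lines of code) as the size measure:

# Invented numbers; size is commonly measured in KLOC or function points
defects = 30
size_kloc = 15  # i.e., 15,000 lines of code

print(defects / size_kloc)  # 2.0 defects per KLOC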
Test automation frameworks
There are also many Test Automation Tools/Frameworks that generate the test scripts for you, without the need for actual coding. Many of these tools have their own scripting languages (some of them based on a core scripting language). For example, Sikuli, a GUI automation tool, uses Sikuli Script, which is based on Python. A test script can be as simple as the one below (a hypothetical sketch; the .png names are placeholder screenshots):
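# Hypothetical Sikuli script; each .png is a saved screenshot of a UI element
click("search_box.png")            # click the control matching the image
type("software testing")           # type into the field that now has focus
click("search_button.png")
assert exists("results_page.png")  # verify that the expected screen appears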
Defect severity classifications
- Critical: The defect affects critical functionality or critical data. It does not have a workaround. Example: Unsuccessful installation, complete failure of a feature.
- Major: The defect affects major functionality or major data. It has a workaround, but the workaround is neither obvious nor easy. Example: A feature is not functional from one module but the task is doable if 10 complicated indirect steps are followed in another module (or modules).
- Minor: The defect affects minor functionality or non-critical data. It has an easy workaround. Example: A minor feature that is not functional in one module but the same task is easily doable from another module.
- Trivial: The defect does not affect functionality or data. It does not even need a workaround. It does not impact productivity or efficiency. It is merely an inconvenience. Example: Petty layout discrepancies, spelling/grammatical errors.
Severity is also denoted as: S1 = Critical, S2 = Major, S3 = Minor, S4 = Trivial
At what levels is white box testing applicable?
- Unit Testing: For testing paths within a unit.
- Integration Testing: For testing paths between units.
- System Testing: For testing paths between subsystems.
Advantages of smoke testing
- It exposes integration issues.
- It uncovers problems early.
- It provides some level of confidence that changes to the software have not adversely affected major areas (the areas covered by smoke testing, of course).
Target value of defect detection efficiency
The ultimate target value for Defect Detection Efficiency is 100% which means that all defects injected during a phase are detected during that same phase and none are transmitted to subsequent phases. [Note: the cost of fixing a defect at a later phase is higher.]
How is defect priority determined?
Priority is quite a subjective decision; do not take the categorizations above as authoritative. However, at a high level, priority is determined by considering the following:
- Business need for fixing the defect
- Severity/Impact
- Probability/Visibility
- Available Resources (Developers to fix and Testers to verify the fixes)
- Available Time (Time for fixing, verifying the fixes and performing regression tests after the verification of the fixes)
Defect Priority needs to be managed carefully in order to avoid product instability, especially when there is a large number of defects.
Advantages of black-box testing
- Tests are done from a user’s point of view and will help in exposing discrepancies in the specifications.
- Testers need not know programming languages or how the software has been implemented.
- Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer-bias.
- Test cases can be designed as soon as the specifications are complete.
Usability testing
Usability Testing is a type of testing done from an end-user’s perspective to determine if the system is easily usable.
The purpose of this testing is to review the application user interface and other human factors of the application with the people who will be using the application. This is to ensure that the design (layout, sequence, etc.) enables the business functions to be executed as easily and intuitively as possible.
Compatibility
The suitability of software for use in different environments like different Operating Systems, Browsers, etc.