Manual QA Interview Questions Flashcards

1
Q

Software Development Life Cycle (SDLC)

A

A process used by the software industry to design, develop, and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.

2
Q

STLC Software Testing Life Cycle

A

A sequence of different activities performed by the testing team to ensure the quality of the software or the product. STLC is an integral part of the Software Development Life Cycle (SDLC), but it deals only with the testing phases.

3
Q

Unit Testing

A

Aims to verify each part of the software by isolating it and then performing tests to demonstrate that each individual component is correct in terms of fulfilling requirements and delivering the desired functionality. [Done by developers]
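
A minimal sketch of what such a test can look like, using Python's unittest module for illustration; the add function here is a hypothetical unit, not taken from any real project.

  import unittest

  # Hypothetical unit under test: a simple function exercised in isolation.
  def add(a, b):
      return a + b

  class TestAdd(unittest.TestCase):
      def test_add_two_numbers(self):
          # Verify the individual component fulfils its requirement.
          self.assertEqual(add(2, 3), 5)

  if __name__ == "__main__":
      unittest.main()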

4
Q

Integration Testing

A

Aims to test different parts of the system in combination in order to assess whether they work correctly together. By testing the units in groups, any faults in the way they interact can be identified. [Done by developers]

5
Q

System Testing

A

All the components of the software are tested as a whole in order to ensure that the overall product meets the requirements specified. [Done by Testers]

6
Q

Acceptance Testing

A

The level in the software testing process where a product is given the green light or not. The aim of this type of testing is to evaluate whether the system complies with the end-user requirements and whether it is ready for deployment. [Done by users]

7
Q

Steps the Defect goes through from discovery to resolution

A

New, Assigned, Open, Fixed, Retest, Closed

8
Q

Test Scenario

A

Any functionality that can be tested. It is also called Test Condition or Test Possibility.

9
Q

Test Case

A

A set of actions executed to verify a particular feature or functionality of your software application. A Test Case contains test steps, test data, a precondition, and a postcondition, developed for a specific test scenario to verify a requirement.
Example: Test login with a valid username and a valid password

A test scenario can be covered by more than one test case.
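
A sketch of that login test case captured as structured data; Python is used only for illustration, and every value is a hypothetical example.

  # Hypothetical test case for the login scenario; all values are illustrative.
  test_case = {
      "id": "TC_LOGIN_001",
      "scenario": "Verify the login functionality",
      "precondition": "A registered user account exists",
      "steps": [
          "Open the login page",
          "Enter a valid username and a valid password",
          "Click the Login button",
      ],
      "test_data": {"username": "valid_user", "password": "valid_password"},
      "expected_result": "The user is redirected to the home page",
      "postcondition": "The user is logged in",
  }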

10
Q

Functional Testing

A

A type of testing which verifies that each function of the software application operates in conformance with the requirement specification. It tests what the system does.

11
Q

Non-functional testing

A

A type of testing to check non-functional aspects (performance, usability, reliability, etc.) of a software application. It tests how well the system performs.

12
Q

Verification

A

Are we building the product right?

Techniques used: Informal review, inspection, walkthrough, technical and peer review.

13
Q

Validation

A

Are we building the right product?

Techniques used: Functional testing, system testing, smoke testing, regression testing, etc.

14
Q

When should we start testing in our project?

A

Software testing should start early in the Software Development Life Cycle. This helps to capture and eliminate defects in the early stages of the SDLC, i.e., the requirement gathering and design phases. An early start to testing reduces the number of defects and, ultimately, the rework cost.

“Early testing saves time and money”.

15
Q

If we don’t have clear written user requirements, how can we test the software?

A
  1. Work with whatever little documentation you can get your hands on.
  2. Use the older/current version of the application as a reference to test the future release of a software product.
  3. Talk to the project team members.
  4. Use exploratory testing to test the application when it is ready.
16
Q

What is exploratory testing, why do we use it?

A

Simultaneous learning, test design and test execution.

In exploratory testing, test cases are not created in advance; testers check the system on the fly.

17
Q

Exploratory testing is used for two reasons:

A
  1. When we don’t have time to design test cases
  2. When there are poor or no requirements
18
Q

A defect which could have been removed during the initial stage is removed in a later stage. How does this affect the cost?

A

The later a defect is found in the development life cycle, the higher the cost of fixing it.

19
Q

Change-related Testing

A

Focuses on verifying that any new builds or code changes do not adversely affect the system.

20
Q

Confirmation Testing or Re-Testing

A

When a test fails because of a defect, the defect is reported, and a new version of the software that fixes the defect is expected. In this case, we need to execute the test again to confirm whether the defect was actually fixed.

21
Q

Regression Testing

A

A type of software testing to confirm that a recent program or code change has not adversely affected existing features.

Impact analysis is used to determine how much regression testing will be required.

22
Q

Black-box Testing

A

A software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester.

23
Q

White-box Testing

A

A software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.

24
Q

Grey-box Testing

A

A software testing technique used to test a software product or application with partial knowledge of its internal structure. The purpose of grey-box testing is to identify defects caused by improper code structure or improper use of the application.

25
Q

Which test cases are written first, black-box or white box?

A

Black-box test cases are written first.

Black-box test cases are based on user requirements; white-box test cases are based on the detailed design.

26
Q

What is use-case testing?

A

A technique that helps us identify test cases that exercise the whole system on a transaction by transaction basis from start to finish.

27
Q

Equivalence Partitioning

A

Divides data into partitions (also known as equivalence classes) in such a way that all the members of a given partition are expected to be processed in the same way.
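
A minimal sketch, assuming a hypothetical rule that an age field accepts values from 18 to 60: the input range splits into one valid and two invalid partitions, and one representative value per partition is enough.

  # Hypothetical rule: an age field accepts 18 to 60 inclusive.
  def is_valid_age(age):
      return 18 <= age <= 60

  # One representative value per equivalence class; every member of a
  # partition is expected to be processed in the same way.
  partitions = {
      "invalid_below": 10,   # less than 18
      "valid": 35,           # 18 to 60
      "invalid_above": 70,   # greater than 60
  }

  for name, value in partitions.items():
      print(name, value, is_valid_age(value))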

28
Q

Requirements Traceability

A

The ability to connect requirements to other artifacts — such as different types of software tests or bugs.

29
Q

Bidirectional Traceability

A

The ability to trace forward (e.g., from requirement to test case) and backward (e.g., from test case to requirement).

30
Q

Requirement Traceability Matrix (RTM)

A

A document that maps and traces user requirements to test cases.
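
A simplified, hypothetical sketch of such a matrix; the requirement and test case IDs are made up for illustration.

  # Hypothetical RTM: each requirement maps to the test cases that cover it,
  # so coverage gaps are easy to spot.
  rtm = {
      "REQ-001 Login": ["TC-001", "TC-002"],
      "REQ-002 Password reset": ["TC-003"],
      "REQ-003 Logout": [],  # no test case yet -> coverage gap
  }

  for requirement, test_cases in rtm.items():
      status = ", ".join(test_cases) if test_cases else "NOT COVERED"
      print(requirement, "->", status)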

31
Q

Dynamic Testing

A

Testing that involves the execution of the component or system being tested.

32
Q

Static Testing

A

Does not involve the execution of the component or system being tested. It relies on the manual examination of work products (i.e., reviews) or tool-driven evaluation of the code or other work products (i.e., static analysis).

33
Q

Test Plan

A

A document describing the scope, approach, resources and schedule of intended test activities. It is a record of the test planning process.

34
Q

Master Test Plan

A

A test plan that typically addresses multiple test levels.

35
Q

Phase Test Plan

A

A test plan that typically addresses one test phase.

36
Q

Test Progress Report

A

The test report prepared during a test activity.

37
Q

Test Summary Report

A

The test report prepared at the end of a test activity.

38
Q

Which mistakes do testers tend to make?

A
  1. Failing to communicate
  2. Being afraid of asking questions
  3. Beginning testing before understanding the scope and requirements
  4. Writing poor defect reports
  5. Missing some requirements while writing test cases
  6. Not doing any planning
  7. Reporting false positives and false negatives
39
Q

If you reported a defect to a developer and he rejected it, what should you do?

A
  • Communicate with the developer [show an example]
  • Return to the work products (SRS, Product Backlog)
  • Ask the product owner
  • Check the test environment [repeat the steps on different environments]
  • Escalate the issue
  • Accept that it is not a defect
40
Q

What is the difference between an error, defect (bug), and failure?

A

A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code or in some other related work product.

If a defect in the code is executed, this may cause a failure, but not necessarily in all circumstances.

41
Q

What are the most important components of a defect (bug) report?

A

1. Title
2. Steps to reproduce
3. Expected result
4. Actual result
5. Priority
6. Screenshot or video
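
A hypothetical defect report with these components, sketched as structured data; every value is an illustrative assumption.

  # Hypothetical defect report; all values are made up for illustration.
  defect_report = {
      "title": "Login button unresponsive on the checkout page",
      "steps_to_reproduce": [
          "Add any item to the cart",
          "Go to the checkout page",
          "Click the Login button",
      ],
      "expected_result": "The login form opens",
      "actual_result": "Nothing happens; no form is displayed",
      "priority": "High",
      "attachments": ["screenshot_login_button.png"],
  }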

42
Q

Risk Based Testing (RBT)

A

A software testing approach in which testing is prioritized based on the probability and impact of risk.

43
Q

Risk-based testing steps:

A

1. Identify the risks
2. Analyze the risks
3. Prioritize the risks
4. Mitigate the risks

44
Q

Alpha and Beta Testing

A

Typically used by developers of commercial off-the-shelf (COTS) software who want to get feedback from potential or existing users, customers, and/or operators before the software product is put on the market.

45
Q

Alpha Testing

A

Performed at the developing organization’s site, not by the development team, but by potential or existing customers, and/or operators or an independent test team.

46
Q

Beta Testing

A

Performed by potential or existing customers, and/or operators at their own locations. Beta testing may come after alpha testing, or may occur without any preceding alpha testing having occurred.

47
Q

What are the benefits of test independence?

A

Independent testers are likely to recognize different kinds of failures compared to developers because of their different backgrounds, technical perspectives, and biases.

An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system.

Independent testers working for a vendor can report on the system under test in an honest and objective manner, without (political) pressure from the company that hired them.

48
Q

What are the potential drawbacks of test independence?

A
  • Isolation from the development team may lead to a lack of collaboration, delays in providing feedback, or an adversarial relationship with the development team
  • Developers may lose a sense of responsibility for quality
  • Independent testers may be seen as a bottleneck
  • Independent testers may lack some important information (e.g., about the test object)
49
Q

The purpose of a Test Technique

A

Helps in identifying test conditions, test cases, and test data.

50
Q

Testing Tools

A

A product that supports one or more test activities, from planning and requirements through creating a build, test execution, defect logging, and test analysis.

51
Q

What is random/monkey testing?

A

Data is generated randomly, often using a tool or automated mechanism.
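
A minimal sketch, assuming a hypothetical process_text function as the code under test; random strings are generated and fed in to check that nothing crashes.

  import random
  import string

  # Hypothetical function under test.
  def process_text(text):
      return text.strip().upper()

  # Feed randomly generated strings and check that no input causes a crash.
  for _ in range(100):
      length = random.randint(0, 50)
      data = "".join(random.choice(string.printable) for _ in range(length))
      process_text(data)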

52
Q

Negative Test

A

When you provide an invalid input and expect to receive an error.

53
Q

Positive Test

A

When you provide a valid input and expect some action to be completed in accordance with the specification.
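
A minimal sketch contrasting a positive and a negative test, assuming a hypothetical divide function as the code under test.

  # Hypothetical function under test: returns the quotient, raises on zero.
  def divide(a, b):
      if b == 0:
          raise ValueError("division by zero")
      return a / b

  # Positive test: valid input, the expected action completes per the spec.
  assert divide(10, 2) == 5

  # Negative test: invalid input, an error is expected.
  try:
      divide(10, 0)
  except ValueError:
      pass  # expected error received
  else:
      raise AssertionError("expected a ValueError for invalid input")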

54
Q

What is Decision Table Testing?

A

Used for testing systems for which the specification takes the form of rules or cause-effect combinations.
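
A minimal sketch of a decision table for hypothetical login rules, where the conditions are "valid username" and "valid password" and the action is the expected system response.

  # Hypothetical decision table: each rule maps a combination of conditions
  # to the expected action, and each rule becomes a test case.
  decision_table = {
      # (valid_username, valid_password): expected_action
      (True, True): "grant access",
      (True, False): "show error message",
      (False, True): "show error message",
      (False, False): "show error message",
  }

  for conditions, action in decision_table.items():
      print(conditions, "->", action)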

55
Q

What is the Waterfall Model?

A

Illustrates the software development process in a linear sequential flow.

In the waterfall model, the phases do not overlap.

56
Q

What is the V-Model?

A

An SDLC model where execution of processes happens in a sequential manner in a V-shape. It is also known as Verification and Validation model.

The V-Model is an extension of the waterfall model and is based on the association of a testing phase for each corresponding development stage.

57
Q

What are the best practices for writing test cases?

A

*Write test cases from the end-user's perspective.
*Write test steps in a simple way so that anyone can follow them easily.
*Make the test cases reusable.
*Set the priority.
*Provide a test case description, test data, expected result, precondition, and postcondition.
*Write invalid (negative) test cases along with valid ones.
*Follow proper naming conventions.
*Review the test cases regularly and update them if necessary.

58
Q

What is the Test Suite?

A

A test suite is a container for a set of tests that helps testers execute them and report the test execution status.

A test case can be added to multiple test suites.

59
Q

What is the Test Environment?

A

A setup of software and hardware on which the testing team executes test cases.

60
Q

Build

A

An executable file provided by the developers to the testing team for testing the application.

61
Q

Release

A

Installable software provided to end users after it has been certified by the testing team.

62
Q

What is the Test Data?

A

The data that is used by the testers to run the test cases.

63
Q

Quality Assurance

A

Typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved.

64
Q

Quality Control

A

Involves various activities, including test activities, that support the achievement of appropriate levels of quality.

65
Q

What is a show stopper?

A

A show stopper is a critical bug that leaves a large piece of functionality or a major system component completely broken, with no workaround to move forward.

66
Q

Big Bang integration

A

Combining all the modules at once and verifying the functionality after individual module testing is complete.

67
Q

Top-Down integration

A

Testing takes place from top to bottom.

68
Q

Bottom-Up integration

A

The reverse of the top-down approach: testing takes place from the bottom up.

69
Q

Smoke Testing

A

Done to make sure the build received from the development team is testable.

70
Q

Sanity Testing

A

Done during the release phase to check the main functionalities of the application without going deeper.

71
Q

What is Ad-Hoc Testing?

A

Testers randomly test the application without following any documents or test design techniques.

72
Q

Localization Testing (l10n)

A

The software testing process of checking the localized version of a product against particular culture or locale settings.

73
Q

Globalization/Internationalization Testing (i18n)

A

A software testing method used to ensure that the software application can function in any culture or locale (language, territory or code page) by testing the software functionalities using each type of international input possible.

74
Q

Defect Priority

A

The order in which a defect should be fixed.

75
Q

Defect Severity

A

The degree of impact a bug or defect has on the software application under test.