QA questions Flashcards

1
Q

Black Box Testing vs. White Box Testing vs. Grey Box Testing

A
  • Black Box Testing. A technique where the tester has no knowledge of the application's internal workings and focuses on inputs, outputs, and external aspects of the system, such as user interfaces and APIs. Testing is based on specifications and requirements, and it is primarily used for validation. It is appropriate when evaluating the system from an end-user perspective, as in functional, system, and acceptance testing. Example: testing a website's login functionality by entering valid and invalid credentials to ensure the login process works as expected.
  • White Box Testing. Also known as clear or open box testing, it requires knowledge of the application's internal structures and is primarily used for verification. Testers have access to the source code and use it to design test cases covering internal structures, code paths, the flow of inputs and outputs, error handling, and certain security and performance characteristics. It is applied mainly at the unit testing level but can also be used at integration and system levels. Example: testing a function whose internal logic the tester knows, verifying that it behaves as expected.
  • Grey Box Testing. Combines elements of black box and white box testing: testers have partial knowledge of the application's internal workings. Example: testing a banking application where the tester uses access to the database structure to design test cases verifying data integrity, while also testing the user interface against requirements.
2
Q

Test Plan

A

A test plan is a document detailing the scope, approach, resources, and schedule of intended testing activities. It should specify what is to be tested, how testing will be performed, and how results will be evaluated. It ensures that everyone involved is on the same page and helps in organizing the testing process.
Components of a comprehensive test plan include:
- Introduction
- Objectives and Test Goals
- Test Scope
- Testing Approach/Strategy
- Risk Analysis
- Resources (Personnel, Software, Hardware)
- Schedule and Milestones
- Entry and Exit Criteria
- Deliverables
- Dependencies
- Tools and Environment
- Communication Plan

3
Q

Importance of test automation in a CI/CD pipeline

A

Test automation is crucial in CI/CD pipelines because it allows rapid, consistent testing of code changes, ensuring that new features and patches do not break existing functionality. It enables continuous integration by automatically running tests on every change, and it supports continuous delivery/deployment by requiring all automated tests to pass before changes move to the next pipeline stage or are deployed to production, reducing manual effort and speeding up delivery.

4
Q

Boundary Value Analysis

A

Boundary value analysis (BVA) is a black box testing technique used to find errors at the boundaries of input ranges rather than within them. For an input field accepting numbers from 1 to 100, you would test at the boundaries 0, 1, 100, and 101 to check for off-by-one errors and ensure the application properly handles inputs at and just outside the acceptable range.
BVA is significant because it helps identify errors and bugs that may not be apparent with other types of testing, and it can be more efficient than exhaustively testing all inputs.
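
As an illustration, a minimal pytest sketch of the 1-to-100 example above (`accept_value` is a hypothetical stand-in for the validator under test):

```python
import pytest

def accept_value(n: int) -> bool:
    """Hypothetical stand-in for the input validator under test:
    accepts integers from 1 to 100 inclusive."""
    return 1 <= n <= 100

# Boundary value analysis: test at, just below, and just above each boundary.
@pytest.mark.parametrize("value, expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
])
def test_boundary_values(value, expected):
    assert accept_value(value) == expected
```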

5
Q

Testing a RESTful API

A

Approaching RESTful API testing involves several steps:

  • Understanding the API documentation, to get a clear idea of the available endpoints, methods (GET, POST, PUT, DELETE), request parameters, and response formats.
  • Testing HTTP status codes. For each endpoint, ensuring that successful calls return codes like 200 (OK) or 201 (Created) and that failures return appropriate error codes like 400 (Bad Request), 401 (Unauthorized), or 404 (Not Found).
  • Testing validation. Ensuring that all required fields are validated and that responses correctly handle missing or incorrect data.
  • Security testing. Verifying authentication and authorization mechanisms, checking for SQL injection, and testing access controls to make sure data is protected.
  • Performance testing. Assessing how the API behaves under load, including its response times and how it scales with increased concurrent requests.

In summary, testing should cover functional correctness, reliability, performance benchmarks, and security vulnerabilities of the API.

When testing an API, I would focus on the correctness of the response data, status codes for different scenarios, error handling (ensuring appropriate error messages are returned for invalid requests), performance under load, and security aspects (such as authentication and data privacy). For validating responses, I would use assertions in my test cases to check status codes, response payload, response headers, and the response time to ensure they meet the expected outcomes.
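
As an illustration, a minimal sketch of such assertions using Python's `requests` library (the base URL, endpoints, field names, and thresholds are hypothetical examples):

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_get_user_succeeds_with_expected_payload():
    response = requests.get(f"{BASE_URL}/users/1", timeout=5)
    # Status code
    assert response.status_code == 200
    # Response headers
    assert response.headers["Content-Type"].startswith("application/json")
    # Response payload (field names are assumptions for this sketch)
    body = response.json()
    assert body["id"] == 1
    assert "email" in body
    # Response time (threshold is an example value)
    assert response.elapsed.total_seconds() < 1.0

def test_create_user_without_required_fields_returns_400():
    response = requests.post(f"{BASE_URL}/users", json={}, timeout=5)
    assert response.status_code == 400
```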

6
Q

Defect Clustering and Pareto Principle

A
  • Defect Clustering suggests that in any given system, a small number of modules contain most of the defects detected.
  • The Pareto Principle (also known as the 80/20 rule) states that roughly 80% of the effects come from 20% of the causes. In QA, this means focusing on the critical modules or components likely to contain the majority of defects can greatly improve testing efficiency.
7
Q

Regression Testing vs. Re-testing

A
  • Regression Testing is performed to ensure that a recent program or code change has not adversely affected existing features. It is systematic and not tied to any specific defect fix.
  • Re-testing verifies that specific defect fixes work as intended. It is directly concerned with validating the actions taken to correct defects.
  • Example: if a bug was fixed in the login feature, re-testing would check the login feature where the defect was fixed, while regression testing would check the login feature as well as other functionality that could be affected by the changes made to fix the login issue.
8
Q

Test Pyramid

A
  • The Test Pyramid is a concept that suggests how to prioritize testing activities. It is divided into three layers; from the bottom, they are Unit Tests, Service Tests, and UI Tests.
  • The base, and the largest number of tests, should be Unit Tests, which are quick to execute and catch many fundamental errors early. Service Tests (or Integration Tests) sit in the middle and ensure that different parts of the application work together correctly. The top of the pyramid is UI Tests, which are fewest because they are the most expensive to maintain and run.
  • In Agile development, the Test Pyramid guides teams to allocate more effort toward the lower levels (Unit and Integration Tests) to ensure a robust foundation for the application, improving both the quality and the speed of development.
9
Q

Regression Testing

A

This type of testing is performed to ensure that new code changes do not adversely affect the existing functionality of the product. To prioritize test cases for regression testing, I would analyze the areas of the application that have undergone recent changes, areas with high complexity, features with the highest user traffic, and components that have had frequent issues in the past. High-risk areas would be tested first.

10
Q

API Testing Approaches

A

Efficient testing of an API involves several types of testing, including:
- Functional testing to validate the output based on the input conditions.
- Load testing to verify performance under expected and peak load conditions.
- Security testing to ensure the API is secure from external threats.
- Usability and documentation testing to check if the API is user-friendly and well-documented for integration.
Tools like Postman or Swagger can be used for executing these tests.

11
Q

Test Automation in Agile Development

A

Test automation is crucial in Agile due to the fast-paced nature of iterations and frequent code changes. Prioritizing tests for automation involves identifying tests that are repetitive, have a high risk of human error, or are time-consuming. Tests that validate core features of the application and have stable requirements are usually automated first to ensure a quick feedback loop.

12
Q

Handling Critical Bugs Late in the Development Cycle

A
  • Identifying a critical bug late in the cycle is a challenging scenario. My approach would be to immediately document the bug in detail and communicate it to the development team and project stakeholders, prioritizing the issue based on its impact.
  • I would then meet with the development team and stakeholders to discuss the severity, potential impacts, and possible solutions or workarounds. Depending on the situation, this might mean hotfixing the issue or delaying the release to ensure quality.
  • To prevent future occurrences, I would conduct a root cause analysis to understand how the bug was missed and improve the testing strategy accordingly, for example by enhancing test cases, increasing test coverage in the affected areas, or adjusting the testing process.
13
Q

Performance testing under peak load conditions

A

I would start by analyzing the application architecture to understand its components and how they interact under load.
Next, I would identify key performance metrics such as response time, throughput, and resource utilization.
Then, I would design and execute performance tests using tools like JMeter or Gatling to simulate peak load conditions.
During testing, I would monitor the performance metrics and identify any bottlenecks such as database queries taking too long or server overload.
Once bottlenecks are identified, I would prioritize them based on their impact and work with the development team to implement optimizations.
Finally, I would rerun the tests to validate the effectiveness of the optimizations and ensure the application’s performance meets requirements under peak load conditions.
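
JMeter and Gatling define load scenarios in their own formats; as a comparable Python sketch, here is a minimal scenario using Locust, a Python-based load-testing tool (endpoints, task weights, and user counts are hypothetical):

```python
from locust import HttpUser, task, between

class PeakLoadUser(HttpUser):
    """Simulates one virtual user; Locust spawns many concurrently."""
    wait_time = between(1, 3)  # think time in seconds between tasks

    @task(3)  # weight: browsing runs roughly 3x as often as checkout
    def browse_catalog(self):
        self.client.get("/products")  # hypothetical endpoint

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"cart_id": 42})  # hypothetical

# Example invocation simulating 500 concurrent users against staging:
#   locust -f loadtest.py --host https://staging.example.com \
#          --users 500 --spawn-rate 50 --headless --run-time 10m
```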

14
Q

Test oracle in software testing

A

A test oracle is a mechanism used to determine whether a software application behaves correctly; it provides the means to verify the correctness of the output produced by the software under test.
Test oracles are crucial because they provide a baseline for comparison between the expected and actual outcomes of a test.
An example scenario where the absence of a clear test oracle could pose challenges is in testing machine learning algorithms. Without a clear understanding of the expected output for a given input, it can be difficult to determine whether the algorithm’s predictions are accurate or not.
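
One common form of oracle is a trusted reference implementation. A minimal sketch (`fast_sort` is a hypothetical system under test, with Python's built-in `sorted()` serving as the oracle):

```python
import random

def fast_sort(items):
    """Hypothetical optimized sort under test (stand-in implementation)."""
    return sorted(items)

def test_fast_sort_against_reference_oracle():
    # The built-in sorted() acts as the test oracle: a trusted source
    # of the expected output for any generated input.
    for _ in range(100):
        data = [random.randint(-1000, 1000) for _ in range(50)]
        assert fast_sort(data) == sorted(data)
```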

15
Q

Automation of regression test suite for a web application with frequent UI changes

A

I would prioritize test cases based on their criticality and frequency of use.
I would focus on creating robust and maintainable automated tests using tools like Selenium WebDriver.
To handle frequent UI changes, I would use locators that are less likely to change, such as IDs or CSS selectors, and minimize reliance on XPath.
I would also implement techniques like the Page Object Model to abstract UI changes away from test logic and make tests more resilient.
Regular maintenance and review of the automated test suite would be crucial to ensure it remains effective and sustainable over time.
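
For instance, a minimal Page Object Model sketch with Selenium WebDriver in Python (the URL, locators, and credentials are hypothetical). Because all locators live in the page object, a UI change means updating one class rather than every test:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object: all login-page locators live here, in one place."""
    URL = "https://example.com/login"  # hypothetical URL
    USERNAME = (By.ID, "username")     # stable ID locators preferred over XPath
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def load(self):
        self.driver.get(self.URL)

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

def test_valid_login():
    driver = webdriver.Chrome()
    try:
        page = LoginPage(driver)
        page.load()
        page.login("qa_user", "s3cret")  # hypothetical credentials
        assert "dashboard" in driver.current_url
    finally:
        driver.quit()
```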

16
Q

Continuous Integration (CI) and its benefits

A

CI is a development practice where developers integrate code into a shared repository frequently, preferably several times a day. Each integration is verified by an automated build and automated tests.
CI benefits the QA process by providing rapid feedback on the quality of code changes, reducing the risk of integration issues, and enabling early detection of defects.
An example of integrating automated tests into a CI pipeline could be using Jenkins as the CI server. I would configure Jenkins to trigger automated tests whenever new code is pushed to the repository. The test results would be reported back to the development team, allowing them to quickly address any issues. Additionally, I would set up notifications to alert the team if any tests fail, ensuring timely resolution of issues.

17
Q

Test Case

A

A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly. The components of a good test case include:
- Test Case ID
- Test Description
- Pre-Conditions
- Test Steps
- Expected Result
- Actual Result
- Pass/Fail Status
- Post-Conditions
- Created By and Date
- Comments or Special Instructions

18
Q

Defect life cycle

A

The Defect Life Cycle, also known as Bug Life Cycle, is the journey of a defect from its identification to its closure. The typical stages include:
- New: When a defect is found and reported.
- Assigned: The defect is assigned to a developer to fix.
- Open: The developer starts analyzing and working on the defect.
- Fixed: The developer has fixed the defect and it’s ready for re-testing.
- Test: The defect is re-tested by the QA team.
- Verified: The QA team confirms that the defect is fixed.
- Closed: The defect is closed if fixed or otherwise moved to another state (like Deferred, Reopened, etc.) depending on the situation.
- Reopened: If the issue still exists, the defect is reopened and the cycle continues.

19
Q

Test automation framework

A

A test automation framework is a set of guidelines, coding standards, concepts, processes, practices, and tools that can help in automating software testing processes. Types of test automation frameworks include:
- Linear Scripting Framework
- Modular Testing Framework
- Keyword-Driven Framework
- Data-Driven Framework
- Hybrid Framework
- Behavior Driven Development Framework

For instance, the Data-Driven Framework allows testers to store test data (inputs and expected outcomes) in an external database or file (like Excel, XML, or databases). This data is then dynamically read and fed into the test scripts as required, separating the test script logic from the test data. This allows for easy maintenance and efficient management of data and scripts, making it possible to execute the same script with different sets of data.
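
As an illustration, a minimal data-driven sketch using pytest, assuming a hypothetical login_cases.csv with username, password, and expected columns alongside the test (`attempt_login` is a stand-in for the system under test):

```python
import csv
from collections import namedtuple

import pytest

LoginResult = namedtuple("LoginResult", "success")

def attempt_login(username, password):
    """Hypothetical stand-in for the system under test."""
    return LoginResult(success=(password == "correct-password"))

def load_cases(path="login_cases.csv"):
    """Read (username, password, expected) rows from the external data file."""
    with open(path, newline="") as f:
        return [(row["username"], row["password"], row["expected"] == "ok")
                for row in csv.DictReader(f)]

# The same test logic runs once per data row; extending coverage means
# adding rows to the CSV, not changing the script.
@pytest.mark.parametrize("username, password, should_succeed", load_cases())
def test_login(username, password, should_succeed):
    assert attempt_login(username, password).success == should_succeed
```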

20
Q

Verification vs. Validation

A
  • Verification is the process of evaluating the work products of a development phase to ensure they meet the requirements specified for that phase. It is about ensuring the product is being built correctly, without directly observing the final product (e.g., reviewing code or design documents).
  • Validation is the process of evaluating the final product to check whether it meets the business needs and requirements. It is concerned with whether the right product has been built.
21
Q

Risk-Based Testing Approach

A
  • Risk-Based Testing involves prioritizing the testing of features, modules, or functionalities based on the risk of failure and the impact of that failure on the system and users. Factors to consider when identifying risks include:
    • The complexity of the feature
    • Historical defect trends
    • The criticality of the feature to the business or its users
    • Changes in requirements or the development environment
    • External dependencies
Based on these factors, testers can allocate resources and plan their testing efforts to minimize the biggest risks to project success.
22
Q

New feature test design

A

When designing test cases for a new feature, I would first understand the feature’s requirements, objectives, and user stories. I would consider the user’s perspective, edge cases, potential security vulnerabilities, and compatibility issues (across different devices, platforms, or browsers as applicable). I would prioritize tests based on the feature’s complexity, criticality to the application, and use case frequency. Exploratory testing might also be employed to uncover unexpected issues before defining more structured test cases.

23
Q

Bug report

A

An effective bug report should include:
- A clear, descriptive title
- Steps to reproduce the issue
- Expected vs. actual results
- The environment in which the bug was found (e.g., device, OS version, browser)
- Severity and/or priority of the issue
- Screenshots or videos, if applicable
- Any relevant log files or error messages
This structure helps developers quickly understand the issue’s context and its impact, facilitating quicker resolutions.

24
Q

Test Automation pros and cons

A

Advantages of automated testing include faster execution of tests, higher test coverage in shorter periods, repeatability, and reduction of human error. Disadvantages include the initial setup time, the need for maintenance of test scripts, and the possibility of overlooking user experience issues that a manual tester might catch. Automated testing might not be effective in scenarios where the application’s UI is frequently changing, or for tests that require subjective judgment, such as design or user experience considerations.

25
Q

Performance testing

A

To plan and execute a performance test for a web application, I would first define the test objectives (e.g., throughput, response times, concurrent users). I would then select appropriate tools and set up a test environment that mimics the production environment as closely as possible. Important metrics to measure would include response time, throughput (transactions per second), resource utilization (CPU, memory, disk I/O, network I/O), and error rates. These metrics help identify bottlenecks and ensure that the application can handle the anticipated load while maintaining acceptable performance levels.

26
Q

Testing Without Documentation

A
  • Without documentation, a tester must rely on exploratory testing techniques, which involve experimenting with the software to discover its functionalities and uncover any defects. Other strategies might include:
    • Use case testing: Inferring possible use cases based on the application’s context and UI.
    • Communication with the development team to understand the intended functionality and known issues.
    • Incremental testing: building familiarity with the application over time, starting from basic functionalities and moving to more complex ones.
27
Q

CI/CD Pipeline for Testing

A

Setting up an effective CI/CD pipeline involves automating the build, test, and deployment phases. For testing, this means automating the execution of test suites at every significant pipeline stage (e.g., post-commit, pre-deployment). Key considerations include choosing the right tools that integrate with the pipeline (like Jenkins, Travis CI), ensuring test environments are stable and consistent, and managing test data effectively. Automated tests should be categorized (unit, integration, system) and run in a logical sequence, with mechanisms to report and act on failures promptly, ensuring that only well-tested, quality code progresses down the pipeline.
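
As one way to express that categorization, a pytest sketch using markers so each pipeline stage can select only its test category (marker names and functions are illustrative):

```python
# Register the marker names in pytest.ini so pytest does not warn:
#
# [pytest]
# markers =
#     unit: fast, isolated tests (run post-commit)
#     integration: tests crossing component boundaries (run pre-merge)
#     system: end-to-end tests (run pre-deployment)

import pytest

def calculate_total(prices):
    """Hypothetical function under test."""
    return sum(prices)

@pytest.mark.unit
def test_total_price():
    assert calculate_total([10, 5]) == 15

@pytest.mark.integration
def test_order_service_updates_inventory():
    ...  # placeholder: would exercise two real components together

# Each pipeline stage then runs only its category, e.g.:
#   pytest -m unit          # post-commit
#   pytest -m integration   # pre-merge
#   pytest -m system        # pre-deployment
```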