QA Questions Flashcards
Black Box Testing vs. White Box Testing vs. Grey Box Testing
- Black Box Testing. A technique in which the QA tester has no knowledge of the internal workings of the application; testing focuses on inputs and outputs and exercises external aspects of the system, such as user interfaces and APIs. Test cases are based on specifications and requirements, and the technique is primarily used for validation. It is appropriate when the tester wants to evaluate the system from an end-user perspective, such as during acceptance testing, and it is commonly applied in functional, system, and acceptance testing. An example scenario could be testing a website’s login functionality, where the tester inputs valid and invalid credentials to ensure the login process works as expected.
- White Box Testing. Also known as clear box or open box testing, it requires knowledge of the internal structure of the application and is used for verification. Testers have access to the source code and use that knowledge to design test cases that exercise internal structures, execution paths, and the functional flow of the application, including the flow of inputs and outputs, error handling, and security issues. It is mainly applied at the unit testing level but can also be used at the integration and system levels, and it includes certain types of security and performance evaluation. An example scenario could be testing a function where the tester knows the internal logic and verifies that every path behaves as expected (see the sketch after this list).
- Grey Box Testing. This method combines elements of both black box and white box testing: testers have partial knowledge of the internal workings of the application. An example scenario could be testing a banking application where the tester uses access to the database structure to design test cases verifying data integrity, while also testing user interfaces against requirements.
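A minimal white box sketch in Python, assuming a hypothetical `classify_age` function: because the tester can read the implementation, the tests are written to cover every internal branch.

```python
import unittest

def classify_age(age: int) -> str:
    """Hypothetical function under test; its branches are known to the tester."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

class ClassifyAgeWhiteBoxTest(unittest.TestCase):
    # One test per internal branch, derived from reading the source.
    def test_negative_age_raises(self):
        with self.assertRaises(ValueError):
            classify_age(-1)

    def test_under_eighteen_is_minor(self):
        self.assertEqual(classify_age(17), "minor")

    def test_eighteen_and_over_is_adult(self):
        self.assertEqual(classify_age(18), "adult")

if __name__ == "__main__":
    unittest.main()
```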
Test Plan
A test plan is a document detailing the scope, approach, resources, and schedule of intended testing activities. It should specify what is to be tested, how testing will be performed, and how results will be evaluated. It ensures that everyone involved is on the same page and helps in organizing the testing process.
- Components of a comprehensive test plan should include:
- Introduction
- Objectives and Test Goals
- Test Scope
- Testing Approach/Strategy
- Risk Analysis
- Resources (Personnel, Software, Hardware)
- Schedule and Milestones
- Entry and Exit Criteria
- Deliverables
- Dependencies
- Tools and Environment
- Communication Plan
Importance of test automation in a CI/CD pipeline
Test automation is crucial in CI/CD pipelines because it allows rapid, consistent testing of code changes, ensuring that new features and patches do not break existing functionality. It enables continuous integration by automatically running tests every time a change is made, and it supports continuous delivery/deployment by requiring all automated tests to pass before changes move to the next stage of the pipeline or are deployed to production. This reduces manual effort and speeds up the delivery process.
Boundary Value Analysis
Boundary value analysis (BVA) is a black box testing technique that targets errors at the boundaries of input ranges rather than within them. For an input field accepting numbers from 1 to 100, you would test the boundary values 0, 1, 100, and 101 to check for off-by-one errors and to ensure the application properly handles inputs at and just outside the acceptable range.
BVA is significant because it helps identify errors and bugs that may not surface with other types of testing, and it can be far more efficient than exhaustively testing all inputs.
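A minimal BVA sketch in Python with pytest, assuming a hypothetical `is_valid_quantity` validator for the 1–100 field above; the parametrized cases cover both boundaries and the values just outside them.

```python
import pytest

def is_valid_quantity(value: int) -> bool:
    """Hypothetical validator: accepts integers from 1 to 100 inclusive."""
    return 1 <= value <= 100

# Boundary values: just below, at, and just above each edge of the range.
@pytest.mark.parametrize("value, expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
])
def test_quantity_boundaries(value, expected):
    assert is_valid_quantity(value) == expected
```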
Testing a RESTful API
Approaching RESTful API testing involves several steps:
- Understanding the API documentation, to get a clear idea of the available endpoints, methods (GET, POST, PUT, DELETE), request parameters, and response formats.
- Testing HTTP status codes. For each endpoint, ensuring that successful calls return codes like 200 (OK) or 201 (Created) and that errors return appropriate codes like 400 (Bad Request), 401 (Unauthorized), or 404 (Not Found).
- Testing validation. Ensuring that all required fields are validated and that responses correctly handle missing or incorrect data.
- Security testing. Verifying authentication and authorization mechanisms, checking for SQL injection, and testing access controls to make sure data is protected.
- Performance testing. Assessing how the API behaves under load, including its response times and how it scales with increased concurrent requests.
In summary, testing should cover functional correctness, reliability, performance benchmarks, and security vulnerabilities of the API.
When testing an API, I would focus on the correctness of the response data, status codes for different scenarios, error handling (ensuring appropriate error messages are returned for invalid requests), performance under load, and security aspects (such as authentication and data privacy). For validating responses, I would use assertions in my test cases to check status codes, response payload, response headers, and the response time to ensure they meet the expected outcomes.
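A minimal sketch of such response assertions in Python using requests with pytest-style tests; the https://api.example.com endpoint, field names, and thresholds are hypothetical.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_get_user_returns_expected_response():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)

    # Status code: a successful read should return 200 (OK).
    assert resp.status_code == 200

    # Headers: the API should declare a JSON content type.
    assert resp.headers["Content-Type"].startswith("application/json")

    # Payload: required fields are present and correctly typed.
    body = resp.json()
    assert body["id"] == 42
    assert isinstance(body["email"], str)

    # Response time: a simple performance guard (threshold is illustrative).
    assert resp.elapsed.total_seconds() < 1.0

def test_missing_user_returns_404():
    resp = requests.get(f"{BASE_URL}/users/does-not-exist", timeout=5)
    assert resp.status_code == 404
```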
Defect Clustering and Pareto Principle
- Defect Clustering suggests that in any given system, a small number of modules contain most of the defects detected.
- The Pareto Principle (also known as the 80/20 rule) applies when roughly 80% of the effects come from 20% of the causes. In QA, this means focusing on the critical modules or components that might contain the majority of defects can greatly improve testing efficiency since those areas are likely to have the highest number of issues.
Regression Testing vs. Re-testing
- Regression Testing is performed to ensure that a recent program or code change has not adversely affected existing features. It is systematic and is not tied to any specific defect fix.
- Re-testing verifies that specific defect fixes work as intended. It is directly concerned with validating the actions taken to correct defects.
- Example: If a bug was fixed in the login feature, re-testing would involve checking the login feature where the defect was fixed, while regression testing would mean checking the login feature as well as other functionalities that could potentially be affected by the changes made to fix the login issue.
Test Pyramid
- The Test Pyramid is a concept that suggests how to prioritize testing activities. It is divided into three layers - from the bottom, they are Unit Tests, Service Tests, and UI Tests.
- The base, and the largest share of tests, should be Unit Tests, which are quick to execute and catch many fundamental errors early on. Service Tests (or Integration Tests) sit in the middle and ensure that different parts of the application work together correctly. The top of the pyramid is UI Tests, which are fewest because they are the most expensive to maintain and run.
- In Agile development, the Test Pyramid guides teams to allocate more efforts towards the lower levels of the pyramid (Unit and Integration Tests) to ensure a robust foundation for the application, improving both the quality and the speed of development.
Regression Testing
This type of testing is performed to ensure that new code changes do not adversely affect the existing functionality of the product. To prioritize test cases for regression testing, I would analyze the areas of the application that have undergone recent changes, areas with high complexity, features with the highest user traffic, and components that have had frequent issues in the past. High-risk areas would be tested first.
API Testing Approaches
Efficient testing of an API involves several types of testing, including:
- Functional testing to validate the output based on the input conditions.
- Load testing to verify performance under expected and peak load conditions.
- Security testing to ensure the API is secure from external threats.
- Usability and documentation testing to check if the API is user-friendly and well-documented for integration.
Tools like Postman or Swagger can be used for executing these tests.
Test Automation in Agile Development
Test automation is crucial in Agile due to the fast-paced nature of iterations and frequent code changes. Prioritizing tests for automation involves identifying tests that are repetitive, have a high risk of human error, or are time-consuming. Tests that validate core features of the application and have stable requirements are usually automated first to ensure a quick feedback loop.
Handling Critical Bugs Late in the Development Cycle
- Identifying a critical bug late in the cycle is a challenging scenario. My approach would be to immediately document the bug in detail and communicate it to the development team and project stakeholders, prioritizing the issue based on its impact.
- I would then have a meeting to discuss the severity, potential impacts, and possible solutions or workarounds with the development team and stakeholders. Depending on the situation, it might involve hotfixing the issue or delaying the release to ensure quality.
- To prevent future occurrences, a root cause analysis would be conducted to understand how the bug was missed and to improve the testing strategy accordingly. This might include enhancing test cases, increasing test coverage in the affected areas, or adjusting the testing process.
Performance testing under peak load conditions
I would start by analyzing the application architecture to understand its components and how they interact under load.
Next, I would identify key performance metrics such as response time, throughput, and resource utilization.
Then, I would design and execute performance tests using tools like JMeter or Gatling to simulate peak load conditions.
During testing, I would monitor the performance metrics and identify any bottlenecks such as database queries taking too long or server overload.
Once bottlenecks are identified, I would prioritize them based on their impact and work with the development team to implement optimizations.
Finally, I would rerun the tests to validate the effectiveness of the optimizations and ensure the application’s performance meets requirements under peak load conditions.
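A minimal load-simulation sketch in Python using the standard library and requests, as a lightweight stand-in for tools like JMeter or Gatling; the target URL, user count, and latency budget are assumptions.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://app.example.com/health"  # hypothetical endpoint under test
CONCURRENT_USERS = 50                          # simulated peak concurrency
REQUESTS_PER_USER = 20

def one_user(_):
    """Issue a burst of requests and record each response time in seconds."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        resp = requests.get(TARGET_URL, timeout=10)
        latencies.append(time.perf_counter() - start)
        assert resp.status_code == 200
    return latencies

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_latencies = [t for user in pool.map(one_user, range(CONCURRENT_USERS))
                     for t in user]

# Key metrics: report mean and 95th-percentile latency across all requests.
p95 = statistics.quantiles(all_latencies, n=100)[94]
print(f"requests: {len(all_latencies)}, "
      f"mean: {statistics.mean(all_latencies):.3f}s, p95: {p95:.3f}s")

# Illustrative acceptance criterion for peak load.
assert p95 < 0.5, "95th percentile latency exceeds the 500 ms budget"
```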
Test oracle in software testing
A test oracle is a mechanism used to determine whether a software application behaves correctly or not. It helps in verifying the correctness of the output produced by the software under test.
Test oracles are crucial because they provide a baseline for comparison between the expected and actual outcomes of a test.
An example scenario where the absence of a clear test oracle could pose challenges is in testing machine learning algorithms. Without a clear understanding of the expected output for a given input, it can be difficult to determine whether the algorithm’s predictions are accurate or not.
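A minimal sketch of a test oracle in Python: a trusted reference implementation (here, the built-in sorted) serves as the oracle for a hypothetical custom sort, so randomly generated inputs can be checked without hand-written expected outputs.

```python
import random

def my_sort(items):
    """Hypothetical implementation under test (a simple insertion sort)."""
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] < item:
            i += 1
        result.insert(i, item)
    return result

# Oracle: sorted() defines the expected output for any input.
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert my_sort(data) == sorted(data), f"oracle mismatch for {data}"
print("all oracle checks passed")
```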
Automation of regression test suite for a web application with frequent UI changes
I would prioritize test cases based on their criticality and frequency of use.
I would focus on creating robust and maintainable automated tests using tools like Selenium WebDriver.
To handle frequent UI changes, I would use locators that are less likely to change, such as IDs or CSS selectors, and minimize reliance on XPath.
I would also implement techniques like the Page Object Model to abstract UI changes and make tests more resilient.
Regular maintenance and review of the automated test suite would be crucial to ensure it remains effective and sustainable over time.
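A minimal Page Object Model sketch in Python with Selenium WebDriver; the page URL, element IDs, and post-login check are hypothetical. Locators live in one place, so a UI change means updating the page object rather than every test.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object: encapsulates locators and actions for the login screen."""
    URL = "https://app.example.com/login"  # hypothetical URL

    # Stable locators (IDs preferred over brittle XPath).
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "login-button")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

def test_valid_login():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("qa_user", "s3cret")
        # Assumed post-login check: the dashboard URL is reached.
        assert "/dashboard" in driver.current_url
    finally:
        driver.quit()
```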