STLC & Functional Testing Flashcards
Interview Questions
What are the phases in STLC?
STLC - Software Testing Life Cycle:
- Requirement Analysis: analyze requirements using the SMART and INVEST models.
- Test Planning: identify test cases and the test data needed; the test lead delegates tasks.
- Test Case Development: create test cases in JIRA.
- Test Environment Setup: Dev, Test, UAT, Staging, Demo, Production.
- Test Execution: execute tests and report defects.
- Test Closure.
What Testing Techniques do you use?
- Decision Table.
- Boundary Value Analysis.
- Equivalence Class Partitioning.
- State Transition Diagram.
What is a Decision Table testing technique?
A Decision Table is a testing technique used when a feature's behavior depends on combinations of multiple conditions or inputs (for example, a form with several required fields).
A decision table lets the tester map every combination of conditions to its expected outcome, so all possible test cases for the feature are covered.
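As a minimal sketch, here is how such a table could be expressed as a parametrized pytest test; the login form, the attempt_login stub, and the rule outcomes are all hypothetical:

```python
import pytest

def attempt_login(valid_email: bool, valid_password: bool) -> str:
    """Stand-in for the system under test (hypothetical)."""
    if not valid_email:
        return "email error"
    if not valid_password:
        return "password error"
    return "login succeeds"

# Decision table: every combination of conditions is mapped to its
# expected outcome, so no combination is missed.
DECISION_TABLE = [
    # (valid_email, valid_password, expected outcome)   rule
    (True,  True,  "login succeeds"),                 # R1
    (True,  False, "password error"),                 # R2
    (False, True,  "email error"),                    # R3
    (False, False, "email error"),                    # R4
]

@pytest.mark.parametrize("valid_email,valid_password,expected", DECISION_TABLE)
def test_login_decision_table(valid_email, valid_password, expected):
    assert attempt_login(valid_email, valid_password) == expected
```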
What is a Boundary Value Analysis testing technique?
Boundary Value Analysis is a testing technique that minimizes the number of inputs to test, without sacrificing the quality of testing, by focusing on values at the edges of valid ranges.
Example: a field accepts 1 to 50 characters (min = 1, max = 50). Test:
1) 0 (min - 1)
2) 1 (min)
3) 25 (valid)
4) 50 (max)
5) 51 (max + 1)
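A minimal pytest sketch of these five boundary checks, assuming a hypothetical is_valid_length validator for the 1-50 character field:

```python
import pytest

def is_valid_length(value: str) -> bool:
    """Stand-in validator for a 1-50 character field (hypothetical)."""
    return 1 <= len(value) <= 50

# The five boundary values from the example above: min - 1, min, nominal, max, max + 1.
@pytest.mark.parametrize("length,expected", [
    (0,  False),  # min - 1: just below the lower boundary
    (1,  True),   # min
    (25, True),   # nominal valid value
    (50, True),   # max
    (51, False),  # max + 1: just above the upper boundary
])
def test_field_length_boundaries(length, expected):
    assert is_valid_length("x" * length) == expected
```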
What is an Equivalence Class Partitioning testing technique?
Equivalence Class Partitioning is a testing technique that divides the input domain into classes (partitions) whose values are expected to behave the same way, so the tester only needs to test one representative value from each class.
Example: CALCULATOR ⇒ operands are limited to 10 digits (min 1, max 10). Representative digit counts: 1 (min), 7 (valid), 10 (max), 11 (invalid). Applied to each operation:
- Addition - 5 tests
- Subtraction - 5 tests
- Multiplication - 5 tests
- Division - 5 tests
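A minimal sketch of the partitioning in pytest, assuming a hypothetical accepts_operand check for the calculator's digit limit; one representative value stands in for each class:

```python
import pytest

def accepts_operand(digits: int) -> bool:
    """Stand-in check for the calculator's 1-10 digit operand rule (hypothetical)."""
    return 1 <= digits <= 10

# One representative per equivalence class instead of every possible length:
# the valid class (1-10 digits) and the invalid class (11+ digits).
@pytest.mark.parametrize("digits,expected", [
    (1,  True),   # valid class, lower edge
    (7,  True),   # valid class, middle representative
    (10, True),   # valid class, upper edge
    (11, False),  # invalid class: too many digits
])
def test_operand_digit_classes(digits, expected):
    assert accepts_operand(digits) == expected
```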
What is a State Transition Diagram testing technique?
State Transition Diagram is a technique used when an application moves between distinct states and the same action can produce different results depending on the current state; tests cover the valid and invalid transitions between those states.
Example: after 3 invalid login attempts in a row, the account is locked for 24 hrs.
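A minimal sketch of that lockout rule as a state machine, with a test that exercises the ACTIVE → LOCKED transition; the class and state names are hypothetical:

```python
class LoginStateMachine:
    """Hypothetical implementation of the lockout rule above."""
    MAX_ATTEMPTS = 3

    def __init__(self):
        self.failed_attempts = 0
        self.state = "ACTIVE"

    def login(self, password_ok: bool) -> str:
        if self.state == "LOCKED":
            return "LOCKED"           # locked accounts reject all logins for 24 hrs
        if password_ok:
            self.failed_attempts = 0  # a success resets the counter
            return "LOGGED_IN"
        self.failed_attempts += 1
        if self.failed_attempts >= self.MAX_ATTEMPTS:
            self.state = "LOCKED"     # 3rd invalid attempt: ACTIVE -> LOCKED
        return self.state

def test_three_invalid_attempts_lock_the_account():
    machine = LoginStateMachine()
    machine.login(False)
    machine.login(False)
    assert machine.login(False) == "LOCKED"  # state changes on the 3rd failure
    assert machine.login(True) == "LOCKED"   # even a correct password is now rejected
```

The key point the technique checks: the same input (a correct password) gives a different result depending on the current state.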
Can you explain the current Defect Life Cycle that you are using?
Defect Life Cycle ⇒ the process a defect goes through from being opened to being closed.
EXTENDED ANSWER:
- Identification: Defects are identified during development or testing, or reported by end users.
- Logging: Once a defect is identified, it is logged in a defect tracking system (JIRA, in our case).
- Assignment: The logged defect is assigned to the development team responsible for fixing it.
- Investigation: Developers investigate the reported defect to understand its root cause.
- Fixing: After understanding the defect, developers work on fixing the issue in the code.
- Verification: Testers verify the fixed code to ensure that the defect has been successfully addressed.
- Reopening (if necessary): If the verification reveals that the defect has not been entirely fixed or if a new issue is discovered, the defect may be reopened, and the cycle repeats.
- Closure: Once the fix is verified and confirmed, the defect is marked as closed.
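The cycle above can be pictured as a set of allowed status transitions. A minimal sketch, using typical JIRA-style status names (the exact names vary by project):

```python
# Allowed status transitions in the defect life cycle described above.
DEFECT_TRANSITIONS = {
    "NEW":         {"ASSIGNED"},
    "ASSIGNED":    {"IN_PROGRESS"},
    "IN_PROGRESS": {"FIXED"},
    "FIXED":       {"VERIFIED", "REOPENED"},  # verification can pass or fail
    "REOPENED":    {"ASSIGNED"},              # a failed fix restarts the cycle
    "VERIFIED":    {"CLOSED"},
    "CLOSED":      set(),
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a defect may move from one status to another."""
    return target in DEFECT_TRANSITIONS.get(current, set())

assert can_transition("FIXED", "REOPENED")  # the fix didn't hold -> reopen
assert not can_transition("NEW", "CLOSED")  # a defect can't skip verification
```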
What is a Test Case?
A Test Case is a document detailing the preconditions, actions, input data, and expected and actual results for testing a specific functionality.
- Title: A summary describing the specific test being performed.
- Description: Optional.
- Preconditions: What needs to be done before test execution.
- Test Set (or Test Suite): The test suite the case belongs to.
- Actions: The ordered test steps that need to be performed.
- Expected result: What is expected based on the acceptance criteria.
- Actual result: What happened during execution.
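As an illustration only, those fields could be captured in a simple record; the field values below are a made-up login example:

```python
from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    """The test case fields above, as a simple record (illustrative only)."""
    title: str
    preconditions: list[str]
    actions: list[str]
    expected_result: str
    description: str = ""       # optional
    test_suite: str = ""
    actual_result: str = ""     # filled in during execution

tc = TestCaseRecord(
    title="Login succeeds with valid credentials",
    preconditions=["A registered user exists", "The login page is reachable"],
    actions=["Open the login page",
             "Enter a valid username and password",
             "Click 'Log in'"],
    expected_result="User lands on the dashboard",
    test_suite="Smoke / Authentication",
)
```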
Tell me about the last Test Case you wrote?
……. Refer to a test case that you worked on :-)
What is a Smoke Test? How often are you running your smoke test?
A Smoke Test is a preliminary test that quickly checks whether the essential functions of a software application work as intended. It helps determine whether the software build is stable enough for more comprehensive testing.
Main characteristics of a Smoke Test:
- Scope: Broad but shallow, covering major functionalities of the application.
- Purpose: To ensure basic stability and identify critical issues early in the testing process.
- Execution: Conducted after each build, often by the development team or through automated scripts.
- Outcome: Pass or fail. If the smoke test fails, it indicates that the build is not stable for further testing.
Frequency of Running Smoke Tests:
- After each new software build.
- After major code integrations.
- Before initiating more extensive testing, such as regression testing or comprehensive test suites.
- They are executed frequently to catch major issues early, ensuring that subsequent testing efforts are not wasted on unstable builds.
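A minimal pytest sketch of how a smoke subset can be tagged and selected; the "smoke" marker is a project convention (register it in pytest.ini), and page_status is a hypothetical stand-in for a real HTTP check:

```python
import pytest

def page_status(path: str) -> int:
    """Stand-in for an HTTP status check against the application (hypothetical)."""
    return 200

# Broad but shallow: each smoke test only confirms that a critical path is alive.
@pytest.mark.smoke
def test_homepage_is_reachable():
    assert page_status("/") == 200

@pytest.mark.smoke
def test_login_page_loads():
    assert page_status("/login") == 200
```

With this tagging in place, `pytest -m smoke` runs only the smoke subset after each new build.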
What is the difference between Validation and Verification?
Verification ⇒ the process of reviewing requirements and work documents to check that the product is being built right (static, no code execution).
Validation ⇒ the process of testing the actual deployed application to evaluate whether it meets the intended requirements (dynamic, checking that the right product was built).
Can you talk about how you Design your Test Cases?
- Requirements Analysis: Understand project requirements thoroughly to identify testable features and functionalities.
- Test Case Objectives: Clearly define the objectives of each test case, specifying what functionality or scenario it aims to validate.
- Test Case Structure: Organize test cases with a clear structure, including a unique identifier, description, test steps, expected results, and any preconditions.
- Positive and Negative Scenarios: Include test cases covering expected (positive) behavior as well as scenarios that test how the system handles unexpected (negative) conditions.
- Boundary Value Analysis: Test the system at the boundaries of input values to identify potential issues related to edge cases.
- Equivalence Class Partitioning: Group similar input conditions into equivalence classes and design test cases to cover each class.
- Dependencies and Integration: Consider dependencies between different components or systems and design test cases to validate their integration.
- Usability and User Experience: Include test cases that assess the software’s user interface, user experience, and overall usability.
- Performance and Load Testing: If applicable, design test cases to evaluate the system’s performance under different load conditions.
- Security Testing: Include test cases to assess the security features of the application, checking for vulnerabilities.
- Regression Testing: Design test cases that can be used for regression testing, ensuring that new changes do not introduce issues in existing functionalities.
- Automation Considerations: If automation is planned, design test cases with automation in mind, focusing on feasibility and maintainability.
- Traceability: Establish traceability between test cases and requirements to ensure comprehensive coverage and facilitate tracking.
- Review and Collaboration: Collaborate with team members to review and validate test cases, incorporating feedback and ensuring coverage of all relevant scenarios.
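For example, the "Positive and Negative Scenarios" and "Traceability" points above might look like this in pytest; the normalize_username function and REQ-* ids are hypothetical:

```python
import pytest

def normalize_username(raw: str) -> str:
    """Stand-in for the unit under test (hypothetical)."""
    cleaned = raw.strip().lower()
    if not cleaned:
        raise ValueError("username must not be blank")
    return cleaned

# Positive scenarios, each traceable to a requirement id.
@pytest.mark.parametrize("raw,expected", [
    ("  Alice ", "alice"),  # REQ-101: trim surrounding whitespace
    ("BOB",      "bob"),    # REQ-101: normalize case
])
def test_normalize_username_positive(raw, expected):
    assert normalize_username(raw) == expected

# Negative scenario: how the system handles unexpected input (REQ-102).
@pytest.mark.parametrize("raw", ["", "   "])
def test_normalize_username_rejects_blank_input(raw):
    with pytest.raises(ValueError):
        normalize_username(raw)
```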
What is Regression Testing?
The process of testing the application end to end to ensure that newly added features do not break existing functionality.
ALTERNATIVE ANSWER:
Regression Testing is like checking to make sure that the changes or additions made to a software application haven’t broken anything that was working fine before. It ensures that the new updates haven’t caused unexpected problems in the existing features.
How often are you running Regression Tests?
In our project we have a release cycle of 3 months.
We run regression every 2 weeks.
We run regression tests in the Staging environment.
Do you know the difference between Minor and Major Regression Testing?
Minor Regression ⇒ the process of end to end testing for all user stories related to a specific sprint. This is done at the end of each sprint to ensure all the user stories are working well together without affecting existing functionality.
Major Regression ⇒ the process of end to end testing of the entire application. This is done before a major release to ensure the new release version is not affecting the existing functionality.
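One way to support both scopes, sketched in pytest: tag each test with a general "regression" marker plus a sprint marker (the marker names and the checkout helper are hypothetical conventions; register the markers in pytest.ini):

```python
import pytest

def checkout_total(items: int, discount: str = "") -> float:
    """Stand-in for the checkout calculation under test (hypothetical)."""
    total = items * 10.0
    return total * (0.9 if discount else 1.0)

@pytest.mark.regression
@pytest.mark.sprint_42   # story delivered in the current sprint
def test_discount_code_does_not_break_checkout():
    assert checkout_total(items=2, discount="SPRING") == pytest.approx(18.0)
```

Minor regression at sprint end: `pytest -m "regression and sprint_42"`. Major regression before a release: `pytest -m regression`.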
How many Test Cases do you have in your Regression Suite?
Roughly 150 test cases per year can be automated.
Test cases:
* UI - 65
* API - 15
* DB - 20
Truly full-stack:
* UI - 50
* API - 25
* DB - 25
What is Confirmation Testing?
Confirmation Testing (or Retesting) ⇒ confirming that a reported defect has actually been fixed.
ALTERNATIVE ANSWER:
Confirmation Testing is like double-checking to make sure that a reported bug or issue has been fixed. Once the developers say they’ve fixed something, confirmation testing is done to confirm that the problem is indeed resolved and that everything is working as expected.