STLC & Functional Testing Flashcards

Interview Questions

1
Q

What are the phases in STLC?

A

STLC - Software Testing Life Cycle:

  1. Requirement Analysis: analyze requirements using the SMART and INVEST models.
  2. Test Planning: identify test cases, identify the test data needed, Test Lead delegates tasks.
  3. Test Case Development: create test cases in JIRA.
  4. Test Environment Setup: Dev, Test, UAT, Staging, Demo, Production.
  5. Test Execution: execute the test cases and report defects.
  6. Test Closure.
2
Q

What Testing Techniques do you use?

A
  1. Decision Table.
  2. Boundary Value Analysis.
  3. Equivalence Class Partitioning.
  4. State Transition Diagram.
3
Q

What is a Decision Table testing technique?

A

A Decision Table is a testing technique used when a feature has multiple input conditions, for example several fields that must be filled in before submission.

A decision table lets the tester map every combination of conditions to its expected outcome, covering all possible test cases for the given feature.
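Below is a minimal Python sketch of building such a table, assuming a hypothetical submit button that is enabled only when both a username field and a password field are filled in:

    from itertools import product

    # Hypothetical business rule for illustration: submit is enabled only
    # when every required field is filled in.
    def expected_outcome(username_filled, password_filled):
        return "submit enabled" if (username_filled and password_filled) else "submit disabled"

    # The decision table is every True/False combination of the conditions,
    # each mapped to its expected outcome.
    decision_table = [
        {"username_filled": u, "password_filled": p, "expected": expected_outcome(u, p)}
        for u, p in product([True, False], repeat=2)
    ]

    for row in decision_table:
        print(row)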

4
Q

What is a Boundary Value Analysis testing technique?

A

Boundary Value Analysis is a testing technique that minimizes the number of inputs to test, without sacrificing the quality of testing, by testing at and around the boundaries of the valid input range.

Example: max char value is 50, min is 1 (see the sketch below).
1) 0 (min - 1)
2) 1 (min)
3) 25 (valid mid-range)
4) 50 (max)
5) 51 (max + 1)
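A minimal Python sketch of deriving those values, assuming the boundaries are known up front:

    # Classic boundary set: just below min, min, a nominal mid value, max, just above max.
    def boundary_values(minimum, maximum):
        return [minimum - 1, minimum, (minimum + maximum) // 2, maximum, maximum + 1]

    print(boundary_values(1, 50))  # [0, 1, 25, 50, 51]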

5
Q

What is an Equivalence Class Partitioning testing technique?

A

Equivalence Class Partitioning is a testing technique that allows the tester to partition (divide) the input domain and major functionalities into classes that are expected to behave the same way, so one representative value from each class is enough to test.

Example: CALCULATOR ⇒ input accepts 1 to 10 digits (min 1, max 10); representative values per class: 1, 7, 10, 11 (see the sketch below).
- Addition - 5 tests
- Subtraction - 5 tests
- Multiplication - 5 tests
- Division - 5 tests
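A minimal Python sketch of those partitions, assuming the validation rule is "digits only, 1 to 10 characters"; one representative value covers each class:

    # Assumed validation rule for illustration.
    def is_valid(value):
        return value.isdigit() and 1 <= len(value) <= 10

    partitions = {
        "invalid: empty (0 digits)": "",
        "valid: lower boundary (1 digit)": "7",
        "valid: mid-range (7 digits)": "1234567",
        "valid: upper boundary (10 digits)": "1234567890",
        "invalid: too long (11 digits)": "12345678901",
    }

    for description, representative in partitions.items():
        print(description, "->", "accepted" if is_valid(representative) else "rejected")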

6
Q

What is a State Transition Diagram testing technique?

A

A State Transition Diagram is a technique used when an application has specific states and moves between them in response to events or inputs; test cases are derived from the transitions between those states.

Example: after 3 invalid login attempts in a row, the account is locked for 24 hrs.
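A minimal Python sketch of that lockout rule as a state machine; the state names and the 3-attempt threshold are assumptions taken from the example above:

    class LoginStateMachine:
        MAX_ATTEMPTS = 3  # assumed lockout threshold

        def __init__(self):
            self.state = "LOGGED_OUT"   # states: LOGGED_OUT, LOGGED_IN, LOCKED
            self.failed_attempts = 0

        def attempt_login(self, password_correct):
            if self.state == "LOCKED":
                return self.state                  # no transition while locked
            if password_correct:
                self.state = "LOGGED_IN"
                self.failed_attempts = 0
            else:
                self.failed_attempts += 1
                if self.failed_attempts >= self.MAX_ATTEMPTS:
                    self.state = "LOCKED"          # third failure locks the account
            return self.state

    # Test case derived from the transition diagram: three bad attempts end in LOCKED.
    machine = LoginStateMachine()
    for _ in range(3):
        state = machine.attempt_login(password_correct=False)
    print(state)  # LOCKED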

7
Q

Can you explain the current Defect Life Cycle that you are using?

A

Defect Life Cycle ⇒ the process a defect goes through from being opened until it is closed.

EXTENDED ANSWER:

  1. Identification: Defects are identified during various phases, including development, testing, or by end-users
  2. Logging: Once a defect is identified, it is logged into a defect tracking system - JIRA.
  3. Assignment: The logged defect is assigned to the development team responsible for fixing it.
  4. Investigation: Developers investigate the reported defect to understand its root cause.
  5. Fixing: After understanding the defect, developers work on fixing the issue in the code.
  6. Verification: Testers verify the fixed code to ensure that the defect has been successfully addressed.
  7. Reopening (if necessary): If the verification reveals that the defect has not been entirely fixed or if a new issue is discovered, the defect may be reopened, and the cycle repeats.
  8. Closure: Once the fix is verified and confirmed, the defect is marked as closed.
8
Q

What is a Test Case?

A

A Test Case is a document detailing the preconditions, actions, input data, and expected and actual results related to testing a specific functionality.

  1. Title: a summary describing the specific test case being performed.
  2. Description: optional.
  3. Preconditions: what needs to be done before test execution.
  4. Test Set (or Test Suite): which Test Suite it belongs to.
  5. Actions: the ordered test steps that need to be performed.
  6. Expected result: what is expected based on the acceptance criteria.
  7. Actual result: what actually happened during execution.
9
Q

Tell me about the Last Test Case you wrote?

A

……. Refer to a test case that you worked on :-)

10
Q

What is a Smoke Test? How often do you run your smoke test?

A

A Smoke Test, in the context of software testing, is a preliminary test aimed at quickly checking whether the essential functions of a software application work as intended. It helps determine if the software build is stable enough for more comprehensive testing.

Main characteristics of a Smoke Test:

  • Scope: Broad but shallow, covering major functionalities of the application.
  • Purpose: To ensure basic stability and identify critical issues early in the testing process.
  • Execution: Conducted after each build, often by the development team or through automated scripts.
  • Outcome: Pass or fail. If the smoke test fails, it indicates that the build is not stable for further testing.

Frequency of Running Smoke Tests:

  • After each new software build.
  • After major code integrations.
  • Before initiating more extensive testing, such as regression testing or comprehensive test suites.
  • They are executed frequently to catch major issues early, ensuring that subsequent testing efforts are not wasted on unstable builds.
11
Q

What is the difference between Validation and Verification?

A

Verification ⇒ Process of reviewing requirements and work documents.

Validation ⇒ Process of validating the actual deployed application to evaluate whether it meets the intended requirement.

12
Q

Can you talk about how you Design your Test Cases?

A
  1. Requirements Analysis: Understand project requirements thoroughly to identify testable features and functionalities.
  2. Test Case Objectives: Clearly define the objectives of each test case, specifying what functionality or scenario it aims to validate.
  3. Test Case Structure: Organize test cases with a clear structure, including a unique identifier, description, test steps, expected results, and any preconditions.
  4. Positive and Negative Scenarios: Include test cases covering expected (positive) behavior as well as scenarios that test how the system handles unexpected (negative) conditions.
  5. Boundary Value Analysis: Test the system at the boundaries of input values to identify potential issues related to edge cases.
  6. Equivalence Class Partitioning: Group similar input conditions into equivalence classes and design test cases to cover each class.
  7. Dependencies and Integration: Consider dependencies between different components or systems and design test cases to validate their integration.
  8. Usability and User Experience: Include test cases that assess the software’s user interface, user experience, and overall usability.
  9. Performance and Load Testing: If applicable, design test cases to evaluate the system’s performance under different load conditions.
  10. Security Testing: Include test cases to assess the security features of the application, checking for vulnerabilities.
  11. Regression Testing: Design test cases that can be used for regression testing, ensuring that new changes do not introduce issues in existing functionalities.
  12. Automation Considerations: If automation is planned, design test cases with automation in mind, focusing on feasibility and maintainability.
  13. Traceability: Establish traceability between test cases and requirements to ensure comprehensive coverage and facilitate tracking.
  14. Review and Collaboration: Collaborate with team members to review and validate test cases, incorporating feedback and ensuring coverage of all relevant scenarios.
13
Q

What is Regression Testing?

A

The process of testing the application End to End to ensure the newly added features are not causing any issues with existing functionality.

ALTERNATIVE ANSWER:

Regression Testing is like checking to make sure that the changes or additions made to a software application haven’t broken anything that was working fine before. It ensures that the new updates haven’t caused unexpected problems in the existing features.

14
Q

How often are you running Regression Tests?

A

In our project we have a Release Cycle ⇒ 3 months.

We run Regression every 2 weeks.

We use the Staging Environment to run Regression Tests.

15
Q

Do you know the difference between Minor and Major Regression Testing?

A

Minor regression ⇒ the process of end to end testing for all user stories related to a specific sprint. This is done at the end of each sprint to ensure all the user stories are working well together without affecting existing functionality.

Major Regression ⇒ the process of end to end testing of the entire application. This is done before a major release to ensure the new release version is not affecting the existing functionality.

16
Q

How many Test Cases do you have in your Regression Suite?

A

As a rough figure, about 150 Test Cases per year can be automated.

Test cases:
* UI - 65
* API - 15
* DB - 20

Truly full-stack:
* UI - 50
* API - 25
* DB - 25

17
Q

What is Confirmation Testing?

A

Confirmation Testing (or Retesting) ⇒ Confirming whether the defect created has been fixed.

ALTERNATIVE ANSWER:

Confirmation Testing is like double-checking to make sure that a reported bug or issue has been fixed. Once the developers say they’ve fixed something, confirmation testing is done to confirm that the problem is indeed resolved and that everything is working as expected.

18
Q

Do you do Ad-Hoc testing?
When do you do it?

A

Ad-Hoc testing ⇒ Random unplanned testing. When you test without a specific plan.

When ⇒ When there is extra time in a sprint or after regression.

19
Q

How do you estimate the Level of Effort in testing User Stories?

A
  • Review of the acceptance criteria.
  • Number of test cases.
  • Complexity involved in testing.
  • Complexity involved in test data creation.
20
Q

Why do you like Testing as a profession on a daily basis?

A
  • I am a detail-oriented person.
  • I have a passion for identifying application issues and collaborating with my team on fixing them.
  • I enjoy being a part of the process to create useful features for the application users.
  • I love saving my company money based on my work.
  • I love the technologies available in the test automation space.
  • I love the rush in hunting for defects.
21
Q

What is the difference between Static and Dynamic Testing?

A

Verification (Static) ⇒ Process of reviewing requirements and work documents.

Validation (Dynamic) ⇒ Process of validating the actual deployed application to evaluate whether it meets the intended requirement.

22
Q

What is the difference between Positive and Negative testing?

A

Positive testing ⇒ Testing that follows the exact requirement specification, also known as the Happy Path.

Negative testing ⇒ Testing deviations from the original requirement (invalid or unexpected input).

EXAMPLE (see the sketch below)
Text box: only accepts integer values up to 10 digits.
Positive: entering integers only, 1-10 digits.
Negative: entering anything other than integers (e.g., alphanumeric input), or more than 10 digits.
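A minimal Python sketch of those positive and negative cases, with the validation rule assumed for illustration:

    # Assumed rule: digits only, 1 to 10 characters.
    def accepts(value):
        return value.isdigit() and 1 <= len(value) <= 10

    positive_cases = ["1", "42", "1234567890"]              # 1-10 digit integers
    negative_cases = ["", "abc123", "12.5", "12345678901"]  # empty, alphanumeric, decimal, 11 digits

    assert all(accepts(v) for v in positive_cases)
    assert not any(accepts(v) for v in negative_cases)
    print("Positive and negative cases behave as expected")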

23
Q

What is a Test Plan? What is contained in a Test Plan?
Who creates the Test Plan?

A

A Test Plan is a formal document detailing how testing will be conducted for the specific project (for example, an Automation Test Plan). It typically covers the scope, objectives, schedule, resources, test environment, entry/exit criteria, and risks.
The Test Lead creates the Test Plan.

24
Q

What is a Test Scenario? What is the difference between Test Case and Test Scenario?

A

Test Scenario ⇒ The high-level feature or functionality to be tested.

Test case ⇒ A specific condition that you are attempting to test against the application.

25
Q

How do you generate Test Data?

A
  1. Manually based on the data requirements.
  2. Getting data from the database.
  3. Programmatically (see the sketch below).
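A minimal Python sketch of the programmatic approach using only the standard library; the field names and value ranges are assumptions for illustration, not from a real project:

    import random
    import string

    def random_email():
        name = "".join(random.choices(string.ascii_lowercase, k=8))
        return f"{name}@example.com"

    def new_user_payload():
        # Build one randomized test record per call.
        return {
            "first_name": random.choice(["Alice", "Bob", "Carol"]),
            "last_name": "".join(random.choices(string.ascii_lowercase, k=7)).capitalize(),
            "email": random_email(),
            "age": random.randint(18, 90),
        }

    print(new_user_payload())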
26
Q

What is System Testing?

A

Testing the system as a whole!

System Testing is a phase in software testing where the entire system, including its integrated components, is evaluated to ensure that it functions as intended. It assesses the system’s compliance with specified requirements, identifies defects, and verifies that all elements work together seamlessly before moving to the next phase or production.

27
Q

What is an example of NON-Functional Testing?

A
  • Performance,
  • Usability,
  • Reliability,
  • Maintainability,
  • Portability,
  • Security.
28
Q

What is User Acceptance Testing (UAT)?

A

User Acceptance Testing (UAT) is the final phase of software testing where end-users assess the system to ensure it meets their requirements, behaves as expected, and is ready for production release.

UAT involves real-world scenarios, user validation of business requirements, and collaboration between end-users and the development team.

Who: Product Owner, Users and Stakeholders.

29
Q

What is the difference between Smoke testing and Sanity testing?

A

Smoke Testing and Sanity Testing are both types of software testing, but they serve different purposes:

Smoke Testing:

  • Purpose: To check if the software build is stable enough for more detailed testing.
  • Scope: Broad and shallow, covering major functionalities.
  • Execution: Conducted after each build, often by the development team or automated scripts.
  • Outcome: Determines whether further testing can proceed or if there are major issues that need resolution.

Sanity Testing:

  • Purpose: To quickly verify specific functionalities after changes or bug fixes.
  • Scope: Narrow and deep, focusing on specific features or functionalities.
  • Execution: Typically performed by testers, often after a new build or changes.
  • Outcome: Ensures that specific functionalities are working as intended and helps identify any issues introduced by recent changes.

In essence, Smoke Testing assesses the overall stability of the software, while Sanity Testing focuses on specific areas to ensure they are still functioning after modifications.

Both are quick and early tests to catch major issues before more in-depth testing is conducted.

30
Q

Why do we care about Stability?

A

During a sprint, developers push code multiple times a day. How do you know that a build/deployment did not cause an issue in the application? That is why we have a smoke test: to rule out any defects that could have been introduced by the new code deployment.

Usually a smoke test should take about 30 minutes and is automated, since we need to run it every time we want to check whether the application is stable.

31
Q

What should we include in a Smoke Test?

A

Usually we include happy path scenarios related to core stability. What does "core stability" mean exactly? How you identify the core stability of the application depends on the application itself. Since we are in e-commerce, we need customers to be able to register, log in, search, and check out. So we gather all happy path test cases related to those modules and put them in the smoke test suite (see the sketch below).
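A minimal sketch of how such a suite can be organized, assuming pytest is used; the register/login functions below are hypothetical stubs standing in for the real UI or API steps, and the point is only tagging the happy-path core tests as "smoke":

    import pytest

    def register(email):   # stub for the real registration flow
        return "@" in email

    def login(email):      # stub for the real login flow
        return "@" in email

    @pytest.mark.smoke
    def test_customer_can_register():
        assert register("new.user@example.com")

    @pytest.mark.smoke
    def test_customer_can_login():
        assert login("existing.user@example.com")

    # Register the "smoke" marker in pytest.ini, then run only the smoke
    # suite after each deployment with:  pytest -m smoke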

32
Q

When do you need to run Regression Tests?

A

We run Regression before a Major Release: we need to run all of the test cases in the project to ensure there are no defects before the customer sees the new version.

33
Q

What is the difference in Sanity Tests vs. Regression Tests?

A

Sanity Testing is often referred to as a subset of Regression Testing: instead of repeating the full regression, we perform a sanity check (running all test cases within one or a few modules).

Goals of Sanity Testing vs. Regression Testing:
* Sanity Testing: Confirm that recent changes haven’t broken critical features.
* Regression Testing: Ensure the overall stability of the system and catch any unintended consequences of new changes.

In summary, while both Sanity Testing and Regression Testing aim to ensure system stability, Sanity Testing is a quick and focused check on specific functionalities, while Regression Testing is a more comprehensive examination of the entire application to catch broader issues.

34
Q

How do you know if you have Spotted a Legitimate Defect?

A
  • Check the environment (deployment or server issues);
  • Rule out an automation script programming error;
  • Confirm it is an actual application issue.

Manually reproduce the issue multiple times.
Before creating the defect, discuss it with your Test Lead.

35
Q

What is the difference between Defect Priority and Defect Severity?

A

Defect Priority ⇒ How soon should the defect be fixed? Setting the priority is the Product Owner's responsibility.

Defect Severity ⇒ What is the impact of the defect on the application? Defining the severity is the Tester's responsibility.

36
Q

What different types of Defect Severity do you know?

A

Critical, High, Medium, Low, and Trivial.

37
Q

Have you heard of 508 Compatibility Testing?

A

Section 508 is a mandate to ensure equal web accessibility for users with vision, hearing, and other disabilities.

Testers need to ensure the application is compliant with 508 standards, for example by verifying support for screen reader technology and providing the additional HTML attributes that assistive tools rely on.

38
Q

What is the Pesticide Paradox?
How do you prevent this result in your testing project?

A

The Pesticide Paradox is a situation where, if you run the same set of test cases over and over again, they lose their effectiveness at finding new defects (diminishing returns).

When the Pesticide Paradox appears in the results, testers have the opportunity to:
* Review test cases.
* Update test cases.
* Ensure the quality of the test cases.

39
Q

What is the difference between Error and Failure?

A

Error and Failure refer to distinct concepts:

Error Definition ⇒ An error is a mistake made by a developer during the coding phase. It is a deviation from the intended behavior and could lead to a defect in the software.

Failure Definition ⇒ A failure occurs when the software does not behave as expected during testing or actual usage. It is a manifestation of a defect or an error in the code.

In summary, errors are mistakes made by developers in the code, while failures are observable deviations from expected behavior during testing or actual use.

40
Q

Have you ever come across Production Defects? How did you handle them?

A

Production Defect ⇒ a defect reported by an end user.
The Product Owner or someone on their team will send an email describing the issue.

Review the defect details and try to reproduce the issue in the Staging Environment. If it is not reproducible, gather more details from the user. If it is reproducible, work with the developers on fixing it.

41
Q

What are your Challenges when it comes to testing?

A
  • In an agile environment there are constant deadlines.
  • Complexity of testing requirements.
  • Complexity of designing test automation.
  • Changing requirements.
  • Creating test data.

EXTENDED Answer:

Continuous Learning: Staying updated with evolving technologies, testing methodologies, and tools is a challenge.

Test Data Management: Creating and managing realistic and diverse test data sets can be complex.

Test Automation Maintenance: Ensuring the stability of automated tests across different environments and versions is a challenge.

Integration Testing: Ensuring smooth integration between different components or systems can be challenging.

Performance Testing: Assessing the performance of an application under different conditions and loads is challenging. SDETs need to simulate realistic scenarios to identify potential bottlenecks.

Cross-Browser/Platform Testing: Ensuring compatibility across various browsers and platforms adds complexity to testing.

Collaboration with Development Teams: Effective collaboration with developers is crucial. Miscommunication or lack of collaboration can lead to misunderstandings about requirements and defects.

Test Environment Management: Setting up and maintaining test environments that mirror production environments can be challenging. Ensuring data consistency and environmental stability is essential.

Security Testing: Identifying and addressing security vulnerabilities requires specialized knowledge. SDETs may need to collaborate with security experts to ensure the application’s resilience against security threats.

Agile and DevOps Practices: Adapting to Agile and DevOps methodologies requires a shift in mindset and processes. SDETs need to integrate testing seamlessly into continuous integration and delivery pipelines.

42
Q

If the application is down in the Staging Environment what will you do?

What if the application is working for some team members but not others?

A

When downtime happens, effective communication between the testing, development, and IT teams is crucial. Collaborate to diagnose and resolve the issue promptly, and document the steps taken for future reference.

Application Down in Staging Environment:

  1. Check Logs and Error Messages: Examine the logs and error messages.
  2. Review Recent Changes: Check if there were recent deployments or changes to the staging environment.
  3. Environment Configuration: Verify the configuration of the staging environment.
  4. Collaborate with Dev Team: Communicate with the development team to understand the root cause of the issue and work together to address it.
  5. Restore from Backup: If feasible, restore the staging environment to a known working state from a backup.

Application Working for Some Team Members but Not Others:

  1. Network Issues: Check for network issues that might be preventing some team members from accessing the application.
  2. Browser Compatibility: Verify if the issue is browser-specific.
  3. Clear Cache and Cookies: Instruct team members to clear their browser cache and cookies, as cached data might be causing discrepancies in the application’s behavior.
  4. Check Permissions: Ensure that all team members have the required permissions to access the application.
  5. Device Compatibility: If team members are using different devices, check for device-specific compatibility issues.
43
Q

After your Team performs Regression Testing, is the application version ready for release?

The application crashes in the Production Environment after production deployment.

What are the reasons for this?

A

SHORT ANSWER: If we can confirm that it works in one environment but does not work in another, the cause lies in the released code package or in problems on the production server. We will need to work with developers and systems engineers to determine the root cause.

EXTENDED ANSWER:

Regression Testing and Release Readiness:

Regression testing is a crucial step to ensure that new changes, updates, or additions to the software haven’t introduced unintended side effects or issues in existing functionalities. However, the readiness of an application for release involves multiple factors beyond regression testing. These may include:

  1. Functional Testing: Ensuring that all intended features work as expected.
  2. Performance Testing: Assessing the application’s performance under different conditions.
  3. Security Testing: Identifying and addressing potential security vulnerabilities.
  4. User Acceptance Testing (UAT): Validating that the application meets user expectations.
  5. Comprehensive Test Coverage: Ensuring that testing covers a wide range of scenarios and use cases.

Application Crashes in Production Environment:

If an application crashes in the production environment after deployment, several factors could contribute to this issue:

  1. Unidentified Defects: Regression testing might not have caught all defects, and some issues might surface only in the production environment.
  2. Environmental Differences: Discrepancies between the testing and production environments can lead to unexpected behavior.
  3. Data Discrepancies: Differences in production data compared to the test data can cause issues.
  4. Concurrency Issues: Problems related to concurrent user access or simultaneous transactions may cause crashes in a production setting.
  5. Resource Constraints: Production environments may have different resource constraints, such as memory or processing power, leading to crashes not observed in test environments.
  6. Configuration Issues: Incorrect configurations in the production environment, such as database connections or external service integrations, can contribute to crashes.
44
Q

You created a Defect ticket on your project board, and your developer challenges the validity of your defect.

What can you do?

A

If, after reviewing the defect details with the developer again, we are unable to reach a common understanding, I will talk to the PO and/or BA and set up a meeting to discuss the defect and get the final decision from the PO.