Technical Interview Flashcards

1
Q

How do you know if it’s a front-end or back-end issue?

A

Front-end issues are usually related to the UI/UX of an app
* Pertain to UI rendering or client-side scripting
* Are often browser- or device-specific
* Involve client-side code

Back-end issues are related to the data or logic of the app
* Appear consistently across different browsers, devices, and environments
* Involve server-side code

2
Q

What is a use case?

A

* A description of how users interact with a software system to perform certain tasks
* A format used by BAs for specifying system requirements
* Each use case normally represents a complete business operation performed by a user
* From a QA perspective, we execute the corresponding E2E test to make sure the requirement is implemented

3
Q

What are the uses of use cases?

A
  • Requirement analysis
  • Design
  • Testing
  • Validation
4
Q

Describe a test pyramid

A

There are three levels, from bottom to top:
* Unit testing - verifies individual functions or components in isolation
* Integration testing - validates the data flow between components
* E2E testing - verifies the functionality and performance of the complete system by simulating real-world user scenarios

This distribution helps achieve a balance between test coverage, execution speed, and maintenance effort.

By focusing on a greater number of lower-level tests, teams can catch most defects early in the development process
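As an illustration, the two lower pyramid levels can be sketched with plain asserts (a minimal sketch; `add` and `total_price` are hypothetical example functions, not from any real codebase):

```python
# Hypothetical units under test.
def add(a, b):
    return a + b

def total_price(quantities, unit_price):
    # Integrates two pieces: accumulating quantities via `add`,
    # then applying the unit price.
    count = 0
    for q in quantities:
        count = add(count, q)
    return count * unit_price

# Unit test: one function in isolation.
assert add(2, 3) == 5

# Integration test: data flowing between components.
assert total_price([1, 2], 10) == 30
```

The pyramid shape reflects that tests like the first are cheap and fast, so you write many of them, while full E2E scenarios are slow and brittle, so you keep few.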

5
Q

Dynamic vs static testing?

A

Static testing focuses on analyzing software artifacts without executing code, to identify defects early in the development process
* cost effective
* a way to determine the root cause of bugs
* part of verification

Dynamic testing involves executing code to assess its functionality and behavior, to verify it meets specified requirements
* later in dev process
* QA analysts can get a look at how the software performs while running in a real-world situation
* stages:
- unit testing
- integration testing
- system testing
- acceptance testing
* part of validation

6
Q

Difference between verification and validation

A

Verification is about confirming that the software is being developed correctly according to requirements and standards
* “Are we building the product right?”
* Focuses on assessing whether the software meets its specified requirements and adheres to predefined standards and guidelines
* involves reviews, walkthroughs, and inspections

Validation is about confirming that the software meets the users’ needs and expectations
* “Are we building the right product?”
* Focuses on whether the software meets the needs and expectations of its users and stakeholders
* involves testing, user feedback, and acceptance testing

7
Q

What are the test documentation types?

A
  • Test Plan
  • Test Strategy
  • Test Matrix
  • Test Checklist
  • Test Case
8
Q

What is a test matrix?

A
  • document used to track the relationship between test cases and different aspects of the system under test
  • rows represent test cases
  • columns can represent dimensions like requirements, functionalities, user stories, platforms, browsers, OS’s, environments, and test types
  • helps visualize test coverage, assess completeness, and ensure traceability between test cases and other artifacts such as requirements, user stories, design documents, and defects
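A tiny matrix of this shape can be sketched in code (a hypothetical example; the test case and requirement IDs are made up):

```python
# Rows are test cases, columns are requirements; True marks coverage.
matrix = {
    "TC-01": {"REQ-1": True,  "REQ-2": False},
    "TC-02": {"REQ-1": False, "REQ-2": True},
    "TC-03": {"REQ-1": True,  "REQ-2": True},
}

# Traceability check: every requirement is covered by at least one test case.
requirements = {"REQ-1", "REQ-2"}
covered = {req for row in matrix.values() for req, hit in row.items() if hit}
assert requirements <= covered  # full requirement coverage
```

In practice this lives in a spreadsheet or test-management tool, but the coverage check is the same idea.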
9
Q

Boundary value analysis

A
  • A test-design technique that concentrates testing effort on values at and near the limits of valid input ranges
  • Errors are more likely to occur at the points where input values change from valid to invalid
  • You test the boundaries between partitions
  • e.g., for a valid range of -9 to 9: -9, -8, 0, 8, 9 (min, min+1, a nominal value, max-1, max), often plus the invalid neighbors -10 and 10
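Applied to a hypothetical validator for that range, the technique looks like this (a sketch; `is_valid` is an assumed example function):

```python
# Hypothetical validator accepting integers in the range -9..9.
def is_valid(n):
    return -9 <= n <= 9

# Values on and just inside each boundary, plus a nominal value.
boundary_values = [-9, -8, 0, 8, 9]
assert all(is_valid(v) for v in boundary_values)

# The invalid neighbors just outside the boundaries should be rejected.
assert not is_valid(-10)
assert not is_valid(10)
```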
10
Q

Equivalence Partitioning

A
  • Input data is divided into partitions whose values the system should treat the same way, called equivalence partitions
  • We then pick only one value from each partition
  • The assumption is that if one value in a partition passes, all other values in that partition will also pass, and vice versa
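A sketch of the technique for a hypothetical age field that accepts 18 through 65 (the function name and values are assumptions for illustration):

```python
# Hypothetical field validator: valid ages are 18..65.
def accepts(age):
    return 18 <= age <= 65

# Three equivalence partitions, one representative value picked from each.
representatives = {
    "invalid_low":  10,   # stands in for every value below 18
    "valid":        30,   # stands in for every value in 18..65
    "invalid_high": 80,   # stands in for every value above 65
}

assert not accepts(representatives["invalid_low"])
assert accepts(representatives["valid"])
assert not accepts(representatives["invalid_high"])
```

Three test cases cover the whole input space under the partitioning assumption; boundary value analysis then adds the edge values of each partition.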
11
Q

What will you do if you raise a bug but the dev does not agree it is a bug?

A
  • First, I will recheck business/technical requirements
  • If it’s not described in those documents, I would ask the PO/BA/PM
  • If I’ve explained my reasoning to the dev and they still disagree, I would escalate it to my manager
12
Q

Bug severity/priority?

A
  • Both are used to rank how important it is to fix a bug
  • Severity relates to the impact of a bug on the software’s functionality
  • Priority relates to the urgency of fixing the bug relative to other issues
13
Q

Example of low severity and high priority bug

A

* Buttons are slightly out of place when the site is accessed on Chrome. They can still be clicked easily and do what they are meant to do
* This means that functionality is not affected, hence bug severity is low
* But the misaligned layout is visually unappealing, and poorly designed websites actively turn off users, therefore bug priority is high

14
Q

Example of low severity and low priority bug

A
  • Lack of uniformity in text: there are some typos, and the font and color of a page don’t match the font and color of the main website
  • Doesn’t affect functionality
  • Doesn’t need to be fixed immediately
15
Q

How do you decide when to stop testing?

A

PMs or project leads decide based on:
* Deadlines are met
* The test case pass rate meets the exit criteria
* Risk is below the permitted level
* All critical bugs and roadblocks have been resolved
* All the requirements are met

16
Q

What is the difference between test plan and test strategy?

A
  • Test plan is more detailed and focuses on the specific activities, resources, and schedules for the testing phase.
  • It provides a detailed roadmap, helping the team to understand what needs to be tested, how it will be tested, and what resources are required.
  • Includes:
  • test objectives
  • test scope
  • test scenarios
  • test cases
  • test schedules
  • test environment
  • roles and responsibilities
  • entry and exit criteria
  • Test strategy provides a broader overview of the overall testing approach for the project
  • Typically created during the early stages of project planning and used to guide the development of more detailed test plans and other documents
  • Includes:
  • testing approach
  • methodologies
  • tools
  • techniques
  • types of testing (functional, performance, system, etc.)
  • test levels
  • types of documents produced
  • how bugs will be managed
17
Q

What are the various artifacts you refer to when writing test cases?

A
  • Specification of functional requirements
  • wireframes
  • use cases
  • user stories
  • acceptance criteria
  • User acceptance testing test cases
18
Q

Difference between functional and non-functional testing?

A

Functional
* focused on verifying that the software functions correctly according to its specifications and requirements
* examples are the levels of the test pyramid (unit, integration, and E2E tests)

Non-functional
* aims to assess how well the system performs under various conditions and how well it meets quality attributes
* examples: performance, usability, reliability, scalability, and security

19
Q

What are the different types of non-functional tests?

A

Performance: assessing system response times, throughput, and resource utilization under different load conditions
* load test - evaluating system performance under expected load conditions, to see if it performs within acceptable response times
* stress test - evaluating system performance while pushing it to its limits or breaking points, to identify failure points and weaknesses
* endurance test - evaluating system performance over an extended period of time to ensure stability and reliability
* scalability test - assessing the system’s ability to scale up or down based on user demand
* volume test - evaluating system performance when it is subjected to large amounts of data

usability testing: Evaluating how user-friendly the software is

reliability testing: testing the system’s ability to maintain functionality over time and under varying conditions

scalability testing: testing the system’s ability to maintain performance and resource utilization as the workload increases or decreases

security testing: evaluating the system’s ability to protect data and resources from unauthorized access
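
The load-test idea above can be sketched in a few lines: fire concurrent simulated requests and check response times against a pass criterion (a minimal sketch; `handle_request`, the worker count, and the 500 ms threshold are all assumptions standing in for a real system and real SLAs):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Stand-in for a real call to the system under test.
    start = time.perf_counter()
    time.sleep(0.01)  # simulated work
    return time.perf_counter() - start

# Expected load: 50 requests handled by 10 concurrent workers.
with ThreadPoolExecutor(max_workers=10) as pool:
    durations = list(pool.map(handle_request, range(50)))

# Pass criterion: every request answered within 500 ms (assumed SLA).
assert max(durations) < 0.5
```

Real tools (e.g. JMeter, Locust, k6) add ramp-up schedules, distributed load generation, and reporting, but the core loop is the same: generate load, measure, compare against acceptance thresholds.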