Testing in Web Development (week 11) Flashcards
What are some examples of types of tests?
Unit tests, behavioural tests, acceptance tests, regression tests, etc.
What are some of the different kinds of systems to test?
Safety-critical systems, embedded systems, real-time systems, etc.
What are some examples of particular challenges for testing web applications? (6)
- Different browsers (Firefox, Chrome)
- Different versions of browser (IE6 vs IE8)
- Differences in versions of HTML, ECMAScript (JavaScript)
- Differences in APIs e.g. how XHRs are handled by different browsers
- Differences in the handling of the DOM, JavaScript etc.
- Differences in libraries and versions (e.g. jQuery)
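A common way to cope with the API differences above (e.g. how XHRs are handled) is feature detection rather than browser sniffing. The sketch below is illustrative: `makeRequestFactory` and its parameter are hypothetical names, not from the course material.

```javascript
// Sketch of feature detection: choose an HTTP API based on what the
// environment actually provides, rather than checking the browser name.
function makeRequestFactory(globalObj) {
  if (typeof globalObj.fetch === 'function') {
    // Modern browsers: promise-based fetch
    return url => globalObj.fetch(url);
  }
  if (typeof globalObj.XMLHttpRequest === 'function') {
    // Older browsers: wrap the XHR API in a promise
    return url => new Promise((resolve, reject) => {
      const xhr = new globalObj.XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = () => resolve(xhr.responseText);
      xhr.onerror = reject;
      xhr.send();
    });
  }
  throw new Error('No supported HTTP API in this environment');
}
```

Tests can exercise the same factory against fake environments, which is one reason feature detection is easier to test than browser sniffing.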
What are the advantages of automated testing? (10)
- Automation means you can offload testing to machines (rather than rely on humans)
- (More) frequent testing e.g. regression testing
- Quick and regular feedback to developers
- Virtually unlimited iterations of test case execution
- Support for Agile and Extreme Programming methodologies
- Disciplined documentation of test cases
- Customized defect reporting
- Finding defects missed by manual testing
- Humans get tired, etc.
- Supports continuous integration and continuous deployment
When should we not automate, and instead test manually, on the <strong>client</strong> side?
- If the user interface is rapidly changing
- e.g. HTML elements are changing
- Will need to keep changing the tests to match the interface
When should we not automate, and instead test manually, on the <strong>server</strong> side?
If the API is rapidly changing.
Aside from the client- and server-side contexts above, when should we not automate and instead test manually?
Tight timescales → there is no time to develop the tests.
Discuss the developer’s comments to the failed automated test. What is the problem with the test?
Login / authentication
Developer: ‘It works in my Postman test but fails the automated test’
- Assumption: the login details are sent via the request body rather than the query string
(… therefore there is a problem with the automated test)
- Developer’s assumptions affect the developer’s coding
- The same developer assumptions influence the developer’s Postman tests
Discuss the developer’s comments to the failed automated test. What is the problem with the test?
Photo uploading
Developer: ‘It works in my Postman test but fails the automated test’
(… therefore there is a problem with the automated test)
- The way Postman attaches photos is different to the way that the mocha-chai framework attaches photos
- Difference in ‘implementation’ of API specification
- Difference(s) in implementation used by the developer and the tester
Discuss the developer’s comments to the failed automated test. What is the problem with the test?
Race conditions
Developer: ‘It works in my Postman test but fails the automated test’
(… therefore there is a problem with the automated test)
- Postman tests (being manually driven) operate in ‘human real-time’
- Mocha-chai tests (being automated) operate in ‘computing real-time’
- Difference in ability to replicate timing
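The timing difference can be shown in a few lines: a human driving Postman waits long enough for background work to finish, while an automated assertion runs immediately. `saveRecord` below is a hypothetical asynchronous store, not part of the coursework API.

```javascript
// Sketch of a race between an asynchronous write and an assertion.
const store = [];
function saveRecord(rec) {
  // The write completes "later", as a real database or file write would.
  return new Promise(resolve =>
    setTimeout(() => { store.push(rec); resolve(); }, 50));
}

async function flakyCheck() {
  saveRecord('photo');            // fire-and-forget: races with the assertion
  return store.includes('photo'); // false when checked straight away
}

async function reliableCheck() {
  await saveRecord('photo');      // automated tests must await completion
  return store.includes('photo');
}
```

An automated mocha-chai test must explicitly await (or otherwise synchronise on) the operation it asserts about; a human tester gets that synchronisation for free.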
Discuss the developer’s comments to the failed automated test. What is the problem with the test?
npm start vs node app.js
Developer: ‘App runs in WebStorm but fails during the automated test’
(… therefore there is a problem with the automated test)
- The IDE configuration may be different to the automated testing configuration
- WebStorm: pressing the ‘run’ button defaults to running node app.js
- Automated test: uses npm start
- Differences in configuration
- Differences in assumptions
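`npm start` runs whatever the `start` script in package.json says, so the two launch paths only agree if that script is set. A minimal sketch of the relevant fragment (the app name is illustrative):

```json
{
  "name": "example-app",
  "scripts": {
    "start": "node app.js"
  }
}
```

With this in place, `npm start`, `node app.js`, and the IDE's run button all launch the same entry point, closing the configuration gap between the developer's environment and the automated test.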
Discuss the developer’s comments to the failed automated test. What is the problem with the test?
Photos directory
Developer: ‘App runs in WebStorm but fails the automated test’
(… therefore there is a problem with the automated test)
- The developer created a /photos directory through WebStorm, but…
- … (for whatever reason) the /photos directory and assets were not added to the git repo…
- … so the server crashes when downloaded from eng-git and run
- Differences in ‘environment’ (configuration)
Discuss the developer’s comments to the failed automated test. What is the problem with the test?
Developer: ‘When I run it, the test passes, but it fails the automated test…’
(… therefore there is a problem with the automated test)
- Has the developer read the API specification properly?
- Does the API specification explain itself properly?
- Does the developer understand what is expected of the API?
- Does the tester understand what is expected of the API?
- The HTTP response from the API is not the same as a response to the user
- API ≠ user story ≠ developer’s assumptions ≠ developer’s preferences
Define a false positive test.
The test passes, but for the wrong reason (so a real defect can go undetected).
Define a false negative test.
The test fails, but for the wrong reason (e.g. the fault is in the test, not the code under test).
List the four general sets of scenarios that can occur for tests.
Operation actually successful & operation intended to be successful → passing
Operation actually successful & operation intended to be unsuccessful → failing
Operation actually unsuccessful & operation intended to be successful → failing
Operation actually unsuccessful & operation intended to be unsuccessful → passing
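The four scenarios above reduce to a single rule: a test passes when the actual outcome matches the intended outcome, and fails otherwise. A one-line sketch (the function name is illustrative):

```javascript
// Sketch: a test's verdict is "passing" exactly when actual and intended
// outcomes agree, covering all four scenarios above.
function verdict(actuallySuccessful, intendedSuccessful) {
  return actuallySuccessful === intendedSuccessful ? 'passing' : 'failing';
}
```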
How can we have independent tests?
- One test failing does not affect another test’s success
- Use pre-conditions to set up the test, e.g. before()
- Use post-conditions to tidy up after the test, e.g. after()
- Use suites for different areas
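The pre-/post-condition pattern can be sketched with plain functions: each test builds its own fixture and tidies up afterwards, so one test's failure cannot leak state into another. In mocha the before()/after() hooks play these roles; the names and fixture below are illustrative stand-ins.

```javascript
// Sketch of independent tests via setup and teardown.
function setUp() { return { users: [] }; }      // pre-condition (mocha: before)
function tearDown(db) { db.users.length = 0; }  // post-condition (mocha: after)

function testCreateUser() {
  const db = setUp();
  db.users.push({ name: 'alice' });
  const passed = db.users.length === 1;
  tearDown(db);
  return passed;
}

function testListUsersInitiallyEmpty() {
  const db = setUp();                 // independent: unaffected by the test above
  const passed = db.users.length === 0;
  tearDown(db);
  return passed;
}
```

Because each test starts from a fresh fixture, the two tests above pass in either order — the definition of independence given here.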
Discuss reasons why we cannot always ensure tests are independent.
- Some functionality will affect other functionality
- e.g. the login function is required before other functionality can operate → could stub the login
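Stubbing the login dependency can be sketched as follows: the functionality under test takes the authentication check as a parameter, so a test can pass a stub and avoid depending on the real login working first. All names here are illustrative, not from the coursework API.

```javascript
// Sketch: profile lookup with an injectable authentication check.
function getProfile(userId, isAuthenticated) {
  if (!isAuthenticated(userId)) {
    return { status: 401 };                      // not logged in
  }
  return { status: 200, body: { id: userId } };  // normal response
}

// Stubs: fix the login outcome so the profile logic is tested in isolation.
const alwaysLoggedIn = () => true;
const neverLoggedIn = () => false;
```

With the stub, a failure in the real login code no longer cascades into every test that merely needs a logged-in user.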
What is the ideal situation for a given test?
Independent, discrete, specific tests.
- e.g. one test (one suite of tests) for each status code in the API
What is a reason that, in reality, APIs are not always completely specified?
Minimum Viable Product (MVP) — only the functionality needed for the current increment gets fully specified.