Automated Testing - CL2 Flashcards
Automated testing concept
In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.
There are many approaches to test automation; the following two general approaches are the most widely used:
1. Graphical user interface testing. A testing framework that generates user interface events such as keystrokes and mouse clicks, and observes the changes that result in the user interface, to validate that the observable behavior of the program is correct.
2. API-driven testing. A testing framework that uses a programming interface to the application to validate the behaviour under test. API-driven testing typically bypasses the application user interface altogether. It can also test (usually public) interfaces to classes, modules, or libraries with a variety of input arguments to validate that the returned results are correct.
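As a minimal illustration of the API-driven approach, the sketch below (JUnit 5 plus the JDK's java.net.http client) calls a hypothetical REST endpoint directly and asserts on the response, bypassing the user interface entirely; the URL, the endpoint, and the expected body fragment are illustrative assumptions, not taken from any real application.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

// API-driven check: talks to the application's programming interface (an assumed
// REST endpoint) rather than its UI.
class UserApiTest {

    @Test
    void existingUserIsReturnedWithItsId() throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:8080/api/users/42")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());          // request accepted
        assertTrue(response.body().contains("\"id\":42")); // payload matches the requested user
    }
}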
One way to generate test cases automatically is model-based testing through use of a model of the system for test case generation, but research continues into a variety of alternative methodologies for doing so. In some cases, the model-based approach enables non-technical users to create automated business test cases in plain English so that no programming of any kind is needed in order to configure them for multiple operating systems, browsers, and smart devices.
What to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make. A multi-vocal literature review of 52 practitioner and 26 academic sources found that the five main factors to consider in the test automation decision are: 1) the System Under Test (SUT), 2) the types and numbers of tests, 3) the test tool, 4) human and organizational topics, and 5) cross-cutting factors. The most frequently identified individual factors in the study were the need for regression testing, economic factors, and the maturity of the SUT.
A growing trend in software development is the use of unit testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. Test cases describe tests that need to be run on the program to verify that the program runs as expected.
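For example, a minimal JUnit 5 test case for a hypothetical Calculator class (both the class and its methods are assumed here for illustration) expresses one expected behaviour per test method:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// xUnit-style unit tests: each method verifies one section of code under one circumstance.
class CalculatorTest {

    @Test
    void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3)); // expected result for known inputs
    }

    @Test
    void divisionByZeroIsRejected() {
        assertThrows(ArithmeticException.class, () -> new Calculator().divide(1, 0));
    }
}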
Test automation, mostly using unit testing, is a key feature of extreme programming and agile software development, where it is known as test-driven development (TDD) or test-first development. Unit tests can be written to define the functionality before the code is written. However, these unit tests evolve and are extended as coding progresses, issues are discovered and the code is subjected to refactoring. Only when all the tests for all the demanded features pass is the code considered complete. Proponents argue that it produces software that is both more reliable and less costly than code that is tested by manual exploration. It is considered more reliable because the code coverage is better, and because it is run constantly during development rather than once at the end of a waterfall development cycle. The developer discovers defects immediately upon making a change, when it is least expensive to fix. Finally, code refactoring is safer when unit testing is used; transforming the code into a simpler form with less code duplication, but equivalent behavior, is much less likely to introduce new defects when the refactored code is covered by unit tests.
Some software testing tasks (such as extensive low-level interface regression testing) can be laborious and time-consuming to do manually. In addition, a manual approach might not always be effective in finding certain classes of defects. Test automation offers a possibility to perform these types of testing effectively.
Once automated tests have been developed, they can be run quickly and repeatedly. Many times, this is a cost-effective method for regression testing of software products that have a long maintenance life. Even minor patches over the lifetime of the application can break existing features that were working at an earlier point in time.
Test automation tools can be expensive and are usually employed in combination with manual testing. Test automation can be made cost-effective in the long term, especially when used repeatedly in regression testing. A good candidate for test automation is a test case for a common flow of an application, as it needs to be executed (as a regression test) every time an enhancement is made to the application. Test automation reduces the effort associated with manual testing, but manual effort is still needed to develop and maintain automated checks and to review test results.
In automated testing, the test engineer or software quality assurance person must have software coding ability, since the test cases are written in the form of source code which, when run, produces output according to the assertions that are part of it. Some test automation tools allow test authoring to be done with keywords instead of code, so no programming is required.
Link:
https://en.wikipedia.org/wiki/Test_automation
Functional testing concept
Functional testing is a quality assurance (QA) process and a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output, and internal program structure is rarely considered (unlike white-box testing). Functional testing is conducted to evaluate the compliance of a system or component with specified functional requirements. Functional testing usually describes what the system does.
Since functional testing is a type of black-box testing, the software’s functionality can be tested without knowing the internal workings of the software. This means that testers do not need to know programming languages or how the software has been implemented. This, in turn, could lead to reduced developer-bias (or confirmation bias) in testing since the tester has not been involved in the software’s development.
Functional testing does not imply that you are testing a function (method) of your module or class. Functional testing tests a slice of functionality of the whole system.
Functional testing differs from system testing in that functional testing “verifies a program by checking it against … design document(s) or specification(s)”, while system testing “validate[s] a program by checking it against the published user or system requirements” (Kaner, Falk, Nguyen 1999, p. 52).
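As a sketch of what this looks like in practice, the black-box test below derives its cases purely from an assumed written specification ("orders above 100.00 ship for free, otherwise shipping costs 4.99"); the ShippingCalculator class name is likewise an assumption, and nothing in the test depends on how it is implemented internally.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Functional (black-box) test: inputs and expected outputs come from the
// specification, not from knowledge of the code.
class ShippingSpecificationTest {

    @Test
    void ordersAboveTheThresholdShipForFree() {
        assertEquals(0.00, new ShippingCalculator().shippingCost(150.00), 0.001);
    }

    @Test
    void ordersAtOrBelowTheThresholdPayTheFlatRate() {
        assertEquals(4.99, new ShippingCalculator().shippingCost(100.00), 0.001);
    }
}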
Functional testing has many types:
- Smoke testing
- Sanity testing
- Regression testing
- Usability testing
- Smoke Testing
In computer programming and software testing, smoke testing (also confidence testing, sanity testing, build verification test (BVT) and build acceptance test) is preliminary testing to reveal simple failures severe enough to, for example, reject a prospective software release. Smoke tests are a subset of test cases that cover the most important functionality of a component or system, used to aid assessment of whether main functions of the software appear to work correctly. When used to determine if a computer program should be subjected to further, more fine-grained testing, a smoke test may be called an intake test. Alternately, it is a set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. In the DevOps paradigm, use of a BVT step is one hallmark of the continuous integration maturity stage.
For example, a smoke test may address basic questions like “does the program run?”, “does the user interface open?”, or “does clicking the main button do anything?” The process of smoke testing aims to determine whether the application is so badly broken as to make further immediate testing unnecessary. As the book Lessons Learned in Software Testing puts it, “smoke tests broadly cover product features in a limited time […] if key features don’t work or if key bugs haven’t yet been fixed, your team won’t waste further time installing or testing”.
Because smoke tests run quickly, they give faster feedback than more extensive test suites, which naturally take much longer to run.
A daily build and smoke test is among industry best practices. Smoke testing is also done by testers before accepting a build for further testing. Microsoft claims that after code reviews, “smoke testing is the most cost-effective method for identifying and fixing defects in software”.
One can perform smoke tests either manually or using an automated tool. In the case of automated tools, the process that generates the build will often initiate the testing.
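A minimal automated smoke check, assuming the build exposes a health endpoint at the URL shown (the endpoint and the "smoke" tag are illustrative assumptions), might simply verify that a freshly deployed build starts and responds:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Build verification sketch: answers "does the program run at all?".
@Tag("smoke")
class BuildSmokeTest {

    @Test
    void applicationStartsAndAnswersItsHealthCheck() throws Exception {
        HttpResponse<Void> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:8080/health")).GET().build(),
                HttpResponse.BodyHandlers.discarding());

        assertEquals(200, response.statusCode()); // anything else: reject the build before deeper testing
    }
}

The build pipeline can run only the tests carrying the "smoke" tag immediately after each build and gate further testing on the result.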
Smoke tests can be functional tests or unit tests. Functional tests exercise the complete program with various inputs. Unit tests exercise individual functions, subroutines, or object methods. Functional tests may comprise a scripted series of program inputs, possibly even with an automated mechanism for controlling mouse movements. Unit tests can be implemented either as separate functions within the code itself, or else as a driver layer that links to the code without altering the code being tested.
- Sanity Testing
A sanity test or sanity check is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true. It is a simple check to see if the produced material is rational (that the material’s creator was thinking rationally, applying sanity). The point of a sanity test is to rule out certain classes of obviously false results, not to catch every possible error. A rule-of-thumb may be checked to perform the test. The advantage of a sanity test, over performing a complete or rigorous test, is speed.
In arithmetic, for example, when multiplying by 9, using the divisibility rule for 9 to verify that the sum of the digits of the result is divisible by 9 is a sanity test: it will not catch every multiplication error, but it is a quick and simple way to discover many possible errors.
In computer science, a sanity test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that part of the system or methodology works roughly as expected. This is often prior to a more exhaustive round of testing.
- Regression Testing
Regression testing (rarely, non-regression testing) is re-running functional and non-functional tests to ensure that previously developed and tested software still performs as expected after a change. If not, that would be called a regression. Changes that may require regression testing include bug fixes, software enhancements, configuration changes, and even substitution of electronic components. As regression test suites tend to grow with each found defect, test automation is frequently involved. Sometimes a change impact analysis is performed to determine an appropriate subset of tests (non-regression analysis).
- Usability Testing
Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system. It is more concerned with the design intuitiveness of the product and is tested with users who have no prior exposure to it. Such testing is paramount to the success of an end product, as a fully functioning app that creates confusion amongst its users will not last long. This is in contrast with usability inspection methods, where experts use different methods to evaluate a user interface without involving users.
Usability testing focuses on measuring a human-made product’s capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing are food, consumer products, web sites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human–computer interaction studies attempt to formulate universal principles.
Link:
https://en.wikipedia.org/w/index.php?title=Functional_testing
Integration testing concept
Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. Integration testing is conducted to evaluate the compliance of a system or component with specified functional requirements. It occurs after unit testing and before validation testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
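A small sketch of the idea, using assumed module names (InMemoryUserRepository and RegistrationService are illustrative, not from the text above): two units that were tested in isolation are combined and exercised as a group.

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Integration test: verifies the interaction between two already unit-tested modules.
class RegistrationIntegrationTest {

    @Test
    void userRegisteredThroughTheServiceIsVisibleInTheRepository() {
        InMemoryUserRepository repository = new InMemoryUserRepository();  // assumed module A
        RegistrationService service = new RegistrationService(repository); // assumed module B, depends on A

        service.register("ada@example.com");

        // The assertion crosses the module boundary: data written via the service
        // must be readable through the repository it was integrated with.
        assertTrue(repository.findByEmail("ada@example.com").isPresent());
    }
}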
Link:
https://en.wikipedia.org/wiki/Integration_testing
End-to-end testing concept
End-to-end testing is a type of software testing that not only validates the software system under test but also checks its integration with external interfaces. Hence the name "end-to-end". The purpose of end-to-end testing is to exercise a complete production-like scenario.
Along with the software system, it also validates batch/data processing from other upstream/downstream systems.
End-to-end testing is usually executed after functional and system testing.
It uses actual production-like data and a test environment to simulate real-time settings. End-to-end testing is also called chain testing.
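One common way to automate such a scenario is to drive the deployed application through its real user interface, for example with Selenium WebDriver; the sketch below is only an illustration, and the staging URL, element locators, and checkout flow are all assumptions.

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// End-to-end sketch: exercises the whole chain (UI, back end, database, integrations)
// the way a real user would.
class CheckoutEndToEndTest {

    @Test
    void userCanLogInAndPlaceAnOrder() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://staging.example.com/login");
            driver.findElement(By.id("username")).sendKeys("demo-user");
            driver.findElement(By.id("password")).sendKeys("demo-password");
            driver.findElement(By.id("login-button")).click();

            driver.get("https://staging.example.com/checkout");
            driver.findElement(By.id("place-order")).click();

            assertTrue(driver.getPageSource().contains("Order confirmed"));
        } finally {
            driver.quit(); // release the browser even if an assertion fails
        }
    }
}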
Links:
https://www.guru99.com/end-to-end-testing.html
https://docs.microsoft.com/en-us/previous-versions/cc194885(v=msdn.10)?redirectedfrom=MSDN
Test driven development concept
Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the software is improved so that the tests pass. This is opposed to software development that allows software to be added that is not proven to meet requirements.
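A compressed example of one such cycle, using an assumed Account requirement (the class name and behaviour are illustrative):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Red: this test is written first and fails because Account does not exist yet.
class AccountTest {

    @Test
    void newAccountStartsAtZeroAndAcceptsDeposits() {
        Account account = new Account();
        account.deposit(50);
        assertEquals(50, account.balance());
    }
}

// Green: the simplest production code that makes the test pass.
class Account {
    private int balance = 0;

    void deposit(int amount) {
        balance += amount;
    }

    int balance() {
        return balance;
    }
}

// Refactor: improve the design (naming, duplication, validation) while the test stays green.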
American software engineer Kent Beck, who is credited with having developed or “rediscovered” the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.
Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999, but more recently has created more general interest in its own right.
Programmers also apply the concept to improving and debugging legacy code developed with older techniques.
Link:
https://en.wikipedia.org/wiki/Test-driven_development
Performance testing concept
In software quality assurance, performance testing is in general a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Performance testing, a subset of performance engineering, is a computer science practice which strives to build performance standards into the implementation, design and architecture of a system.
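As a rough sketch of the idea (dedicated tools such as JMeter or Gatling are normally used instead), the JDK-only probe below fires a small concurrent workload at an assumed endpoint and reports average and maximum response times; the URL and user/request counts are illustrative assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Crude responsiveness probe: N simulated users each send M requests and the
// per-request latencies are aggregated.
public class ResponseTimeProbe {

    public static void main(String[] args) throws Exception {
        int users = 20;
        int requestsPerUser = 10;
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/api/search?q=test")).GET().build();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<List<Long>>> perUserTimings = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            perUserTimings.add(pool.submit(() -> {
                List<Long> timings = new ArrayList<>();
                for (int i = 0; i < requestsPerUser; i++) {
                    long start = System.nanoTime();
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                    timings.add((System.nanoTime() - start) / 1_000_000); // milliseconds
                }
                return timings;
            }));
        }

        List<Long> all = new ArrayList<>();
        for (Future<List<Long>> future : perUserTimings) {
            all.addAll(future.get());
        }
        pool.shutdown();

        double average = all.stream().mapToLong(Long::longValue).average().orElse(0);
        long max = all.stream().mapToLong(Long::longValue).max().orElse(0);
        System.out.printf("requests=%d average=%.1f ms max=%d ms%n", all.size(), average, max);
    }
}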
Link:
https://en.wikipedia.org/wiki/Software_performance_testing
Test automation framework concept
A test automation framework is an integrated system that sets the rules of automation of a specific product. This system integrates the function libraries, test data sources, object details and various reusable modules. These components act as small building blocks which need to be assembled to represent a business process. The framework provides the basis of test automation and simplifies the automation effort.
The main advantage of such a framework (a set of assumptions, concepts, and tools that support automated software testing) is the low cost of maintenance. If any test case changes, only the test case file needs to be updated; the driver script and startup script remain the same. Ideally, there is no need to update the scripts when the application changes.
Choosing the right framework/scripting technique helps keep costs low. The costs associated with test scripting come from development and maintenance effort, and the scripting approach used during test automation affects those costs.
Various framework/scripting techniques are generally used:
1. Linear (procedural code, possibly generated by tools like those that use record and playback)
2. Structured (uses control structures, typically ‘if-else’, ‘switch’, ‘for’, and ‘while’ conditions/statements)
3. Data-driven (data is persisted outside of the tests in a database, spreadsheet, or other mechanism; see the sketch after this list)
4. Keyword-driven
5. Hybrid (two or more of the patterns above are used)
6. Agile automation framework
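As an example of the data-driven technique from item 3 above, the sketch below uses JUnit 5's junit-jupiter-params module; the DiscountCalculator class and the discounts.csv resource (a two-column file of order totals and expected discounts) are assumptions made for illustration.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvFileSource;

// Data-driven test: the logic is written once, the data lives outside the test
// (e.g. src/test/resources/discounts.csv with a header row "orderTotal,expectedDiscount").
class DiscountDataDrivenTest {

    @ParameterizedTest
    @CsvFileSource(resources = "/discounts.csv", numLinesToSkip = 1)
    void discountMatchesTheTableRow(double orderTotal, double expectedDiscount) {
        assertEquals(expectedDiscount, new DiscountCalculator().discountFor(orderTotal), 0.001);
    }
}

Adding or changing a row in the CSV file changes the checks without touching the test code, which is what keeps maintenance cost low in this style.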
The testing framework is responsible for:
1. defining the format in which to express expectations
2. creating a mechanism to hook into or drive the application under test
3. executing the tests
4. reporting results
Link:
https://en.wikipedia.org/wiki/Test_automation#Framework_approach_in_automation
Behavior-Driven Development
In software engineering, behavior-driven development (BDD) is an Agile software development process that encourages collaboration among developers, QA and non-technical or business participants in a software project. It encourages teams to use conversation and concrete examples to formalize a shared understanding of how the application should behave. It emerged from test-driven development (TDD). Behavior-driven development combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software development and management teams with shared tools and a shared process to collaborate on software development.
Although BDD is principally an idea about how software development should be managed by both business interests and technical insight, the practice of BDD does assume the use of specialized software tools to support the development process. Although these tools are often developed specifically for use in BDD projects, they can be seen as specialized forms of the tooling that supports test-driven development. The tools serve to add automation to the ubiquitous language that is a central theme of BDD.
BDD is largely facilitated through the use of a simple domain-specific language (DSL) using natural language constructs (e.g., English-like sentences) that can express the behaviour and the expected outcomes. Test scripts have long been a popular application of DSLs with varying degrees of sophistication. BDD is considered an effective technical practice especially when the “problem space” of the business problem to solve is complex.
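For example, a Gherkin scenario written in such a DSL is bound to executable code through step definitions; the sketch below uses Cucumber-JVM annotations, and the scenario text, the TestApplication harness, and the page contents are all assumptions made for illustration.

import static org.junit.jupiter.api.Assertions.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Step definitions for an assumed feature file:
//   Scenario: Returning customer logs in
//     Given a registered user "ada@example.com"
//     When she logs in with the correct password
//     Then she sees her account dashboard
// The plain-English steps are the shared language; these methods wire them to code.
public class LoginSteps {

    private final TestApplication app = new TestApplication(); // assumed test harness

    @Given("a registered user {string}")
    public void a_registered_user(String email) {
        app.createUser(email, "correct-password");
    }

    @When("she logs in with the correct password")
    public void she_logs_in_with_the_correct_password() {
        app.login("ada@example.com", "correct-password");
    }

    @Then("she sees her account dashboard")
    public void she_sees_her_account_dashboard() {
        assertTrue(app.currentPage().contains("Dashboard"));
    }
}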
Link:
https://en.wikipedia.org/wiki/Behavior-driven_development