3. The Generic Test Automation Architecture Flashcards
3.1 Introduction to gTAA
gTAA presents:
- Layers
- Components
- Interfaces
which are then further redefined into the concrete TAA
3.1 Introduction to gTAA
The gTAA allows a structured and modular approach
to building a TAS by:
- Defining the concept space, layers, services and interfaces
- Supporting simplified components for the effective development of test automation
- Re-using test automation components for different or evolving TASs
- Easing the maintenance and evolution of TASs
- Defining the essential features for a user of a TAS
3.1 Introduction to gTAA
A TAA complies with the following design principles (illustrated in the sketch after this list):
- Single responsibility - every TAS component must have a single responsibility; it is in charge of exactly one thing
- Extension (open/closed principle by B. Meyer) - every TAS component must be open for extension but closed for modification
- Replacement (substitution principle by B. Liskov) - every TAS component must be replaceable without affecting the behavior of the TAS
- Component segregation (interface segregation principle by R.C. Martin) - it is better to have more specific components than one general, multi-purpose component
- Dependency inversion - the components of a TAS must depend on abstractions rather than on low-level details
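To make these principles concrete, here is a minimal Python sketch (the class and function names are illustrative assumptions, not syllabus content): a test execution component depends on an abstract reporter interface (dependency inversion), new report formats can be added without modifying the executor (open/closed), and any reporter can stand in for another (substitution), while each reporter keeps a single responsibility.

```python
from abc import ABC, abstractmethod


class Reporter(ABC):
    """Abstraction the execution component depends on (dependency inversion)."""

    @abstractmethod
    def report(self, test_name: str, passed: bool) -> None:
        ...


class ConsoleReporter(Reporter):
    """One concrete reporter; its single responsibility is console output."""

    def report(self, test_name: str, passed: bool) -> None:
        print(f"{test_name}: {'PASS' if passed else 'FAIL'}")


class HtmlReporter(Reporter):
    """Added later without modifying TestExecutor (open/closed, substitution)."""

    def report(self, test_name: str, passed: bool) -> None:
        with open("report.html", "a", encoding="utf-8") as f:
            f.write(f"<p>{test_name}: {'PASS' if passed else 'FAIL'}</p>\n")


class TestExecutor:
    """Execution component: runs tests and delegates reporting to the abstraction."""

    def __init__(self, reporter: Reporter) -> None:
        self.reporter = reporter

    def run(self, test_name: str, test_fn) -> None:
        passed = bool(test_fn())
        self.reporter.report(test_name, passed)


if __name__ == "__main__":
    executor = TestExecutor(ConsoleReporter())  # swap in HtmlReporter() without code changes
    executor.run("login_accepts_valid_credentials", lambda: 1 + 1 == 2)
```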
3.1.1 Overview of the gTAA
The gTAA is structured into horizontal layers:
- Test generation
- Test definition
- Test execution
- Test adaptation
It also has interfaces for:
- Project management
- Configuration management
- Test management
3.1.2 Test Generation Layer
Test Generation Layer
Consists of tool support for the following:
- Manually designing test cases
- Developing, capturing or deriving test data
- Automatically generating test cases from models that define the SUT and/or its environment
Components in this layer are used to:
- Edit and navigate test suite structures
- Relate test cases to test objectives or SUT requirements
- Document the test design
For automated test generation the following capabilities may also be included:
- Ability to model the SUT, its environment, and/or test system
- Ability to define test directives and to configure/parametrize test generation algorithms
- Ability to trace the generated tests back to the model (elements); a minimal model-based generation sketch follows this list
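As a purely illustrative sketch (the state names, transitions, and helper function below are assumptions), automated test generation from a simple state-transition model of the SUT could look like this; each generated test records which model transition it covers, which supports tracing back to the model elements.

```python
from itertools import count

# Assumed toy state-transition model of the SUT: (source_state, action) -> target_state
MODEL = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "open_account"): "account_view",
    ("logged_in", "logout"): "logged_out",
    ("account_view", "logout"): "logged_out",
}


def generate_transition_tests(model):
    """Derive one test case per model transition (a simple coverage criterion)."""
    test_id = count(1)
    for (source, action), target in model.items():
        yield {
            "id": f"TC_{next(test_id):03d}",
            "steps": [f"bring SUT to state '{source}'", f"perform '{action}'"],
            "expected": f"SUT is in state '{target}'",
            "traces_to": (source, action, target),  # link back to the model element
        }


if __name__ == "__main__":
    for tc in generate_transition_tests(MODEL):
        print(tc["id"], tc["steps"], "->", tc["expected"])
```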
3.1.3 Test Definition Layer
Test Definition Layer
Consists of tool support for the following:
- Specifying test cases (at a high and/or low level)
- Defining test data for low-level test cases
- Specifying test procedures for a test case or a set of test cases
- Defining test scripts for the execution of the test cases
- Providing access to test libraries as needed
The components in this layer are used to:
- Partition/constrain, parametrize or instantiate test data (a data-driven definition sketch follows this list)
- Specify test sequences or fully-fledged test behaviors, to parametrize and/or to group them
- Document the test data, test cases and/or test procedures
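As a hedged illustration of separating test data from test logic in this layer (the field names and the stand-in procedure are assumptions for the example), a data-driven test definition keeps the data in a table and instantiates the same test procedure with each row.

```python
import csv
import io

# Assumed test data table: each row instantiates the same test procedure
TEST_DATA_CSV = """username,password,expected
alice,correct-horse,login_ok
alice,wrong-pass,login_rejected
,no-user,login_rejected
"""


def login_procedure(username, password):
    """Stand-in for the real test procedure executed against the SUT."""
    if username == "alice" and password == "correct-horse":
        return "login_ok"
    return "login_rejected"


def run_data_driven():
    for row in csv.DictReader(io.StringIO(TEST_DATA_CSV)):
        actual = login_procedure(row["username"], row["password"])
        status = "PASS" if actual == row["expected"] else "FAIL"
        print(f"{row['username'] or '<empty>'}: {status}")


if __name__ == "__main__":
    run_data_driven()
```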
3.1.4 Test Execution Layer
Test Execution Layer
Consists of tool support for the following:
- Executing test cases automatically
- Logging the test case executions
- Reporting the test results
This layer may consist of components that provide the following capabilities:
- Set up and tear down the SUT for test execution
- Set up and tear down test suites
- Configure and parametrize the test setup
- Interpret both test data and test cases and transform them into executable scripts
- Instrument the test system and/or the SUT for logging of test execution and/or fault injection
- Analyze the SUT responses during test execution to steer subsequent test runs
- Validate SUT responses to automatically determine test case execution results (a minimal runner sketch follows this list)
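A minimal sketch of what the setup/teardown, logging and validation responsibilities above can look like in code (the fixture name, logging format and stand-in SUT are assumptions); in practice, frameworks such as pytest or JUnit provide these capabilities.

```python
import logging
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("tas.execution")


@contextmanager
def sut_session():
    """Set up the SUT before the test and tear it down afterwards."""
    log.info("setting up SUT")
    sut = {"state": "ready"}  # stand-in for starting/configuring the real SUT
    try:
        yield sut
    finally:
        log.info("tearing down SUT")
        sut["state"] = "stopped"


def run_test_case(name, stimulus, expected):
    """Execute one test case, log the execution, and validate the response."""
    with sut_session() as sut:
        log.info("executing %s with stimulus %r", name, stimulus)
        response = stimulus  # stand-in for the SUT's actual response
        verdict = "PASS" if response == expected else "FAIL"
        log.info("%s: %s", name, verdict)
        return verdict


if __name__ == "__main__":
    run_test_case("echo_returns_input", stimulus="ping", expected="ping")
```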
3.1.5 Test Adaptation Layer
Test Adaptation Layer
Consists of tool support for the following:
- Controlling the test harness
- Interacting with the SUT
- Monitoring the SUT
- Simulating or emulating the SUT environment
The test adaptation layer provides the following functionality:
- Mediating between the technology-neutral test definitions and specific technology requirements of the SUT and test devices
- Applying different technology-specific adaptors to interact with the SUT (an adaptor sketch follows this list)
- Distributing the test execution across multiple test devices/test interfaces or executing tests locally
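For illustration only (the interface and adaptor classes below are assumptions, not syllabus content), a technology-neutral test step can be mediated to different SUT technologies through interchangeable adaptors, so the upper layers stay unchanged.

```python
from abc import ABC, abstractmethod


class SutAdaptor(ABC):
    """Technology-neutral interface the upper layers talk to."""

    @abstractmethod
    def send(self, command: str) -> str:
        ...


class RestAdaptor(SutAdaptor):
    """Adaptor for a SUT reached via an HTTP/REST API (sketch only)."""

    def send(self, command: str) -> str:
        # A real adaptor would issue an HTTP request to the SUT here.
        return f"rest-response-to:{command}"


class GuiAdaptor(SutAdaptor):
    """Adaptor for a SUT driven through its GUI (sketch only)."""

    def send(self, command: str) -> str:
        # A real adaptor would drive the GUI via a GUI automation tool here.
        return f"gui-response-to:{command}"


def execute_step(adaptor: SutAdaptor, step: str) -> str:
    """The execution layer stays unchanged regardless of the chosen adaptor."""
    return adaptor.send(step)


if __name__ == "__main__":
    for adaptor in (RestAdaptor(), GuiAdaptor()):
        print(execute_step(adaptor, "login alice"))
```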
3.1.6 Configuration Management of a TAS
Configuration Management of a TAS
May need to include:
- Test models
- Test definitions/specifications including test data, test cases and components
- Test scripts
- Test execution engines and supplementary tools and components
- Test adaptors for the SUT
- Simulators and emulators for the SUT environment
- Test results and reports
3.1.7 Project Management of a TAS
Project Management of a TAS
- The TAE needs to perform tasks for all phases of the SDLC
- The environment of the TAS should be designed such that information (metrics) can be easily extracted or automatically reported
3.1.8 TAS Support for Test Management
Support for Test Management
- A TAS must support the test management of the SUT
- Test reports, test logs and test results need to be easily extracted or automatically provided to the test management of the SUT
3.2 TAA Design | 3.2.1 Introduction to TAA Design
Principal activities required to design a TAA:
- Capture requirements needed to define an appropriate TAA
- Compare and contrast different design/architecture approaches
- Identify areas where abstraction can deliver benefits
- Understand SUT technologies and how these interconnect with the TAS
- Understand the SUT environment
- Estimate the time and complexity for a given testware architecture implementation
- Consider the ease of use for a given testware architecture implementation
3.2.1 Introduction to TAA Design
The requirements for a TA approach need to consider the following:
- Which activity or phase of the test process should be automated
- Which test level should be supported
- Which type of test should be supported
- Which test role should be supported
- Which software product, software product line or software product family should be supported
- Which SUT technologies should be supported
3.2.1 Introduction to TAA Design
Different Design/Architecture Approach
- Considerations for the test generation layer:
- selection of manual or automated test generation
- selection of (for example) requirements-based, data-based, scenario-based or behavior-based test generation
- selection of test generation strategies
- choice of the test selection strategy
3.2.1 Introduction to TAA Design
Different Design/Architecture Approach
- Considerations for the test definition layer:
- selection of data-driven, keyword-driven, pattern-based or model-driven test definition (a keyword-driven sketch follows this list)
- selection of notation for test definition
- selection of style guides and guidelines for the definition of high-quality tests
- selection of test case repositories
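As a hedged example of one of these options (the keyword names and their implementations are invented for illustration), a keyword-driven test definition maps business-readable keywords to executable actions:

```python
# Assumed keyword implementations; in a real TAS these would drive the SUT
def open_application(name):
    print(f"opening {name}")


def enter_credentials(user, password):
    print(f"entering credentials for {user}")


def verify_message(expected):
    print(f"verifying message '{expected}'")


KEYWORDS = {
    "Open Application": open_application,
    "Enter Credentials": enter_credentials,
    "Verify Message": verify_message,
}

# A keyword-driven test case: readable rows of (keyword, arguments)
TEST_CASE = [
    ("Open Application", ["web shop"]),
    ("Enter Credentials", ["alice", "correct-horse"]),
    ("Verify Message", ["Welcome, alice"]),
]


def run_keyword_test(test_case):
    for keyword, args in test_case:
        KEYWORDS[keyword](*args)  # dispatch each row to its implementation


if __name__ == "__main__":
    run_keyword_test(TEST_CASE)
```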
3.2.1 Introduction to TAA Design
Different Design/Architecture Approach
- Considerations for the test execution layer:
- selection of test execution tool
- selection of interpretation or compilation approach for implementing test procedures
- selection of implementation technology for implementing test procedures
- selection of helper libraries to ease test execution
3.2.1 Introduction to TAA Design
Different Design/Architecture Approach
- Considerations for the test adaptation layer:
- selection of test interfaces to the SUT
- selection of tools to stimulate and observe the test interfaces
- selection of tools to monitor the SUT during test execution
- selection of tools to trace test execution
3.2.1 Introduction to TAA Design
Benefits of Abstraction
Part I
- Abstraction in a TAA enables technology independence, in that the same test suite can be used in different test environments and on different target technologies
- The portability of test artifacts is increased
- Abstraction improves maintainability and adaptability to new or evolving SUT technologies
- Abstraction helps to make the TAA more accessible to non-technicians, as test suites can be documented and explained at a higher level
- This improves readability and understandability
3.2.1 Introduction to TAA Design
Benefits of Abstraction:
Part II
- The TAE must be aware that there are trade-offs between sophisticated and straightforward implementations of a TAA with respect to overall functionality, maintainability, and expandability
- A decision on which abstraction to use in a TAA needs to take into account these trade-offs
- The more abstraction is used in a TAA, the more flexible it is with respect to further evolution or transitioning to new approaches or technologies
- This comes at the cost of larger initial investments, but can pay off in the long run
- It may also lead to lower performance of the TAS
3.2.1 Introduction to TAA Design
Benefits of Abstraction:
Part III
The TAE provides input to the ROI analysis by supplying technical evaluations and comparisons of different test automation architectures and approaches with respect to:
- timing
- cost
- efforts
- benefits
3.2.1 Introduction to TAA Design
Interconnection of SUT Technologies with TAS
Part I
Access to the test interfaces of the SUT is central to any automated test execution; it can be available at the following levels:
- software level
- API level
- protocol level
- service level
3.2.1 Introduction to TAA Design
Interconnection of SUT Technologies with TAS
Part II
Paradigms of interaction (whenever the TAS and SUT are separated by APIs, protocols or services) include the following:
- event-driven paradigm, which drives the interaction via events being exchanged on an event bus (a minimal event-bus sketch follows this list)
- client-server paradigm, which drives the interaction via service invocation from service requestors to service providers
- peer-to-peer paradigm, which drives the interaction via service invocation from either peer
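As a minimal sketch of the event-driven paradigm (the bus implementation and topic names are assumptions for illustration), test components and the SUT exchange events through a shared bus instead of calling each other directly:

```python
from collections import defaultdict


class EventBus:
    """Tiny in-process event bus: publishers and subscribers stay decoupled."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)


if __name__ == "__main__":
    bus = EventBus()
    # The test component observes SUT events instead of invoking the SUT directly
    bus.subscribe("sut.response", lambda payload: print("test observed:", payload))
    # A stand-in for the SUT publishing an event onto the bus
    bus.publish("sut.response", {"status": "ok", "order_id": 42})
```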
3.2.1 Introduction to TAA Design
SUT Environment
SUT can be:
- SUT as standalone software
- SUT as software that works in relation to other:
- software (systems of systems)
- hardware (e.g. embedded systems)
- environmental components (in this case, a TAS simulates or emulates the SUT environment as part of an automated test setup)
3.2.1 Introduction to TAA Design
Time and complexity of TAS
Methods for estimations and examples include the following:
- analogy-based estimation such as function points, three-point estimation, Wideband Delphi and expert estimation (a three-point estimation example follows this list)
- estimation by use of work breakdown structures such as those found in management software or project templates
- parametric estimation such as Constructive Cost Model (COCOMO)
- size-based estimation such as Function Point Analysis, Story Point Analysis or Use Case Analysis
- group estimations such as Planning Poker
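For example, a three-point (PERT-style) estimate combines an optimistic, a most likely, and a pessimistic value; the effort figures below are invented purely to show the arithmetic.

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """Weighted (PERT) three-point estimate: E = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6


if __name__ == "__main__":
    # Invented effort figures (person-days) for implementing one TAS component
    print(three_point_estimate(optimistic=5, most_likely=8, pessimistic=17))  # -> 9.0
```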
3.2.1 Introduction to TAA Design
TAS and Ease of Use
Usability issues for a TAS include, but are not limited to:
- tester-oriented design
- ease of use of the TAS
- TAS support for other roles in software development, quality assurance, and project management
- effective organization, navigation, and search in/with the TAS
- useful documentation, manuals, and help text for the TAS
- practical reporting by and about the TAS
- iterative design to address TAS feedback and empirical insights
3.2.2 Approaches for Automating Test Cases
Approaches for Automating Test Cases
- TAE implements test cases directly into automated test scripts. This option is the least recommended as it lacks abstraction and increases the maintenance load
- TAE designs test procedures and transforms them into automated test scripts. This option has abstraction but lacks automation to generate test scripts
- TAE uses a tool to translate test procedures into automated test scripts. This option combines both abstraction and automated script generation (a translation sketch follows this list)
- TAE uses a tool that generates automated procedures and/or translates the test scripts directly from models. This option has the highest degree of automation
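As a hedged illustration of the tool-supported translation option (the procedure format, action names, and generated code are assumptions), a simple generator can turn an abstract, technology-neutral test procedure into an executable script:

```python
# Assumed technology-neutral test procedure (e.g., exported from a test management tool)
PROCEDURE = {
    "name": "login_with_valid_credentials",
    "steps": [
        {"action": "open", "target": "login_page"},
        {"action": "type", "target": "username", "value": "alice"},
        {"action": "type", "target": "password", "value": "correct-horse"},
        {"action": "click", "target": "submit"},
        {"action": "assert_text", "target": "banner", "value": "Welcome, alice"},
    ],
}

# Mapping from abstract actions to concrete script statements (illustrative only)
TEMPLATES = {
    "open": "    driver.open('{target}')",
    "type": "    driver.type('{target}', '{value}')",
    "click": "    driver.click('{target}')",
    "assert_text": "    assert driver.text_of('{target}') == '{value}'",
}


def generate_script(procedure):
    """Translate each abstract step into a line of the executable test script."""
    lines = [f"def test_{procedure['name']}(driver):"]
    for step in procedure["steps"]:
        lines.append(TEMPLATES[step["action"]].format(**step))
    return "\n".join(lines)


if __name__ == "__main__":
    print(generate_script(PROCEDURE))
```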
3.2.2 Approaches for Automating Test Cases
Capture/Playback Approach
Part I
- Tools are used to capture interactions with the SUT while performing the sequence of actions as defined by a test procedure
- Inputs are captured; outputs may also be recorded for later checks
During the replay of events, there are various manual and automated output checking possibilities:
- Manual: the tester has to watch the SUT outputs for anomalies
- Complete: all system outputs that were recorded during capture must be reproduced by the SUT
- Exact: all system outputs that were recorded during capture must be reproduced by the SUT to the level of detail of the recording
- Checkpoints: only selected system outputs are checked at certain points for specified values (a checkpoint sketch follows this list)
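A minimal sketch of the checkpoint possibility (the recorded outputs and checkpoint positions are invented): instead of comparing every recorded output, the playback only validates selected outputs at defined points.

```python
# Invented outputs observed during playback, indexed by step number
PLAYBACK_OUTPUTS = {
    1: "login page shown",
    2: "spinner visible",
    3: "Welcome, alice",
    4: "dashboard rendered",
}

# Checkpoints: only these steps are checked against specified values
CHECKPOINTS = {
    1: "login page shown",
    3: "Welcome, alice",
}


def check_checkpoints(outputs, checkpoints):
    """Return PASS if every checkpoint matches the observed output at that step."""
    failures = [
        step for step, expected in checkpoints.items() if outputs.get(step) != expected
    ]
    return "PASS" if not failures else f"FAIL at steps {failures}"


if __name__ == "__main__":
    print(check_checkpoints(PLAYBACK_OUTPUTS, CHECKPOINTS))
```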
3.2.2 Approaches for Automating Test Cases
Capture/Playback Approach
Part II
Pros:
- The capture/playback approach can be used for SUTs on the GUI and/or API level
- Initially, it is easy to set up and use
Cons:
- Capture/playback scripts are hard to maintain and evolve because the captured SUT execution depends strongly on the SUT version from which the capture has been taken
- Implementation of the test cases (scripts) can only start when the SUT is available