Performance Testing Flashcards

1
Q

User-based objectives

A
  1. focus primarily on end-user satisfaction and business goals.
  2. are less concerned about feature types or how a product gets delivered.
2
Q

Technical objectives

A

focus on operational aspects and provide answers to questions regarding a system’s ability to scale, or the conditions under which degraded performance may become apparent.

3
Q

Key objectives of performance testing include

A
  1. identify potential risks
  2. find opportunities for improvement
  3. identify necessary changes
4
Q

Questions to stakeholders about the test

A
  1. What transactions will be executed in the performance test and what average response time is expected?
  2. What system metrics are to be captured (e.g., memory usage, network throughput) and what values are expected?
  3. What performance improvements are expected from these tests compared to previous test cycles?
5
Q

Performance Test Plan Contents

A
  1. Objective
  2. Test Objectives
  3. System Overview
  4. Types of Performance Testing to be Conducted
  5. Acceptance Criteria
  6. Test Data
  7. System Configuration
  8. Test Environment
  9. Test Tools
  10. Profiles
  11. Relevant Metrics
  12. Risks
6
Q

Objective (PTP)

A
  1. describes the goals, strategies and methods for the performance test
  2. enables a quantifiable answer to the central question of the adequacy and the readiness of the system to perform under load.
7
Q

Acceptance Criteria (PTP)

A
  1. response time is a user concern, throughput is a business concern, and resource utilization is a system concern
  2. AC should be set for all relevant measures and related back to the following as applicable:
    a. overall objectives
    b. SLAs
    c. Baseline values
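A minimal sketch of how such criteria might be checked automatically, assuming hypothetical metric names and limit values (in practice these come from the objectives, SLAs, or baseline):

```python
# Hypothetical acceptance criteria: each measured value is related back to a limit
# derived from an objective, SLA, or baseline (all numbers are illustrative).
acceptance_criteria = {
    "avg_response_time_ms": {"measured": 850, "limit": 1000},  # user concern
    "throughput_tps":       {"measured": 120, "limit": 100},   # business concern (minimum)
    "cpu_utilization_pct":  {"measured": 72,  "limit": 80},    # system concern
}

def passes(name: str, measured: float, limit: float) -> bool:
    # Throughput is "higher is better"; the other measures are "lower is better".
    return measured >= limit if name == "throughput_tps" else measured <= limit

for name, c in acceptance_criteria.items():
    verdict = "PASS" if passes(name, c["measured"], c["limit"]) else "FAIL"
    print(f"{name}: measured={c['measured']}, limit={c['limit']} -> {verdict}")
```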
8
Q

Baseline values

A

a set of metrics used to compare current and
previously achieved performance measurements. This enables particular performance improvements to be demonstrated and/or the achievement of test acceptance criteria to be confirmed.
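A minimal sketch of keeping and comparing a baseline between test cycles, assuming hypothetical metric names and a JSON file for persistence:

```python
import json

# Save the metrics from one test cycle so later cycles can be compared against them.
def save_baseline(metrics: dict, path: str = "baseline.json") -> None:
    with open(path, "w") as f:
        json.dump(metrics, f, indent=2)

# Return the per-metric difference between the current run and the stored baseline.
def compare_to_baseline(current: dict, path: str = "baseline.json") -> dict:
    with open(path) as f:
        baseline = json.load(f)
    return {m: current[m] - baseline[m] for m in baseline if m in current}

# Illustrative values: a later cycle shows improved response time and throughput.
save_baseline({"avg_response_time_ms": 950, "throughput_tps": 100})
print(compare_to_baseline({"avg_response_time_ms": 870, "throughput_tps": 115}))
```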

9
Q

Test Data can include:

A
  1. User account data
  2. User input data
  3. Database (e.g., a database pre-populated with data for use in testing)
10
Q

User Data creation process should address the following aspects:

A
  1. data extraction from production data
  2. importing data into the SUT
  3. creation of new data
  4. creation of backups that can be used to restore the data when new cycles of testing are performed
  5. data masking or anonymization, which adds risk to the performance tests because the masked data may not have the same characteristics as data seen in real-world use.
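A minimal sketch of the masking/anonymization step, with hypothetical field names and masking rules:

```python
import hashlib

# Anonymize extracted production records before importing them into the SUT.
# Masked values may no longer match real-world data characteristics, which is
# exactly the risk noted above (field names and rules are illustrative).
def mask_record(record: dict) -> dict:
    masked = dict(record)
    masked["name"] = "User-" + hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["email"] = hashlib.sha256(record["email"].encode()).hexdigest()[:12] + "@example.com"
    return masked

print(mask_record({"name": "Jane Doe", "email": "jane.doe@corp.example", "orders": 42}))
```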
11
Q

System Configuration (PTP)

A
  1. A description of the specific system architecture, including servers (e.g., web, database, load balancer)
  2. Definition of multiple tiers
  3. Specific details of computing hardware (e.g., CPU cores, RAM, Solid State Disks (SSD), Hard Disk Drives (HDD)), including versions
  4. Specific details of software (e.g., applications, operating systems, databases,
    services used to support the enterprise) including versions
  5. External systems that operate with the SUT, and their configuration and version (e.g., an e-commerce system with integration to NetSuite)
  6. SUT build / version identifier
12
Q

Test Environment

A

The test environment is often a separate environment that mimics production, but at a smaller scale. How will results from it be extrapolated to production?

13
Q

Operational profiles

A
  1. provide a repeatable step-by-step flow through the application for a particular usage of the system.
  2. Aggregating these operational profiles results in a load profile (commonly referred to as a scenario)
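A sketch of how operational profiles can be aggregated into a load profile, assuming Locust as the load generation tool (the endpoints and weights are illustrative assumptions):

```python
from locust import HttpUser, task, between

# Two operational profiles: a browsing visitor and a purchasing customer.
# Their relative weights aggregate them into a single load profile (scenario).

class BrowsingVisitor(HttpUser):
    weight = 3                    # 75% of simulated users follow this profile
    wait_time = between(1, 5)     # think time between steps

    @task
    def browse_catalog(self):
        self.client.get("/products")       # hypothetical endpoint
        self.client.get("/products/1")

class PurchasingCustomer(HttpUser):
    weight = 1                    # 25% of simulated users follow this profile
    wait_time = between(2, 8)

    @task
    def buy_item(self):
        self.client.get("/products/1")
        self.client.post("/cart", json={"product_id": 1, "qty": 1})   # hypothetical endpoint
        self.client.post("/checkout")
```

Running this file with Locust spawns simulated users in a 3:1 ratio between the two profiles, which is the aggregated load profile (scenario).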
14
Q

Risks (PTP)

A
  1. areas not measured as part of the performance testing
  2. limitations to the performance testing
  3. limitations of the test environment
15
Q

Examples of Recommended Actions

A
  1. change physical components (hardware, routers)
  2. change software (e.g., optimizing applications and database calls),
  3. altering network (e.g., load balancing, routing)
16
Q

Test Analysis Elements

A
  1. Status of simulated (e.g., virtual) users.
  2. Transaction response time.
  3. Transactions per second.
  4. Transaction failures.
  5. Hits (or requests) per second.
  6. Network throughput.
  7. HTTP responses.
17
Q

A typical performance testing report may include:

A
  1. Executive Summary
  2. Test Results
  3. Test Logs/Information Recorded
  4. Recommendations
18
Q

Executive Summary

A

Presents concise and understandable conclusions, findings, and recommendations for management, with the goal of an actionable outcome.

19
Q

Test Results may include

A
  1. A summary providing an explanation and elaboration of the results.
  2. Results of a baseline test that serves as a “snapshot” of system performance at a given time and forms the basis of comparison with subsequent tests.
  3. A high-level diagram showing any architectural components that could (or did) impact test objectives.
  4. A detailed analysis (tables and charts) of the test results showing response times, transaction rates, error rates and performance analysis.
20
Q

Recommendations may include:

A
  1. Technical changes recommended, such as reconfiguring hardware or software
    or network infrastructure
  2. Areas identified for further analysis (e.g., analysis of web server logs to help
    identify root causes of issues and/or errors)
  3. Additional monitoring required of gateways, servers, and networks so that
    more detailed data can be obtained for measuring performance characteristics and trends (e.g., degradation)
21
Q

Transient state monitoring

A
  1. some standard approaches—such as monitoring averages—may be very misleading.
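A small illustration of why averages mislead here, with illustrative values: a few slow responses during a transient phase barely move the mean but show clearly in a high percentile:

```python
import statistics

# 95 steady-state responses of 100 ms plus 5 responses of 5000 ms from a transient
# phase such as ramp-up (illustrative values).
samples = sorted([100] * 95 + [5000] * 5)

mean = statistics.mean(samples)
p99 = samples[int(0.99 * len(samples)) - 1]   # simple nearest-rank percentile
print(f"mean = {mean:.0f} ms, p99 = {p99} ms")
# mean is 345 ms, which looks acceptable; p99 is 5000 ms and exposes the transient.
```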
22
Q

Test Execution

A
  1. generation of a load against the SUT according to a load profile
  2. monitoring all parts of the environment
  3. collecting and keeping all results and information related to the test.
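As one sketch of “collecting and keeping all results”, assuming Locust as the tool, an event listener can record every sampled request during execution and write it out when the test stops (the output file name is an assumption):

```python
import json
from locust import events

collected = []

# Record every request sampled while the load profile is being generated.
@events.request.add_listener
def record_request(request_type, name, response_time, exception, **kwargs):
    collected.append({"name": name,
                      "response_time_ms": response_time,
                      "failed": exception is not None})

# Keep the raw results for later analysis once execution finishes.
@events.test_stop.add_listener
def write_results(environment, **kwargs):
    with open("raw_results.json", "w") as f:
        json.dump(collected, f, indent=2)
```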
23
Q

Execution: important points

A
  1. specify how failures should be handled to make sure that no system issues are introduced (e.g., logouts)
  2. One technique for verifying load tests that communicate directly at the protocol level is to run several GUI-level (functional) scripts, or even to execute similar operational profiles manually, in parallel with the running load test.
24
Q

Communicating to Stakeholders with a Business Focus

A
  1. are less interested in the distinctions between functional and non-functional quality characteristics.
  2. The connection between product risks and performance test objectives must be clearly stated.
  3. Stakeholders must be made aware of the balance between the cost of planned performance tests and how representative the performance testing results will be, compared to production conditions.
  4. Project risks must be communicated. These include constraints and dependencies concerning the setup of the tests, infrastructure requirements (e.g., hardware, tools, data, bandwidth, test environment, resources) and dependencies on key staff.
  5. The high-level activities must be communicated (see Sections 4.2 and 4.3) together with a broad plan containing costs, time schedule and milestones.
25
Q

Communicating to Stakeholders with a Technology Focus

A
  1. The planned approach to generating required load profiles must be explained and the expected involvement of technical stakeholders made clear.
  2. Detailed steps in the setup and execution of the performance tests must be explained to show the relation of the testing to the architectural risks.
  3. Steps required to make performance tests repeatable must be communicated. (organizational and technical)
  4. Where test environments are to be shared, the scheduling of performance tests must be communicated to ensure the test results will not be adversely impacted.
  5. Mitigations of the potential impact on actual users if performance testing needs to be executed in the production environment must be communicated and
    accepted.
  6. Technical stakeholders must be clear about their tasks and when they are
    scheduled.
26
Q

Transactions

A

Transactions describe the set of activities performed by a system from the point of initiation to when one or more processes (requests, operations, or operational processes) have been completed.

27
Q

Elapsed time for a transaction

A

transaction response time + the think time
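A tiny worked example with illustrative numbers:

```python
# Illustrative values: the transaction itself takes 1.2 s and the user then
# "thinks" for 3.0 s before the next step.
response_time_s = 1.2
think_time_s = 3.0
elapsed_time_s = response_time_s + think_time_s
print(f"Elapsed time for the transaction: {elapsed_time_s} s")   # 4.2 s
```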

28
Q

Ramp-up

A

By ramping up load and measuring the underlying transaction times, it is possible to correlate the cause of degradation with the response times of one or more transactions.
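One way to drive such a ramp-up, assuming Locust as the tool and illustrative step sizes, is a custom load shape that raises the user count in steps so that degradation can be tied to the load level at which it first appears:

```python
from locust import LoadTestShape

# Step-wise ramp-up: hold each load level long enough to measure transaction
# response times before adding more users (all values are illustrative).
class StepRampUp(LoadTestShape):
    step_users = 50        # users added per step
    step_duration = 120    # seconds spent at each level
    max_users = 500

    def tick(self):
        run_time = self.get_run_time()
        if run_time > (self.max_users / self.step_users) * self.step_duration:
            return None    # returning None ends the test
        users = min(self.max_users,
                    self.step_users * (int(run_time // self.step_duration) + 1))
        return (users, self.step_users)   # (target user count, spawn rate per second)
```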

29
Q

Operational profiles

A
  1. Specify distinct patterns of interaction with an application, such as those originating from users or from other system components.
  2. May be combined to create a desired load profile for achieving particular performance test objectives
30
Q

Principal steps for identifying operational profiles

A
  1. identify data to be gathered
  2. gather the data using one or more sources
  3. evaluate the data to construct the operational profiles
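A minimal sketch of steps 2 and 3, assuming the data source is a web server access log in common log format (the file name and parsing are illustrative):

```python
from collections import Counter

# Evaluate gathered data (here: an access log) to construct an operational profile
# as the relative frequency of each requested operation.
def operation_frequencies(log_path: str) -> dict:
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split('"')
            if len(parts) < 2:
                continue
            fields = parts[1].split()          # e.g. 'GET /products HTTP/1.1'
            if len(fields) >= 2:
                counts[fields[1]] += 1         # count requests per path
    total = sum(counts.values()) or 1
    return {path: n / total for path, n in counts.most_common()}

# Example usage with a hypothetical log file:
# print(operation_frequencies("access.log"))
```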