Performance Flashcards
Do you have any experience with back-end testing? Performance, load, stress?
Yes, I’ve done performance testing on individual services and databases, as well as integrated or end-to-end performance tests across a variety of services.
For example, when we investigated response times for particular user scenarios in our web app (clicking Account Information multiple times, or clicking through different folders), we discovered that we were making a lot of direct database calls through ActiveRecord to Mongo. The developers implemented a more lightweight caching mechanism using Postgres.
There are a lot of tasks involved in performance testing beyond simply writing and running tests, like setting parameters and benchmarks so that the tests are meaningful.
Using third-party tools like New Relic to identify common user scenarios, then investigating those scenarios to find where performance can be improved.
So sometimes my performance testing involved investigating and setting benchmarks on a particular release, and then measuring subsequent releases for deviations from that benchmark. And sometimes it involved more targeted performance testing where the developers had implemented a specific improvement, and my task was to test whether this was in fact an improvement.
What are some questions you might ask when load, stress, or performance testing? That is, what metrics would you gather?
Performance — how long does it take for X to execute?
Stress — how many users before we see a problem?
Load — how does the application respond under different amounts of concurrent or simultaneous transactions?
For each, look at averages rather than single runs, since large systems are non-deterministic (a quick sketch of this follows below).
Considerations: environment configurations and time constraints (e.g., if a problem is found, it's not just a matter of rerunning these tests, but of fixing the problem first, etc.).
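As a rough sketch of that averaging (the run count is arbitrary and auth headers are omitted; the endpoint is the QA one from later in these notes):

# hit the endpoint 20 times and average curl's total time, in seconds
for i in $(seq 20); do
  curl -s -o /dev/null -w '%{time_total}\n' https://api-qa.neat.com/api/v1/account
done | awk '{ sum += $1 } END { printf "avg over %d runs: %.3fs\n", NR, sum/NR }'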
TEST EARLY AND OFTEN — operational efficiency. Close the feedback loop.
There are also different layers to consider: the client running the browser, OS, etc.; the request being sent through the internet (connection types, speeds, congestion); the web server communicating with its own system-level APIs, database interactions, etc. There are lots of potential points of failure.
Diagram the network, identify possible points of failure, and design the performance tests with that in mind.
Determine environment variables for Load Generator (Client side) — CPU/RAM/OS, and System Under Test (Server side) — CPU/RAM/OS/Application server/Database server/Middleboxes, other services, etc.
Throughput (bytes/minute): graphically, it should plateau rather than decline. Deviations between the previous request and the current one should be minimal.
Latency (delay from input to outcome)
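curl can surface both metrics for a single request; a sketch that maps the timing phases onto the layers above (same QA endpoint, headers omitted):

# phases: DNS lookup, TCP connect, TLS handshake, first byte, total;
# speed_download approximates per-request throughput in bytes/sec
curl -s -o /dev/null -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s throughput=%{speed_download}B/s\n' https://api-qa.neat.com/api/v1/account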
What tools have you used for performance testing?
For higher-level, realistic user scenarios or end-to-end performance, I used JMeter.
Mock data generated through mockaroo or other public data generators, imported via CSV and looped through for account creation, item creation, parsing items, deleting items, etc.
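Outside JMeter, that kind of CSV loop can be sketched in shell; assuming a mockaroo export with name and email columns, and a hypothetical /api/v1/users endpoint:

# skip the header row, then POST one account per CSV row
tail -n +2 users.csv | while IFS=, read -r name email; do
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "{\"name\": \"$name\", \"email\": \"$email\"}" \
    https://api-qa.neat.com/api/v1/users
done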
To narrow down specific pain points I could use JMeter, or command-line tools like siege, which allows for more targeted API calls at various levels of concurrency over extended periods of time. This was useful for finding memory leaks, etc.
Narrower still, I could use the JS profiler in DevTools or Rails benchmarking tools to get specific CPU usage, memory leaks, overactive method calls, etc.
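For the Rails side, a minimal sketch using Ruby's stdlib Benchmark (the Item model and the hot path are hypothetical):

# time a suspect query + instantiation path from the command line
bin/rails runner 'require "benchmark"; puts Benchmark.measure { Item.limit(1000).to_a }'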
What are the main components you have used in JMeter?
Workbench for HTTP/HTTPS recording — helpful for designing real user scenarios.
Thread Groups and Stepping Thread Groups (more realistic ramp up of users, add 5, wait 30 seconds, add 10 more, etc.)
Listeners — Table and Graph Results
Assertion Controllers, Loop Controllers
Data generators
Controllers — groupings of requests (samplers) and the logic around them
Cookie Manager
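Once those components are assembled into a plan, JMeter can also run it headless, which is handy for CI; a sketch, assuming the plan was saved as plan.jmx:

# -n non-GUI, -t test plan, -l raw results, -e/-o HTML report output
jmeter -n -t plan.jmx -l results.jtl -e -o report/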
What did you use for more targeted API node performance testing?
If it's HTTP, there are a number of good command-line tools for beating up on API nodes:
brew install parallel
echo "curl -s -v -X GET -H 'User-Agent: Faraday v0.9.0' -H 'X-NEAT-DEVICE: x86_64-darwin13.4.0' -H 'Authorization: OAuth abc' 'https://api-qa.neat.com/api/v1/account'" > goget.sh
chmod +x goget.sh
seq 100 | parallel -j0 ./goget.sh 2>&1 | grep 'HTTP/1.1 '
Change seq 100 to any number of parallel processes that you think would be fun to run simultaneously.
Then go get a coffee, because your machine might lock up for a while while it runs the processes.
Siege (and Apache's ab) are other tools that mimic concurrent HTTP requests. For example (the concurrency, duration, and token below are placeholders):
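# siege: 25 concurrent users for 5 minutes against the account endpoint
siege -c 25 -t 5M -H 'Authorization: OAuth abc' https://api-qa.neat.com/api/v1/account

# ab: roughly the same shape as 1000 requests, 25 at a time
ab -n 1000 -c 25 -H 'Authorization: OAuth abc' https://api-qa.neat.com/api/v1/account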
However, JMeter is probably a better choice if you are trying to simulate devices requesting data from the cloud, with more realistic user scenarios.
How could you performance test a realistic user scenario?
I think JMeter is a great tool for that.
I ran a user test against two environments — Staging and QA — involving the following Controllers: Create User, Get OAuth Token, Account Details, List System Folders, Create Many Folders (in a loop), List All Folders, Get Revisions, Get Latest Revision, Get Shared Folders, Create Items, List All Folders, and Get Revisions (in a loop).
I used mockaroo.com to create a CSV of realistic user data, then used a CSV Data Set Config and User Parameters to feed that data into JMeter.
I generated a Response Time Graph. I also output the response times to a CSV and copied/pasted some R code to generate some nice-looking graphs.
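Before reaching for R, a quick sanity check can come straight from the shell; a sketch assuming the exported CSV has a header row and response times in milliseconds in the second column:

awk -F, 'NR > 1 { sum += $2; if ($2 > max) max = $2 } END { printf "n=%d avg=%.1fms max=%sms\n", NR-1, sum/(NR-1), max }' response_times.csv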
This was a benchmarking test in anticipation of implementing some Postgres (vs. Mongo) caching around revisions.
What is the difference between performance testing and profiling?
Performance testing happens at the system level, under varying types of load, and makes sure your system lives up to Service Level Agreements.
Profiling is one thing you do when your performance tests indicate a problem. It helps you identify those parts of your system that contribute most to the performance problem and shows you where to concentrate your investigative efforts.
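As a command-line cousin of the DevTools profiler, V8's built-in sampler can point at hot functions; a sketch, assuming a Node script named hot.js:

# sample the script, then turn the V8 log into a readable summary
node --prof hot.js
node --prof-process isolate-*.log > profile.txt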