
Running Load Tests

When you launch a test (from the scenario editor), Loadster displays a dashboard showing the real-time results.

The dashboard has a sidebar with high-level gauges and metrics, and a collection of charts and tables. You can use these to find out how your test is doing and watch for errors.

The test dashboard shows real-time stats while a test is running

If you notice problems, you can stop the test. Otherwise, the test will run for its planned duration and then exit on its own.

The Sidebar

The sidebar shows a live snapshot of how your test is progressing.

Response Times

The Response Times section is one of the key indicators of how well your application or site performs under load. These are a very close approximation of how long your real users would have to wait for a page or endpoint to respond, under load conditions similar to what your test is simulating.

The avg value here is a simple weighted average response time of all steps across all virtual user groups in your test.

The p80 and p90 values are an approximation of the 80th and 90th percentile response times. Loadster calculates them first at the group level (yielding a percentile for each group) and then averages those percentiles across the groups. While not quite a true percentile of all individual values, this approach meets a similar need while being much less computationally intensive.
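As a rough illustration of this approach (a sketch only, not Loadster's actual implementation; the function and variable names are invented for the example):

```python
# Sketch of the per-group percentile approximation described above.
# group_times and approx_percentile are illustrative names, not Loadster APIs.
import math

def percentile(values, p):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def approx_percentile(group_times, p):
    """Average the per-group percentiles instead of computing a true
    percentile over every individual response time."""
    per_group = [percentile(times, p) for times in group_times.values()]
    return sum(per_group) / len(per_group)

group_times = {
    "Browse Products": [120, 250, 310, 480, 900],  # response times in ms
    "API Clients":     [45, 60, 75, 80, 95],
}
print(approx_percentile(group_times, 90))  # averages the two per-group p90 values
```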

Network Throughput

The Network Throughput section shows how many bytes/bits are being sent and received across the network. The value is calculated from the total size of each request sent (upload) and each response received (download), over a rolling 6-second window.

Note: Earlier versions of Loadster, prior to 4.0, reported network throughput as the total uncompressed size of the data transferred. As of Loadster 4.0, the network throughput calculation takes compression into account. The new method correlates much more closely to the values reported by wire-level network monitoring tools.
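To illustrate the rolling-window calculation itself, here is a minimal sketch (assumed structure only; the class and method names are not Loadster APIs):

```python
# Illustrative sketch of a rolling-window throughput calculation.
# The 6-second window matches the description above; everything else
# (sample format, names) is assumed for the example.
import time
from collections import deque

WINDOW_SECONDS = 6

class ThroughputWindow:
    def __init__(self):
        self.samples = deque()  # (timestamp, bytes_sent, bytes_received)

    def record(self, bytes_sent, bytes_received):
        self.samples.append((time.time(), bytes_sent, bytes_received))

    def rates(self):
        """Return (upload, download) in bytes per second over the window."""
        cutoff = time.time() - WINDOW_SECONDS
        while self.samples and self.samples[0][0] < cutoff:
            self.samples.popleft()
        up = sum(s[1] for s in self.samples)
        down = sum(s[2] for s in self.samples)
        return up / WINDOW_SECONDS, down / WINDOW_SECONDS
```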

Transaction Throughput

The Transaction Throughput section shows the average number of pages and hits per second being generated, over a rolling time window.

Pages refers to any request made by an HTTP step in one of your scripts, whether or not the page had additional resources rolled up underneath it.

Hits refers to any request made by an HTTP step, plus any additional resources that may have been loaded along with that step.

If none of your steps have any additional resources, then the Pages and Hits values will be equal.
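To make the Pages vs. Hits distinction concrete, here is a small hypothetical example (the step URLs and resources are invented):

```python
# Hypothetical tally of pages vs. hits for three HTTP steps.
# Each step counts as one "page"; the step plus any additional resources
# it loads count toward "hits".
steps = [
    {"url": "/home",     "resources": ["/style.css", "/app.js", "/logo.png"]},
    {"url": "/login",    "resources": []},
    {"url": "/checkout", "resources": ["/cart.js"]},
]

pages = len(steps)
hits = sum(1 + len(step["resources"]) for step in steps)

print(pages)  # 3 pages
print(hits)   # (1+3) + (1+0) + (1+1) = 7 hits
```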

Transactions

The Transactions section shows cumulative totals of key transaction types.

Pages refers to any request made by an HTTP step in one of your scripts, whether or not the page had additional resources rolled up underneath it.

Hits refers to any request made by an HTTP step, plus any additional resources that may have been loaded along with that step.

Iterations refers to how many complete iterations of your scripts have been executed. Each time a virtual user finishes an iteration of its script, it starts again, as long as there is time remaining in the test phase. The iteration count is a good indication of how many user sessions or user journeys have been simulated.
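Conceptually, each virtual user behaves something like the following simplified loop (a sketch only; run_script and the phase timing are placeholders, not Loadster internals):

```python
# Simplified sketch of how a virtual user keeps iterating until the
# phase runs out of time. run_script stands in for one complete pass
# through the script.
import time

def run_virtual_user(run_script, phase_ends_at):
    iterations = 0
    while time.time() < phase_ends_at:
        run_script()     # one full pass through the script
        iterations += 1  # counted toward the Iterations total
    return iterations
```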

Errors refers to any HTTP error, validation error, or other unexpected error. If the scripts are properly designed and the website you’re testing is handling the load gracefully, you should not see errors. If you do see errors, they probably warrant further investigation.

Errors

The Errors section provides more details on the types of errors that are occurring (if any), and a count of errors by type.

Virtual Users

The Virtual Users section shows each of your virtual user groups. Next to each group name, it shows how many virtual users are currently running and the total number allocated. For example, if a total of 100 virtual users have been allocated but only 57 are currently running, it will show (57/100).

During the ramp-up phase and the ramp-down phase, it is normal to have only some of the virtual users running.

Load Engines

The Load Engines section shows each of the self-hosted load engines or cloud clusters active in your test. If you have multiple virtual user groups and each is using a different engine or region, multiple lines will show up here.

Next to the load engine or cloud cluster’s name is a tiny graph with a high-level load average. The load average here is the overall “busyness” of the engine or cluster itself. This is important because an overloaded engine can sometimes report less accurate response times. Generally, as long as the engine isn’t maxed out, there is no cause for concern.

The Charts

Clicking on any section in the sidebar brings a full-page chart related to that section into focus. These charts provide more detail, along with a historical view of how the metrics have changed throughout the duration of the load test.

Average Response Times by Page

Average Response Times by Page is useful for seeing which of your pages/URLs are slower. In all but the simplest sites, certain pages tend to account for the bulk of the slowness. Slow pages are often your best optimization candidates. The term “page” is used loosely here and can also refer to an endpoint or anything else represented by a URL.

Network Throughput

Network Throughput shows the rate of bytes and bits transferred per second throughout your test. Although Loadster deals with HTTP (an application layer protocol), this should be a close approximation of actual throughput at the transport layer as well.

Cumulative Network Throughput

Cumulative Network Throughput is the total number of bytes uploaded (requests) and downloaded (responses) in your test. Since the number reported is cumulative, it will climb throughout the test, especially during the peak load phase.

Transaction Throughput

Transaction Throughput is the rate of pages and hits per second. For the purposes of this chart, a “page” is any top-level HTTP step in any of your scripts, while a “hit” is any request against a top-level HTTP step or one of its included resources.

Transactions

The Transactions chart shows a cumulative count of the pages, hits, iterations, and errors in the test.

Errors by Type

The Errors by Type chart shows how many errors of each type have occurred. This may include HTTP errors (any response with an HTTP 4xx or 5xx status) or validation errors (which are thrown when a step fails one of your validation rules).
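As a rough illustration of how errors might be bucketed by type (the categories follow the description above; the function itself is hypothetical):

```python
# Illustrative bucketing of errors into the types described above.
def classify_error(status_code=None, validation_failed=False):
    if validation_failed:
        return "Validation Error"
    if status_code is not None and 400 <= status_code <= 599:
        return f"HTTP {status_code}"
    return "Other Error"

print(classify_error(status_code=503))         # HTTP 503
print(classify_error(validation_failed=True))  # Validation Error
```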

Errors by Page

The Errors by Page chart shows the URLs on which errors occurred. It is useful for pinpointing which of your pages or endpoints are having trouble, and by inference, which of the steps in your script you may need to revisit. The term “page” is used loosely here and can also refer to an endpoint or anything else represented by a URL.

Error Breakdown

The Error Breakdown table provides more detail on errors. It includes a longer error message as well as the exact script and virtual user that experienced the error.

Virtual Users

The Virtual Users chart shows, for each virtual user group, how many virtual users have been running at any point during the test. The ramp-up and ramp-down phase should resemble what you configured in your scenario. Virtual users may take a bit longer than planned to exit during the ramp-down phase, because they must complete the current iteration of their script before exiting.

Load Engine CPU Utilization

Load Engine CPU Utilization shows how busy the CPU(s) are on each load engine or cluster. If the CPU remains 100% utilized for a significant amount of time, it can result in inaccurate response time measurements! If this happens, it may be a good idea to split the virtual user group into multiple smaller groups on different engines or clusters.

Load Engine Thread Count

Load Engine Thread Count is another measurement of how busy the load engine or cluster is. The thread count is directly correlated to how many virtual users the engine is running. Engines will always have at least one thread per virtual user, and more if the script calls for additional page resources to be downloaded in parallel with the primary request.
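As a rough, assumed rule of thumb based on the description above (the function and numbers are hypothetical, not a Loadster formula):

```python
# Hypothetical lower-bound estimate of engine threads, per the description
# above: at least one thread per virtual user, plus extra threads when page
# resources are downloaded in parallel with the primary request.
def min_thread_estimate(virtual_users, parallel_resource_connections=0):
    return virtual_users * (1 + parallel_resource_connections)

print(min_thread_estimate(100))     # 100 threads at minimum
print(min_thread_estimate(100, 4))  # 500 threads with 4 parallel resource connections per user
```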

Load Engine Memory Utilization

Load Engine Memory Utilization shows how well the engine is managing its memory. This is rarely a problem, but things to look out for include very high memory usage (close to 100%) and extremely frequent garbage collection (lots of big spikes and drop-offs in the chart).

Load Engine Latency

Load Engine Latency is essentially the ping time between your local Loadster Workbench and the load engine or cloud cluster. Latency higher than 300ms can cause problems with data collection in some cases. Make sure you have a fast network connection between your Workbench and your engines or the cloud.

When the Test Finishes

After a test finishes (or you stop it prematurely), Loadster compiles the data it collected from the engines and stores it in its own repository.

You can then open the test result in the test report editor for further analysis, or to share with your team members. Learn more about this in Analyzing Test Results.