An Introduction to Load Testing

Huge traffic spikes can occur when you launch a new product or feature, run a successful marketing campaign, or get featured on a popular blog or news site. Sometimes, traffic spikes happen unexpectedly.

If your site handles the traffic and delivers a good experience to all those people, the flood of traffic can be a big win for your business. On the other hand, if the site fails to perform well, it can be an embarrassing and costly failure.

High traffic events raise the stakes for your business. They present a valuable money-making opportunity, but they can also be dangerous.

Reducing Risk from High Traffic Events

Can we reduce the downside risk of a high traffic event, while maintaining or increasing the upside?

Yes. Load testing is a way to do exactly that.

By simulating heavy traffic ahead of time, you can find out how your site will respond when the flood of real users hits. Better yet, you can use the realistic simulation to optimize and improve your site, so it handles real traffic flawlessly when the time comes.

Load and stress testing can help you answer questions like these…

  • Will my site perform well on the busiest day of the year?
  • Do I have enough cloud infrastructure or hardware to run my application at scale?
  • Am I making good use of autoscaling, if hosted on the cloud?
  • Does my application deliver quick response times and a good user experience even under peak load?
  • When pushed to the breaking point, does my application crash hard and lose data?
  • Are there concurrency issues or “heisenbugs” in my application that only appear under heavy load?
  • Do I have memory leaks and other issues that appear over an extended period of usage?
  • Are my site’s redundancy and failover systems in place and working properly?

Answering these questions with concrete data is critical if you want to deliver a fast and bulletproof experience to your customers during high traffic events.

Unlike functional testing, which a single person can often do by hand, load testing requires special tools to simulate hundreds or thousands of concurrent users interacting with your site in a realistic way. Loadster is one of those tools.

Loadster is designed to load test websites, web applications, and APIs. For the sake of brevity, in this manual we will often refer to them generally as “sites”.

Answering the Important Questions

If you’re responsible for an important site or web application, you probably already know how much a fast and stable experience matters to your users.

For a successful load test, you’ll need to compile specific requirements stating how many concurrent users the site needs to serve, what range of end user response times or page load times is acceptable, and what kind of user behavior is to be simulated. If you’re working with stakeholders, gathering these requirements might not sound fun, but it’s actually a great opportunity to synchronize everyone’s expectations about the testing effort and about how your site is expected to perform in real-life high traffic events.

That said, load testing techniques can be useful even if you don’t yet have specific performance or scalability requirements. In the early stages of a project, it can be incredibly valuable to run some exploratory load tests to see what bottlenecks you encounter. Running load tests in rapid succession, while making changes to the site and environment, is a great way to find obvious and not-so-obvious performance problems.

Performance Tuning

Tuning an existing website, web application, or API is often one of the best and quickest ways to deliver value. But don’t operate in the dark! When you’re tuning for performance and scalability, you’ll first want to create a repeatable load test to simulate the same user behavior with the same amount of load each time, so you can accurately compare performance before and after each tuning change. Avoid the pitfall of tuning blindly, without a steady repeatable load on the system.
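
To make “repeatable” concrete, here’s a minimal sketch in Python using only the standard library. The URL, user count, and request count are placeholders; the point is that the exact same load shape runs every time, so the before-and-after numbers are directly comparable. (A load testing tool like Loadster gives you this repeatability without writing a script yourself.)

```python
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical target and load shape -- adjust for your own site.
BASE_URL = "https://staging.example.com"
CONCURRENT_USERS = 25
REQUESTS_PER_USER = 40

def simulated_user(_):
    """One simulated user making the same requests, in the same order, every run."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.monotonic()
        with urllib.request.urlopen(BASE_URL + "/", timeout=30) as response:
            response.read()
        timings.append(time.monotonic() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_timings = [t for user in pool.map(simulated_user, range(CONCURRENT_USERS)) for t in user]

# Identical load every run, so these numbers are comparable before and after each tuning change.
print(f"median: {statistics.median(all_timings):.3f}s")
print(f"p95:    {statistics.quantiles(all_timings, n=20)[18]:.3f}s")
```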

Performance Regression Testing

If you’re constantly releasing changes to your web application (like most software teams), you’ll want to know if each successive change makes performance better or worse. Running the same load test against each build lets you compare high-level performance metrics to see if performance has improved or degraded. It doesn’t need to be a manual process, either: you can add Loadster to your continuous integration pipeline to get automatic performance test results from each build.
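
The pass/fail gate in a pipeline can be as simple as comparing the current build’s numbers against a stored baseline. Here’s a hypothetical sketch of that idea in Python, assuming each test run exports its metrics to a JSON file; the file names, field names, and 10% threshold are all placeholders you’d adapt to your own setup.

```python
import json
import sys

# Hypothetical files: the previous build's metrics and the current build's metrics,
# exported by whatever tool ran the load test.
ALLOWED_REGRESSION = 0.10  # fail the build if p95 gets more than 10% worse

with open("baseline_metrics.json") as f:
    baseline = json.load(f)
with open("current_metrics.json") as f:
    current = json.load(f)

p95_before = baseline["p95_response_time_ms"]
p95_after = current["p95_response_time_ms"]
change = (p95_after - p95_before) / p95_before

print(f"p95 response time: {p95_before} ms -> {p95_after} ms ({change:+.1%})")

if change > ALLOWED_REGRESSION:
    print("Performance regression exceeds threshold; failing the build.")
    sys.exit(1)
```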

Spike Testing

If your site receives a temporary spike in volume, does it handle the spike gracefully and recover properly afterwards? Or does it break and then remain slow afterwards? It’s fairly common for web applications to suffer bad performance for a while or even indefinitely after a large spike in traffic. This can happen when an internal resource is exhausted, when old requests are queued up causing backpressure, or when your autoscaling configuration starts to snowball. Since the causes of such misbehavior are wide and varied, a proper load test is the best way to see if there’s a problem.
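
As a rough illustration of the shape of a spike test, here’s a hypothetical Python sketch: a steady baseline, a sudden burst of simulated users, then the same baseline again, so you can compare response times before and after the spike. The URL, user counts, and durations are placeholders.

```python
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://staging.example.com"  # placeholder target

def run_phase(workers, seconds):
    """Hold a fixed number of simulated users for a period; return observed response times."""
    deadline = time.monotonic() + seconds

    def user(_):
        timings = []
        while time.monotonic() < deadline:
            start = time.monotonic()
            try:
                with urllib.request.urlopen(BASE_URL + "/", timeout=30) as r:
                    r.read()
                timings.append(time.monotonic() - start)
            except Exception:
                pass  # a real test would also track error counts
        return timings

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [t for per_user in pool.map(user, range(workers)) for t in per_user]

before = run_phase(workers=10, seconds=60)    # steady baseline
spike  = run_phase(workers=200, seconds=120)  # sudden spike
after  = run_phase(workers=10, seconds=60)    # back to baseline

# If the site recovered gracefully, the "after" numbers should resemble the "before" numbers.
for label, timings in (("before spike", before), ("during spike", spike), ("after spike", after)):
    median = statistics.median(timings) if timings else float("nan")
    print(f"{label}: median {median:.3f}s over {len(timings)} requests")
```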

Stress Testing

All systems have a breaking point. When you’re building and maintaining a complex system such as a web application or API, it’s important to know exactly what happens when it’s put under stress by heavy traffic. Do database operations intermittently fail, leaving the system or its data in a half-baked or unclean state? And once the flood of traffic subsides, does the system recover gracefully on its own, or does it remain broken and require manual intervention? Pushing your site beyond the limit with a stress test will give you the answer to these questions.
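
One common way to locate the breaking point is to step the load upward until the error rate crosses a threshold. Here’s a hypothetical sketch of that idea in Python; the URL, step sizes, and 5% threshold are placeholders you’d choose for your own system.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://staging.example.com"  # placeholder target

def error_rate(workers, seconds=60):
    """Hold a fixed level of load and return the fraction of requests that failed."""
    deadline = time.monotonic() + seconds

    def user(_):
        ok = failed = 0
        while time.monotonic() < deadline:
            try:
                with urllib.request.urlopen(BASE_URL + "/", timeout=10) as r:
                    r.read()
                ok += 1
            except Exception:
                failed += 1
        return ok, failed

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(user, range(workers)))
    ok = sum(r[0] for r in results)
    failed = sum(r[1] for r in results)
    return failed / (ok + failed) if (ok + failed) else 1.0

# Step the load upward until more than 5% of requests fail.
for workers in (50, 100, 200, 400, 800):
    rate = error_rate(workers)
    print(f"{workers} concurrent users: {rate:.1%} errors")
    if rate > 0.05:
        print(f"Breaking point reached somewhere around {workers} concurrent users.")
        break
```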

Stability Testing

Web applications sometimes run smoothly for a while and then bog down as some resource is gradually exhausted. Don’t wait for it to happen in production! Before a major release, it’s a great idea to run several extended stability load tests and closely monitor the application’s performance and stability over a 24-72 hour period. If you see gradual degradation under sustained load, there’s a good chance some resource is being exhausted. It might be a hard limit (like physical memory) or a soft limit (like an internal cache or buffer). Either way, you’ll be glad you found it ahead of time.
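
A stability (or soak) test is mostly about watching trends over time. Here’s a hypothetical Python sketch that holds a modest, steady load and prints a timestamped median every few minutes; a slow upward drift in those medians under constant load is the telltale sign of a leak. The URL and numbers are placeholders, and a real 24-72 hour run is better left to a proper load testing tool than a script like this.

```python
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime

BASE_URL = "https://staging.example.com"  # placeholder target
STEADY_USERS = 20
TEST_HOURS = 24          # run for 24-72 hours in a real stability test
SAMPLE_MINUTES = 10      # report a timestamped sample every 10 minutes

def sample(seconds):
    """Hold a steady load for one sample window and return the response times seen."""
    deadline = time.monotonic() + seconds

    def user(_):
        timings = []
        while time.monotonic() < deadline:
            start = time.monotonic()
            try:
                with urllib.request.urlopen(BASE_URL + "/", timeout=30) as r:
                    r.read()
                timings.append(time.monotonic() - start)
            except Exception:
                pass
        return timings

    with ThreadPoolExecutor(max_workers=STEADY_USERS) as pool:
        return [t for per_user in pool.map(user, range(STEADY_USERS)) for t in per_user]

# Under constant load, these medians should stay flat; a gradual rise suggests a leak.
for _ in range(TEST_HOURS * 60 // SAMPLE_MINUTES):
    timings = sample(SAMPLE_MINUTES * 60)
    median = statistics.median(timings) if timings else float("nan")
    print(f"{datetime.now().isoformat(timespec='seconds')}  median {median:.3f}s  ({len(timings)} requests)")
```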

Progressively De-Risking Your Site

You might be thinking, “Wow, that’s an overwhelming list of things to test.” And it’s true: load testing all of these things, all the time, would be a huge effort.

But there’s good news: reducing the risk of crashing from high traffic events isn’t an all-or-nothing endeavor. You can de-risk your site substantially, even with a little bit of testing.

Load testing (like many things) follows the Pareto Principle: roughly 80% of the results come from 20% of the effort. So with a little effort in the right places, you can greatly reduce the risk of a site failure.

To get started, read about running successful load tests.