testing. The next part will showcase a practical example using Gatling, an “open-source load
testing framework based on Scala, Akka and Netty”.
Introduction
Testing system performance can be done on multiple levels. Last time, we covered micro
benchmarking with JMH. Whereas micro benchmarking deals with the optimization of code and algorithms, testing system performance ensures that an entire system meets the required metrics such as throughput and response times.
Before taking a closer look at Gatling, let’s cover some theory about ensuring system
performance goals, i.e. performance testing types and the steps involved in performance testing.
Performance testing goals & types
Performance tests usually aim to achieve different goals, and depending on the goal, the test setup can
differ significantly. These are the three most common types of testing.
Stress testing
This testing type is what most people think of when they hear the term “performance testing”. It is
used to identify the limits of an application. An example would be sending more and more
concurrent user requests to the tested system until it breaks down or the required
response times are not acceptable anymore. The load level is normally far
beyond what is expected in production. Nevertheless, it is important to know whether a system fails
gracefully or, for example, starts to corrupt data.
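As a preview of part two, such a stress test can be sketched with Gatling's Scala DSL. The target URL, injection numbers and assertion threshold below are purely illustrative assumptions, not values from a real setup:

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Hypothetical target; a real stress test keeps ramping the load until the
// system breaks down or response times become unacceptable.
class StressTestSimulation extends Simulation {

  val httpProtocol = http.baseUrl("http://system-under-test.example.com")

  val scn = scenario("Increasing load")
    .exec(http("front page").get("/"))

  setUp(
    // Ramp the arrival rate far beyond the expected production load.
    scn.inject(rampUsersPerSec(10).to(1000).during(30.minutes))
  ).protocols(httpProtocol)
    // Fail the run if too many requests start erroring out.
    .assertions(global.successfulRequests.percent.gt(95))
}
```

The report then shows at which load level errors and response times started to degrade.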
Load testing
For load testing, the expected production load must be known or estimated. This test covers a
normal user load. The aim is not to bring down the system or to figure out the maximum
number of possible requests. Instead, this test should be used to obtain information about
how the system is going to behave and possibly identify bottlenecks such as the database server, connection pools, etc. Response times under normal load are another important metric; this data can be used, for example, to define SLAs.
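Response-time metrics for SLAs are usually stated as percentiles rather than averages, since a few slow outliers can hide behind a good mean. A minimal sketch of how a p95 value is derived from measured samples (the latency values here are invented for illustration):

```scala
// Nearest-rank percentile over a set of measured response times.
object ResponseTimeStats {
  /** Smallest sample value such that at least p percent of samples are <= it. */
  def percentile(samples: Seq[Double], p: Double): Double = {
    require(samples.nonEmpty && p >= 0 && p <= 100)
    val sorted = samples.sorted
    val rank = math.ceil(p / 100.0 * sorted.size).toInt.max(1)
    sorted(rank - 1)
  }
}

// Invented latency samples in milliseconds; note the single slow outlier.
val latenciesMs = Seq(120.0, 95.0, 180.0, 110.0, 2500.0, 130.0, 105.0, 98.0, 140.0, 125.0)

val p50 = ResponseTimeStats.percentile(latenciesMs, 50) // median: 120 ms
val p95 = ResponseTimeStats.percentile(latenciesMs, 95) // dominated by the outlier
```

The mean of these samples looks moderate, but the p95 exposes the outlier — which is exactly what an SLA commitment needs to capture.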
Endurance testing
It is often forgotten to test the long-term performance of a system. A service can perform well for a
couple of minutes or even hours of a stress or load test, but can become less and less responsive
after some time (in a matter of hours or days). This can be, for example, due to caching
issues or memory leaks. Such issues can be diagnosed easily with a long-running endurance
test (lasting, for example, 2-3 days) under normal system load.
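The classic culprit is an unbounded cache. A minimal sketch (class and sizes invented for illustration) of the kind of leak that a short test never sees, because short tests reuse only a few distinct keys while a days-long run keeps encountering new ones:

```scala
import scala.collection.mutable

// A cache with no eviction policy: every distinct key adds an entry that
// is never removed, so memory grows for as long as the service runs.
class NaiveResponseCache {
  private val entries = mutable.Map.empty[String, Array[Byte]]

  def get(key: String): Array[Byte] =
    entries.getOrElseUpdate(key, Array.fill(1024)(0: Byte)) // stand-in for a computed response

  def size: Int = entries.size
}

val cache = new NaiveResponseCache
// Simulate many distinct requests, as an endurance test would produce.
(1 to 10000).foreach(i => cache.get(s"request-$i"))
```

After ten thousand distinct keys the cache holds ten thousand entries — harmless in a ten-minute test, fatal after a few days in production.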
The performance testing process involves several steps, not all of which are technical; all of them are comprehensively documented here:
Identify Test Environment
The physical hardware needs to be identified, including every piece that should be tested, such as
servers, network, load balancers, etc.
Specify Acceptance Criteria
A performance test by itself does not have any meaning if the results are not compared to
predefined metrics. These metrics, for example response times or throughput per minute, will most
likely be defined by SLAs, or simply by the traffic you can estimate based on other components
that are already in use.
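Once defined, the criteria can be encoded so that a test run passes or fails automatically. A small sketch — the case classes and threshold values are invented for illustration:

```scala
// Predefined acceptance criteria, e.g. taken from an SLA.
case class AcceptanceCriteria(maxP95ResponseMs: Double, minThroughputPerMin: Double)

// Aggregated outcome of one performance test run.
case class TestRunResult(p95ResponseMs: Double, throughputPerMin: Double)

def meetsCriteria(result: TestRunResult, criteria: AcceptanceCriteria): Boolean =
  result.p95ResponseMs <= criteria.maxP95ResponseMs &&
    result.throughputPerMin >= criteria.minThroughputPerMin

val criteria = AcceptanceCriteria(maxP95ResponseMs = 800, minThroughputPerMin = 6000)

val goodRun = TestRunResult(p95ResponseMs = 640, throughputPerMin = 7200)
val slowRun = TestRunResult(p95ResponseMs = 950, throughputPerMin = 7200)
```

Gatling offers the same idea built in via its assertions API, which part two will show.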
Plan & Design Tests
Performance testing for real-world systems does not mean firing out one specific request a million
times. It means specifying scenarios that simulate real-life user behavior. For example, a user will
log in, then search for something, browse to page two and then look at a detail page. Surely, the user
is not going to request the detail page directly a couple of million times. Therefore, it is very
important to match the user behavior as closely as possible.
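A user journey like the one described maps naturally onto a chained Gatling scenario. The endpoints, form parameters and pause times below are assumptions made up for illustration:

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// One scenario models a whole user journey, not a single hammered endpoint.
class UserJourneySimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://shop.example.com")

  val scn = scenario("Login, search, browse, detail")
    .exec(http("login").post("/login")
      .formParam("username", "demo")
      .formParam("password", "demo"))
    .pause(2.seconds) // think time between steps, like a real user
    .exec(http("search").get("/search?q=shoes"))
    .pause(3.seconds)
    .exec(http("page two").get("/search?q=shoes&page=2"))
    .pause(2.seconds)
    .exec(http("detail page").get("/products/42"))

  setUp(scn.inject(constantUsersPerSec(20).during(10.minutes)))
    .protocols(httpProtocol)
}
```

The pauses matter: without think time, 20 users per second generate a far more aggressive request pattern than 20 real users ever would.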
Configure Test Environment
The test environment needs a dedicated setup, and every system must be monitored. Furthermore, a
dedicated server is needed to execute the load tests. This server must be in the same data center
as the tested systems to avoid errors due to network bottlenecks between the tester and the tested
servers. The tests cannot be executed on a server under test; that would compromise the results,
as the tests themselves need resources to run.
Implement Tests
Based on all the knowledge gained from the previous steps, the actual tests that are going to be
executed now have to be implemented. For this step, various tools such as Gatling or JMeter can be
used. This will be covered in detail in the second part of this article.
Execute Tests
The written tests now need to be executed. Depending on the system under test, a single server
that executes the test might be sufficient, or a distributed test setup may be needed. During the
execution, various metrics like CPU usage, IO and memory usage need to be monitored. This data
shows where the bottleneck is going to be. Merely knowing that a system can handle 200 requests per
minute is not a useful result on its own. The more important outputs of such a test are the system utilization
metrics and how many spare resources the system has left to deal with future requirements.
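A rough way to reason about spare resources is to extrapolate from utilization, under the (simplistic, often optimistic) assumption that resource usage scales linearly with load. The numbers below are invented for illustration:

```scala
// Estimate the throughput at which the bottleneck resource saturates,
// assuming linear scaling — a back-of-the-envelope figure, not a guarantee.
def estimatedCapacity(observedThroughput: Double, bottleneckUtilization: Double): Double = {
  require(bottleneckUtilization > 0 && bottleneckUtilization <= 1.0)
  observedThroughput / bottleneckUtilization
}

// 200 requests/min measured while CPU (the bottleneck) sat at 40% suggests
// roughly 500 requests/min at saturation, i.e. about 300 requests/min headroom.
val capacity = estimatedCapacity(observedThroughput = 200, bottleneckUtilization = 0.40)
```

Real systems rarely scale linearly all the way to saturation, so such an estimate should only guide where to look next, and be verified by an actual higher-load run.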
Analyze, Optimize and Retest
The data we've gained from the last step now needs to be analyzed. This is where expectations and
reality are going to meet. Depending on the result, optimizations need to be put in place and the
system needs to be retested.
Gatling is going to support you in the last three steps, i.e. implementation, execution and analysis.
Executing performance tests is a process that involves many steps. To achieve satisfactory results,
all of these steps need to be prepared very carefully. Before even implementing a single line of test
code, a lot of other things need to be considered. One of the most important is to define the metrics,
such as response times and requests per second, that need to be fulfilled. Without a baseline and
without knowing what the results of the test mean, they have no value at all. Knowing that a
component can handle 200 requests per second is worth little if production actually
requires 10,000 requests per second. Performance figures are only valuable when compared to
real-world requirements.
The next part of this article will cover how to set up performance tests and get a graphical report