Latency is defined as the time interval between a stimulus and the response to it, and it is a value of importance in many computer systems (financial systems, games, websites, etc.). Hence we - as computer engineers - want to specify upper bounds / worst-case scenarios for the systems we build. How can we do this?
The days of counting cycles for assembly instructions are long gone (unless you work on embedded systems) - there are just too many additional factors to consider (the operating system - mainly the task scheduler - other running processes, the JIT, the GC, etc.). The remaining alternative is doing empirical (hands-on) testing.
So we whip out JMeter, configure a load test, take the mean (average) value +- 3 x standard deviation and proudly declare that 99.73% of the users will experience latency within this interval. We are especially proud because (a) we considered a realistic set of calls (URLs, if we are testing a website) and (b) we allowed for JIT warm-up.
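For concreteness, this is the kind of arithmetic hiding behind that declaration - a minimal sketch in plain Java, assuming the samples have been exported into an array (the latencies parameter is a placeholder of mine, not part of JMeter's API):

import java.util.Arrays;

public class NaiveBound {
    // The "mean plus three standard deviations" bound we are about to (wrongly) rely on.
    static double naiveUpperBound(long[] latencies) {
        double mean = Arrays.stream(latencies).average().orElse(0);
        double variance = Arrays.stream(latencies)
                .mapToDouble(v -> (v - mean) * (v - mean))
                .average().orElse(0);
        return mean + 3 * Math.sqrt(variance);   // "99.73% of users should be below this"
    }
}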
But we are still very wrong! (which can be sad if our company writes SLAs based on our numbers - we can bankrupt the company single-handedly!)
Let"s see where the problem is and how we can fix it before we cause damage. Consider the dataset depicted below (you can get the actual values here to do your own calculations).
For simplicity there are exactly 100 values used in this example. Let's say that they represent the latency of fetching a particular URL. You can immediately tell that the values fall into three distinct categories: very small (perhaps the data was already in the cache?), medium (this is what most users will see) and poor (probably some corner cases). This is typical for medium-to-large complexity (i.e. "real life") systems composed of many moving parts, and it is called a multimodal distribution. More on this shortly.
If we quickly drop these values into LibreOffice Calc and do the number crunching, we'll come to the conclusion that the average (mean) of the values is 40 and, according to the three sigma rule, 99.73% of the users should experience latencies of less than 137. If you look at the chart carefully you'll see that the average (marked with red) is slightly left of the middle. You can also do a simple calculation (because there are exactly 100 values represented) and see that the 99th percentile is actually 148, not 137. Now this might not seem like a big difference, but it can be the difference between profit and bankruptcy (if you've written an SLA based on this value, for example).
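To check the claim without assuming anything about the shape of the distribution, you can compute the percentile directly from the sorted samples - this is how the 148 figure above is obtained. A minimal sketch, again with a placeholder latencies array standing in for the 100 values from the dataset:

import java.util.Arrays;

public class EmpiricalPercentile {
    // Empirical percentile: sort the samples and take the value at the requested rank.
    // No assumption is made about the underlying distribution.
    static long percentile(long[] latencies, double pct) {
        long[] sorted = latencies.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
        return sorted[Math.max(rank, 0)];
    }
}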
Where did we go wrong? Let's look again carefully at the three sigma rule (emphasis added): nearly all values lie within three standard deviations of the mean in a normal distribution.
Our problem is that we don't have a normal distribution. We probably have a multimodal distribution (as mentioned earlier), but to be safe we should use ways of interpreting the results which are independent of the nature of the distribution.
From this example we can derive a couple of recommendations:
- Don't assume that your latencies follow a normal distribution - as we saw, real-life latency data is frequently multimodal.
- Report percentiles (and the maximum) computed directly from the measured values rather than the mean plus some number of standard deviations, since percentiles make no assumption about the shape of the distribution.
Coordinated omission (a phrase coined by Gil Tene of Azul fame) is another problem, one which can occur if the test loop looks something like this:
start:
    t = time()                   # timestamp before issuing the request
    do_request()                 # issue the request and wait for the response
    record_time(time() - t)      # record only this request's latency
    wait_until_next_second()     # sleep until the next scheduled request
    jump start
That is, we"re trying to do one request every second (perhaps every 100ms would be more realistic, but the point stands). Many test systems (including JMeter and YCSB) have inner loops like this.
We run the test and (learning from the previous discussion) report: 85% of the requests will be served in under 0.5 seconds at a rate of one request per second. And we can still be wrong! Let us look at the diagram below to see why:
On the first line we have our test run (the horizontal axis being time). Let's say that between second 3 and second 6 the system (and hence all requests to it) is blocked (maybe we have a long GC pause). If you calculate the 85th percentile, you'll get 0.5 (hence the claim in the previous paragraph). However, you can see 10 independent clients below, each doing a request in a different second (so our criterion of one request per second is still fulfilled). But if we crunch the numbers, we'll see that the actual 85th percentile in this case is 1.5 (three times worse than the original calculation).
Where did we go wrong? The problem is that the test loop and the system under test worked together ("coordinated" - hence the name) to hide (omit) the additional requests which should have happened while the server was blocked. This leads to underestimating the delays (as shown in the example).
Make sure every request takes less than the sampling interval, use a better benchmarking tool (I don't know of any which corrects for this), or post-process the data with Gil's HdrHistogram library, which contains built-in facilities to account for coordinated omission.
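As an example of the post-processing route, here is a minimal sketch using HdrHistogram's Java API (the interval and latency values below are made up purely for illustration):

import org.HdrHistogram.Histogram;

public class CorrectedRecording {
    public static void main(String[] args) {
        // Track values up to one hour, expressed in microseconds, with 3 significant digits.
        Histogram histogram = new Histogram(3_600_000_000L, 3);

        long expectedIntervalMicros = 1_000_000L;   // we intended to issue one request per second
        long measuredLatencyMicros  = 3_000_000L;   // an illustrative 3-second stall

        // recordValueWithExpectedInterval back-fills the samples a non-coordinating
        // client would have observed while the system was stalled.
        histogram.recordValueWithExpectedInterval(measuredLatencyMicros, expectedIntervalMicros);

        System.out.println("85th percentile (us): " + histogram.getValueAtPercentile(85.0));
    }
}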
This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!