In our first example we did several things that I perhaps took for granted. A reader asked why I chose to use CLI load generation tools instead of surfing with a browser, and whether the initial numbers generated are even reasonable. Allow me to address both points.
There is nothing stopping a tester from performing their load test using a browser or other GUI-based load generation tool. Provided the tool offers some minimal degree of controllability and repeatability, feel free to use any tool you desire. Should you choose a browser such as Firefox, be aware that plugins can alter the behavior and characteristics of the browser considerably, and all future testing would need to use the exact same browser, machine, and plugin combination for the numbers to be comparable.
Essentially, I chose a CLI tool such as wget because it behaves the same way every time, and I can wrap it in a shell script to guarantee identical behavior and to instrument things further.
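To make the shell-wrapper idea concrete, here is a minimal sketch of such a wrapper. The URL, request count, and wget flags are illustrative assumptions, not the exact script used in the original test:

```shell
#!/bin/sh
# run_load URL COUNT -- fire COUNT sequential wget requests at URL and report
# timing. A minimal sketch; the defaults and flags are illustrative only.
run_load() {
    url="$1"
    count="$2"
    start=$(date +%s)
    i=0
    fail=0
    while [ "$i" -lt "$count" ]; do
        # -q: quiet; -O /dev/null: discard the body so local disk writes
        # do not skew timing; -t 1 -T 5: one try, five-second timeout
        wget -q -O /dev/null -t 1 -T 5 "$url" || fail=$((fail + 1))
        i=$((i + 1))
    done
    end=$(date +%s)
    echo "$count requests, $fail failed, $((end - start))s elapsed"
}
```

Because the wrapper is just a function, it is trivial to re-run with identical parameters, log each pass, or add further instrumentation (timestamps per request, response sizes, and so on) without changing the load generator itself.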
Second, the first-cut performance numbers. When trying to rapidly determine a speed of light, it's more important to get to the right order of magnitude first and then refine than it is to chase the exact answer on the first pass.
For example, we know that we cannot move more packets across the network than we can fit through a single network interface. Therefore, if the capacity of the interface is N bytes per second, then regardless of what we do, we will never push more than N total bytes per second through it. Similarly, if pages are M bytes in size, we will never push more than N/M pages per second through the interface.
What this does for us is provide an upper bound for our testing. If our results indicated that we were able to do N/M + 5302 transactions per second, we know that something was wrong with our measurement, as those last 5302 operations per second simply would not fit through the pipe. However, if our results indicated 5302, and N/M is 8192, then we know that our result is a reasonable number.
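As a sketch of this sanity check, assuming a hypothetical gigabit interface (roughly 125,000,000 bytes per second) and 16 KB pages, the bound can be computed and compared against a measured rate in a few lines of shell:

```shell
#!/bin/sh
# Sanity-check a measured transaction rate against the speed of light.
# N and M are illustrative assumptions: a gigabit interface and 16 KB pages.
N=125000000        # interface capacity, bytes per second
M=16384            # page size, bytes
measured=5302      # transactions per second reported by a load test

bound=$((N / M))   # fastest possible page rate through the pipe
echo "upper bound: $bound pages/second"

if [ "$measured" -gt "$bound" ]; then
    echo "measured rate exceeds the pipe -- question the result"
else
    echo "measured rate is plausible"
fi
```

Any measured number above the bound is rejected out of hand; anything below it merely passes the first filter and still needs refinement.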
It is important to obtain the speed of light at the start and then refine within the bounds we know to be true. This principle holds whether testing network, disk I/O, CPU, etc. If we know the fastest possible speed of a given component, then any result faster than what we know to be possible must be questioned: either we find it to be true for some other reason, or we discard it and start over.
And a few thoughts to provide a bit of context as to when a speed of light may be false. When testing disk I/O, if the file is sufficiently small, it will be loaded directly into the buffer cache. This is a section of machine memory dedicated to buffering disk I/O transactions, and it provides access times and speeds comparable to system memory rather than to the disk subsystem. That is, the speed of light for the buffer cache is based on a 50 ns access time, not an 80 ms access time.
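To see how far the buffer cache shifts the speed of light, a quick bit of shell arithmetic using the 50 ns and 80 ms figures above:

```shell
#!/bin/sh
# Contrast the speed of light for buffer-cache access vs disk access,
# using the access times from the text: ~50 ns cached, ~80 ms on disk.
mem_ns=50
disk_ns=80000000          # 80 ms expressed in nanoseconds

# Upper bound on accesses per second for each: one second is 1e9 ns.
mem_ops=$((1000000000 / mem_ns))
disk_ops=$((1000000000 / disk_ns))

echo "buffer cache bound: $mem_ops ops/s; disk bound: $disk_ops ops/s"
```

A result that looks six orders of magnitude too fast for a disk is not necessarily wrong; it may simply mean the test never left memory, and the bound you should be checking against is the buffer cache's, not the disk's.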
I'd like to thank the reader who brought the questions to me, and I invite others to comment or email me with any questions they may have. I can be reached at kmajer at karlmajer dot com.