Performance Testing – First Example

This article is the second in a series on performance and performance testing. The first article discussed the principles of the scientific method; this one walks through a basic, slightly simplified example of a performance task.

The developers you support as a systems administrator are considering moving the static content for the company website www.somefamoussite.com to its own webserver, both to free up resources for dynamic page generation and to generally speed things up. They are curious how fast an Apache server dedicated to static content will be able to serve requests.

To accomplish this task, we will walk through the steps of the scientific method and see how each one contributes to a sound, thorough roadmap for getting the developers their answer.

  1. First, define the question: “How fast can an Apache webserver serve static content?”
  2. Next, gather information and resources:

    It would benefit us to know the number of static items and their sizes in order to decide how best to answer the question, so we ask development and are handed a tarball containing 256 images with an average size of 6k each.

    Since we set up the hardware being used, we know that the servers have gigabit ethernet cards in them. We also expand the tarball into the document tree on our webserver and use find to build a logfile of URLs to use to fetch the static content.

    find . -type f | awk '{sub(/^\.\//, ""); print "http://www.myserver.com/images/" $0}' > logfile.out
    cat logfile.out logfile.out logfile.out logfile.out > logfile.1k
    cat logfile.1k logfile.1k logfile.1k logfile.1k > logfile.4k
    cat logfile.4k logfile.4k logfile.4k logfile.4k > logfile.16k
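
    A quick sanity check, assuming the tarball really did contain 256 files, is to count the lines at each stage:

    wc -l logfile.out logfile.1k logfile.4k logfile.16k    # expect 256, 1024, 4096 and 16384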

  3. Form the hypothesis:

    Based on an average size of 6KBytes, and knowing that the hardware has gigabit ethernet, we can compute that under lab conditions with a perfect network the machine can do no more than approximately 21,845 requests/second.

    (1 Gbit/sec == 128 MBytes/sec) / 6 KBytes average size == 21,845.3 objects/sec
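
    The same arithmetic, spelled out with bc (128 MBytes/sec expressed in KBytes, divided by the 6 KByte average object size):

    echo "128 * 1024 / 6" | bc -l    # 21845.33... objects/sec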

    Our hypothesis, therefore, based solely on the network capacity of the hardware, is: “The server can do no more than 21,845 requests/second.”

  4. Perform the experiment(s) and collect data:

    You’ll want to run top on the webserver to get a rough idea of how much idle CPU there is.

    Copy the logfile.16k to each of the load generators. In this example there will be 4 load generators.

    Use wget from one of the load generators to mark the webserver’s access log with something we can search for later.

    wget http://www.myserver.com/images/TESTSTARTEDHERE

    Use wget on each load generator to fetch the images

    wget -i logfile.16k -o wget.out

    Fire off all four wget processes at the same time and let them run.
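
    One way to launch them together is a small loop over ssh. This is only a sketch: it assumes the load generators are reachable as loadgen1 through loadgen4 (hypothetical hostnames), that passwordless ssh is set up, and that logfile.16k sits in the login directory on each of them.

    # Start one wget per load generator in the background, then wait for all four to finish.
    for host in loadgen1 loadgen2 loadgen3 loadgen4; do
        ssh "$host" 'wget -i logfile.16k -o wget.out' &
    done
    wait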

    Watch top running on the webserver and keep rough track of the idle CPU. The load generators will eventually run out of URLs to fetch, probably within a few minutes.

    Use wget to mark the access log again.

    wget http://www.myserver.com/images/TESTENDEDHERE

  5. Analyze the data:

    With each load generator holding a 16k-line logfile, we had roughly 64k total requests available. There is, however, a certain amount of overhead between requests that must be accounted for. A reasonable assumption is that each generator can still offer close to 8k requests at any given moment, and the four together still total well over the ~22k/second maximum of the network.

    Using the access log from the Apache server we can determine how much traffic we actually received.

    First, use sed, perl, or your language of choice to extract the test window from the access log.

    sed -n '/TESTSTARTEDHERE/,/TESTENDEDHERE/p' access.log > test.log

    Next, determine the starting time by looking at the line immediately after the TESTSTARTEDHERE marker in test.log and reading the timestamp on that line.

    head -2 test.log

    Do the same for the ending time by looking at the second to last line of test.log.

    tail -2 test.log

    Determine the total test time by subtracting the start time from the end time.
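
    One way to do the subtraction, assuming GNU date is available on the machine doing the analysis, is to convert each timestamp to epoch seconds first. The START and END values below are hypothetical examples of what head -2 and tail -2 would show; the leading bracket and the timezone can be ignored since both ends share them.

    # Convert a CLF timestamp (dd/Mon/yyyy:HH:MM:SS) to epoch seconds with GNU date.
    to_epoch() { date -d "$(echo "$1" | sed 's|/| |g; s|:| |')" +%s; }
    START='10/Oct/2008:13:55:36'    # hypothetical start timestamp
    END='10/Oct/2008:14:03:06'      # hypothetical end timestamp
    echo "total test time: $(( $(to_epoch "$END") - $(to_epoch "$START") )) seconds"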

    Determine the total number of lines in the logfile, subtracting 2 for the start and end markers.

    wc -l test.log

    This will likely be 64k unless you interrupted the test prematurely.

    Now extract the successful image retrievals using grep.

    grep "HTTP/.... 200" test.log > 200.log

    Count the number of successful requests

    wc -l 200.log

    Now compute the average request speed

    Good Requests / Total Time in minutes == Average Good Requests/Minute
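
    For example, with purely hypothetical numbers, 63,000 successful requests over a 7 minute test window would give:

    63,000 good requests / 7 minutes == 9,000 good requests/minute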

    Next, determine the number of requests per unit of time. Typically a 5 minute unit works best, but for simplicity we will use a 1 minute unit. Extract column 4 (the timestamp in a CLF Apache log) from the 200.log file:

    awk '{print $4}' 200.log > timestamp.out

    Truncate the seconds from the timestamps using either cut or awk:

    cut -d: -f 1,2,3 timestamp.out > trunctime.out

    Run the truncated timestamps through uniq -c to count how many occur during each minute (the access log is already in time order, so identical minutes are adjacent and no sort is needed):

    uniq -c trunctime.out > counttime.out

    Now reverse the two columns to make graphing easier, and add a closing bracket to balance the opening bracket that leads each CLF timestamp:

    awk '{print $2 "] " $1}' counttime.out > graphdata.out
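
    Each line of graphdata.out now pairs a minute of the test with the number of successful requests seen during that minute, e.g. (hypothetical values):

    [10/Oct/2008:13:55] 8760
    [10/Oct/2008:13:56] 9120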

    It is now possible to look through graphdata.out and get a rough sense of how the webserver did, but it is far more valuable to graph the data and examine it visually.

    Import the data into Excel or Pages, or use gnuplot on the command line, and plot it as a line graph.
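
    If you go the gnuplot route, a minimal one-line sketch might look like the following. The output filename and the png terminal are assumptions; the timefmt simply matches the bracketed, minute-truncated timestamps produced above.

    gnuplot -e 'set xdata time; set timefmt "[%d/%b/%Y:%H:%M]"; set terminal png size 800,400; set output "loadgraph.png"; plot "graphdata.out" using 1:2 with lines title "requests/minute"'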

    Load Graph

    The graph above was manufactured to illustrate the point. Note that the middle of the graph plateaus around 8,500 requests/second. The flatness of the curve suggests that we have hit a bottleneck somewhere. Since we know the network is capable of nearly 22k requests/second, and the network on each load client is presumably similar, we are either hitting the limits of the disk subsystem or we have pushed the webserver out of CPU.

    If, during the test, you saw the idle CPU approach single digits, we can be reasonably confident that we pushed the machine to its CPU limit; otherwise we may be hitting the limits of I/O.
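
    One quick way to tell CPU starvation apart from an I/O bottleneck, assuming the usual procps and sysstat tools are installed on the webserver, is to watch both during the run:

    vmstat 5       # "id" is idle CPU, "wa" is time spent waiting on I/O
    iostat -x 5    # watch %util on the disk holding the images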

  6. Interpret the data and draw conclusions:

    Now, using the graph, we can decide on a limit for the webserver. The flat line starts at approximately 8,500 req/sec. A reasonable buffer is 10% of that number, so we would say the maximum capacity of the webserver is 7,650 req/sec. If you wish to be more conservative, and you should, you could leave yourself 25% headroom and call it 6,375 req/sec.
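
    Spelled out, those two buffers are:

    8,500 req/sec * 0.90 == 7,650 req/sec
    8,500 req/sec * 0.75 == 6,375 req/sec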

    As a general rule you want to leave enough spare capacity on the remaining machines to absorb the load from a failed host. If you have two machines, each should be able to handle all of the load; if you have three, each should be able to handle half of the total load; and so forth.
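
    As a small worked example with hypothetical numbers: with N machines and a total peak load L, each machine must be able to carry L/(N-1). For three machines and a peak of 12,000 req/sec, each machine needs to handle 12,000 / 2 == 6,000 req/sec.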

  7. Publish Findings:

    With this simplistic testing done, you can approach development and tell them that, based on the preliminary testing you have done, you have some confidence the webserver can handle 6,375 requests/sec. Additionally, you should provide the developers with your step-by-step guide so they can both validate your work and run this level of testing on their own in the future.

This concludes our first example. There are several more to come.

If you like what you’ve read, please share the blog with others. If you have any questions or comments, feel free to send me email at kmajer at karlmajer dot com.

>>> Karl