httperf is another tool for measuring web server performance. Its input options and reports differ from those we saw with ApacheBench. The tool's manpage includes an in-depth explanation of all the options it accepts and the results it reports. Here we will concentrate on the input and on the part of the output that is most interesting to us.

With httperf you cannot specify the concurrency level; instead, you have to specify the connection opening rate (--rate) and the number of calls (--num-call) to perform on each opened connection. To compare with the results we received from ApacheBench, we will use a connection rate slightly higher than the requests-per-second figure that ApacheBench reported. That number was 856, so we will try a rate of 860 (--rate 860) with just one request per connection (--num-call 1). As in the previous test, we are going to make 5,000 requests (--num-conn 5000). We have also set a timeout of 60 seconds (--timeout 60) and allowed httperf to use as many ports as it needs (--hog).

So let's execute the benchmark and analyze the results:

panic% httperf --server localhost --port 80 --uri /perl/simple_test.pl \
--hog --rate 860 --num-conn 5000 --num-call 1 --timeout 60

Maximum connect burst length: 11

Total: connections 5000 requests 5000 replies 5000 test-duration 5.854 s

Connection rate: 854.1 conn/s (1.2 ms/conn, <=50 concurrent connections)
Connection time [ms]: min 0.8 avg 23.5 max 226.9 median 20.5 stddev 13.7
Connection time [ms]: connect 4.0
Connection length [replies/conn]: 1.000

Request rate: 854.1 req/s (1.2 ms/req)
Request size [B]: 79.0

Reply rate [replies/s]: min 855.6 avg 855.6 max 855.6 stddev 0.0 (1 samples)
Reply time [ms]: response 19.5 transfer 0.0
Reply size [B]: header 184.0 content 6.0 footer 2.0 (total 192.0)
Reply status: 1xx=0 2xx=5000 3xx=0 4xx=0 5xx=0

CPU time [s]: user 0.33 system 1.53 (user 5.6% system 26.1% total 31.8%)
Net I/O: 224.4 KB/s (1.8*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

As before, we are mostly interested in the average reply rate of 855, which is almost exactly the same result reported by ab in the previous section. Notice that when we tried --rate 900 for this particular setup, the reported request rate went down drastically, since the server's performance degrades when it receives more requests than it can handle.
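Rather than guessing at the saturation point, you can sweep a range of rates and watch where the reply rate stops climbing and the error counters start to grow. The following is a minimal sketch, assuming a Bourne-style shell and reusing the same httperf options shown above; the chosen list of rates and the egrep patterns are only illustrative:

for rate in 700 800 860 900; do
    echo "=== rate: $rate ==="
    # keep only the reply-rate and error-summary lines from httperf's report
    httperf --server localhost --port 80 --uri /perl/simple_test.pl \
        --hog --rate $rate --num-conn 5000 --num-call 1 --timeout 60 \
        | egrep 'Reply rate|Errors: total'
done

The rate at which the average reply rate levels off (with the error totals still at zero) gives a rough estimate of the server's capacity for this test.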