[Twisted-web] making streams optional in web2

Jean-Paul Calderone exarkun at divmod.com
Thu Feb 9 13:08:27 MST 2006


On Thu, 09 Feb 2006 14:20:05 -0500, Glyph Lefkowitz <glyph at divmod.com> wrote:
>On Thu, 2006-02-09 at 13:40 -0500, James Y Knight wrote:
>
>> Benchmarking is a dangerous activity.
>
>Truly.
>

Here's the ab2 output for a run against a twisted.web2 server:

    Server Software:        Twisted/SVN-Trunk
    Server Hostname:        kunai.lan
    Server Port:            8080

    Document Path:          /
    Document Length:        26 bytes

    Concurrency Level:      500
    Time taken for tests:   36.267711 seconds
    Complete requests:      10000
    Failed requests:        0
    Write errors:           0
    Total transferred:      1751750 bytes
    HTML transferred:       260260 bytes
    Requests per second:    275.73 [#/sec] (mean)
    Time per request:       1813.386 [ms] (mean)
    Time per request:       3.627 [ms] (mean, across all concurrent requests)
    Transfer rate:          47.15 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0   87 718.4      1    9000
    Processing:   108  461 122.7    474    1944
    Waiting:       40  459 122.7    473    1942
    Total:        110  548 766.0    481   10912

    Percentage of the requests served within a certain time (ms)
      50%    481
      66%    505
      75%    511
      80%    515
      90%    546
      95%    585
      98%    733
      99%   3575
     100%  10912 (longest request)

And here's the output of a run against CherryPy:

    Server Software:        CherryPy/2.2.0beta
    Server Hostname:        kunai.lan
    Server Port:            8080

    Document Path:          /
    Document Length:        26 bytes

    Concurrency Level:      500
    Time taken for tests:   7.855088 seconds
    Complete requests:      10000
    Failed requests:        0
    Write errors:           0
    Total transferred:      1360136 bytes
    HTML transferred:       260026 bytes
    Requests per second:    1273.06 [#/sec] (mean)
    Time per request:       392.754 [ms] (mean)
    Time per request:       0.786 [ms] (mean, across all concurrent requests)
    Transfer rate:          169.06 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0  102 545.4      0    3003
    Processing:    16  229 169.3    305    3419
    Waiting:       15  228 169.4    304    3418
    Total:         33  332 623.2    306    6420

    Percentage of the requests served within a certain time (ms)
      50%    306
      66%    329
      75%    333
      80%    334
      90%    340
      95%    343
      98%   3202
      99%   3460
     100%   6420 (longest request)

The CherryPy numbers may be artificially low; a quick check suggests the bottleneck in that run is on the client (ab2) side rather than in the server.
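
One rough way to check is to drive the same URL from an independent client and compare rates. The sketch below isn't the quick check referred to above, just an illustration (stdlib only; hostname taken from the runs above, request and thread counts arbitrary) of measuring how many requests a single client process can push:

    # Rough client-side cross-check: how fast can one client process drive
    # the server?  Hostname, request count and worker count are placeholders.
    import time
    import http.client
    from concurrent.futures import ThreadPoolExecutor

    HOST, PORT = 'kunai.lan', 8080      # hostname from the runs above
    REQUESTS, WORKERS = 2000, 50        # arbitrary; scale to taste

    def fetch(_):
        conn = http.client.HTTPConnection(HOST, PORT)
        conn.request('GET', '/')
        conn.getresponse().read()
        conn.close()

    start = time.time()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(fetch, range(REQUESTS)))
    elapsed = time.time() - start
    print('%.1f requests/sec from this client' % (REQUESTS / elapsed))

If the number it reports is close to the requests/sec the server run reported, the client is probably the limiting factor.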

This is the WSGI app being served in both cases:

    def app(environ, start_response):
        # Minimal WSGI app: every request gets the same 26-byte plain-text body.
        start_response('200 Ok', [('Content-Type', 'text/plain')])
        # NB: a bare string is a legal WSGI response iterable, but it is handed
        # to the server one character at a time; a one-element list avoids that.
        return 'WSGI is a wonderful thing.'
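
For anyone who wants to poke at an equivalent app without setting up twisted.web2 or CherryPy, it can be dropped onto the stdlib wsgiref server; this is just a local sanity-check harness, not either of the servers benchmarked above:

    # Local sanity-check harness only -- not the twisted.web2 or CherryPy
    # setups measured above.  Serves an equivalent app on port 8080.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response('200 Ok', [('Content-Type', 'text/plain')])
        # wsgiref under Python 3 wants bytes; a one-element list avoids
        # handing the body to the server one character at a time.
        return [b'WSGI is a wonderful thing.']

    if __name__ == '__main__':
        # e.g.:  ab2 -n 10000 -c 500 http://localhost:8080/
        make_server('', 8080, app).serve_forever()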

Jean-Paul


