[Twisted-web] [Twisted-Python] Speed of rendering?

exarkun at twistedmatrix.com exarkun at twistedmatrix.com
Sun Jan 6 15:22:09 EST 2013

On 12:48 am, peter.westlake at pobox.com wrote:
>On Fri, Jan 4, 2013, at 19:58, exarkun at twistedmatrix.com wrote:
>>On 06:30 pm, peter.westlake at pobox.com wrote:
>> >A while back I promised to write some benchmarks for
>> >twisted.web.template's flattening functions. Is something like this
>> >suitable? If so, I'll add lots more test cases. The output format
>> >could be improved, too - any preferences?
>>The output should be something that we can load into our codespeed
>>instance.  The output of any of the existing benchmarks in lp:twisted-
>>benchmarks should be a good example of that format (I don't even know
>>what it is right now - it may not even be a "format" so much as a set
>>of data to submit to an HTTP API).
>It's pretty simple. The main difference is that all the other benchmarks
>only print a single result, and I was planning to do a number of tests.
>They can always go in separate files if it's a problem.

Codespeed cannot handle more than one result per benchmark.
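
As I recall it, each submission to the Codespeed HTTP API carries exactly 
one number per benchmark, which is why multiple results per benchmark 
don't fit.  A rough sketch of one submission's fields (the project, 
executable, and benchmark names here are made up for illustration):

```python
# Hypothetical sketch of a single Codespeed result submission.
# Field names follow the Codespeed /result/add/ API as I remember it;
# all of the values below are invented examples.
payload = {
    "commitid": "36852",            # revision being measured
    "branch": "default",
    "project": "Twisted",
    "executable": "CPython 2.7",
    "benchmark": "web.template.flatten",
    "environment": "bench-machine",
    "result_value": 0.0123,        # one number - hence one result per benchmark
}
```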
>>The `timeit` module is probably not suitable to use to collect the data,
>>as it makes some questionable choices with respect to measurement
>>technique, and at the very least it's inconsistent with the rest of the
>>benchmarks we have.
>What sort of choices? As far as I can see it just gets the time
>before the benchmarked code and the time after and subtracts.
>That looks quite close to what the other benchmarks do.

It does a ton more stuff than this, so I'm not sure what you mean here. 
It's full of dynamic code generation and loop counting/prediction logic, 
gc manipulation, and other stuff.  Plus, it changes from Python version 
to Python version.
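
For example, on newer Pythons you can watch the loop-count prediction at 
work, and `timeit` also disables the garbage collector around the timed 
loop, which hides gc cost that real code would pay:

```python
import timeit

# autorange() is the loop-count prediction mentioned above: it keeps
# increasing the iteration count until one run takes at least ~0.2s.
timer = timeit.Timer("sum(range(100))")
number, elapsed = timer.autorange()

# `number` is whatever count timeit decided on, not something you chose,
# and the statement itself was compiled via dynamically generated code.
```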
>What method would you prefer?

Something simple and accurate. :)  You may need to do some investigation 
to determine the best approach.
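
Something in the spirit of the existing lp:twisted-benchmarks scripts - 
one clock read before, one after, divided by the iteration count - is 
probably a reasonable starting point.  A sketch (the function name and 
iteration count are arbitrary):

```python
import time

def bench(f, iterations=10000):
    # One clock read before and one after a fixed number of calls,
    # matching the simple technique the other benchmarks use.
    start = time.perf_counter()
    for _ in range(iterations):
        f()
    elapsed = time.perf_counter() - start
    return elapsed / iterations  # mean seconds per call

per_call = bench(lambda: "".join(["<p>", "hello", "</p>"]))
```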

>>Selecting data to operate on is probably an important part of this
>>benchmark (or collection of benchmarks).  It may not be possible to
>>capture all of the interesting performance characteristics in a single
>>dataset.  However, at least something that includes HTML tags is
>>probably desirable, since that is the primary use-case.
>Yes, that's where I'm going to spend most of my effort.
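
One cheap way to get HTML-heavy input of tunable size is to generate it. 
A hypothetical helper (the structure and sizes are made up, not taken 
from any existing benchmark):

```python
# Hypothetical generator of an HTML-heavy benchmark input.
# A table of `rows` rows, each with 10 cells, wrapped in a document.
def make_document(rows=100):
    cells = "".join("<td>cell %d</td>" % i for i in range(10))
    body = "".join("<tr>%s</tr>" % cells for _ in range(rows))
    return "<html><body><table>%s</table></body></html>" % body
```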
>>There are some other Python templating systems with benchmarks.  One
>>approach that might make sense is to try to build analogous benchmarks
>>for twisted.web.template.  (Or perhaps a little thought will reveal that
>>it's not possible to make comparisons between twisted.web.template and
>>those systems, so there's no reason to follow their benchmarking approach.)
>I'll do that if I get time, thanks.
