[Twisted-Python] profiling twisted

glyph at divmod.com
Wed Jun 27 12:32:12 EDT 2007


On 02:32 pm, markus at bluegap.ch wrote:
>Requests per second:    494.40 [#/sec] (mean)

>Now, measured while the server is under very small load:

>Requests per second:    24.37 [#/sec] (mean)

>When putting the server under real load, those response times climb up 
>to

What is 'real load'?  Are you talking about things happening in the 
same process as, but not related to, the web server?
>two seconds, so there must be something wrong.

Maybe your server is slow? :)
>Can I somehow get the reactor's state, i.e. how many deferreds are 
>waiting in the queue, how many threads are running concurrently, etc?

There is no queue of Deferreds.  They don't actually have anything to do 
with the reactor.
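
For example (a minimal sketch, not from the original thread): a 
Deferred is just a chain of callbacks, and you can create and fire one 
with no reactor running at all.

    from twisted.internet.defer import Deferred

    def double(result):
        # an ordinary synchronous callback
        return result * 2

    results = []
    d = Deferred()
    d.addCallback(double)
    d.addCallback(results.append)
    d.callback(21)   # fires the whole chain synchronously
    assert results == [42]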
>How good is the idea of deferring File I/O to threads, i.e. 
>threads.deferToThread(self.filehandle.write, data)?

If you do indeed discover that you are spending a long time waiting 
for your file writes, then that might help.  It might also randomly 
interleave the data and corrupt your files if 'data' is big enough, 
since nothing serializes concurrent threaded writes.
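
One hedged way around that interleaving risk (a sketch, not something 
from the original message; 'filehandle' is assumed to be an 
already-open file object) is to serialize the threaded writes with a 
DeferredLock so only one write is ever in flight:

    from twisted.internet import threads
    from twisted.internet.defer import DeferredLock

    write_lock = DeferredLock()

    def write_in_thread(filehandle, data):
        # The lock is held until the threaded write's Deferred fires,
        # so concurrent callers cannot interleave their chunks.
        return write_lock.run(threads.deferToThread,
                              filehandle.write, data)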
>Another possibly blocking module might be the database api, but I'm 
>using twisted's enterprise adbapi, which should be async, AFAICT.

It does the operations in threads, yes.  However, the threadpool will 
eventually fill up; the concurrency is fairly limited.  (The default is 
10 or so workers, I think).
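
If the pool really does turn out to be the bottleneck, both limits can 
be raised.  A sketch, with a hypothetical sqlite3 database and sizes 
picked arbitrarily:

    from twisted.enterprise import adbapi
    from twisted.internet import reactor

    # cp_min/cp_max bound how many connections (and worker threads)
    # adbapi will use; the default maximum is around 10.
    dbpool = adbapi.ConnectionPool("sqlite3", "example.db",
                                   cp_min=3, cp_max=20)

    # The reactor's own thread pool (used by deferToThread) is sized
    # separately:
    reactor.suggestThreadPoolSize(20)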
>Maybe copying data around takes time. I'm sending around chunks of 64k 
>size (streaming from the database to an external programm). Reducing 
>chunk size to 1k helps somewhat (i.e. response time is seldom over 
>150ms, but can still climb up to > 0.5 seconds).

That's a possibility the "--profile" option to twistd which JP 
suggested might help you track down: in that case you'll see the 
functions copying data taking up a lot of CPU time in the profile.
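
For instance, assuming your twistd supports "--profile=prof.out 
--savestats" (file name hypothetical), you can dig through the saved 
stats afterwards with the standard pstats module:

    import pstats

    # "prof.out" is whatever file name was passed to --profile.
    stats = pstats.Stats("prof.out")
    stats.strip_dirs().sort_stats("cumulative").print_stats(20)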
>Hum... external program.... maybe it's the self.transport.write() call 
>which blocks several 100ms? Is it safe to write:
>
>   d = threads.deferToThread(self.transport.write, dataChunk)
>
>(i.e. call transport.write from a thread?)

No.  With very few exceptions, Twisted APIs are _not_ thread safe.  
This particular call does not block, though, and it is extremely, 
vanishingly unlikely to be causing your problems: it just sticks some 
data into the outgoing queue and returns immediately.
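
If you ever do need to hand data to a transport from a real thread 
(not something your case seems to require), the one thread-safe entry 
point is reactor.callFromThread.  A sketch:

    from twisted.internet import reactor

    def write_from_worker_thread(transport, dataChunk):
        # Safe to call from any thread: it schedules transport.write
        # to run in the reactor thread instead of calling it directly.
        reactor.callFromThread(transport.write, dataChunk)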
>How much resources do these deferToThread() deferreds eat? AFAICT, the 
>reactor prepares a thread pool, which leads me to think that it's a 
>well optimized implementation...

It's not particularly "optimized", in the sense that we haven't 
measured or improved its performance much, but it also isn't doing 
very much: put a callable into a queue, pull it out in a worker 
thread, and call it; that's about all.  It would certainly surprise me 
if that operation were taking 100ms.
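
If you want to check that for yourself, here is a rough standalone 
sketch that times a no-op round trip through the thread pool:

    import time
    from twisted.internet import reactor, threads

    def measure():
        start = time.time()
        d = threads.deferToThread(lambda: None)
        def report(_):
            elapsed_ms = (time.time() - start) * 1000.0
            print("deferToThread round trip: %.3f ms" % elapsed_ms)
            reactor.stop()
        d.addCallback(report)

    reactor.callWhenRunning(measure)
    reactor.run()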

One quick measurement you can do to determine what might be causing this 
performance loss is to examine the server in 'top' while it is allegedly 
under load.  Is it taking up 100% CPU?  If not, then it's probably 
blocked on some kind of I/O in your application, or perhaps writing the 
log.  If so, then there's some inefficient application code (or Twisted 
code) that you need to profile and optimize.  The output of "strace -T" 
on your Twisted server *might* be useful if you discover that you're 
blocking on some kind of I/O.