[Twisted-Python] Degrading under load
alecm at chatango.com
Thu Mar 9 16:49:22 EST 2006
I can give you some information regarding option A.
We are running our chat server on Twisted. When we run Twisted as a single
process and the number of connections per second exceeds 50 (> 3000
per minute), Twisted often blocks and does not accept new connections. The CPU
load shown by "top" for the twisted process is 99.9%.
This behavior is on linux 2.6 on a 64 bit 4 CPU machine.
On a 2.4 kernel on a 32-bit 4-CPU machine, however, it always accepted new
connections, even at 99.9% load, but they would then often time out, since
under that load no data was written to them for a long time.
I should add here that in our case, the load was driven not by
connection/disconnection events, but by the number of established
connections. When that number was in the vicinity of 5000, the system poll()
call became very slow (we run the poll reactor).
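A minimal stdlib sketch (not Twisted itself, and an illustration rather than the poster's code) of why poll() degrades with many idle connections: poll() is handed the whole fd set on every call, so its per-call cost grows with the number of registered sockets, while epoll (Linux-only, the mechanism behind later event-driven designs) keeps the interest set in the kernel and reports only ready fds.

```python
import select
import socket

# Sketch: poll() rescans every registered fd on each call, so ~5000 mostly
# idle connections make each call expensive; epoll reports only ready fds.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)

poller = select.poll()
poller.register(listener.fileno(), select.POLLIN)
print(poller.poll(0))          # []: nothing is ready yet

ep = select.epoll()            # Linux-only
ep.register(listener.fileno(), select.EPOLLIN)
print(ep.poll(0))              # also []: epoll reports only ready fds

ep.close()
listener.close()
```

With both APIs the ready set is empty here; the difference is that poll()'s cost scales with the registered set, epoll's with the ready set.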
Another observation: we had a memory leak, and when the RSS memory grew to,
say, 3x the starting size, performance degraded severely. I should note
that the machine was not running out of memory: we have 4GB of RAM, and
total used memory was at most 400MB, with the twistd process using perhaps
160MB at most.
We are now moving to Twisted 2.2 and a multiprocess architecture, somewhat
similar to your option B.
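One common way to split a single listener across worker processes (a prefork sketch under my own assumptions, not the poster's actual design) is to fork after listen(), so each worker inherits the same listening socket and the kernel distributes incoming connections among the workers' accept() calls:

```python
import os
import socket

# Prefork sketch (Unix-only): fork after listen() so all workers share one
# listening socket; the kernel hands each completed connection to whichever
# worker calls accept() next.
listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))
listener.listen(128)

workers = []
for _ in range(2):                 # two workers for the sketch
    pid = os.fork()
    if pid == 0:
        # A real worker would loop on listener.accept() and serve clients.
        os._exit(0)                # placeholder body
    workers.append(pid)

for pid in workers:
    os.waitpid(pid, 0)
listener.close()
```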
From: twisted-python-bounces at twistedmatrix.com
[mailto:twisted-python-bounces at twistedmatrix.com] On Behalf Of Yitzchak Gale
Sent: Thursday, March 09, 2006 1:13 PM
To: twisted-python at twistedmatrix.com
Subject: Re: [Twisted-Python] Degrading under load
Sorry, I guess my question wasn't clear enough.
The most important things I need to know are:
When running listenTCP, how often does twisted
accept pending connections on the port? Is it only
when the previous connection is finished
processing, or every time the event loop gets
control, or something in between?
And when twisted does accept pending connections,
does it accept ALL of them and queue them all for
processing, or just one at a time?
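Part of the answer lies below Twisted, in the kernel: TCP handshakes complete up to (roughly) listen()'s backlog before the application ever calls accept(), so clients connect promptly regardless of how often the event loop drains the queue, as long as the backlog has room. A stdlib sketch of that behavior:

```python
import socket

# Sketch: the kernel finishes handshakes up to ~listen()'s backlog before
# accept() runs, so pending clients are connected, not waiting, while the
# event loop is busy.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(5)
addr = server.getsockname()

# All three connects succeed before a single accept() has run.
clients = [socket.create_connection(addr) for _ in range(3)]
accepted = [server.accept()[0] for _ in range(3)]
print(len(accepted))  # 3

for s in clients + accepted:
    s.close()
server.close()
```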
My original post:
> I need to set up a TCP service (on a linux box)
> that will get something like a few hundred connections
> per minute at peak load. For each connection, I do
> some XML processing, and possibly send a query
> to another nearby machine and get a response.
> Seems to me that twisted should be able to handle that.
> But what happens when I get the occasional burst
> of connections, let's say tens of connections within
> one second? What I need is:
> o Every client gets a socket connection promptly, so
> no danger of TCP timeout.
> o Under medium load, clients will have to wait a
> bit longer for the response.
> o Under heavy load, some clients will get a "busy"
> response (defined in the protocol I am implementing)
> and immediate socket close.
> What is the best way to do that in twisted? I envision
> one of the following architectures:
> A. Just use twisted in the usual way. Watch twisted's
> event queue for heavy load.
> B. Two processes: one to dish out connections and one
> to queue requests and process them.
> C. Three processes: one to dish out connections, one
> to queue requests and watch for load, and one to
> process the requests.
> Which of these do I need to use to get the desired
> effect under load? Or is there some better way?
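The load-shedding behavior in the quoted list can be sketched at the socket level (stdlib only; MAX_ACTIVE and the BUSY reply here are hypothetical placeholders, since the real reply is defined by the poster's protocol): always accept promptly so no client hits a TCP timeout, then either serve or send "busy" and close depending on current load.

```python
import socket

MAX_ACTIVE = 2        # hypothetical capacity limit for this sketch
BUSY = b"BUSY\n"      # hypothetical busy reply; the real one is protocol-defined

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(5)
addr = server.getsockname()

active, replies = [], []
for _ in range(3):
    c = socket.create_connection(addr)
    conn, _peer = server.accept()      # always accept promptly: no TCP timeout
    if len(active) >= MAX_ACTIVE:
        conn.sendall(BUSY)             # heavy load: busy reply, immediate close
        conn.close()
        replies.append(c.recv(16))
    else:
        active.append(conn)            # medium load: keep it, serve it later
        replies.append(b"accepted")

print(replies)  # [b'accepted', b'accepted', b'BUSY\n']
for s in active:
    s.close()
server.close()
```

The point is that the accept decision and the capacity decision are separate: the kernel and accept() keep clients from timing out, while the protocol layer decides who gets served.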