[Twisted-Python] What kind of throughput can we expect to achieve when using DatagramProtocol (UDP)

John Draper lists at webcrunchers.com
Mon Mar 30 18:07:08 EDT 2009


We have an application we intend to release where roughly a million 
clients will be sending UDP packets to a Twisted Python server.  The 
server needs to process the incoming data (approx. 128 bytes of text 
per packet), which a back-end system then inserts into MySQL.

Has anyone done serious "stress testing" of simple Twisted Python 
server code to see just how much data it can digest at a time?  Our 
server will no doubt be hosted on an OC3, capable of 150 megabits of 
throughput at approx a 75 - 80% load.  We need to know how many 
servers we need to put into some kind of load-sharing cluster to 
handle this very high data rate before it chokes.
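A rough back-of-envelope for the link side, using the poster's own figures (150 Mbit/s at 75% load, 128-byte payloads) plus an assumed 28 bytes of IPv4+UDP header per packet; link-layer framing is ignored:

```python
# Packets/sec an OC3-class link carries at 150 Mbit/s and 75% load,
# assuming 128-byte payloads plus 28 bytes of IPv4+UDP headers.
LINK_BITS_PER_SEC = 150_000_000
UTILISATION = 0.75
PAYLOAD_BYTES = 128
HEADER_BYTES = 20 + 8  # IPv4 header + UDP header

bits_per_packet = (PAYLOAD_BYTES + HEADER_BYTES) * 8
packets_per_sec = int(LINK_BITS_PER_SEC * UTILISATION / bits_per_packet)
print(packets_per_sec)  # roughly 90,000 packets/sec
```

So the question is really whether one Twisted process can sustain on the order of 90k datagrams/sec end-to-end, including the MySQL inserts, or whether that work has to be spread across a cluster.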

Right now, in our proof of performance, we are using UDP, but we are 
planning to move to the more reliable TCP/IP protocol when we get into 
production.

These requests will come in fast and violent bursts.
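For reproducing those bursts in a stress test, a simple stdlib load generator is enough; this sketch (the function name and port are made up for illustration) fires a back-to-back burst of 128-byte datagrams and reports the elapsed time:

```python
# Hypothetical stress-test helper: blast a burst of 128-byte UDP
# datagrams at a server and return how long the burst took to send.
import socket
import time

def send_burst(host, port, count, payload=b"x" * 128):
    """Send `count` datagrams back-to-back; return elapsed seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.perf_counter()
    for _ in range(count):
        sock.sendto(payload, (host, port))
    elapsed = time.perf_counter() - start
    sock.close()
    return elapsed
```

Running several of these senders in parallel against the server, while watching for drops in the kernel's UDP receive-buffer counters, gives a more honest picture of burst behaviour than a steady-rate test.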

