[Twisted-Python] Question regarding async stuff

Greg Fortune lists at gregfortune.com
Thu Jul 11 09:58:22 EDT 2002


Good deal.  I did a poor job of communicating my question, but I did 
understand everything.  Some of the processing I was considering was 
fairly CPU-intensive, but a few simple changes reduced the processing 
overhead to almost nothing.

So, can I assume that while a request is being processed (I'm not talking 
about data transport here, just the processing), operations on data 
members in the protocol's factory are atomic?  If the server can't 
"process" more than one request at a time, two protocols cannot be 
accessing the factory's members concurrently, correct?  I've got a mutex 
wrapper around some of the state in my factory right now, but it sounds 
like I can rip that out.
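To make the point concrete, here is a minimal sketch (plain Python, no Twisted imports, all names hypothetical) of why the mutex is unnecessary: the reactor dispatches events one at a time, so two protocols touching shared factory state still run sequentially, and each method runs to completion before the next event is handled.

```python
# Sketch: shared factory state accessed from protocol callbacks.
# Because the reactor is single-threaded, each callback runs to
# completion before the next one starts, so this read-modify-write
# sequence is effectively atomic -- no lock required.

class Factory:
    def __init__(self):
        self.bound_files = {}          # shared state, no mutex

    def bind(self, name, path):
        # Runs uninterrupted: no other protocol can sneak in between
        # the membership test and the assignment.
        if name in self.bound_files:
            return False
        self.bound_files[name] = path
        return True

class Protocol:
    def __init__(self, factory):
        self.factory = factory

    def request_bind(self, name, path):
        return self.factory.bind(name, path)

factory = Factory()
p1, p2 = Protocol(factory), Protocol(factory)
assert p1.request_bind("a", "/1/a00001") is True
assert p2.request_bind("a", "/1/a00002") is False   # duplicate rejected
```

The same reasoning holds for any state the factory keeps, as long as no callback blocks or hands the data to another thread.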

The server I'm writing is pretty simple.  In principle, it's an FTP server 
with special restrictions.  It's a file server with the requirement that it 
provide a pool of unbound files and a unique path/name for any file that 
has been bound.  I'm going to use it to store and retrieve graphics 
associated with entities in a database for a point-of-sale/inventory system 
I'm developing.

That way I can be sure that my pathnames will be at most a certain length.  
All directories will be 1 char long and filenames will be 6 chars long.  At a 
depth of 4 with 10 directories spanning from each node, I can store somewhere 
over 10E9 files. 
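The capacity claim can be checked with quick arithmetic.  The sketch below assumes a 10-symbol alphabet (digits) for the 6-character filenames, which the post doesn't actually specify:

```python
# Sanity check of the directory-tree capacity, under the (assumed)
# convention that filenames use only the digits 0-9.

branching = 10        # directories spanning from each node
depth = 4
filename_chars = 6
alphabet = 10         # assumption: digits only

leaf_dirs = branching ** depth               # 10**4 leaf directories
files_per_dir = alphabet ** filename_chars   # 10**6 names per directory
total = leaf_dirs * files_per_dir
assert total == 10 ** 10                     # "somewhere over 10E9"

# The path length is bounded too: "/d" per level plus "/" + filename.
max_path = depth * 2 + 1 + filename_chars
assert max_path == 15
```

A larger filename alphabet only raises the total, so 10E9 is a safe lower bound either way.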

Thanks for the quick response,

Greg

<snip>
>
> Yeah, I think you're misunderstanding something ;).
>
> Protocol.dataReceived is called only when data is available from a network
> connection; therefore, partial requests coming in are partially parsed and
> buffered by state machines (Protocol instances).
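The buffering described above can be sketched in a few lines.  This is a plain-Python stand-in for the pattern, not actual Twisted code (Twisted ships a LineReceiver that does this for you); the delimiter and request format are illustrative assumptions:

```python
# A minimal state machine in the style of Protocol.dataReceived:
# partial requests are parked in a buffer, and complete
# (newline-terminated) requests are delivered as they arrive.

class LineBufferingProtocol:
    delimiter = b"\n"

    def __init__(self):
        self.buffer = b""
        self.requests = []

    def dataReceived(self, data):
        # Called whenever the network hands us bytes; there is no
        # guarantee the bytes line up with request boundaries.
        self.buffer += data
        while self.delimiter in self.buffer:
            line, self.buffer = self.buffer.split(self.delimiter, 1)
            self.requests.append(line)

p = LineBufferingProtocol()
p.dataReceived(b"GET /fo")          # partial request: just buffered
p.dataReceived(b"o\nGET /bar\n")    # completes one request, adds another
assert p.requests == [b"GET /foo", b"GET /bar"]
assert p.buffer == b""
```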
>
> When a full request has been received, the request can be processed.  If
> processing that request requires accessing other asynchronous data that's
> not yet available, that's fine too -- just do your transport.write(...) to
> respond later on, when a different event arrives.  Some parts of the
> framework (twisted.spread, twisted.enterprise) make this extremely
> explicit, by allowing the user to return a Deferred when their response is
> not yet ready.
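The Deferred idea can be illustrated with a toy stand-in (the real class lives in twisted.internet.defer; the handler and names below are hypothetical): the request handler returns a placeholder for a result that isn't ready yet, and the response is written when a later event fires it.

```python
# Toy sketch of the Deferred pattern: return a placeholder now,
# fire the callback chain when the result arrives later.

class Deferred:
    def __init__(self):
        self._callbacks = []
        self._fired = False
        self._result = None

    def addCallback(self, fn):
        if self._fired:
            fn(self._result)
        else:
            self._callbacks.append(fn)
        return self

    def callback(self, result):
        self._fired, self._result = True, result
        for fn in self._callbacks:
            fn(result)

responses = []
pending = []

def handle_request():
    # The response isn't ready -- return a Deferred instead of blocking.
    d = Deferred()
    pending.append(d)
    d.addCallback(responses.append)   # stands in for transport.write(...)
    return d

handle_request()
assert responses == []                            # nothing written yet
pending[0].callback(b"bound at /1/a00001")        # later event arrives
assert responses == [b"bound at /1/a00001"]
```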
>
> Twisted can be "processing multiple requests at the same time" in the sense
> that while it's waiting on data from the network, it won't be blocked,
> since all I/O is asynchronous.  It will be "stopped" while doing literal
> CPU-bound "processing" of a request; but while this may seem bad if you
> look at it naively, 90% of all request-processing you'll do is incredibly
> brief, and managing the resources needed to parallelize that processing is
> an order of magnitude (or more, thanks to python's global interpreter lock,
> mutex contention, context switching, and other thread nastinesses) more
> intensive than just running the requests one after another.
>
> This is before we even start talking about the inherent, dangerous
> complexity of thread-based approaches to state management; they're
> inefficient, and they're often buggy, too.
>
> Even given all that, Twisted does have good support for threads when you
> really need them.
>
>    
> http://twistedmatrix.com/documents/TwistedDocs/Twisted-0.19.0rc3/twisted/internet/interfaces_IReactorThreads.py.html
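For the rare genuinely CPU-heavy job, the idea behind the threading support referenced above (reactor.callInThread in Twisted's API) can be sketched with only the standard library; the function names here are stand-ins, not Twisted calls:

```python
# Sketch of pushing CPU-bound work onto a worker thread so the
# event loop stays responsive, in the spirit of reactor.callInThread.

import threading
import queue

results = queue.Queue()

def cpu_heavy(n):
    # Stand-in for expensive request processing.
    results.put(sum(range(n)))

t = threading.Thread(target=cpu_heavy, args=(100000,))
t.start()
# ... meanwhile the event loop keeps handling I/O ...
t.join()
assert results.get() == sum(range(100000))
```

The thread communicates its result back through a queue rather than touching shared state directly, which keeps the single-threaded reasoning above intact.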
>
> I hope this answers your questions.  What sort of file server are you
> writing?




More information about the Twisted-Python mailing list