[Twisted-Python] Synchronization techniques

glyph at divmod.com glyph at divmod.com
Thu Apr 5 11:47:50 EDT 2007

On 01:53 pm, daniel at keystonewood.com wrote:
>On Apr 5, 2007, at 5:53 AM, glyph at divmod.com wrote:
>>>>I'm afraid that the feature you want doesn't make any sense and is, 
>>>>in a broad sense, impossible.
>>>Maybe it's impossible for you to see things the way I see them 
>>>because you have become drunk on Twisted Kool-Aide.
>>You are making it sound like you are bringing a fresh new idea to the 
>>discussion here, which I've never heard before and am unwilling to 
>>consider because I'm inflexible in my thinking.  That's not what's 
>I'm sorry I wrote that...it was inflammatory and did not bring any 
>value to the conversation. Please accept my apology.

Thank you, I very much appreciate the sentiment!  I'm glad to see you 
quoted only the actually useful / productive parts of my response, 
too ;).
>The external "blocking" resource is just a shell script that takes 
>some time to run. It does not acquire any shared resources that would 
>result in deadlock, and it will always return (maybe with an error, 
>but it will return) unless something terrible happens (e.g. plug is 
>pulled on server, fire, etc.).

I thought I understood what was going on, but now I'm confused again. 
Why do you need mutual exclusion at all if it doesn't acquire any shared 
resources?  Couldn't you just run it concurrently?
>It would be more maintainable because it would look just like normal 
>sequential python code:

Yes, it would *look* like sequential python code.  But it wouldn't be 
:).  There's a heck of a lot that can happen in acquire(); your whole 
application could run for ten minutes on that one line of code.  Worst 
of all, it would only happen in extreme situations, so testing or 
debugging issues that are caused by it becomes even more difficult.
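To make that concrete, here's a toy illustration using the stdlib's asyncio as a stand-in for the reactor (names like hidden_block and ticker are made up for the example): a synchronous call buried in one innocent-looking line stalls every other task until it returns, and nothing in the surrounding code warns you.

```python
import asyncio
import time

log = []

async def hidden_block():
    # looks like one innocent line, but it stalls the whole loop
    time.sleep(0.05)
    log.append("block done")

async def ticker():
    log.append("tick")
    await asyncio.sleep(0)   # politely yields control back to the loop
    log.append("tock")

async def main():
    await asyncio.gather(ticker(), hidden_block())

asyncio.run(main())
# "tock" cannot run until the blocking call has finished
```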

<snip blocking code>
>This is very simple and very easy to maintain. It could be written 
>with inlineCallbacks fairly easily as well:
>    yield lock.acquire()
>    yield process.check_call(...)
>    yield process.check_call(...)
>    lock.release()
>That's pretty nice (so nice I might just rewrite my code that way).

I'm glad you think so.  I was originally not too happy about 
inlineCallbacks (its predecessors did not do so well), but I keep 
seeing examples like this that it makes look much nicer.
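For comparison, the acquire / work / release shape above can be sketched with the stdlib's asyncio primitives (a toy stand-in for DeferredLock and inlineCallbacks, not Twisted's API; the job names are invented):

```python
import asyncio

results = []

async def run_job(lock, name):
    await lock.acquire()      # may suspend here; the await makes that visible
    try:
        # stand-in for yield process.check_call(...)
        await asyncio.sleep(0)
        results.append(name)
    finally:
        lock.release()        # exactly one release, on every code path

async def main():
    lock = asyncio.Lock()
    await asyncio.gather(run_job(lock, "first"), run_job(lock, "second"))

asyncio.run(main())
```

The try/finally is what guarantees a single release even when the job raises, which is exactly the property that gets hard to maintain with three separate release() calls.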
>My complaint is that the code must have knowledge of the twisted 
>environment (why else would it yield the result of 
>process.check_call()?). I do not really see the conceptual difference between these two 
>code blocks except one yields to and one calls into the reactor event 
>loop. Is there some other inherent problem with the first example? Of 
>course you need to make sure that the code inside the try/finally 
>block does not try to acquire the lock again, but that's a basic 
>concurrency problem which can even happen in the next example.

This is really the key thing.  If you're running your code in the 
Twisted environment, and you want it to be correct, it really must know 
about the Twisted environment.  The simple presence of the 'yield' 
keyword at every level where a Deferred is being returned forces you to 
acknowledge, "yes, I know that a context switch may occur here". 
Without it, any function could suddenly and radically change the 
assumptions that all of its callers were allowed to make.
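That guarantee can be seen with a toy round-robin scheduler over plain generators (no Twisted involved, every name here is made up): code between two yields runs atomically, and interleaving can only happen at the yield itself.

```python
from collections import deque

def task(name, log):
    log.append((name, 1))   # runs atomically up to the yield
    yield                   # the only place a switch can happen
    log.append((name, 2))

def run(tasks):
    # resume each generator in turn until all are exhausted
    queue = deque(tasks)
    while queue:
        gen = queue.popleft()
        try:
            next(gen)
            queue.append(gen)
        except StopIteration:
            pass

log = []
run([task("a", log), task("b", log)])
# switches happen only at yield: [("a", 1), ("b", 1), ("a", 2), ("b", 2)]
```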
>Moving on, in a fully deferred world we have this:

<snip ugly stuff>
>... you get the picture.
>Notice the code to acquire/release the lock--there are three different 
>calls to lock.release() in there, and they all must be carefully 
>sorted out to make sure that exactly one of them will be called in any 
>given scenario--that's hard to maintain.

There are other ways to deal with that.  maybeDeferred, for example, 
will make sure you always get a Deferred back whether the function 
returns a plain value, returns a Deferred, or raises an exception.
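The idea behind maybeDeferred -- normalizing "maybe synchronous, maybe asynchronous" results into one shape -- can be sketched in stdlib terms (a toy analogue, not Twisted's actual implementation; the function names are invented):

```python
import asyncio
import inspect

async def maybe_async(f, *args):
    # call f; if it hands back an awaitable, wait on it,
    # otherwise treat the plain return value as already "fired"
    result = f(*args)
    if inspect.isawaitable(result):
        return await result
    return result

def sync_call():
    return "plain value"

async def async_call():
    await asyncio.sleep(0)
    return "awaited value"

print(asyncio.run(maybe_async(sync_call)))   # plain value
print(asyncio.run(maybe_async(async_call)))  # awaited value
```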
>Right, that would work and that's exactly what subprocess.check_call() 
>(the real python built-in version) would do. Unfortunately twisted 
>does not work with the subprocess module--spawnProcess() is the only 
>alternative I found that actually works and that means I have to use a 
The only thing I have to say about that is:
>>Another solution here would be for Twisted to have a nice convenience 
>>API for dispatching tasks to a process pool.  Right now setting up a 
>>process pool is conceptually easy but mechanically difficult; you 
>>have to do a lot of typing and make a lot of irrelevant decisions 
>>(AMP or PB or pickle? stdio or sockets?).
>That sounds nice.

Something I'd be doing in my copious spare time, if I had any.
>I understand that PB is fully symmetrical. In my case I am only using 
>half (client makes request, server responds). Would it make sense to 
>relax the constraints when PB is used in this way?

I don't know if it would be feasible to do the work required for PB, due 
to other, less fundamental implementation issues.  However, it was a 
design goal of AMP that it be possible to implement a "naive", 
only-a-few-lines-of-Python version for drop-in ease-of-use comparable to 
XMLRPC while still providing the actual "good" version in Twisted 
itself.  I have heard rumors to the effect that Eric Mangold actually 
wrote such a thing, but I don't know where it is.
>This looks very interesting. I'll try to help out with this effort if 
>I can find some time.

>Thanks for taking time to read my ramblings and understand the 
>problems that I am having (even if we don't quite agree on the 
>simplest solutions). Your input is valuable, and I am indebted to you 
>for providing free support in your spare time.

Thanks very much for taking the time to acknowledge this.  You leave me 
here with the impression that writing these emails was time well spent. 
And, thanks in advance for working on any of those tickets I gave you 
links to ;-).