[Twisted-Python] reactor.stop() won't, threads and Queue to blame?

Brett Viren bv at bnl.gov
Mon Oct 25 14:56:02 EDT 2004


Glyph Lefkowitz <glyph at divmod.com> writes:

> On Mon, 2004-10-25 at 12:41 -0400, Brett Viren wrote:
>
>> In this case the Deferred is used as a return value for Twisted's
>> XML-RPC server implementation.  I go to this trouble of a CommandQueue
>> because my system blurs the distinction between server and client and
>> this was leading to deadlocks.  This CommandQueue should make sure
>> that all the troublesome communications are atomic.
>
> Doing things in threads almost always makes things *less* atomic than
> just leaving them all in the main reactor thread.  Even if I'm totally
> mistaken, I feel like I have to ask a few questions to make sure that
> newbies don't stumble across this thread in the future and think they
> need to start managing their own threadpools so Twisted won't
> deadlock ;)
>
> When you say you're "blurring the distinction between server and
> client", do you mean you're implementing something like an XMLRPC proxy,
> where the server is itself a client, relaying requests elsewhere and
> waiting for their results?  Or something else?

It is basically as you describe but with some additions.  The primary
aim is to marshal data from an XML-RPC client to a server using a
custom protocol while providing status information as well as control.

      XML-RPC       Custom
data   ---->  proxy ---> data
source <----  proxy      sink
               ^  |
              /|\ |
               |  |  XML-RPC
               | \|/
               |  V
               GUI
          Monitor/Control
                 

The data source listens (is a server) for data requests, which
include a callback URL.  It then sends data to (is a client of) the
proxy, which forwards the data to the data sink and sends a
confirmation to the GUI monitor.  The proxy also sends heartbeats to
the GUI, fired via reactor.callLater.
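
The heartbeat is just a periodic reactor.callLater loop.  Roughly,
with made-up URL, method name and period:

from twisted.internet import reactor
from twisted.web.xmlrpc import Proxy

gui = Proxy('http://localhost:8080/')     # URL is made up

def heartbeat():
    # Fire-and-forget status ping to the GUI, then reschedule.
    gui.callRemote('heartbeat')           # method name is made up
    reactor.callLater(1.0, heartbeat)     # and so is the period

reactor.callLater(1.0, heartbeat)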

> Were you running requests in threads before you came up with the
> CommandQueue abstraction?  If not, what caused the deadlocks?  How was
> the client/server blurring related to the deadlocks?

Yes.  In the proxy, I handle the XML-RPC requests from the data source
and the GUI via this class:

import threading

from twisted.internet import defer

# 'log' is a logging helper whose import is omitted here.

class Spawner(threading.Thread):
    '''Call a callable in its own thread; its return value is passed
    to Spawner.deferred.callback().'''

    def __init__(self, callable, errable=None, **kwds):
        threading.Thread.__init__(self, **kwds)
        self.callable = callable
        if errable is None:
            errable = self.chirp
        self.deferred = defer.Deferred()
        self.deferred.addErrback(errable)
        self.setDaemon(1)
        self.start()

    def chirp(self, *args):
        print str(args)
        log.error(str(args))
        return args

    def run(self):
        # Note: this fires the Deferred's callback from the worker
        # thread, not from the reactor thread.
        self.deferred.callback(self.callable())

This runs the request in a thread and returns the value via a
Deferred (which is used as the return value for the XML-RPC method).
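
Looking at it again, twisted.internet.threads.deferToThread seems to
do much the same job as Spawner, except that it fires the Deferred
back in the reactor thread via callFromThread.  A sketch of how an
XML-RPC method could use it (class and method names are invented):

from twisted.web import xmlrpc
from twisted.internet import threads

class ProxyServer(xmlrpc.XMLRPC):

    def xmlrpc_control(self, command):
        # Run the possibly-slow command in the reactor's thread pool;
        # the returned Deferred fires back in the reactor thread.
        return threads.deferToThread(self.do_control, command)

    def do_control(self, command):
        # Blocking work goes here, off the reactor thread.
        pass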

> Finally, did you consider an approach where, rather than queueing
> commands, you just executed them synchronously and let the reactor
> serialize them?  If so, what led to the decision to change to a
> thread-based approach?

The basic data proxying must not be interrupted.  Some of the control
requests sent from the GUI can take longer than the period between
data updates and would thus block that proxying.
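
One compromise I can see is to do the serializing with chained
Deferreds in the reactor thread, and only push the genuinely slow
commands out to a thread.  A sketch, not my actual CommandQueue:

from twisted.internet import defer

class CommandChain:
    '''Run callables strictly one at a time by chaining Deferreds,
    all in the reactor thread.'''

    def __init__(self):
        self._tail = defer.succeed(None)

    def add(self, callable, *args, **kwds):
        d = defer.Deferred()
        def run(_):
            # Start this command only after the previous one is done;
            # hand its result (or failure) on to d.
            defer.maybeDeferred(callable, *args, **kwds).chainDeferred(d)
        self._tail.addBoth(run)
        self._tail = d
        return d

A command that really must block could then be added as, say,
lambda: threads.deferToThread(slow_thing), so only it uses a thread.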


It's possible I'm doing something stupid in this design.  Please let
me know if you have improvements.

Thanks,
-Brett.
