[Twisted-Python] Crash when using XmlRPC during reactor shutdown

David Anderson david.anderson at calixo.net
Sat Apr 29 09:10:06 MDT 2006


Hi,

I have been experiencing a crash when trying to use the XmlRPC query
subsystem while the reactor is shutting down.

The context is that my Twisted application talks to an upstream
server using XmlRPC.  When it comes online, it registers itself, and
when it goes offline, it should unregister itself, using a simple
XmlRPC query.

So, I figured the natural place to put the notifications was in the
startService and stopService methods, which get called at just the
right time.  startService is not a problem and works fine.  For
stopService, I made the method return the XmlRPC Deferred, so that the
reactor would wait until the request completes before continuing the
shutdown.

The thing is, with this scheme Twisted dumps an exception during
shutdown, because the XmlRPC Deferred gets fired twice.  I've
reproduced this behaviour with a minimal application:

****************************************************************
****************************************************************
from twisted.application.service import Application, Service
from twisted.internet import reactor, defer
from twisted.python import log
from twisted.web.xmlrpc import Proxy

class TestXmlRpcService(Service):
    def startService(self):
        log.msg("Starting service")
        # Schedule an immediate shutdown so that stopService runs right away.
        reactor.callLater(0, reactor.stop)

    def stopService(self):
        def handle_success(res):
            log.msg("Unregistration successful, but reactor will now crash")
            return True
        def handle_error(e):
            log.msg("Unregistration failed...")
            return False

        log.msg("Calling XmlRPC backend (and crashing)")
        p = Proxy('http://natulte.net/pub/fgs/RPC2.php')
        defer.setDebugging(True)
        # Call any old method, this is just a proof of concept
        d = p.callRemote('fgsd.register',
                         "00000000000000000000000000000000",
                         "bleh")
        d.addCallbacks(handle_success, handle_error)
        # Returning the Deferred makes the reactor wait for it before
        # finishing the shutdown.
        return d


service = TestXmlRpcService()
application = Application("proxy")
service.setServiceParent(application)
****************************************************************
****************************************************************

If you run that with `twistd -n -l- -y test.tac`, you should see the
crash immediately.

I know that patients shouldn't offer diagnoses to the doctor, but
nevertheless, having examined the exception log, I believe I know what
the problem is.

 - When stopService returns a Deferred, the reactor pauses the shutdown
   and adds callbacks to the Deferred, so that shutdown resumes at the
   right time.

 - The XmlRPC query succeeds and fires the Deferred.  The result walks
   right down the callback chain and eventually trips the reactor's
   callback.

 - The reactor, within that callback, resumes shutdown and calls
   disconnectAll to kill all active connections.

 - At this point, the XmlRPC query still has its socket open, and so
   gets notified that the connection was lost.  It attempts to fire
   our Deferred's errback to report this, and foom,
   AlreadyCalledError (a minimal illustration of the double-fire is
   sketched below).
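
Just to make the double-fire concrete, here is a tiny self-contained
sketch.  It has nothing to do with the real twisted.web.xmlrpc
internals; it only shows what happens when a Deferred is fired twice:

    from twisted.internet import defer
    from twisted.python.failure import Failure

    d = defer.Deferred()
    d.addCallback(lambda result: result)
    d.callback("response body arrived")               # first fire: fine
    d.errback(Failure(Exception("connection lost")))  # second fire: AlreadyCalledError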

If this makes sense, the resolution might be simple: close the
transport connection before firing the deferred, so that the reactor
can't kill the connection and trigger the double-call.
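
Roughly what I have in mind, with made-up method and attribute names
since I haven't actually dug through twisted.web.xmlrpc (so this is
only a sketch of the idea, not a patch):

    def handleResponse(self, contents):
        # Drop the socket first, so the shutdown-time disconnectAll
        # has nothing left to tear down for this query...
        self.transport.loseConnection()
        # ...and only then hand the result to the caller.
        self.factory.deferred.callback(contents)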

The other option would be to include a flag in the xmlrpc query
object, to avoid double-firing in this (admittedly corner-case)
situation.
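
Again only a sketch with invented names, but the guard could look
something like:

    def handleResponse(self, contents):
        self.responseFired = True
        self.deferred.callback(contents)

    def connectionLost(self, reason):
        # A late connection-loss notification must not fire the
        # Deferred a second time.
        if not self.responseFired:
            self.responseFired = True
            self.deferred.errback(reason)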

Does all that make sense?  And, of course, does anyone have an idea of
a workaround I could use in the meantime?  As I'm shutting down the
service anyway, the exception is not really a problem in itself; it's
just a little unclean to have an exception dump in the logs at every
shutdown.
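
The best interim hack I've come up with myself (completely untested,
just a guess) is to not hand the reactor the Proxy's Deferred
directly, but a fresh one that fires a moment later, hopefully giving
the connection time to go away cleanly before disconnectAll runs:

    def stopService(self):
        p = Proxy('http://natulte.net/pub/fgs/RPC2.php')
        d = p.callRemote('fgsd.register',
                         "00000000000000000000000000000000", "bleh")
        done = defer.Deferred()
        def resume_shutdown(result):
            # Delay the rest of the shutdown slightly so the HTTP
            # connection can close on its own.
            reactor.callLater(0.1, done.callback, None)
        d.addBoth(resume_shutdown)
        return done

I have no idea whether that actually avoids the race, though.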

Thanks in advance (and already, for Twisted :-),
- Dave.