[Twisted-Python] Re: ReconnectingClientFactory creates multiple Protocols? (clarification)

mikah at ceruleansoftware.com mikah at ceruleansoftware.com
Tue Sep 20 03:36:18 EDT 2005

Itamar Shtull-Trauring <itamar at itamarst.org> wrote:
> mikah at ceruleansoftware.com wrote:
> >   As far as I can tell, it's related to the connection being
> > dropped, at which point the Factory reconnects and creates a
> > new protocol instance.
> Protocols (at least, when used with TCP) are designed so that their
> lifetime matches that of the TCP connection they are handling. As a
> result, when the connection is lost and the factory reconnects, this is
> a new TCP connection with a new protocol. This is why there's a factory,
> to manage data and logic that is not tied to a specific TCP connection
> (e.g. "should I reconnect?" or "how far along was the download when I
> got disconnected.")
> In general you'd want to use the factory to store state that needs to
> last past the lifetime of the protocol. You can, of course, have
> buildProtocol always return the same instance, but that's pretty ugly
> and can easily lead to obscure bugs if you don't clean up the state
> correctly.


  Thanks for the reply. I think I have to clarify, though -- I
wasn't very clear in describing the situation. My problem isn't
that the RCFactory creates protocols on the fly; it's that
after disconnecting/reconnecting, the old protocol instance
seems to be still active! After the server has run a few days,
I start seeing a handful of protocol instances. I know they
aren't all the same instance because I can tell them apart by the
data they write to the log file.

  Only one of the protocols is actually connected to the remote
host, that much I can say. It's the 'newest' one (I hope). The
other instances are not connected but they attempt to send
requests anyway, and worse, continue to pull tasks out of the
task queue. So I end up with a bunch of tasks that never get
acted on because the protocols can't service them without a
connection.

  My question is: when an RCFactory makes a new protocol
instance, is the old one supposed to be deleted and cleaned up?

  Everything hinges on this, pretty much. If the answer is yes,
then I'm doing something wrong because mine aren't getting
cleaned up. If the answer is no, then I must take responsibility
for stopping the old instances from doing work when they
shouldn't, and somehow get rid of them.

  I have some related questions if someone would like to answer
them ... (1) is there a proper way for me to disconnect a
connected protocol and then stop its factory from reconnecting
and (2) start the factory connecting again at some time in the
future?

  I've found several ways to do this, but they seem kludgy and
not quite correct ...

  Thanks in advance!


