[Twisted-Python] Hanging Deferreds in PB Paging code
ellisonbg.net at gmail.com
Tue Jan 16 19:08:14 EST 2007
We are using PB as the initial protocol for some IPython-related work.
Overall, PB is working well, but we need to be able to send larger
things around, so we have been trying to implement things using the
pb.util.Pager machinery. I have spent a fair amount of time understanding
how the paging works.
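For anyone following along, the flow is roughly: the sending side wraps its data in a pager, the broker drives it chunk by chunk with remote gotPage calls, and a final endedPaging call tells the collecting side the transfer is complete. Below is a minimal stdlib-only model of that flow; CollectorModel and StringPagerModel are illustrative names, not Twisted's actual classes (the real ones live in twisted.spread.util as StringPager, CallbackPageCollector, and getAllPages):

```python
# Simplified, stdlib-only model of the PB paging flow: a pager slices a
# payload into fixed-size chunks and pushes each one to a collector via
# gotPage(), then signals endedPaging().  These class names are
# hypothetical; the real implementation is in twisted.spread.util.

class CollectorModel:
    """Reassembles pages, in the spirit of util.CallbackPageCollector."""
    def __init__(self):
        self.pages = []
        self.done = False

    def gotPage(self, page):
        self.pages.append(page)

    def endedPaging(self):
        self.done = True


class StringPagerModel:
    """Slices a string into chunks, in the spirit of util.StringPager."""
    def __init__(self, collector, data, chunkSize=8192):
        self.collector = collector
        self.data = data
        self.chunkSize = chunkSize
        self.offset = 0

    def stillPaging(self):
        return self.offset < len(self.data)

    def sendNextPage(self):
        chunk = self.data[self.offset:self.offset + self.chunkSize]
        self.offset += self.chunkSize
        self.collector.gotPage(chunk)
        if not self.stillPaging():
            self.collector.endedPaging()


collector = CollectorModel()
pager = StringPagerModel(collector, "x" * 20000)
# In real PB the broker drives this loop as the transport drains.
while pager.stillPaging():
    pager.sendNextPage()
assert "".join(collector.pages) == "x" * 20000 and collector.done
```

In the real code each gotPage/endedPaging is a callRemote over the wire, which is where the Deferred bookkeeping discussed below comes in.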
Now the fun part. I have two versions of our methods - one that
uses Paging and one that doesn't. I am using trial to test these
methods. The test passes for the non-Paging version, but not for the
Paging version. The problem with the Paging version seems to be that
there are still unanswered PB requests (and associated Deferreds) that
haven't completed when my tearDown method is run. Thus I see errors like:
Traceback (most recent call last):
Failure: twisted.spread.pb.PBConnectionLost: [Failure instance:
Traceback (failure with no frames):
twisted.internet.error.ConnectionDone: Connection was closed cleanly.
which indicates that the connection was closed before all PB requests
had completed.
We have written lots of unittests using trial so we are _very_ used to
making sure our Deferreds are cleaned up properly in tests. It
appears that the problem is coming from this code in twisted.spread.pb
(Broker.resumeProducing):

    def resumeProducing(self):
        """Called when the consumer attached to me runs out of buffer.
        """
        # Go backwards over the list so we can remove indexes from it as we go
        for pageridx in xrange(len(self.pageProducers)-1, -1, -1):
            pager = self.pageProducers[pageridx]
            pager.sendNextPage()
            if not pager.stillPaging():
                del self.pageProducers[pageridx]
        if not self.pageProducers:
            self.transport.unregisterProducer()
Both the pager.sendNextPage and pager.stillPaging calls invoke PB
calls to the other side. But notice that the Deferreds for these
calls are not dealt with in any way. Thus there is no guarantee that
these PB requests have finished by the time the actual paged data has
been received and passed down the callback chain that completes the
paged send (and the test).
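To make the race concrete, here is a stdlib-only reenactment of that loop. FakeRemotePager and PendingRequest are hypothetical names: each sendNextPage() stands in for a callRemote("gotPage", ...), and the PendingRequest it returns plays the role of the Deferred that the real loop discards.

```python
# Stdlib-only reenactment of the Broker.resumeProducing loop quoted
# above.  The return value of sendNextPage() - the stand-in for the
# Deferred from callRemote() - is ignored, so when the loop finishes
# there is no handle left with which a test could wait for the
# outstanding requests before tearing the connection down.

class PendingRequest:
    """Stand-in for a Deferred from callRemote: fires some time later."""
    def __init__(self):
        self.completed = False


class FakeRemotePager:
    def __init__(self, numPages, pendingLog):
        self.remaining = numPages
        self.pendingLog = pendingLog   # shared log of in-flight requests

    def stillPaging(self):
        return self.remaining > 0

    def sendNextPage(self):
        self.remaining -= 1
        request = PendingRequest()
        self.pendingLog.append(request)
        return request                 # ...which the caller drops


pending = []
pageProducers = [FakeRemotePager(3, pending)]

# The loop from resumeProducing, minus Twisted: note that nothing keeps
# the value returned by sendNextPage().
while pageProducers:
    for pageridx in range(len(pageProducers) - 1, -1, -1):
        pager = pageProducers[pageridx]
        pager.sendNextPage()
        if not pager.stillPaging():
            del pageProducers[pageridx]

# All pages were "sent", yet none of the requests have completed and no
# reference to them survives outside our instrumentation.
assert len(pending) == 3
assert not any(r.completed for r in pending)
```

A possible workaround along these lines would be to collect those Deferreds (e.g. in a pager subclass) and have the test wait on them before tearDown, but that means reaching into code PB treats as fire-and-forget.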
My overall feeling is that the Paging mechanism in PB is not used that
much and that I am likely running into uncharted territory/problems
with the underlying implementation. We are trying to decide if we
should continue to struggle with this or just move on to using a
different protocol that is better suited to streaming large objects
(such as HTTP/1.1).
Any thoughts on this dilemma?