[Twisted-Python] I could swear I've seen this pattern *somewhere* in Twisted...

Mike C. Fletcher mcfletch at rogers.com
Tue May 10 00:30:12 EDT 2005


Jp Calderone wrote:

> I'd be interested to hear about the relative performance of your two
> solutions.  I would expect DeferredSemaphore to create many more
> Deferreds, and thus perhaps perform less well.

Hmm, I've never really worried about the raw performance overhead of
the scheduling operations... should I be?  I'd been under the
impression that Deferred objects were rather lightweight.  Within our
apps I'm more concerned about the queues getting so full that the
high-priority events can't get through.
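
For what it's worth, here's a quick-and-dirty way to put a number on
the per-Deferred cost; just a sketch using the stdlib timeit module
(makeOne is my name for the test function, nothing official), run as
a script:

    import timeit
    from twisted.internet import defer

    def makeOne():
        # create a Deferred, attach a callback, and fire it
        d = defer.Deferred()
        d.addCallback(lambda result: result)
        d.callback(None)

    # time 100000 create/fire cycles and report the per-Deferred cost
    total = timeit.Timer("makeOne()",
                         "from __main__ import makeOne").timeit(100000)
    print("%.2f usec per Deferred" % (total / 100000 * 1e6))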

>  Also, Glyph and I did some work for Quotient today on some code which
> may be applicable to this kind of problem.  Here's an example:
>
>    from twisted.internet import defer
>    from atop.tpython import Cooperator
>
>    def parallel(iterable, count, callable, *args, **named):
>        source = iter(iterable)
>        def work():
>            for elem in source:
>                yield callable(elem, *args, **named)
>        coop = Cooperator()
>        tasks = []
>        for i in range(count):
>            tasks.append(coop.coiterate(work()))
>        return defer.DeferredList(tasks)
>
>  Note that this returns a DeferredList of the "tasks", rather than
> of the results.  If results are desired as well, add a callback
> (before yielding) to the result of callable() inside work() which
> saves each result, then add a callback to the DeferredList which
> discards the list of task results and returns the saved ones.
>
>  Note also that Cooperator is in a branch at the moment.  It'll most
> likely be in Quotient trunk sometime tomorrow.
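
For concreteness, here's how I read the result-collecting variant you
describe (untested, and assuming the Cooperator/coiterate API behaves
as above; parallelWithResults is just my name for it):

    from twisted.internet import defer
    from atop.tpython import Cooperator  # branch-only at the moment

    def parallelWithResults(iterable, count, callable, *args, **named):
        source = iter(iterable)
        results = []
        def work():
            for elem in source:
                # maybeDeferred in case callable returns a plain value
                d = defer.maybeDeferred(callable, elem, *args, **named)
                d.addCallback(results.append)  # save each result as it fires
                yield d
        coop = Cooperator()
        tasks = [coop.coiterate(work()) for i in range(count)]
        d = defer.DeferredList(tasks)
        # discard the list of per-task results, return the saved ones
        d.addCallback(lambda ignored: results)
        return d

Note that once count > 1 the saved results arrive in completion
order, not input order.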

Seems okay, but James's implementation of parallel with
DeferredSemaphore.run seems less complex to me as an implementation;
then again, I haven't really gotten into the whole "flow" Twisted
sub-culture yet :) .
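
For reference, the DeferredSemaphore version I have in mind looks
roughly like this (my paraphrase, not James's exact code):

    from twisted.internet import defer

    def parallel(iterable, count, callable, *args, **named):
        # at most `count` calls are outstanding at once, but one
        # Deferred is created per item up front -- which I take to be
        # Jp's point about DeferredSemaphore creating more Deferreds
        sem = defer.DeferredSemaphore(count)
        return defer.DeferredList(
            [sem.run(callable, item, *args, **named) for item in iterable])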

Your version really does highlight the possibility of just passing in
an "iterapply" instance instead of defining the "work" function inline
(especially given the need to alter the work function to retrieve the
results).  I am curious, though: doesn't your version cause the source
to be iterated over count times in total?  I *think* you want to share
the work() instance among the coop.coiterate calls (at least you
would, if I understand how everything works from the names).
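
To make the question concrete, here's a plain-Python toy (no Twisted,
just generators reading from one iterator) showing the distinction I
mean:

    def work(source):
        for elem in source:
            yield elem

    shared = iter(range(6))
    workers = [work(shared) for i in range(3)]
    # round-robin the workers, roughly as coiterate would
    print([elem for batch in zip(*workers) for elem in batch])
    # prints [0, 1, 2, 3, 4, 5]: six results in total, because all
    # three generators drain the single shared iterator; if each
    # worker got its own iter(range(6)) instead, every element would
    # be processed three times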

Have fun,
Mike

-- 
________________________________________________
  Mike C. Fletcher
  Designer, VR Plumber, Coder
  http://www.vrplumber.com
  http://blog.vrplumber.com
