[Twisted-Python] Deferred Groups?

Moof moof at metamoof.net
Sat Jan 21 09:01:03 EST 2006


On 1/21/06, Duncan McGreggor <duncan.mcgreggor at gmail.com> wrote:
>
> I have a question about an approach I used... I'm worried that I've
> over-worked it and have over-looked a more elegant and standard
> solution.
>
> I have the need to fire off network connections in groups. Deferreds
> added to a DeferredList don't fit the bill (because there's no control
> over all the deferreds in the list). As an example, if you wanted to
> make a whole batch of concurrent connections, but didn't want to incur
> the overhead of firing off more than 20 simultaneous connections, you'd
> split your destination hosts up into groups of 20. As a group was
> completed, a callback could fire off the next group, etc.
>
> What's more, I didn't want to put this kind of control in a factory or
> a protocol. In my mind, that didn't seem the proper place for it...


This is one approach. It has the characteristic that if one site in your
group is considerably slower than the others, you will wait until all the
sites in the group are finished before firing off the next group. This may
or may not be a good thing for your particular app.
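
For comparison with the queue version below, that grouping approach looks
roughly like this sketch. connect() here is just a stand-in for whatever
you do that returns a Deferred per host:

    from twisted.internet import defer

    def run_in_groups(hosts, group_size, connect):
        # connect(host) is assumed to return a Deferred for that host.
        groups = [hosts[i:i + group_size]
                  for i in range(0, len(hosts), group_size)]
        done = defer.Deferred()

        def run_next(_=None):
            if not groups:
                done.callback(None)
                return
            group = groups.pop(0)
            # fire the whole group, then move on only when all of it is done
            dl = defer.DeferredList([connect(h) for h in group])
            dl.addCallback(run_next)

        run_next()
        return done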

An alternative is to create a "pool" of connections that consume from a
queue of potential connections. You feed your list into a DeferredQueue and
create as many concurrent connection handlers as you want, all consuming
from that same queue. This has the characteristic that, as long as you keep
the queue full, you are constantly running 20 connections. This may or may
not be an advantage in the case of your application.
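
Something along these lines, as a sketch. Again connect() is a stand-in,
and the STOP sentinel is just one way of letting the handlers know the
queue is exhausted:

    from twisted.internet import defer

    STOP = object()   # sentinel telling a handler to shut down

    def run_pool(hosts, pool_size, connect):
        # connect(host) is assumed to return a Deferred for that host.
        queue = defer.DeferredQueue()
        for host in hosts:
            queue.put(host)
        for _ in range(pool_size):
            queue.put(STOP)        # one stop marker per handler

        def handler(_=None):
            d = queue.get()
            def dispatch(host):
                if host is STOP:
                    return None    # queue drained, this handler is done
                # do one connection, then come back for the next item
                return connect(host).addCallback(handler)
            return d.addCallback(dispatch)

        # fires once every handler has hit its stop marker
        return defer.DeferredList([handler() for _ in range(pool_size)])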

Or, if you want to use the built-in twisted magic, take a look at
twisted.protocols.policies.ThrottlingFactory and other similar things in
the same package, and see if one can be adapted to your use.
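
ThrottlingFactory is written with the listening side in mind, so whether it
adapts cleanly to outgoing connections is exactly the "see if it can be
adapted" part; but its basic use is something like this (the Echo protocol
and port number are just for illustration):

    from twisted.internet import reactor, protocol
    from twisted.protocols import policies

    class Echo(protocol.Protocol):
        def dataReceived(self, data):
            self.transport.write(data)

    factory = protocol.ServerFactory()
    factory.protocol = Echo

    # Refuses new connections once 20 are open; readLimit/writeLimit
    # (bytes per second) are also available if you want to throttle
    # bandwidth rather than connection count.
    reactor.listenTCP(8000, policies.ThrottlingFactory(
        factory, maxConnectionCount=20))
    reactor.run()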

Keep in mind that twisted is not *actually* concurrent, so you may not need
to throttle your connections that much; you might be able to let the
reactor handle the connection load itself.

Actually, given that the reactor has a thread pool whose size you can tune,
is there an equivalent "connection pool size" that can be manipulated from
inside the programme? Does such a concept have any use or meaning?

Moof - not a reactor expert, as you can see.

Moof