[Twisted-Python] Multicast XMLRPC
eprparadocs at gmail.com
Fri Aug 25 14:43:43 EDT 2006
glyph at divmod.com wrote:
> On Fri, 25 Aug 2006 13:07:49 -0400, "Chaz." <eprparadocs at gmail.com> wrote:
>> I will state what I thought was obvious: I need to make "calls" to
>> thousands of machines to do something. I want to minimize the overhead
>> both of making the call and the machines sending back the responses.
> Maybe you could start off a little bit further back into the problem
> today. Like, "I got up this morning, and I thought, 'I would like some
> toast.', but I don't know how to make toast, so I wanted to design a
> 100,000 node parallel neural network to develop a recipe for toast."
> Perhaps then someone on this list could relate their toast development
> experiences, such as "using a TCP-based tree topology similar to IRC
> servers has been sufficient in my experience for toast-oriented data
> exchange although I have been using a parallelized coordinated genetic
> algorithm rather than a neural network to develop an optimal
> crunch/warmth experience", or possibly "ToastVortex, my Twisted-based
> toast application server is available at
> http://toastvortex.example.com/" or better yet, "buy a toaster and put
> some bread in it".
>> TCP is pretty resource intensive so I need something else. I think a
>> reliable datagram service on top of some underlying transport is the
>> way to go (on top of multicast/broadcast/IP is what I am thinking about).
> TCP's "resource" consumption is localized in a highly optimized
> environment; in OS kernels, where the TCP stack is tuned constantly by
> thousands of people, in routing hardware that is specialized to give TCP
> traffic priority to improve performance, and in the guts of the public
> internet that runs such hardware and is constantly monitored and tweaked
> to give TCP even more of a boost. Any custom multicast protocol you
> develop, while perhaps theoretically better than TCP, is possibly going
> to get swamped by the marginalia that TCP has spent decades
> eradicating. In Python, you're going to be doing a lot of additional
> CPU work. For example, TCP acks often won't even be promoted to
> userspace, whereas you're going to need to process every unicast
> acknowledgement to your multicast message separately in userspace.
> While my toast network deployments are minimal, I *have* written quite a
> few multi-unicast servers, some of which processed quite a high volume
> of traffic acceptably, and in at least one case this work was later
> optimized by another developer who spent months working on a multicast
> replacement. That replacement was later abandoned because the
> deployment burden of a large-scale multicast-capable network was huge.
> That's to say nothing of the months of additional time required to
> develop and properly *test* such a beast.
> You haven't said what resources TCP is consuming that are unacceptable.
> Is it taking too much system time? Too much local bandwidth?
> Is your ethernet experiencing too many collisions? Are you concerned
> about the cost of public internet bandwidth overages with your service
> provider? What's your network topology? It would be hard to list the
> answers to all of these questions (or even exhaustively ask all the
> questions one would need to comment usefully) but one might at least
> make guesses that did not fall too wide of the mark if one knew what the
> application in question were actually *doing*.
> In any event, XML-RPC is hardly a protocol which is famous for its low
> resource consumption on any of these axes, so if you're driven by
> efficiency concerns, it seems an odd choice to layer on top of a
> hand-tuned multicast-request/unicast-response protocol.
Perhaps the simple way to say this is that I need to do group
communications that support RPC semantics with minimal overhead.
You ask about the network topology; all I can say is that it supports
the normal communication means: unicast, broadcast and maybe multicast.
I am being intentionally vague since I don't want to depend on any
specific topology.
I don't want to use overlay networks, if at all possible. While they are
nice, I would prefer something a little more direct (though that might
not be possible). The reason? Direct operations are faster.
I maintain a membership list with the state of all the processors in the
system (and I am talking thousands of processors) without the use of a
standard heartbeat (all-pairs heartbeating would cost on the order of
N^2 ping messages!). I figured out probabilistic polling with gossip was enough.
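As a rough illustration of that gossip-style polling (node names and the
fanout value are made up for the example, not from any real deployment):
each round, every node probes only a small random subset of peers, so
the message count grows as N * fanout per round rather than N^2.

```python
import random

def gossip_round(nodes, alive, fanout=3):
    """One gossip round: each node probes `fanout` random peers
    instead of pinging all N-1 others (O(N*fanout) messages, not O(N^2))."""
    messages = 0
    suspected = set()
    for node in nodes:
        peers = random.sample([n for n in nodes if n != node], fanout)
        for peer in peers:
            messages += 1
            if not alive.get(peer, False):
                suspected.add(peer)  # a probe to a dead peer times out
    return messages, suspected

# Hypothetical 1000-node membership list with one failed node.
nodes = [f"proc-{i}" for i in range(1000)]
alive = {n: True for n in nodes}
alive["proc-42"] = False

msgs, suspects = gossip_round(nodes, alive)
```

With 1000 nodes and a fanout of 3 that is 3,000 probes per round instead
of nearly a million, and a dead node is still probed ~3 times per round
in expectation, so it is detected within a round or two with high
probability.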
I don't particular care if it is PB, XML-RPC or SOAP as the marshalling
mechanism. I mention them since they allow me to solve one problem at a
time. I would like to build the solution a piece at a time to do some
measurements and testing: today the underlying transport, tomorrow the
marshalling.
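For what it's worth, the marshalling overhead is easy to measure up
front. A quick comparison using only the Python standard library, with a
hypothetical one-integer "ping" call, shows how much XML-RPC inflates a
payload relative to a raw binary encoding:

```python
import struct
import xmlrpc.client

# The same single-integer argument, marshalled two ways.
xml_payload = xmlrpc.client.dumps((42,), methodname="ping")  # full XML-RPC call body
raw_payload = struct.pack("!I", 42)                          # 4-byte big-endian integer

print(f"XML-RPC: {len(xml_payload)} bytes, raw: {len(raw_payload)} bytes")
```

The XML-RPC body is well over an order of magnitude larger before HTTP
headers are even added, which is glyph's point about it being an odd
pairing with a hand-tuned transport.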
Now let me address the issue of TCP. It is a pretty heavy protocol to
use. It takes a lot of resources on the sender and target and can take
some time to establish a connection. Opening 1000 or more sockets
consumes a lot of resources in the underlying OS and in the Twisted client!
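One concrete place that cost shows up is the per-process file-descriptor
limit: every open TCP socket consumes a descriptor (plus kernel send and
receive buffers), and on Unix-like systems the default soft limit is
often only around 1024, less than the number of connections being
discussed here. You can check it from Python:

```python
import resource  # Unix-only standard-library module

# Every open TCP connection holds one file descriptor; this limit
# caps how many sockets a single client process can keep open.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"per-process fd limit: soft={soft}, hard={hard}")
```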
If I use TCP and stick to the serial, synchronous semantics of RPC,
doing one call at a time, I have only a few ways to solve the problem. I
could do one call at a time, repeated N times, and that could take quite
a while. I could do M spawnProcesses and have each do N/M RPC calls. Or
I could use M threads and do it that way. Even with M sockets open at a
time, this could still take quite a while to execute. Performance would
be terrible (and yes, I want an approach with good to very good
performance. After all, who would want poor to terrible performance?)
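A toy sketch of the M-worker variant (the RPC call here is a stand-in
with simulated latency, not a real Twisted or XML-RPC client): M workers
splitting N calls cuts the wall-clock time to roughly the slowest batch
of ceil(N/M) calls, but the per-call cost and the M open sockets remain.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def rpc_call(host):
    """Stand-in for one synchronous RPC round trip (hypothetical)."""
    time.sleep(0.001)  # pretend network latency
    return f"ok:{host}"

hosts = [f"proc-{i}" for i in range(100)]

# Serial: N round trips, back to back.
t0 = time.monotonic()
serial_results = [rpc_call(h) for h in hosts]
serial_time = time.monotonic() - t0

# M workers, each handling roughly N/M calls concurrently.
t0 = time.monotonic()
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel_results = list(pool.map(rpc_call, hosts))
parallel_time = time.monotonic() - t0

print(f"serial: {serial_time:.3f}s, 10 workers: {parallel_time:.3f}s")
```

The speedup is only ever a factor of M, which is why it doesn't change
the shape of the problem for thousands of targets.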
So I divided the problem into two parts. First, can I reduce the amount
of traffic on the invoking side of the RPC request? Second, how do I
deal with the responses? Obviously I also have to deal with the issue of
failure, since RPC semantics require EXACTLY-ONCE.
That gets me to the multicast or broadcast scheme. In one call I could
get the N processors to start working. Now I just have to solve the
other half of the problem: how to get the answers returned without
swamping the network, and how to detect when I didn't get an answer from
a processor at all.
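The bookkeeping for that second half can be sketched independently of
the transport (the node names and reply values below are made up): keep
the membership list as the set of expected responders, and diff it
against the unicast replies that actually arrived before a deadline.

```python
def collect_responses(expected, received):
    """After one multicast request, match unicast replies against the
    membership list and report which processors never answered."""
    answered = {node for node, _ in received}
    missing = set(expected) - answered
    results = dict(received)
    return results, missing

expected = ["proc-0", "proc-1", "proc-2", "proc-3"]
# Hypothetical replies: proc-2's answer was lost on the wire.
received = [("proc-0", 17), ("proc-1", 23), ("proc-3", 11)]

results, missing = collect_responses(expected, received)
```

Whatever ends up in `missing` is the small set that gets retried, e.g.
over plain unicast, which is where the EXACTLY-ONCE handling would have
to live.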
That leads me to the observation that on an uncongested ethernet I
almost always have a successful transmission. This means I have to deal
with that issue and a few others. Why do I care? Because I believe I can
accomplish what I need - great performance most of the time - and only
in a few instances have to do the operation over again.
This is a tough problem to solve. I am not sure of the outcome but I am
sure that I need to start somewhere. What I know is that it is partly
transport and partly marshalling. The semantics of the call have to stay
the same.
Hope this helps cast the problem. I didn't mean to sound terse before; I
just figured everyone had already thought about the problem and knew the
issues.