[Twisted-Python] Implementing streaming (broadcast) TCP servers
zmola at acm.org
Thu Jun 28 09:32:03 EDT 2007
In general, if you are streaming different data to each client, it is
going to be expensive. There are really only three areas to optimize:
1) Disk IO -- read in bigger chunks and make sure you flush your
buffer. You might want to drop into C to write your own buffering, or
use a "ring buffer" (a big array that you reuse for the stream) so you
don't have to allocate and deallocate memory all the time.
2) Network IO -- check which protocol you can use. If you can get away
with lossy streaming, UDP or RTP might be wins.
3) CPU -- Profile your application and see where the time is being
spent. You need to be careful about memory copying and allocation.
Python normally handles this well, so this probably isn't the problem.
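To illustrate point 1, here is a minimal pure-Python sketch of the
ring-buffer idea -- one fixed-size bytearray allocated up front and
reused for the whole stream. The class name and capacity are my own
placeholders, not anything from Twisted:

```python
class RingBuffer:
    """Fixed-size buffer reused for the stream, so no per-chunk allocation."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)  # allocated once, reused forever
        self.capacity = capacity
        self.read_pos = 0
        self.write_pos = 0
        self.size = 0

    def write(self, data):
        """Copy as much of `data` as fits; return the number of bytes stored."""
        n = min(len(data), self.capacity - self.size)
        for i in range(n):
            self.buf[(self.write_pos + i) % self.capacity] = data[i]
        self.write_pos = (self.write_pos + n) % self.capacity
        self.size += n
        return n

    def read(self, nbytes):
        """Return up to `nbytes` as bytes, advancing the read position."""
        n = min(nbytes, self.size)
        out = bytes(self.buf[(self.read_pos + i) % self.capacity]
                    for i in range(n))
        self.read_pos = (self.read_pos + n) % self.capacity
        self.size -= n
        return out
```

A production version would do the copies with slice assignment (or in C,
as suggested above) rather than a Python loop, but the structure is the same.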
Oops -- I just realized that you said you are sending the same data to
multiple clients.
Is the data the same at the same time, meaning the clients are all
getting the same data simultaneously? If so, if you have any control
over the network, then UDP Multicast can be a big win. You would only
be sending out one stream of data and everyone would hear it.
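A minimal sketch of that sender side with the stdlib socket module
(the group address, port, and chunk size below are placeholders I chose,
not values from the original post):

```python
import socket

MCAST_GROUP = "224.1.1.1"   # placeholder multicast group address
MCAST_PORT = 5007           # placeholder port

def make_sender(ttl=1):
    """One UDP socket for the whole broadcast -- one send per chunk,
    no matter how many clients are listening."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL of 1 keeps the stream on the local network segment
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

def datagrams(stream, size=1400):
    """Split the stream into chunks that fit in one Ethernet frame."""
    return [stream[i:i + size] for i in range(0, len(stream), size)]

# Usage would look like:
# sender = make_sender()
# for chunk in datagrams(data):
#     sender.sendto(chunk, (MCAST_GROUP, MCAST_PORT))
```

Each client then joins the group with IP_ADD_MEMBERSHIP and receives the
single outgoing stream.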
Also, I would look at making sure that you are not copying data around much.
I must admit that my knowledge of all the Twisted classes that are
available is limited, so I don't know which class to use, but I would
suggest dropping down to the low-level classes below web2, since you
will need a little more control for this than a normal web application.
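Framework aside, the no-copy fan-out can be sketched in plain Python:
because bytes objects are immutable, every client queue can hold a
reference to the one shared chunk instead of its own copy. This is my
own illustrative class, not a Twisted API; in Adam's setup the read()
side would correspond to each client's IByteStream:

```python
from collections import deque

class Broadcaster:
    """Fan the same chunk out to every client queue without copying:
    each queue stores a reference to the one shared bytes object."""

    def __init__(self):
        self.queues = {}   # client id -> deque of pending chunks

    def add_client(self, client_id):
        self.queues[client_id] = deque()

    def remove_client(self, client_id):
        self.queues.pop(client_id, None)

    def broadcast(self, chunk):
        # appending the same immutable object N times costs N pointer
        # stores, not N copies of the payload
        for q in self.queues.values():
            q.append(chunk)

    def read(self, client_id):
        """Return the next pending chunk for this client, or None."""
        q = self.queues[client_id]
        return q.popleft() if q else None
```

The key property is that broadcast() does O(clients) work per chunk but
moves the payload bytes zero times.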
zmola at acm.org
Adam Atlas wrote:
> I'm working on a server that will be streaming the same data from a
> source to many clients. It has to be invoked from an HTTP server, so
> I'm using twisted.web2 (not twisted.web because it also needs to host
> a WSGI application). In other words, calling GET on a certain path
> results in the stream as the response, at which point I don't really
> need any more HTTP functionality. I have it working... currently, the
> basic process is: I have a class implementing IByteStream, an instance
> of which is created for each client; it simply keeps a queue of data
> to be sent, sending one piece with each call to read(), or returning a
> Deferred if the queue is empty. Meanwhile, the part of the application
> that provides the data (a different server, receiving data from a
> broadcaster) adds the data to all the clients' byte stream queues,
> whenever it receives a piece of data.
> Since efficiency is obviously important for a streaming server, I was
> just wondering if anyone had any efficiency tips. I've tested it with
> at most 64 concurrent clients. There doesn't seem to be any slowdown
> from the clients' perspective (the data is coming in at the rate it
> should), but server-side, CPU and memory usage start increasing pretty
> quickly. Especially CPU -- with 64 clients, CPU usage peaked at 26.8%,
> which is not acceptable. Any thoughts? Is there a better way to handle
> the streaming process as I previously described?
> - Adam