[Twisted-Python] Serving files, again

Tommi Virtanen tv at twistedmatrix.com
Fri Feb 28 11:03:32 MST 2003


On Wed, Feb 26, 2003 at 09:06:57PM +0000, Clark C. Evans wrote:
> The files that I need to serve up are quite big (some are a meg or more),
> and it would be bad to block other resources while the file loads into
> memory via file.read() or for the time it takes for the client to 
> completely consume the file.

	If file loading is too slow, buy some more memory. Caching
	hundreds of megs of files in RAM is standard procedure for
	any sane operating system these days. Let the OS worry about
	keeping file access fast.

	The file should (and, as I understand it, will) be served to
	the client chunk-by-chunk, with the server processing other
	tasks between the reads.
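
	To make that concrete, here is a rough sketch (not Twisted's
	own static.File code, and the class names are made up) of
	serving a file chunk-by-chunk with a pull producer, so the
	reactor gets to run other work between reads:

from zope.interface import implementer

from twisted.internet import interfaces
from twisted.web import resource, server


@implementer(interfaces.IPullProducer)
class FileChunkProducer:
    """Writes one chunk to the request each time the transport asks for more."""

    CHUNK_SIZE = 64 * 1024

    def __init__(self, path, request):
        self._fileobj = open(path, "rb")
        self._request = request

    def resumeProducing(self):
        chunk = self._fileobj.read(self.CHUNK_SIZE)
        if chunk:
            self._request.write(chunk)
        else:
            # End of file: stop producing and finish the response.
            self._fileobj.close()
            self._request.unregisterProducer()
            self._request.finish()

    def stopProducing(self):
        # Called if the connection goes away before we are done.
        self._fileobj.close()


class BigFile(resource.Resource):
    """Resource that streams one file off disk; the path is up to the caller."""

    isLeaf = True

    def __init__(self, path):
        resource.Resource.__init__(self)
        self._path = path

    def render_GET(self, request):
        # streaming=False registers a pull producer: the transport calls
        # resumeProducing() each time it is ready to accept more data.
        request.registerProducer(FileChunkProducer(self._path, request), False)
        return server.NOT_DONE_YET

	(In practice twisted.web.static.File already does the
	equivalent for you.)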

> |   BTW, deferring to a thread would not be the way to go.  Something similar
> | to twisted.spread.util.Pager would probably be appropriate, or maybe
> | something that implements IProducer.  Or maybe just a chain of Deferreds :)
> | No need to go into threads for this, though.

	(sorry for responding to double-quoted text)

	Oh, Python threads may not be low-level enough to actually
	help with disk I/O (on Linux, at least); I don't know whether
	they are or not. Avoiding blocking on disk I/O needs a separate
	process context in the kernel; userspace threading will not
	help.
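
	For reference, "deferring to a thread" here would mean
	something like the sketch below (the file path is just an
	example); whether the worker thread genuinely overlaps the
	disk wait is exactly the platform question above:

from twisted.internet import reactor, threads


def read_file(path):
    # Plain blocking read, run in a worker thread from the reactor's pool.
    with open(path, "rb") as f:
        return f.read()


def start():
    d = threads.deferToThread(read_file, "/tmp/bigfile.bin")  # illustrative path
    d.addCallback(lambda data: print("read %d bytes" % len(data)))
    d.addBoth(lambda _: reactor.stop())


reactor.callWhenRunning(start)
reactor.run()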


> Is this the gist of it?  It still has the problem that file.read
> is a blocking call; I suppose for Unix platforms you could use
> "poll()" to not block.  This is probably reasonable; on the server
> side you don't block, while for desktop Windows clients it blocks.

	poll() or select() won't help with file access; reads on
	regular files always block unless you use AIO or something
	like that. Sorry.
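
	A quick way to see this (assuming a POSIX box; select() on
	Windows only handles sockets): select() reports a regular
	file as readable right away, yet the read that follows can
	still stall on the disk.

import os
import select
import tempfile

# Write a scratch file so the demo is self-contained.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (1024 * 1024))
os.close(fd)

with open(path, "rb") as f:
    readable, _, _ = select.select([f], [], [], 0)
    print("select says readable:", bool(readable))  # always true for regular files
    f.read()  # ...yet this call can still block while the kernel waits on the disk

os.remove(path)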

	If you are that worried about performance, type "c10k" into
	Google and start writing C. Nothing else will really help;
	file access only becomes a bottleneck _after_ you've done all
	the other things suggested at c10k.

-- 
:(){ :|:&};:



