[Twisted-Python] Serving files, again
Clark C. Evans
cce at clarkevans.com
Wed Feb 26 16:06:57 EST 2003
Thanks Jp, this is helpful.
On Wed, Feb 26, 2003 at 02:50:31PM -0500, Jp Calderone wrote:
| We usually consider IO on local fixed disks to be fast enough. In any
| case, select() in POSIX tells you that files are always ready for reading,
| so being smarter about it requires using a different mechanism (which is
| entirely possible, but requires a different reactor, not to mention platform
The files that I need to serve up are quite big (some are a meg or more),
and it would be bad to block the rest of the server while a file loads
into memory via file.read(), or for however long it takes the client to
completely consume the file.
| BTW, deferring to a thread would not be the way to go. Something similar
| to twisted.spread.util.Pager would probably be appropriate, or maybe
| something that implements IProducer. Or maybe just a chain of Deferreds :)
| No need to go into threads for this, though.
Ok. So this would be the equivalent of a "file generator" which
returns its content in, say, 4K chunks? This would work by returning
a callback which (a) wrote out 4K and then (b) deferred itself again?
    def __init__(self, filename, chunksize=4096):
        self.filename = filename
        self.file = None
        self.chunksize = chunksize

    def nextChunk(self):
        if not self.file:
            self.file = open(self.filename, "rb")
        return self.file.read(self.chunksize)
(written but not tested)
Is this the gist of it? It still has the problem that file.read()
is a blocking call; I suppose on unix platforms you could use
poll() to avoid blocking. This is probably reasonable; on the server
side you don't block, while for desktop windows clients it blocks.
Is this what you were thinking with the chain of Deferreds?