[Twisted-Python] Memory usage in large file transfers

Nikolaos Krontiris nkrontir at hotmail.com
Mon Dec 1 04:25:13 EST 2003


Hi there.
I am writing a file transfer program using twisted as my framework. 
I have been having some problems with memory usage: both client and server eat through available memory while transferring data, without ever releasing it back to the kernel. I am aware that, in theory, the client and server will consume at least as much memory as the file being transferred, but that memory should also be returned to the O/S after the operation has completed.
I also invoke the garbage collector explicitly, which makes things only marginally better. The only Twisted operations I use are a few transport.write() and reactor.callLater() calls.
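For reference, here is the kind of explicit collection I mean (a minimal sketch using the standard gc module; note that gc.collect() only reclaims reference cycles, and CPython's small-object allocator may keep the freed pages for reuse, so the process's resident size may not shrink even after a successful collection):

```python
import gc

# Force a full collection pass. This reclaims objects trapped in
# reference cycles; ordinary objects are already freed by refcounting.
unreachable = gc.collect()

# The return value is the number of unreachable objects found --
# useful for checking whether cycles are actually the problem.
print("unreachable objects collected:", unreachable)
```

If this consistently reports 0, the memory growth is probably not due to uncollected cycles but to allocator behaviour, which would fit the fragmentation theory below.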
The only culprits I can imagine are a mismatch between the hardcoded buffer sizes in Twisted and the amount of data I send (64 KB per request, for faster delivery on LANs), and/or that the lost memory is fragmented into many small chunks. In the latter case no O/S can reclaim it, since allocators only return memory to the kernel above a certain threshold below which it is not deemed worth the trouble (I think glibc's is around 2 MB)...
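To make the 64 KB-per-request scheme concrete, here is a minimal stdlib-only sketch of the chunked-send loop I have in mind; the FakeTransport class and send_file function are stand-ins of my own invention, not Twisted APIs, so that only one chunk is ever buffered in user code at a time (Twisted's producer/consumer interfaces are the idiomatic way to get the same effect with real transports, as I understand it):

```python
import io

CHUNK_SIZE = 64 * 1024  # 64 KB per request, as in the setup described above

def send_file(fileobj, transport):
    """Read and write one chunk at a time, so at most CHUNK_SIZE
    bytes of file data are held in user code at any moment."""
    while True:
        chunk = fileobj.read(CHUNK_SIZE)
        if not chunk:
            break
        transport.write(chunk)

class FakeTransport:
    """In-memory stand-in for a Twisted transport (hypothetical)."""
    def __init__(self):
        self.received = bytearray()
    def write(self, data):
        self.received.extend(data)

# Usage: stream 200 KB of dummy data through the fake transport.
src = io.BytesIO(b"x" * (200 * 1024))
t = FakeTransport()
send_file(src, t)
print(len(t.received))  # 204800 bytes delivered, read in 64 KB pieces
```

Of course, if the real transport's outgoing buffer grows faster than the peer drains it, memory still accumulates inside Twisted rather than in my code, which is why I suspect the interaction between my chunk size and Twisted's internal buffering.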
As professional network programmers, do you believe my diagnosis is correct? Have you encountered such problems in the past? Are there workarounds for this?

Many thanks,

Nick.