[Twisted-Python] strategies for tracking down memory issues
Kevin Horn
kevin.horn at gmail.com
Fri Jan 17 18:32:10 MST 2014
On Fri, Jan 17, 2014 at 6:06 PM, Glyph Lefkowitz <glyph at twistedmatrix.com> wrote:
>
> On Jan 17, 2014, at 12:43 PM, Jonathan Vanasco <twisted-python at 2xlp.com>
> wrote:
>
>
> some recent changes to a very-happy twisted daemon have resulted in a
> process that grows in memory until it crashes the box. boo!
>
> looking through the code and logs, i'm wondering if i've coded things in
> such a way that deferreds or deferred lists are somehow not getting cleaned
> up if an unhandled exception occurs.
>
> i've been looking through all my former notes and some questions on stack
> overflow, and I've seen a lot of info on using heapy and other tools to
> find issues on a function-by-function basis.
>
> i'm wondering if anyone has experience in simply monitoring the lifecycle
> of deferreds?
>
>
> First off, just manhole in and inspect gc.garbage :-).
>
> <snip>
>
> -glyph
>
I had a similar situation several years ago and messed around with heapy
and some other Python memory profiling tools, but the manhole + gc.garbage
approach was both the easiest and most effective.
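A minimal sketch of that kind of gc.garbage inspection (the helper name and
the grouping-by-type are my own illustration, not anything from manhole
itself; note that gc.garbage only fills up for uncollectable objects, or for
everything unreachable if gc.DEBUG_SAVEALL is enabled):

```python
import gc
from collections import Counter

def summarize_garbage():
    # Force a collection so anything the GC could not (or was told not to)
    # free ends up in gc.garbage before we look at it.
    gc.collect()
    # Group the lingering objects by type name, most common first --
    # a fast way to spot, e.g., thousands of stuck Deferreds.
    counts = Counter(type(obj).__name__ for obj in gc.garbage)
    return counts.most_common()
```

Paste something like this into a manhole session and call it while the
process is leaking; the top few type names usually point straight at the
culprit.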
One other thing I did was to set up a separate Twisted Service that would
run a memory profiling function periodically (I think it just looked at
gc.garbage, and sorted things nicely) and log it.
I used txScheduler (which I wrote) for that. In fact that's part of why I
wrote it.
I can't give you much more detail than that, though. It was over 5 years
ago, and I don't have access to that code any more.
--
Kevin Horn