I've been prototyping a client that connects to thousands of servers and calls a method on each. At this stage it's not really important to me whether that's via XML-RPC, Perspective Broker, or something else.

What seems to happen on the client machine is that each network connection that gets opened and then closed goes into the TIME_WAIT state, and eventually there are so many connections in that state that it's impossible to create any more.
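For illustration, here's a minimal stdlib-socket sketch of the pattern I'm describing - open a connection, make one call, close, repeat. (The loopback echo server here is a stand-in for the real remote servers, and this uses plain sockets rather than xmlrpc or PB; the point is just that the side that closes first, the client, accumulates a TIME_WAIT entry per iteration.)

```python
import socket
import threading

def echo_server(sock):
    """Accept connections and echo one message back, until the listener closes."""
    while True:
        try:
            conn, _ = sock.accept()
        except OSError:
            return  # listener was closed; stop serving
        with conn:
            conn.sendall(conn.recv(1024))

# Hypothetical stand-in for the thousands of real servers: one loopback listener.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(64)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

results = []
for _ in range(100):
    # Each iteration opens a fresh connection, makes one "call", and closes.
    # The client closes first, so each closed socket lingers in TIME_WAIT.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"ping")
        results.append(c.recv(1024))

server.close()
print(len(results))  # prints 100
```

After this runs, `netstat -an` on my machine shows roughly one extra TIME_WAIT entry per iteration.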
I'm keeping an eye on the output of

    netstat -an | wc -l

Initially I've got 569 entries there. When I run my test client, that ramps up really quickly and peaks at about 2824. At that point, the client reports a callRemoteFailure:
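Since `wc -l` counts every netstat row (Unix sockets, listeners, headers), I've also been breaking the count down by TCP state to confirm it really is TIME_WAIT that's growing. A small hedged helper (my own, not part of any library) that tallies states from `netstat -an`-style output:

```python
from collections import Counter

TCP_STATES = {"ESTABLISHED", "TIME_WAIT", "CLOSE_WAIT", "LISTEN",
              "FIN_WAIT_1", "FIN_WAIT_2", "SYN_SENT", "LAST_ACK"}

def count_states(netstat_output: str) -> Counter:
    """Tally TCP connection states from `netstat -an` style output."""
    states = Counter()
    for line in netstat_output.splitlines():
        fields = line.split()
        if fields and fields[-1] in TCP_STATES:
            states[fields[-1]] += 1
    return states

# Canned sample of netstat-like lines, just to show the shape of the output.
sample = """\
tcp 0 0 10.0.0.1:55000 10.0.0.2:8080 TIME_WAIT
tcp 0 0 10.0.0.1:55001 10.0.0.2:8080 TIME_WAIT
tcp 0 0 10.0.0.1:55002 10.0.0.2:8080 ESTABLISHED
"""
print(count_states(sample)["TIME_WAIT"])  # prints 2
```

Feeding it the real netstat output, TIME_WAIT accounts for almost all of the growth.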
callRemoteFailure [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectionLost'>: Connection to the other side was lost in a non-clean fashion: Connection lost.
Increasing the file descriptor limits doesn't seem to have any effect on this.

Is there an established, Twisted-sanctioned way to free up this resource? Or am I doing something wrong? I'm looking into tweaking SO_REUSEADDR and SO_LINGER - does that sound sane?
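For reference, this is the kind of experiment I mean, sketched with a plain stdlib socket (I believe the same options could be applied to a Twisted connection's underlying socket via `transport.getHandle()`, though I haven't verified that). Note the caveats in the docstring: SO_LINGER with a zero timeout avoids TIME_WAIT only by aborting the connection with an RST.

```python
import socket
import struct

def enable_quick_close(sock: socket.socket) -> None:
    """Set options that change close() behaviour - an experiment, not a fix.

    SO_REUSEADDR lets a new socket bind an address still occupied by a
    TIME_WAIT socket; it mainly helps listeners, not outgoing clients.
    SO_LINGER with l_onoff=1, l_linger=0 makes close() send RST instead
    of FIN, so the socket skips TIME_WAIT, but it aborts the connection
    and can discard unsent data.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # struct linger { int l_onoff; int l_linger; }
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))

s = socket.socket()
enable_quick_close(s)
reuse = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
print(reuse)  # non-zero once the option is set
s.close()
```

I'm wary of the RST approach since an unclean close is exactly the failure mode I'm already seeing, hence the question.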
Just tapping the lazywebs to see if anyone's already seen this in the wild.

Thanks guys

Donal