[Twisted-Python] Thoughts about testing

Jean-Paul Calderone exarkun at divmod.com
Tue Oct 25 12:59:24 EDT 2005

On Tue, 25 Oct 2005 05:19:43 -0700 (PDT), Antony Kummel <antonykummel at yahoo.com> wrote:
>Hi all,
>I just want to share some ideas that I have had.
>From my experience, testing with Trial is too hard. My
>problems can be divided into three categories: 1.
>obscure error reporting, 2. unclean reactor 3. whoop
>whoop whoop.
>I am also pretty sure that I'm doing things that Trial
>does not want me to do, such as actually opening
>sockets and communication over the network. I'm saying
>this because it does not seem to happen in the Twisted
>test-suite. But I don't think there is a good reason
>for this limitation.

The network is a source of unpredictability.  Unit tests 
that rely on it fail intermittently, mysteriously, and 
with no clear course of action for reproducing the failure, 
making debugging extremely difficult.  This reduces the 
overall utility of the test suite by introducing failures 
that aren't really failures, but which nevertheless must be 
investigated to determine whether they represent a real problem.

So, there's a pretty good reason, I think.  However, trial 
doesn't prevent you from doing this, so I'm not sure what 
the objection is.
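(As an aside, the usual way around the unpredictability is to keep the 
real network out of the test entirely.  The sketch below is illustrative 
only: `fetch_greeting`, its `connect` seam, and `FakeConnection` are all 
hypothetical names, not anything in Twisted or trial.)

```python
import socket
import unittest

# Hypothetical client code under test: fetch one greeting line from a
# server.  The `connect` parameter is a seam so tests can substitute a
# fake connection instead of touching the real network.
def fetch_greeting(host, port, connect=socket.create_connection):
    with connect((host, port)) as conn:
        return conn.recv(1024).decode("ascii").strip()

class FakeConnection:
    """In-memory stand-in for a socket: deterministic and network-free."""
    def __init__(self, payload):
        self.payload = payload
    def recv(self, bufsize):
        return self.payload[:bufsize]
    def __enter__(self):
        return self
    def __exit__(self, *exc_info):
        return False  # do not swallow exceptions

class GreetingTest(unittest.TestCase):
    def test_greeting_is_parsed(self):
        # No DNS lookup, no socket, no flakiness: the "server" is a
        # canned byte string.
        connect = lambda addr: FakeConnection(b"hello\r\n")
        self.assertEqual(fetch_greeting("example.com", 80, connect), "hello")
```

The test exercises the parsing logic while the one unpredictable piece, 
the connection itself, is replaced by something that cannot fail 
intermittently.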

>Actually, in sum, I think there are too many
>limitations on writing tests with Trial, and they all
>boil down to the unclean reactor problem (or at least
>most of them). I want to suggest an alternative

If tests are allowed to leave things like connections and timers 
(in general, event sources) lying around, subsequent tests can 
fail through no fault of their own when one of these event sources 
misbehaves.  If, for example, one causes an exception to be logged, 
trial will notice this and attribute it to some other hapless test, 
causing it to fail.  These problems are even more difficult to
track down than the ones I mentioned above, since it is not even 
clear in these cases _which_ test is /really/ failing.
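One discipline that avoids the problem is to record every event source a 
test starts and tear it down before the test ends.  This sketch uses a 
plain `threading.Timer` and stdlib `unittest` for illustration; trial's 
own bookkeeping works differently, and `start_timer` is an invented helper.

```python
import threading
import unittest

class CleanTimerTest(unittest.TestCase):
    """Every timer a test starts is recorded, and tearDown cancels
    whatever is still pending, so nothing can fire during a later,
    unrelated test.  (Illustrative sketch, not trial's mechanism.)"""

    def setUp(self):
        self._timers = []

    def start_timer(self, delay, func):
        timer = threading.Timer(delay, func)
        self._timers.append(timer)
        timer.start()
        return timer

    def tearDown(self):
        for timer in self._timers:
            timer.cancel()  # harmless if the timer already fired

    def test_leaves_a_timer_behind(self):
        # Without the tearDown above, this would raise ZeroDivisionError
        # a minute later, in the middle of some other hapless test.
        self.start_timer(60.0, lambda: 1 / 0)
        self.assertEqual(2 + 2, 4)
```

The point of the pattern is that the misbehaving event source dies with 
the test that created it, so any failure it causes is attributed to the 
right place.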

>How about not requiring the reactor to be clean at the
>end of a test? If anyone wants to make sure that
>anything they do leaves the reactor in a clean state,
>they can test for it themselves. This seems to me more
>like a constraint imposed by the implementation of
>Trial rather than a useful feature.

I think it's a good idea.  I don't know that, in its current form, 
it is complete.  There are probably some improvements that could 
be made to ease the process of tracking down the sources of 
various problems it reports.  I don't think that means the entire 
feature should be scrapped.

>Also, parts of
>Twisted itself are practically unusable inside a test
>because they leave the reactor dirty (such as threaded
>address resolution).

This should be addressed, certainly.  However, I don't often 
find myself resolving names using the system resolver in unit 
tests.  What if the system resolver is buggy?  What if the 
system is misconfigured?  What if there is a transient DNS 
failure?  What if the DNS server for the host you are interested 
in is temporarily offline?  These are not conditions I am happy 
to allow to cause my unit tests to fail.
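The standard dodge is to make the resolver substitutable and feed the 
test canned answers.  Everything here is an illustrative assumption: 
`resolve_ipv4` and `fake_getaddrinfo` are invented names, and the 
address comes from 192.0.2.0/24 (TEST-NET-1), which is reserved for 
documentation.

```python
import socket

def resolve_ipv4(hostname, getaddrinfo=socket.getaddrinfo):
    """Return the first IPv4 address for hostname.  The getaddrinfo
    parameter is a seam for substituting a fake resolver in tests."""
    infos = getaddrinfo(hostname, None, socket.AF_INET, socket.SOCK_STREAM)
    return infos[0][4][0]

def fake_getaddrinfo(host, port, *args, **kwargs):
    # Canned answers: no DNS server is involved, so none of the failure
    # modes above (buggy resolver, misconfiguration, transient outage)
    # can make the test flaky.
    table = {"www.example.invalid": "192.0.2.1"}
    return [(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP,
             "", (table[host], port or 0))]
```

A test then calls `resolve_ipv4("www.example.invalid", 
getaddrinfo=fake_getaddrinfo)` and gets a deterministic answer without 
ever consulting the system resolver.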

>An alternative feature could be
>enabling the user to specify that a certain delayed
>call or thread is allowed to remain after the test,
>and then Trial won't complain. The only question
>remaining is how to do it. Simple: use a different
>process. Run the tests in a different process, and
>create a new one each time the reactor is dirtied.
>py.execnet is a nice example of this concept.

Running tests in a child process is an interesting idea.  It 
provides a much greater degree of isolation between tests than 
essentially any other approach, which is great.  Isolation is 
great for unit tests.  Unfortunately, it comes with a lot of 
overhead.  While there are techniques for optimizing the 
implementation of such a feature, running each test method in a
different process would probably add at least 4 minutes to 
Twisted's test suite.  This is basically unacceptable (Twisted's 
suite takes way too long to run already).


Beyond the performance problems, there's also the issue of debugging: 
how do you debug a test running in another process?  I'm aware of 
remote debuggers for Python, but they're all third-party.  This is not 
necessarily a killer drawback, but it is definitely a downside.
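For concreteness, a bare-bones version of per-test process isolation can 
be sketched with the stdlib `multiprocessing` module.  All the names here 
(`run_isolated`, `_run_in_child`, the sample tests) are invented for the 
illustration; this is not how trial or py.execnet is implemented.

```python
import multiprocessing

def _run_in_child(test_func, queue):
    """Child-process entry point: run the test, report (passed, detail)."""
    try:
        test_func()
        queue.put((True, None))
    except BaseException as err:
        queue.put((False, repr(err)))

def run_isolated(test_func, timeout=30):
    """Run test_func in a fresh process, so leaked sockets, timers, or
    monkey-patches die with the process instead of poisoning later tests."""
    queue = multiprocessing.Queue()
    child = multiprocessing.Process(target=_run_in_child,
                                    args=(test_func, queue))
    child.start()
    child.join(timeout)
    if child.is_alive():
        child.terminate()
        child.join()
        return (False, "timed out")
    return queue.get()

def well_behaved_test():
    assert 1 + 1 == 2

def misbehaving_test():
    raise ValueError("boom")
```

The per-test cost is one process spawn plus interpreter start-up and 
module imports, which is exactly the overhead objection above: multiplied 
across a suite the size of Twisted's, it adds up fast.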

>The second thought is this: there seem to be popping
>up different testing toolkits each with their own very
>nice extensions and features
>(http://codespeak.net/py/current/doc/test.html). Trial
>cannot benefit from this, having branched away at the
>pyUnit level. I think Trial's special features can be
>relatively easily formulated as plugins to a
>plugin-oriented testing framework (especially if the
>clean reactor requirement is relieved), and so can the
>other testing packages. What this means is that the
>best thing anyone who wants the world of unit testing
>to develop, and to benefit from it, can do is to push
>for a plugin-oriented pyUnit, and for an implementation
>of Trial (and the other tools) as a plugin for that
>framework. I think.

Rather than hearing about the plethora of new testing libraries 
appearing, I'd like to hear about features they provide that are
valuable for writing tests.  I would certainly like to borrow 
py.test's magical assert.  What other features are test authors 
finding useful in some of these projects?
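(For readers who haven't seen it: py.test's "magical assert" lets you 
write a bare `assert` and still see the operand values when it fails.  
A crude stdlib approximation of the failure detail it reconstructs, 
using an invented helper name:)

```python
def check_equal(actual, expected):
    """Minimal sketch of the detail py.test's assert gives you for free:
    a failure message that spells out both operands."""
    if actual != expected:
        raise AssertionError(f"assert {actual!r} == {expected!r}")

check_equal(sorted([3, 1, 2]), [1, 2, 3])  # passes silently

try:
    check_equal(2 + 2, 5)
except AssertionError as err:
    message = str(err)  # "assert 4 == 5", with both values visible

# Under py.test, the same information comes from writing just:
#     assert 2 + 2 == 5
```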
