[Twisted-Python] Waiting time for tests running on Travis CI and Buildbot

Adi Roiban adi at roiban.ro
Sun Aug 14 04:38:39 MDT 2016


Hi,

We now have 5 concurrent jobs on Travis-CI for the whole Twisted organization.

If we want to reduce the waste of running push tests for a PR we
should check that the other repos from the Twisted organization are
doing the same.

We now have 9 jobs per build in twisted/twisted ... and for each
push to a PR we run the tests twice, once for the push and once for
the PR merge ... so that is 18 jobs per commit.

twisted/mantissa has 7 jobs per build, twisted/epsilon 3 jobs per
build, twisted/nevow 14 jobs, twisted/axiom 6 jobs, twisted/txmongo 16
jobs

.... so we are a bit over the limit of 5 jobs
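To put rough numbers on it, here is a back-of-the-envelope sketch
using the per-repo job counts listed above:

```python
# Jobs triggered per build for each repo in the Twisted org,
# using the counts listed above.
jobs_per_build = {
    "twisted/twisted": 9,
    "twisted/mantissa": 7,
    "twisted/epsilon": 3,
    "twisted/nevow": 14,
    "twisted/axiom": 6,
    "twisted/txmongo": 16,
}

# A push to a twisted/twisted PR runs both the "push" build and
# the "pr" (merge) build, so it queues twice the per-build count.
jobs_per_pr_push = 2 * jobs_per_build["twisted/twisted"]
print(jobs_per_pr_push)  # 18 jobs queued by a single commit

# With only 5 concurrent jobs for the whole organization, one
# twisted/twisted commit alone needs several scheduling "waves":
concurrent = 5
waves = -(-jobs_per_pr_push // concurrent)  # ceiling division
print(waves)  # 4 rounds before the last job even starts
```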

---------

I have asked Travis-CI how we can improve the waiting time for
twisted/twisted jobs and for $6000 per year they can give us 15
concurrent jobs for the Twisted organization.

This will not give us access to a faster waiting line for the OSX jobs.

Also, I don't think that we can have twisted/twisted take priority
inside the organization.

If you think that we can raise $6000 per year to sponsor our
Travis-CI usage, and that it is worth it to increase the queue size,
I can follow up with Travis-CI.

-------------

I have also asked Circle CI for free access to their OSX builders,
but that was put on hold after Glyph told me that Circle CI is slower
than Travis.

I have never used Circle CI. If you have had a good experience with
OSX on Circle CI, I can continue the phone interview with Circle CI
so that we get the free access and see how it goes.

--------

There are multiple ways in which we can improve the time a test takes
to run on Travis-CI, but it will never be faster than buildbot with a
slave which is always active, ready to start a job in 1 second, and
which already has 99% of the virtualenv dependencies installed.

AFAIK the main concern with buildbot is that the slaves are always
running, so a malicious person could create a PR containing malware
and all our slaves would then execute that malware.

One way to mitigate this is to use latent buildslaves and stop and
reset a slave after each build, but this will also slow the build and
lose the virtualenv ... which for Docker-based slaves should not be a
problem ... but if we want Windows latent slaves it might increase
the build time.

What do you say to protecting our buildslaves with a firewall which
only allows outgoing connections to the buildmaster and GitHub, and
running the slaves only on RAX + Azure to simplify the firewall
configuration?

Would a malicious person still be interested in exploiting the slaves?
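A sketch of what such an egress-only policy could look like with
iptables; the host names and ports below are placeholders, not our
actual infrastructure:

```shell
# Hypothetical egress firewall for a buildslave: drop everything
# by default, then allow only loopback, DNS, the buildmaster and
# GitHub.
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT

# Allow traffic belonging to connections we already initiated.
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# DNS, so the host names below can be resolved.
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT

# Buildmaster (placeholder host; 9989 is the usual PB port) and
# GitHub over HTTPS and SSH.
iptables -A OUTPUT -p tcp -d buildbot.example.org --dport 9989 -j ACCEPT
iptables -A OUTPUT -p tcp -d github.com --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp -d github.com --dport 22 -j ACCEPT
```

With no other egress, exfiltrating data or joining a botnet from a
compromised slave becomes much less attractive.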

I would be happy to help with the buildbot configuration, as I think
that for TDD, buildbot_try with slaves which are always connected and
a virtualenv already created is the only acceptable CI system.
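For reference, the TDD flow I have in mind is roughly the following;
the master address and credentials are placeholders:

```shell
# Hypothetical "buildbot try" invocation: send the local diff
# straight to the always-connected slaves instead of opening a PR
# and waiting in the Travis-CI queue.
buildbot try \
    --connect=pb \
    --master=buildbot.example.org:8031 \
    --username=adiroiban \
    --passwd=try-secret \
    --vc=git
```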

Cheers
-- 
Adi Roiban


