[Twisted-Python] Twisted receiving buffers swamped?

Dustin J. Mitchell dustin at v.igoro.us
Sat Jan 10 16:59:20 MST 2015


As someone partially responsible for the infrastructure Mozilla uses
to do its performance benchmarking, I can say that it's *really* hard.
Getting live operating systems to sit still and behave is a mess, and
then *keeping* them still over months and years (while attending to
necessary security upgrades, hardware migrations, and so on) is even
worse.

One of the smarter things we've figured out how to do is to "phase in"
potentially disruptive changes so that we can either see that there's
no impact, or estimate a correction factor for comparing results
before and after the change.
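
To make that concrete, here is a minimal sketch of the correction-factor
idea, assuming you have results for the same benchmark revision run on
both the old and the new configuration during an overlap window. The
names and numbers below are hypothetical, not our actual tooling:

    from statistics import mean

    def correction_factor(before, after):
        # Multiplicative factor mapping new-configuration results
        # back onto the old configuration's scale.
        return mean(before) / mean(after)

    # Overlap window: the same benchmark revision on both configurations.
    old_config = [102.1, 99.8, 101.3]   # e.g. requests/sec, old hardware
    new_config = [121.4, 119.9, 122.0]  # identical code, new hardware

    factor = correction_factor(old_config, new_config)

    # Rescale post-change results so the historical series stays comparable.
    print(125.0 * factor)

The overlap window is the important part: without runs of identical code
on both configurations, a shift in the graph is indistinguishable from a
real regression.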

Dustin

On Sat, Jan 10, 2015 at 6:47 PM, Glyph <glyph at twistedmatrix.com> wrote:
>
>> On Jan 10, 2015, at 02:22, Tobias Oberstein <tobias.oberstein at tavendo.de> wrote:
>>
>>> It sounds like http://speed.twistedmatrix.com but far more ambitious :).  Are you familiar with that site, and the benchmarks repository that powers it?  It's nowhere near as comprehensive as what you'd like, but it is a good place to start.
>>
>> I've looked into it a little. I am confused ;)
>>
>> E.g. take "SSL throughput big writes":
>>
>> http://picpaste.com/pics/Clipboard01-0isEvjph.1420884805.png
>>
>> There is a big drop-off at commit 43146.
>>
>> It's cool to see a history of performance correlated with commits.
>>
>> Now, if I dig into that commit, I see:
>>
>> http://picpaste.com/pics/Clipboard02-HV47NdTC.1420884887.png
>>
>> The commit seems to be a "doc only" commit. No actual code changes at all.
>>
>> How should I interpret that?
>>
>> Was the test machine changed, a new version of OpenSSL or pyOpenSSL installed, or something else?
>
> One of those things.  There is no infrastructure in place for identifying events which impact the performance testing infrastructure.  The only performance testing environment is a very old mac mini still running Snow Leopard, which is probably the environment in which we care the *least* about performance, so it's not in great shape ;).
>
>> I'd say: the infrastructure aspects of performance testing do matter, to the degree that performance results are of very limited value if those aspects are not accounted for.
>
> I don't think the results that we have presently are worth much at all.  My point was mostly that there is some infrastructure which is halfway usable, so you don't have to start from scratch.  If you could take over this project (I am pretty sure at this point there is nobody to take it over *from*; exarkun did some work a long time ago and hasn't given it a second look in years) it would be highly appreciated!
>
> (And if you care a lot about performance in a particular environment you could set it up in that environment and get attention for it :)).
>
> You should also have a look at the existing benchmark suite, and consider maintaining / expanding that as well.
>
> Thoughts?
>
> -glyph
> _______________________________________________
> Twisted-Python mailing list
> Twisted-Python at twistedmatrix.com
> http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
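
As for identifying events which impact the perf infrastructure (the
OpenSSL / test-machine question above): one low-tech option is to record
an environment fingerprint next to every result, so a shift like the one
at 43146 can be matched against an environment change rather than a code
change. A hypothetical sketch, not anything speed.twistedmatrix.com
currently does:

    import json
    import platform
    import ssl
    import sys

    def environment_fingerprint():
        # The details most likely to move the numbers on a benchmark host.
        return {
            "python": sys.version,
            "platform": platform.platform(),
            "openssl": ssl.OPENSSL_VERSION,
            "host": platform.node(),
        }

    def record_result(benchmark, value, path="results.jsonl"):
        # Append the result with its fingerprint; a fingerprint change
        # between consecutive runs flags an environment event, not a
        # code change.
        with open(path, "a") as f:
            f.write(json.dumps({"benchmark": benchmark,
                                "value": value,
                                "env": environment_fingerprint()}) + "\n")

    record_result("ssl-throughput-big-writes", 42.7)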



