[Twisted-Python] Scaling Twisted Web on multicore

Tobias Oberstein tobias.oberstein at tavendo.de
Thu Oct 31 17:16:54 MDT 2013


> Looks nice. It's something that has been around in poor form for a long time in
> several places (I'm thinking about
> http://twistedmatrix.com/trac/browser/sandbox/exarkun/copyover/ which
> inspired http://twistedmatrix.com/trac/browser/sandbox/therve/prefork/).

Wow. 10 years ago ;) I had already given credit to Jean-Paul in the README.md, but referring to a relatively recent answer of his on Stack Overflow. I didn't know it had been around that long, and I also wasn't aware of your tests.

> It would be good to have some documented examples. It would be even better
> to have a proper Twisted API for that.
> 
> Note that for testing static files, sendfile may be an interesting
> boost: https://tm.tl/585

The thing is: sendfile() doesn't support TLS, as far as I can see.
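A minimal sketch of why that is (stdlib only, Linux; the file contents here are made up): sendfile() asks the kernel to copy file pages straight into the socket, so the payload never passes through userspace .. which is exactly where a userspace TLS layer would need to encrypt it.

```python
import os
import socket
import tempfile

# Write a small file to serve.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello from sendfile\n")
    path = f.name

# A connected socket pair stands in for an accepted client connection.
server, client = socket.socketpair()

# sendfile() hands the kernel a (file fd -> socket fd) copy request:
# the bytes go kernel-side from page cache to socket buffer, with no
# hook where userspace code could transform (e.g. encrypt) them.
with open(path, "rb") as src:
    sent = os.sendfile(server.fileno(), src.fileno(), 0, 4096)

data = client.recv(4096)
server.close()
client.close()
os.unlink(path)
```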

Further: the tests that get "close" (up to 50%-70% of Nginx performance) use a "Fixed Resource" .. that is, merely a Resource returning a fixed string.

When I use static.File, the performance gap widens considerably (a factor of 4-5 vs. Nginx). Which brought me to the following idea .. not sure if it would work:

Why not completely cache a Twisted Web HTTP _response_ (including headers and all) upon the first request to that resource _in RAM_?

If the underlying resource is a file, maybe use FS notification to invalidate the cache entry.

That would still allow doing TLS and push octets from RAM.

A static.CachingFile resource. Or a general CachingWrapper factory, wrapping any resource hierarchy.

Of course that breaks for real dynamic sites .. but it is still useful: e.g. we use FrozenFlask to freeze a Flask app and deploy to S3, and normal Flask for easy, standard development. I can still use all the routing goodies of Flask and end up with a set of static files.
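The cache-and-invalidate part can be sketched without any Twisted machinery .. here using the file's mtime as a cheap stand-in for a real FS-notify hook (the names `ResponseCache` and `render` are made up for illustration, not a Twisted API):

```python
import os


class ResponseCache:
    """Cache fully rendered HTTP responses (headers and all) in RAM,
    keyed by path, invalidating when the backing file changes."""

    def __init__(self):
        self._cache = {}  # path -> (mtime_ns, response bytes)

    def get(self, path, render):
        """Return the cached response for `path`, re-rendering via
        render(path) when the entry is missing or stale. A real
        implementation would hook inotify/kqueue instead of calling
        stat() on every request."""
        mtime = os.stat(path).st_mtime_ns
        hit = self._cache.get(path)
        if hit is not None and hit[0] == mtime:
            return hit[1]
        response = render(path)
        self._cache[path] = (mtime, response)
        return response
```

A CachingWrapper resource would then just delegate its render method to something like this and push the cached octets straight from RAM .. which, unlike sendfile(), keeps working under TLS.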

> 
> Also, on BSDs (not sure about OS X) and recent Linux, you can use
> SO_REUSEPORT which would make for an even simpler code.

This is very interesting. Thanks for pointing me there .. I didn't know about that.

It looks useful and I will give it a try, since, as

https://lwn.net/Articles/542629/

notes:

"when multiple threads are waiting in the accept() call, wake-ups are not fair, so that, under high load, incoming connections may be distributed across threads in a very unbalanced fashion."

I have seen this behavior too: accept() across multiple processes is skewed.

I am fine with support for FreeBSD and Linux only (for now).
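For reference, a minimal sketch of what SO_REUSEPORT buys (Linux >= 3.9 and BSDs; the constant is simply absent from the socket module on older platforms): each worker binds its own independent listening socket on the same port, and the kernel load-balances incoming connections across them .. no fd passing needed. In Twisted one could presumably then hand each such fd to reactor.adoptStreamPort(), though I haven't tried that combination yet.

```python
import socket


def reuseport_listener(port=0):
    """Create an independent listening TCP socket on `port` with
    SO_REUSEPORT set, so several worker processes can each call this
    for the same port and let the kernel balance accepts among them."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(128)
    return s


# First "worker" picks a free port; the second binds the *same* port.
# Without SO_REUSEPORT the second bind() would fail with EADDRINUSE.
a = reuseport_listener()
port = a.getsockname()[1]
b = reuseport_listener(port)
```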

So: I will further explore:

1) SO_REUSEPORT
2) CachingWrapper

Thanks for feedback and hints!
/Tobias

> 
> --
> Thomas
> 
> _______________________________________________
> Twisted-Python mailing list
> Twisted-Python at twistedmatrix.com
> http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
