Ticket #5298 (enhancement, new)
ensmartenify threadpool resource limits
Reported by: glyph
Our current reactor threadpool design leaves something to be desired.
Sometimes developers run into the threadpool maximum in difficult-to-diagnose ways. For example, there may be a deadlock between a deferToThread that needs a pool thread to free up in order to run and a blockingCallFromThread that is taking up a free slot in the threadpool.
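For illustration, here is a minimal sketch (not from this ticket) of how that deadlock can arise; the pool is capped at a single thread purely to make the hang immediate and reproducible:

```python
# Hypothetical reproduction of the deadlock described above.
from twisted.internet import reactor
from twisted.internet.threads import deferToThread, blockingCallFromThread

def reactor_side_work():
    # Runs in the reactor thread; needs *another* pool thread to finish.
    return deferToThread(lambda: "done")

def pool_side_work():
    # Runs in the (only) pool thread.  blockingCallFromThread waits for the
    # Deferred returned by reactor_side_work, but that Deferred can only
    # fire once a pool thread frees up -- and this call is occupying it.
    return blockingCallFromThread(reactor, reactor_side_work)

reactor.suggestThreadPoolSize(1)
deferToThread(pool_side_work).addCallback(print)
reactor.run()   # hangs: the pool slot and the Deferred wait on each other
```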
This is of course exacerbated by the following:
- the threadpool is necessary for name resolution if clients want platform-defined hostname resolution behavior
- the pool is frequently used for database interaction too (adbapi)
- setting up an external threadpool (as Twisted's built-in WSGI support encourages you to do) is error-prone and confusing, so applications sometimes use the reactor threadpool for that too
- the default maximum number of threads is relatively small (20 threads)
- there is no configuration file for Twisted, so applications need to call a method to increase the threadpool maximum size (see the snippet after this list)
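Concretely, the last point means something like the following has to live in application startup code; the value 50 here is just an illustrative choice, not a recommendation:

```python
# Raising the reactor threadpool's maximum in application code, since there
# is no configuration file that could do it declaratively.
from twisted.internet import reactor

reactor.suggestThreadPoolSize(50)
```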
Most of these problems would go away if we just did away with the threadpool maximum size entirely. So, why not do that? What's the purpose of the threadpool max size? Given that this code has been with us for so long, it's probably a good idea to reconsider it.
Threads consume some resources to exist. Each one consumes a couple of megabytes of stack space, at least, so they're relatively heavyweight objects to create, and we don't want to allocate a zillion of them. Still, it's not like we attempt to limit allocation of other objects, however large; we just let users catch MemoryError if they care. So what's special about threads?
The thing about threads is that you might reasonably queue up a thousand or so callInThread items at once; each one is just a function and an entry in a Queue: a couple dozen bytes at most, not something to get worried about. But if you actually attempted to start a thousand threads in response to that, that's 2 gigabytes of memory, which definitely is something to worry about. So one reason is just to rate-limit this potentially significant memory consumption, especially in the case where each job may complete very quickly, making room for the next in the queue, making it pointless to start so many parallel threads at once.
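The back-of-the-envelope arithmetic, using the ticket's ~2 MB-per-thread stack estimate and a rough guess of ~100 bytes per queued entry (an assumption, not a measured figure):

```python
# Queued jobs are cheap; a live thread per job is not.
jobs = 1000
queued = jobs * 100                  # ~100 KiB of queue entries
threaded = jobs * 2 * 1024 * 1024    # ~2 GB of stacks if each job gets a thread
print(f"queued: ~{queued // 1024} KiB, threads: ~{threaded // 2**20} MiB")
```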
Another reason for providing a simple maximum is that on some systems (those which have been carefully tuned to a particular workload) it's actually known in advance how many threads the system needs to do its job. This is a much more advanced use case, though, and given that the only way to tune your operating environment is to edit your code, I suspect it is not used very often.
There are two problems with this strategy: it either over-allocates (your server needs to do a couple hundred threaded tasks once, at startup, and then holds on to those useless thread stacks for its whole lifetime) or under-allocates (leading to poor performance, or possibly deadlocks in the worst case).
My suggestion would be to keep a hard maximum, but make it very large. Instead of spawning new threads immediately, provide a hook that spawns new threads from the main reactor thread when a certain amount of time has passed without progress, so that deadlocks are broken and additional thread resources are provided on demand when progress is slowed by a slow blocking resource or heavy computation. Then, when threads have been idle for a certain amount of time, spin them down (stop and join them) to release the resources associated with them.
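A rough sketch of what that could look like, built only on the existing public reactor/threadpool API; ElasticThreadPool, hard_max, check_interval, and idle_shrink_after are hypothetical names, and the "no job finished since the last check" heuristic is only one possible notion of progress:

```python
# Speculative sketch, not existing Twisted API: grow the reactor pool from
# the reactor thread when queued work stops making progress, and shrink it
# back after a period of idleness.
from twisted.internet import reactor, task

class ElasticThreadPool:
    def __init__(self, reactor, hard_max=1000, check_interval=1.0,
                 idle_shrink_after=30.0):
        self._reactor = reactor
        self._pool = reactor.getThreadPool()
        self._hard_max = hard_max
        self._idle_shrink_after = idle_shrink_after
        self._pending = 0          # jobs submitted but not yet finished
        self._completed = 0        # jobs finished since startup
        self._last_completed = 0   # snapshot taken at the previous check
        self._idle_since = None
        self._monitor = task.LoopingCall(self._checkProgress)
        self._monitor.start(check_interval, now=False)

    def callInThread(self, f, *args, **kwargs):
        # Wrap each job so completion is reported back to the reactor thread.
        self._pending += 1
        def run():
            try:
                f(*args, **kwargs)
            finally:
                self._reactor.callFromThread(self._jobDone)
        self._reactor.callInThread(run)

    def _jobDone(self):
        self._pending -= 1
        self._completed += 1

    def _checkProgress(self):
        # Runs periodically in the reactor thread.
        if self._pending and self._completed == self._last_completed:
            # Work is queued but nothing finished since the last check:
            # assume the pool is saturated (or wedged) and add a thread,
            # up to the hard maximum.
            self._pool.adjustPoolsize(
                maxthreads=min(self._pool.max + 1, self._hard_max))
            self._idle_since = None
        elif self._pending == 0:
            # Nothing queued or running; after a grace period, spin the
            # extra threads back down to release their stacks.
            now = self._reactor.seconds()
            if self._idle_since is None:
                self._idle_since = now
            elif now - self._idle_since > self._idle_shrink_after:
                self._pool.adjustPoolsize(maxthreads=max(self._pool.min, 1))
                self._idle_since = None
        else:
            self._idle_since = None
        self._last_completed = self._completed
```

Usage would be the same shape as today's API, just routed through the wrapper: pool = ElasticThreadPool(reactor); pool.callInThread(someBlockingFunction). The point of the sketch is only that both the "spawn on stalled progress" and "join when idle" decisions can be made from the reactor thread with the existing adjustPoolsize hook, without touching the worker threads directly.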