<br><br><div class="gmail_quote">On Wed, Jul 11, 2012 at 2:08 AM, Werner Thie <span dir="ltr"><<a href="mailto:werner@thieprojects.ch" target="_blank">werner@thieprojects.ch</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5">On 7/10/12 6:17 AM, Laurens Van Houtven wrote:<br>
> FWIW, I have used Ampoule to great effect, but as JP points out it's hardly the only option. You're bound to end up with some measure of multiprocessing. Bear in mind that not all workloads are well suited to this kind of parallelism! Always measure before deciding to make your codebase that much more complicated :)<br>
><br>
><br>
> cheers<br>
> lvh<br>
><br>
><br>
><br>
> On 10 Jul 2012, at 18:03, <a href="mailto:exarkun@twistedmatrix.com">exarkun@twistedmatrix.com</a> wrote:<br>
><br>
>> On 03:14 pm, <a href="mailto:augustocaringi@gmail.com">augustocaringi@gmail.com</a> wrote:<br>
>>> Hi,<br>
>>><br>
>>> I'm researching the best way to implement/use a Twisted-based<br>
>>> server in a multicore environment...<br>
>>><br>
>>> There is the Ampoule project, which I understand is considered the<br>
>>> best way to do that. Right?<br>
>><br>
>> It's a way. "Best" depends on the details and goals of the project.<br>
>><br>
>> Here's a stackoverflow question/answer on basically the same topic. In<br>
>> particular, it specifically answers the question of a listening port<br>
>> shared between multiple processes and gives examples of how to do this:<br>
>><br>
>> <a href="http://bit.ly/MiCHtQ" target="_blank">http://bit.ly/MiCHtQ</a><br>
>><br>
>> Jean-Paul<br>
>>> I'm also reading about the internals of Nginx HTTP server. This<br>
>>> server utilizes the same reactor pattern of Twisted (epoll based)...<br>
>>><br>
>>> "What resulted is a modular, event-driven, asynchronous,<br>
>>> single-threaded, non-blocking architecture which became the foundation<br>
>>> of nginx code." <a href="http://www.aosabook.org/en/nginx.html" target="_blank">http://www.aosabook.org/en/nginx.html</a><br>
>>><br>
>>> But to maximize the use of processors in a multicore environment,<br>
>>> Nginx does this:<br>
>>><br>
>>> "nginx doesn't spawn a process or thread for every connection.<br>
>>> Instead, worker processes accept new requests from a shared "listen"<br>
>>> socket and execute a highly efficient run-loop inside each worker to<br>
>>> process thousands of connections per worker"<br>
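
For the archives, that pre-fork pattern can be sketched in plain Python: the parent binds one listening socket, then forks workers that all accept() on the same file descriptor, and the kernel distributes incoming connections among them. This is a stdlib-only illustration (the `prefork` and `serve` names are hypothetical, and it relies on POSIX `os.fork`); a Twisted worker would adopt the inherited socket into its reactor rather than running a blocking accept() loop.

```python
import os
import socket

def serve(sock, worker_id):
    """Worker loop: accept connections forever on the shared socket."""
    while True:
        conn, _addr = sock.accept()
        conn.sendall(b"handled by worker %d\n" % worker_id)
        conn.close()

def prefork(port, workers=4):
    """Bind once, then fork `workers` children sharing the listening socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen(128)
    pids = []
    for i in range(workers):
        pid = os.fork()
        if pid == 0:            # child: inherits the bound socket
            serve(sock, i)
            os._exit(0)
        pids.append(pid)
    return sock, pids           # parent keeps the socket and child pids
```

Each worker runs its own event loop (here just a blocking loop for brevity); nothing is spawned per connection.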
>>><br>
>>> My question: Is there something similar in Twisted? Or do you<br>
>>> think it would be easy to implement something like that?<br>
>>><br>
>>> Thanks!<br>
>>><br>
>>> --<br>
>>> Augusto Mecking Caringi<br>
</div></div>We observed really great scaling on multiple cores by moving the<br>
application part out of the main process: in one case to Ampoule for<br>
PDF production, in the other to a self-regulating process pool I wrote<br>
based on Spread. In both cases only the serving was left to Twisted.<br>
<br>
Handing work out to other processes gives you another benefit:<br>
isolation of the Python interpreter. That is the only way to use a<br>
package like ReportLab, which does not survive any kind of reentrancy,<br>
in a web service.<br>
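
The shape Werner describes can be sketched with only the stdlib: hand each job to a pool of worker processes so that a library which tolerates no reentrancy gets a whole interpreter to itself. The `render` and `run_jobs` names are hypothetical, and this is not Ampoule or Spread; in a Twisted service you would bridge the pool's results back into Deferreds rather than blocking as this sketch does.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def render(job):
    # Stand-in for non-reentrant work such as building a PDF with ReportLab;
    # it runs in a separate worker process, never in the serving process.
    return os.getpid(), "rendered:%s" % job

def run_jobs(jobs, workers=2):
    """Run every job in a worker process from the pool and collect results."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render, jobs))
```

A crashed or wedged worker takes down only its own interpreter, which is exactly the isolation benefit described above.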
<span class="HOEnZb"><font color="#888888"><br>
Werner<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
_______________________________________________<br>
Twisted-Python mailing list<br>
<a href="mailto:Twisted-Python@twistedmatrix.com">Twisted-Python@twistedmatrix.com</a><br>
<a href="http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python" target="_blank">http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python</a><br>
</div></div></blockquote></div><br>Hi Werner<br><br>I would like to know whether you have run into any serious bugs with Ampoule. What version of Ampoule did you use? Was it 0.2.0? Thanks.<br><br>Regards<br><br>gelin yan<br>