[Twisted-Python] Perspective Broker and thread safe!

B. B. thebbzoo at gmail.com
Sat Feb 7 06:47:31 EST 2009


Thank you very much for your answer. Maybe the Twisted details distracted me
from the "big picture".
I have one follow-up question, added next to its context below.

On Fri, Feb 6, 2009 at 8:29 PM, Jean-Paul Calderone <exarkun at divmod.com> wrote:

> On Fri, 6 Feb 2009 17:30:19 +0100, "B. B." <thebbzoo at gmail.com> wrote:
>
>> Hello, I am relatively new to Twisted, though I have managed to make a simple
>> client-server solution ( using Perspective Broker ) that I am now trying to
>> refactor a little.
>>
>> The solution is now something like :
>>
>> Using Perspective Broker, clients connect to the server ( setting up a
>> remote reference to each other ).
>> Via a PB call, each client can request that the server do some processing.
>> The result of the processing is distributed to each connected client - and
>> the client making the request gets a simple status report!
>>
>> This solution works quite well; the only bottleneck is that all the
>> "thread-safe" processing on the server is done in the main loop of the
>> server reactor. I want to do concurrent processing on the server!
>>
>
> That requires that your server be capable of doing the processing
> concurrently.  If the processing is CPU bound, then you need multiple
> CPUs.  If it is disk I/O bound, you need multiple disks.  etc.


Actually I use a quad-core processor and there is only a little disk usage. But
thanks to your reply, I am now thinking of the famous "Global Interpreter Lock"
in Python.
Since almost all the processing happens in pure Python, doesn't the GIL
prevent me from getting any noticeable performance gain from concurrent
processing in pure Python?
( At least as long as I only use one process for all the threads )
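
To make the question concrete, this is roughly the pattern I have in mind -
only a sketch, not my real code: handleRequest and "resultReady" are made-up
names, and myServerProcessingFunction / distributeToAllClient are just the
placeholder names from my first mail.

    from twisted.internet import reactor, threads

    def myServerProcessingFunction():
        # stand-in for the real pure-Python number crunching
        return sum(i * i for i in range(10 ** 6))

    def distributeToAllClient(myResult, clients):
        # runs in the reactor thread, so it is safe to use the PB remote
        # references here; "resultReady" is a made-up remote method name
        for client in clients:
            client.callRemote("resultReady", myResult)

    def handleRequest(clients):
        def work():
            # runs in a worker thread: it must not touch Twisted APIs directly,
            # and because of the GIL the pure-Python part will not really run
            # in parallel with other such threads anyway
            myResult = myServerProcessingFunction()
            # hand the distribution back to the reactor thread
            reactor.callFromThread(distributeToAllClient, myResult, clients)
            return "ok"  # simple status report for the requesting client
        return threads.deferToThread(work)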

>
>
>> The function doing the processing on the server also distributes the result
>> to all the connected clients, and returns the result to the client making
>> the request.
>>
>> My question is:
>> If I just "defer" that function "toThread" ( like threads.deferToThread(
>> myServerProcessingFunction ) ), do I have potential problems when several
>> threads try to distribute their results to the connected clients?
>>
>
> If you use deferToThread with your "processing" function, then the function
> must be thread-safe.  Since nearly no Twisted APIs are thread-safe, it cannot
> use them directly.  It may only use reactor.callFromThread to send events
> back to the reactor thread.  So, in general, yes, the potential for problems
> exists.
>
>
>> If I have a problem:
>> Can I make a quick fix as follows: within each thread ( started with the
>> threads.deferToThread function ), I do the distribution to the clients by
>> using "reactor.callFromThread"
>> ( like reactor.callFromThread( distributeToAllClient , myResult ), where the
>> function distributeToAllClient uses the remoteReference for each client
>> to send the "myResult" )
>>
>
> This may work.
>
>> Or is the only solution to let myServerProcessingFunction return the
>> "myResult" to be distributed to all connected clients, and do the
>> distribution in the reactor main thread?
>>
>
> There are always lots of possible solutions.  Since you are particularly
> interested in performance, it will probably be necessary for you to create
> a way to measure how well your application is performing and then repeat
> these measurements using different implementation techniques.  That is the
> only way you'll know which one is best.
>
> Jean-Paul
>
> _______________________________________________
> Twisted-Python mailing list
> Twisted-Python at twistedmatrix.com
> http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
>
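
PS: to make sure I understand the alternative correctly - returning "myResult"
from the thread and doing the distribution in the reactor thread - I think it
would look roughly like this ( same placeholder names as in the sketch above,
untested ):

    def handleRequest(clients):
        # run only the pure computation in the thread pool
        d = threads.deferToThread(myServerProcessingFunction)

        def distribute(myResult):
            # Deferred callbacks fire in the reactor thread, so the PB remote
            # references can be used here without callFromThread
            for client in clients:
                client.callRemote("resultReady", myResult)
            return "ok"

        d.addCallback(distribute)
        return d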