[Twisted-Python] Re: Twisted-Python Digest, Vol 45, Issue 18

Josef Novak josef.robert.novak at gmail.com
Sun Dec 16 05:04:57 EST 2007


Ok, I see, this definitely won't work since we are talking about threads and
not processes, so os.kill is of no use to me.  I guess there is no way to
track these new threads and stop/pause/restart them based on some other
decision logic?  It would have been nice if there were, though...  -joe
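
(For the record, CPython has no way to forcibly suspend a running thread, so
any pause/resume scheme has to be cooperative: the code running in the thread
must check a flag between chunks of work.  Below is only a sketch of that idea
using threading.Event; it assumes the third-party job can be driven step by
step, and the job/step names are hypothetical stand-ins:

import threading
from twisted.internet.threads import deferToThread

# Gate that worker threads check between chunks of work.
# set() -> workers may run, clear() -> workers pause at their next check.
pause_gate = threading.Event()
pause_gate.set()

def run_compute_job(job):
    # Hypothetical: the third-party work is split into steps we control.
    for step in job.steps():
        pause_gate.wait()   # blocks here while any call is in progress
        step.run()

def on_call_started():
    pause_gate.clear()      # pause all compute threads at their next check

def on_call_finished(live_calls):
    if live_calls == 0:
        pause_gate.set()    # resume the compute threads

# d = deferToThread(run_compute_job, some_job)

If the third-party code is a single opaque call that cannot be broken up this
way, the cooperative approach does not apply.)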

2007/12/16, Josef Novak <josef.robert.novak at gmail.com>:
>
>
> Hmm.  This seems like a pretty nice solution, except that it seems it will
> block incoming calls during processing.  What I really need to do is
> pause the deferToThread objects based on their pid.
>
> How can I obtain the pid of a new deferToThread object?  If I can obtain
> it, then I can just keep a persistent dictionary containing all the running
> compute-intensive processes, and unregister each one in a callback after it
> finishes up.  Then, every time a new call comes in, I check my persistent
> pid dictionary/list,
> for pid in pid_list:
>   os.kill(pid, signal.SIGSTOP)
>
> and every time a call finishes, I check the number of current calls, and
> then if we're back to zero, we resume the compute-intensive processes again:
> if parentObj.live_calls == 0:
>   for pid in parentObj.pid_list:
>     os.kill(pid, signal.SIGCONT)
>
> so, my new question is...
> How can I obtain the pid of a new deferToThread object?
>
> I'm pretty sure that, while not very elegant, this should solve my problem
> - but please let me know if this sounds really cockamamie!
>
> -Joe
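
(A note on the pid question above: a deferToThread call has no pid of its
own, because it runs as a thread inside the main Twisted process; SIGSTOP and
SIGCONT only act on whole processes.  If the third-party code can be started
as an external command, one workaround is to launch it with subprocess from
inside the thread and record the child's pid.  This is only a rough sketch;
the command name, its arguments, and the bookkeeping names are placeholders:

import os
import signal
import subprocess
from twisted.internet.threads import deferToThread

running_pids = set()

def run_compute_job(args):
    # Launch the heavy work as a child process so it has a real pid.
    proc = subprocess.Popen(["my_compute_job"] + list(args))  # placeholder command
    running_pids.add(proc.pid)
    try:
        return proc.wait()      # the worker thread just waits for the child
    finally:
        running_pids.discard(proc.pid)

def pause_compute_jobs():
    for pid in list(running_pids):
        os.kill(pid, signal.SIGSTOP)

def resume_compute_jobs():
    for pid in list(running_pids):
        os.kill(pid, signal.SIGCONT)

# e.g. from the hangup handler:
# d = deferToThread(run_compute_job, ["--input", "recording.wav"])

Twisted's reactor.spawnProcess with a ProcessProtocol could do the same job
without tying up a thread at all, but the version above keeps the existing
deferToThread structure.)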
>
> > ------------------------------
> >
> > Message: 2
> > Date: Sat, 15 Dec 2007 07:50:02 -0800
> > From: Ed Suominen <general at eepatents.com>
> > Subject: Re: [Twisted-Python] finer control of deferToThread ?
> > To: Twisted general discussion <twisted-python at twistedmatrix.com>
> > Message-ID: <4763F7AA.2090800 at eepatents.com>
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > You can do priority queuing to one or more threads using AsynQueue,
> > http://foss.eepatents.com/AsynQueue or python-asynqueue in Debian. Use
> > multiple threads and queue up the compute-intensive calls with a low
> > priority and the other stuff with higher priorities.
> >
> > However, you will need to chop up your compute-intensive stuff into
> > smaller pieces for this to be helpful. (That's good asynchronous
> > processing practice, generally.) The priority queuing is only effective
> > at deciding which calls to dispatch next, and each thread call is on its
> > own once it is dispatched from the queue. Each call uses an entire
> > thread for its entire duration, and will keep the queue from dispatching
> > anything else to that thread while it's squatting on it, no matter how
> > low-priority it is.
> >
> > Best regards, Ed
> >
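
(For illustration of the "chop it up" advice: here is roughly what that could
look like even with plain deferToThread and no queuing library at all,
assuming the compute job can be expressed as a list of smaller callables; the
step names in the usage line are made up:

from twisted.internet.threads import deferToThread

def run_in_pieces(steps, last_result=None):
    # Dispatch one small piece of the job per thread call.  Between pieces
    # the thread pool is free to service whatever else has been queued
    # (call handling, higher-priority work, and so on).
    if not steps:
        return last_result
    d = deferToThread(steps[0])
    d.addCallback(lambda result: run_in_pieces(steps[1:], result))
    return d

# d = run_in_pieces([prepare_audio, run_recognizer, write_results])

With a priority queue such as AsynQueue in front, each of these small pieces
becomes a separate unit of work that the queue can schedule around
higher-priority calls.)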
> > Josef Novak wrote:
> > > Hi,
> > >   I am writing a twisted application using StarPY fastagi API for
> > > Asterisk.  My application involves answering user calls, and then
> > > based on their responses to an IVR dialogue, running some
> > > compute-intensive applications on the same server, after they hang up.
> > >   At the moment I am running the compute-intensive application (3rd
> > > party code) in a deferred.deferToThread.  The application works fine;
> > > however, if the number of callers goes above 2-3, and one of the
> > > compute-intensive applications from a previous call has not finished
> > > up, the audio for the call gets very jumpy because of CPU usage.
> > >   This compute-intensive process needs to be run immediately after
> > > hangup, and I'd prefer to take care of everything on the same machine
> > > (rather than send the compute-intensive application request somewhere
> > > else).
> > >
> > >   What I'd like to do is pause the thread with the
> > > deferred.deferToThread process any time a new  call comes in (this
> > > sort of violates what I'm saying above but the number of calls going
> > > to a particular Asterisk trunk is limited so in most cases this would
> > > never result in more than a couple-seconds delay, which is
> > > acceptable).
> > >
> > >   Is there any way to control these deferred.deferToThread objects in
> > > a more fine-grained manner?  Say from a reactor factory?  Can I
> > > register them separately somehow, and then pause these
> > > compute-intensive applications temporarily every time a new call comes
> > > in?  I'm imagining something as simple as the ctrl+z and fg Linux
> > > terminal commands... but I appreciate that it is probably not so
> > > straightforward.
> > >
> > >   -Joe
> > ------------------------------
> >
> > _______________________________________________
> > Twisted-Python mailing list
> > Twisted-Python at twistedmatrix.com
> > http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
> >
> >
> > End of Twisted-Python Digest, Vol 45, Issue 18
> > **********************************************
> >
>