[Twisted-web] /freeform_post!!random causes exceptions

J Turner jamwt-twistedlist at jamwt.com
Thu Jan 13 12:32:51 MST 2005


On Mon, Jan 10, 2005 at 06:03:40PM +0100, Andrea Arcangeli wrote:
> I
> don't know about the postgres and the other bits you mention (so far I
> tried pgasync but it's not working correctly and its interface is
> inefficient since it requires duplicating lots of code so at the very
> least that needs fixing, while psycopg2 rocks).

What does that mean?  How is the interface inefficient?  How do you have
to duplicate lots of code?  I'm not disagreeing, I'm trying to draw out
more concrete criticism.

Before you answer, you should know that I've released quite a few versions
recently enhancing the API/workflow area.  In the latest version, everything
is fully queued.  So this is fine:

--- 
conn = pgasync.connect(**DBARGS)
cursor = conn.cursor()
cursor.execute("insert into ....", {"blah": "toast"})
conn.commit()
cursor.release()
---

No callbacks required.  Everything is queued, including commands issued
before the *real* connection is made.  You only add a callback when you care
about the outcome (or to attach an errback, etc.).
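
If you do care about the outcome, the idea is roughly this (a sketch, assuming
commit() hands back a Deferred you can hook into, as implied above; onDone and
onError are just names made up for illustration):

---
import pgasync

def onDone(result):
    print "insert committed"

def onError(failure):
    print "insert failed:", failure.getErrorMessage()

conn = pgasync.connect(**DBARGS)
cursor = conn.cursor()
cursor.execute("insert into ....", {"blah": "toast"})
# hook in only because we care how this particular commit turns out
conn.commit().addCallbacks(onDone, onError)
cursor.release()
---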

Also, I added a connection.exFetch()

---
pgasync.connect(**DBARGS).exFetch("select ...").addCallback(
    lambda rows: sys.stdout.write("%r\n" % (rows,)))
---

1 line of code.

Sure, it's still kind of clunky, but that's basically DB API/SQL for you.
Beyond the DB API boilerplate that (to my knowledge) all the synchronous
libraries have to deal with as well, there's not much extra noise in the
code above.
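
For comparison, the same thing with a synchronous DB API driver like psycopg2
looks about like this (a rough sketch, with the query and DBARGS kept as
placeholders):

---
import psycopg2

# the same cursor/execute/commit dance, just blocking
conn = psycopg2.connect(**DBARGS)
cursor = conn.cursor()
cursor.execute("insert into ....", {"blah": "toast"})
conn.commit()
cursor.close()
---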

But feel free to offer suggestions, I'm all about improving this.

> Another misconception in pgasync is the assumption that a connection is
> not expensive.  When using SSL to a remote box, the connection handshake is
> very expensive both in CPU and RTT delays; in fact it may be more expensive
> than the query itself (especially if run over the internet with bad
> RTT).  So pooling the connections is generally a good thing (even with
> pgasync, where doing it locally doesn't require clone() and normally
> nobody uses SSL locally ;).

I'm not sure if you know this, but pgasync *does* pool connections.

pgasync removes connections from the pool when they've been unused for >
30 seconds.  If you have a connection that's gone unused for more than
30 seconds, I doubt the sub-second overhead of connecting is a big deal.
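
The eviction itself is nothing fancy; the idea is roughly this (a sketch of
the idle-timeout approach, not pgasync's actual internals; the Pool class and
its names are made up for illustration):

---
import time
from twisted.internet import task

IDLE_TIMEOUT = 30  # seconds a connection may sit unused

class Pool:
    def __init__(self):
        self.idle = []                      # list of (last_used, connection)

    def checkin(self, conn):
        self.idle.append((time.time(), conn))

    def checkout(self):
        if self.idle:
            return self.idle.pop()[1]       # reuse a pooled connection
        return None                         # caller opens a fresh one

    def reap(self):
        cutoff = time.time() - IDLE_TIMEOUT
        keep = []
        for last_used, conn in self.idle:
            if last_used < cutoff:
                conn.close()                # stale: really disconnect
            else:
                keep.append((last_used, conn))
        self.idle = keep

pool = Pool()
task.LoopingCall(pool.reap).start(IDLE_TIMEOUT)
---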

If the pooling needs to be more sophisticated than this, that's
certainly something I'm open to.

When all is said and done, pgasync has, in every test I've tried, proven
to be *much* faster than adbapi/threads/synclib in an async environment
under heavy load.
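
One way to see this for yourself is to fire a pile of concurrent queries and
time them, roughly like this (a sketch; N, the query, and DBARGS are
placeholders):

---
import time
from twisted.internet import reactor, defer
import pgasync

N = 500  # number of concurrent queries to fire

def run():
    start = time.time()
    ds = [pgasync.connect(**DBARGS).exFetch("select ...") for i in range(N)]
    def report(results):
        print "%d queries in %.2f seconds" % (N, time.time() - start)
        reactor.stop()
    defer.DeferredList(ds).addCallback(report)

reactor.callWhenRunning(run)
reactor.run()
---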

Thanks for the input,

 - jamwt


