Wed Mar 1 21:38:49 PST 2006
- Previous message: [Slony1-general] Replicating between 8.0.x and 8.1.x databases
- Next message: [Slony1-general] Replicating between 8.0.x and 8.1.x databases
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
> On Wed, 2006-03-01 at 18:22 -0500, Christopher Browne wrote:
>> Rod Taylor wrote:
>>
>> >> The one "grand challenge" you'll face is that getting the
>> >> subscription going, with 224GB of data, will take quite a while,
>> >> which will leave transactions open for quite a while.
>> >
>> > It helps if you subscribe one table at a time and merge them into an
>> > existing set.
>> >
>> > So: create set, add table to set, wait..., merge set. Repeat for each
>> > table.
>>
>> I'd be inclined to wait 'til the end and merge them all, but that's
>> just me...
>
> I've run into pretty big performance problems with more than a few
> sets. The query that fetches the data ends up with a large number of
> ORs in the WHERE clause.

Take a look at the 1.1 schema; there's an extra index on sl_log_1/sl_log_2 which seems to make an *enormous* difference when you have more than one set.

>> > In fact, is there a reason that Slony doesn't do this by default?
>> > Just change ADD TABLE to spit out the 3-step process in all
>> > circumstances, using a set of temporary set IDs (a sequence that
>> > wraps between 2^31 and 2^32, or something).
>>
>> Alas, if you're subscribing the *third* node, this means you're
>> repeatedly taking tables out of a set and putting them back in. I
>> think the semantics of it break down at that point.
>
> Doh.

Keep bringing up ideas... Even if most of them "strike out," that's OK; we only need some to turn out well in order to get improvements.

> I guess we go back to PostgreSQL needing a general-purpose improvement
> for VACUUM with long-running transactions.

I have been watching the "VSM" discussion on -hackers with some disappointment. I thought the notion of having an FSM-like structure to track dirtied pages would be an excellent idea for tables where large portions never get touched again once they stabilize.
Every time an update takes place, pages are marked dirty in the VSM, so you could run a modified VACUUM, perhaps a "VACUUM DIRTY", which would examine only the dirtied pages for dead tuples. Tom Lane keeps knocking down proposals... I'm hoping it's just that the proposers need to find the right approach, at which point "VACUUM DIRTY" can happen. I'm a bit concerned that the whole idea may look neat but be fundamentally flawed, such that there is no "right approach."
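Going back to Rod's create/add/merge procedure up-thread: for anyone who hasn't scripted it, the sequence for one table looks roughly like this in slonik. The node IDs, set IDs, and table name here are made up for illustration, and the wait step is just one way to pause until the subscription catches up:

```
# Sketch: subscribe one table via a temporary set, then merge it
# into the main set (set 1). IDs and names are hypothetical.
CREATE SET (ID = 999, ORIGIN = 1,
            COMMENT = 'temporary set for public.big_table');
SET ADD TABLE (SET ID = 999, ORIGIN = 1, ID = 42,
               FULLY QUALIFIED NAME = 'public.big_table');
SUBSCRIBE SET (ID = 999, PROVIDER = 1, RECEIVER = 2);

# Wait until node 2 has confirmed the subscription before merging.
WAIT FOR EVENT (ORIGIN = 1, CONFIRMED = 2, WAIT ON = 1);

# Fold the temporary set into the existing set 1, freeing ID 999
# for the next table.
MERGE SET (ID = 1, ADD ID = 999, ORIGIN = 1);
```

The point of the dance is that only the one table's initial copy holds a long transaction open; already-merged tables keep replicating through set 1 in the meantime.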
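The performance problem Rod mentions comes from the log-selection query: with several sets subscribed, it degenerates into a disjunction, shaped very roughly like the following. This is a schematic illustration, not the actual query slon generates, though the sl_log_1 column names are the real ones:

```sql
-- Schematic shape of the log-fetch query with multiple sets:
-- one OR branch per set's group of tables.
SELECT log_origin, log_xid, log_tableid, log_cmdtype, log_cmddata
  FROM sl_log_1
 WHERE (log_origin = 1 AND log_tableid IN (1, 2, 3))   -- tables of set A
    OR (log_origin = 1 AND log_tableid IN (4, 5, 6))   -- tables of set B
 ORDER BY log_actionseq;
```

The extra index in the 1.1 schema is what lets the planner satisfy each branch with an index scan instead of falling back to a sequential scan over the whole log table; the exact index definition is in the 1.1 schema itself, but it is along the lines of an index led by log_origin.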