Jan Wieck JanWieck at Yahoo.com
Thu Nov 11 16:15:56 PST 2004
On 11/10/2004 9:28 PM, Rod Taylor wrote:

> On Wed, 2004-11-10 at 09:25 -0500, Rod Taylor wrote:
>> The DECLARE'd statement which finds the 2000 or so statements within a
>> "snapshot" seems to take a little over a minute to FETCH the first 100
>> entries. Statement attached.
> 
> After bumping the snapshots to 100, it managed to get through the
> contents of the 48-hour transaction within about 8 hours -- the 48-hour
> transaction was running while Slony was doing the COPY step.
> 
> I think Slony could do significantly better if it retrieved all syncs
> that have the same minxid in one shot -- even if this goes far beyond
> the maximum sync group size.
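
For reference, the step Rod is timing in his first paragraph above is a
cursor over the Slony log table. A minimal sketch follows; the real
statement was attached to his mail and is not reproduced here, the
sl_log_1 table and its columns are only the usual Slony-I 1.x layout
assumed for illustration, and the cluster schema is assumed to be on the
search_path:

    BEGIN;
    -- Open a cursor over the replicated row changes for one sync group.
    -- The real statement also restricts log_xid to the snapshot
    -- boundaries of the sync(s) being applied; that part was in the
    -- attachment and is omitted here.
    DECLARE sync_cur CURSOR FOR
        SELECT log_origin, log_xid, log_cmdtype, log_cmddata
          FROM sl_log_1
         ORDER BY log_actionseq;
    -- Fetching the first 100 rows is the step reported to take over a minute.
    FETCH 100 FROM sync_cur;
    CLOSE sync_cur;
    COMMIT;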

The reason for not going beyond the maximum sync group size is to avoid 
redoing all of the work if anything goes wrong. It is bad enough that 
the copy_set needs to be done in one single, humongous transaction.

What you seem to be experiencing here are several of the nice performance 
improvements that PostgreSQL received after 7.2 ... just in reverse.

> 
> - Have Slony track the last minXid for a sync group.
> - If the new minXid for the current Sync Group is the same as the
> previous, see if there are other Syncs with the same min. If there are,
> add them to the current group and process them all as a single
> extra-large group.
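
For concreteness, the lookup described above would be roughly the
following. The sl_event table and its ev_* columns follow the usual
Slony-I 1.x catalog and are used here only as assumptions; the literal
values are hypothetical placeholders:

    -- Find further SYNC events whose minxid matches the group just
    -- built, so they could be folded into the same apply transaction.
    SELECT ev_origin, ev_seqno
      FROM sl_event
     WHERE ev_type   = 'SYNC'
       AND ev_origin = 1            -- hypothetical origin node id
       AND ev_minxid = '1000000'    -- minxid of the current group (hypothetical)
       AND ev_seqno  > 41000        -- hypothetical: only events not yet applied
     ORDER BY ev_seqno;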

Have you thought about the side effects of such a strategy? For example, 
the minxid of all transactions that start while a pg_dump is running 
will be the same. I am not sure you really want to apply hours' worth of 
replication data in one step.
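
To see how large such groups could get, one could count how many SYNCs
share a minxid and what wall-clock span they cover. A hedged sketch with
the same assumed catalog names:

    -- During a long pg_dump every new snapshot keeps the dump's xid as
    -- its minxid, so a single group keyed on minxid can span hours of
    -- changes.
    SELECT ev_minxid,
           count(*)          AS syncs,
           min(ev_timestamp) AS first_sync,
           max(ev_timestamp) AS last_sync
      FROM sl_event
     WHERE ev_type = 'SYNC'
     GROUP BY ev_minxid
     ORDER BY syncs DESC;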


Jan

-- 
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck at Yahoo.com #

