Brian Fehrle brianf at consistentstate.com
Wed Sep 19 15:46:08 PDT 2012
Hi all,

Postgres 8.4, slony 1.2.21

Previously I had reached out on an issue where the sl_log_1 or sl_log_2 
table would get so full that replication would come to a crawl, only 
processing one event at a time. It seems as though huge data 
inserts, updates, or deletes to replicated tables are the cause, and being 
on slony 1.2 there isn't much we can do to get around it.

The sl_log table was above 9 million rows when we saw this 
issue. We are now going down the path of doing much smaller 
groups of updates so we don't get into the same condition as before. We 
just did 1.2 million rows worth of updates and it only took a few 
minutes to replicate it all to the slave. Good news.
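
As a sketch, the batching approach looks roughly like the following 
(table and column names here are placeholders, not our real schema; 
the chunking on an id range assumes a dense integer key):

```sql
-- hypothetical example: break one huge update into smaller batches,
-- pausing between them so slony can drain sl_log before the next chunk
UPDATE big_table SET flag = true
 WHERE id BETWEEN 1 AND 100000;

-- ...wait for replication to catch up, then continue...
UPDATE big_table SET flag = true
 WHERE id BETWEEN 100001 AND 200000;
```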

But now our sl_log_1 table is sitting at 1.2 million rows, and we'd like 
to let it be switched and truncated by slony before kicking off a few 
more million rows worth of updates. From what I can tell from the 
documentation, this doesn't happen all that often on its own.
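
For reference, a quick way to see how big both log tables currently are 
on the master (assuming the cluster schema is _slony, as in the call 
below; count(*) can be slow on large tables):

```sql
-- row counts of the two slony log tables; one is active, the other
-- should be empty (or draining) after a completed log switch
SELECT 'sl_log_1' AS log_table, count(*) FROM _slony.sl_log_1
UNION ALL
SELECT 'sl_log_2', count(*) FROM _slony.sl_log_2;
```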

So what are the thoughts on manually kicking off the log switch via 
"select _slony.logswitch_start()" on the master? I reviewed the code and 
it won't let a switch occur if one is already in progress, so it seems 
it's being pretty safe in its execution. However, it looks like all it 
really does is update a sequence to say "we're currently switching" and 
then slony does the rest in the background.
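
Concretely, here's what I'm proposing to run. The read of the 
sl_log_status sequence reflects my understanding of how slony tracks the 
switch state (0/1 = log 1/2 active, 2/3 = a switch is in progress), so 
treat that mapping as an assumption on my part:

```sql
-- manually kick off a log switch on the master
SELECT _slony.logswitch_start();

-- inspect the switch state; a last_value of 2 or 3 would mean
-- a switch is currently in progress (assumed mapping, see above)
SELECT last_value FROM _slony.sl_log_status;
```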

So my questions are: 1. Is this a safe practice? We may be doing 
it multiple times a day (guesstimate: ten or more times). And 2. what is 
slony doing in the background for this to occur? It looks like it 
actually switches to the new log right away, but takes some time before 
the old log is truncated; does it need to wait until a cleanup event can 
run on the data within, i.e. about 10 minutes? (#2 is more out of 
curiosity.)

Thanks in advance,
- Brian F


More information about the Slony1-general mailing list