Christopher Browne cbbrowne
Tue Mar 21 11:49:10 PST 2006
Ujwal S. Setlur wrote:

>Hello,
>
>I am replicating a fairly large database (~100 million
>rows in one table, other tables hundreds to thousands
>of entries). Around 30,000 rows are added to the large
>table every hour. 
>
>Are there any general slony maintenance guidelines for
>such a scenario? For example, I noticed last night
>that sl_log_1 was very big (~25 million entries). Does
>slony take care of these things, or do I need to do
>anything?
>
The slon "cleanup thread" is supposed to take care of this on its own; see:

http://linuxfinances.info/info/maintenance.html

If you have 25M entries in sl_log_1, it seems likely to me that there is
some sort of event propagation problem that is preventing the log from
being cleaned out.
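
One quick check is to look at sl_confirm directly and see whether each
node's confirmations are advancing.  A sketch, where "mydb" and the
"_mycluster" schema are placeholders for your own database and cluster
name:

  psql -d mydb -c "
    SELECT con_origin, con_received, max(con_seqno) AS last_confirmed
      FROM _mycluster.sl_confirm
     GROUP BY con_origin, con_received
     ORDER BY con_origin, con_received;"

A node whose last_confirmed value lags far behind the others is likely
the one holding back the cleanup thread, since log entries can only be
trimmed once every subscriber has confirmed them.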

You might try running the program "test_slony_state.pl" (there's also a
DBI version) in the tools directory; it checks for a variety of possible
problems.

If events are propagating properly, and replication is staying fairly
well up to date, you should only have about 10 minutes' worth of old data
in sl_log_1 on the origin node.  At 30,000 rows per hour, ten minutes
works out to roughly 5,000 rows; allowing for activity on the other
tables, call it maybe 10K rows.  25M rows seems rather high.
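
For a quick comparison against that estimate (again, the database and
cluster names below are placeholders):

  psql -d mydb -c "SELECT count(*) FROM _mycluster.sl_log_1;"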

>Also, what is a good value for "g", "o", and "c"
>parameters for this scenario?
>
We often run with -g100 (maximum SYNC group size) and -o300000 (desired
sync time, in milliseconds); the default for -c (how often to vacuum, in
cleanup cycles) is 3, which seems to work fine for us.
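
For reference, those options go on the slon command line.  A sketch, with
the cluster name and connection string as placeholders:

  slon -g100 -o300000 mycluster "dbname=mydb host=origin.example.com user=slony"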

What I'd do first is to run test_slony_state.pl and see if that points
out any problems.  You could also just look at the queries in its code
and run them yourself, by hand.
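
For instance, the heart of what the script checks can be approximated by
watching the newest event generated on each origin (placeholders as
before):

  psql -d mydb -c "
    SELECT ev_origin, max(ev_seqno) AS latest_event
      FROM _mycluster.sl_event
     GROUP BY ev_origin;"

If latest_event keeps climbing while the per-node confirmations in
sl_confirm stand still, events are not propagating.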

The script is a very useful diagnostic tool...


