cbbrowne at ca.afilias.info cbbrowne
Sun Feb 6 13:02:09 PST 2005
> Hi,
>
> we are running Slony (1.0.5) on our servers (master node PostgreSQL
> 7.3.6, slave node PostgreSQL 7.4.6). Everything seems to be OK, but
> there is a difference in relation sizes between the master and slave
> nodes (relations on the slave are much larger). Example: relation
> 'results' on master: 831283200 bytes; relation 'results' on slave:
> 1660649472 bytes. The number of rows is the same on both (master and
> slave). Is this OK? I am afraid of doing a failover: will the database
> size grow even more? If there is something wrong, how can I fix it?

If the number of rows matches, then you've got the right data around.

I'd expect the relation to be larger on the origin, normally.  On that
node, you'll encounter inserts/updates that fail, leaving dead tuples
behind.  Those failed inserts/updates won't propagate to the
subscribers, so you'd expect the files to be a bit bigger at the
origin.

In an insert-only table, I would find the result VERY surprising.

Do you do a lot more vacuuming on this table on the origin node than you
do on the subscriber node?  If so, that would certainly explain the
phenomenon.

What you ought to do is run the following query on both nodes:

 vacuum verbose analyze name_of_this_table;

I expect what you'll find is that, on the subscriber node, there are a LOT
of dead tuples that the VACUUM will clean out.
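If you want figures to compare across the two nodes, pg_class carries a
page count that VACUUM/ANALYZE refreshes.  A rough size check, run after
the vacuum above and assuming the default 8k block size, would be
something like:

 -- relpages is only as fresh as the last VACUUM/ANALYZE;
 -- 8192 bytes is the default block size
 SELECT relname, relpages, reltuples,
        relpages * 8192 AS approx_bytes
   FROM pg_class
  WHERE relname = 'results';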

The file won't shrink unless you do a VACUUM FULL, which is a blocking
operation, so you sure do NOT want to do that on the origin node unless
you're prepared for your applications to wait for some time while the
vacuum completes.  The blocking is likely to be less of an inconvenience
on the subscriber; it'll certainly block replication from continuing
while VACUUM FULL runs, but that may be pretty acceptable.
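If you do decide to compact the subscriber's copy, a sketch of what that
might look like, using the 'results' table from your example and run
during a quiet period:

 -- subscriber only: takes an exclusive lock, rewrites the table,
 -- and holds up replication until it completes
 VACUUM FULL ANALYZE results;
 -- VACUUM FULL tends to leave the indexes bloated, so rebuilding
 -- them afterwards is often worth it
 REINDEX TABLE results;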

We make sure we vacuum even our bigger tables at least once or twice a day
so that things don't get to the point where VACUUM FULL would be
necessary.  In a sense, this isn't a "replication" issue...
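For that routine maintenance, a plain (non-FULL) VACUUM doesn't take an
exclusive lock, so it's safe to run while the application is active;
something as simple as the following, fired from cron once or twice a
day via psql or vacuumdb, generally keeps the bloat in check:

 -- reclaims dead tuples for reuse without blocking readers/writers;
 -- ANALYZE keeps the planner statistics up to date
 VACUUM ANALYZE results;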


