Brian Fehrle brianf at consistentstate.com
Wed Nov 9 17:07:09 PST 2011
Hi all,

I've been testing the idea of maintaining a materialized view over two
tables that are replicated with Slony. The purpose of the materialized
view is to replace a standard view that is just dog slow. I got my test
system working with a few hitches, and would like any feedback /
thoughts / warnings, in case this is a bad idea to do in the first place.

First off, I'm currently testing with Slony 1.2.21 on PostgreSQL 8.3,
but the hope is to eventually be on Slony 2.1 and PostgreSQL 8.4, so
anything I do should ideally be compatible with that setup as well.

So in my test environment I set up two base tables, a users table and a
services table. The users table has a column that is a foreign key
referencing the services table. I then created a third table to act as
the materialized view; it holds two columns from each base table, joined
on that foreign key. Finally, I set up 3 separate triggers on the
'users' table that keep the view table in sync on any INSERT, UPDATE,
or DELETE.
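
Roughly like this, as a minimal sketch (the table, column, and function
names here are just illustrative, not my actual schema, and I've only
shown the INSERT trigger; the UPDATE and DELETE ones follow the same
pattern):

    CREATE TABLE services (
        service_id  integer PRIMARY KEY,
        name        text NOT NULL
    );

    CREATE TABLE users (
        user_id     integer PRIMARY KEY,
        username    text NOT NULL,
        service_id  integer REFERENCES services (service_id)
    );

    -- The "materialized view" is just a plain table kept in sync by triggers.
    CREATE TABLE matview_users_services (
        user_id      integer PRIMARY KEY,
        username     text,
        service_id   integer,
        service_name text
    );

    CREATE OR REPLACE FUNCTION matview_users_ins() RETURNS trigger AS $$
    BEGIN
        -- Mirror the new users row into the view table, pulling in
        -- the joined service name.
        INSERT INTO matview_users_services
        SELECT NEW.user_id, NEW.username, s.service_id, s.name
        FROM services s
        WHERE s.service_id = NEW.service_id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER users_matview_ins AFTER INSERT ON users
        FOR EACH ROW EXECUTE PROCEDURE matview_users_ins();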

I set up the DDL for these three tables, including the functions and
triggers, on two boxes. I then set up a Slony cluster across the two
machines, designating one as the master and one as the slave. The moment
I started the two slon daemons and the initial copy brought the slave up
to date with the master, the 3 triggers I had created on the slave table
were removed. These triggers were not removed on the Slony master, only
on the slave.
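
For reference, the slonik setup was along these lines (conninfos
simplified; note that only the two base tables are in the replication
set, not the view table itself):

    cluster name = test_cluster;
    node 1 admin conninfo = 'dbname=test host=master user=slony';
    node 2 admin conninfo = 'dbname=test host=slave user=slony';

    init cluster (id = 1, comment = 'master');
    store node (id = 2, comment = 'slave');
    store path (server = 1, client = 2,
                conninfo = 'dbname=test host=master user=slony');
    store path (server = 2, client = 1,
                conninfo = 'dbname=test host=slave user=slony');

    create set (id = 1, origin = 1, comment = 'base tables');
    set add table (set id = 1, origin = 1, id = 1,
                   fully qualified name = 'public.users');
    set add table (set id = 1, origin = 1, id = 2,
                   fully qualified name = 'public.services');

    subscribe set (id = 1, provider = 1, receiver = 2, forward = no);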

First question: is there a way to prevent this from happening? I know
newer versions of Slony have an OMIT COPY option, but does that also
stop the triggers from being removed? Even if so, I would still need to
get the slave table up to date with the master somehow, which is a
whole different issue.
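
If I'm reading the 2.x docs right, that option would go on the
subscription itself, something like:

    subscribe set (id = 1, provider = 1, receiver = 2,
                   forward = no, omit copy = true);

but it's not clear to me whether skipping the copy also skips the
trigger handling.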

So after this happened, I re-applied my triggers to the slave table and
started inserting/updating/deleting data on the master. In the test I
set up, the materialized view on the slave was successfully updated as
I would hope, so everything looks good so far.
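
The test itself was nothing fancy, on the order of (again with the
illustrative names from above):

    -- on the master
    INSERT INTO services VALUES (1, 'email');
    INSERT INTO users VALUES (1, 'brian', 1);
    UPDATE users SET username = 'brianf' WHERE user_id = 1;
    DELETE FROM users WHERE user_id = 1;

    -- on the slave, after replication catches up
    SELECT * FROM matview_users_services;

checking the slave's view table after each statement.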

It was mentioned on the postgres IRC channel that Slony handles updates
differently than I may think. I set log_min_duration_statement to 0 on
the slave so I could catch the statements it runs, but I actually
couldn't find them. The update in my test environment worked, but the
real environment this may be pushed to will have dozens of triggers on
9 or more tables, so my test may not be very useful in confirming that
this will even work.
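
For the record, the only logging change I made on the slave was:

    # postgresql.conf on the slave
    log_min_duration_statement = 0    # log every completed statement

followed by a pg_ctl reload, and I still didn't see the row changes I
expected in the log.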

So with this, I have two more questions:
1. My test materialized view is simple, but the real one would be
complex (9 or more tables, with possibly 3 triggers on each). Are there
any 'gotchas' or warnings that would make me rethink attempting this at all?

2. If I perform a switchover from the master to the slave, would the
act of switching over cause my triggers to disappear like they did when
I first set up the cluster? (I haven't had a chance to test this in my
test environment yet.) And if I do switch over, since the view table on
the slave (now master) is not technically part of replication, would
there be a chance of it becoming out of sync with the tables it's built
from? Also, when the old master becomes the slave, the hope is that its
view continues to be updated when the tables are updated via Slony.
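
(For question 2, the switchover I have in mind is the controlled slonik
kind, i.e. something like

    lock set (id = 1, origin = 1);
    move set (id = 1, old origin = 1, new origin = 2);

rather than a failover.)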

thanks for any thoughts / opinions on this,
- Brian F

