Jan Wieck JanWieck at Yahoo.com
Thu Apr 6 04:45:13 PDT 2006
On 4/5/2006 4:26 PM, F.Sluiter wrote:
> We have a setup where several databases share a schema with 5 tables.
> Many other tables depend on these 4 tables through foreign keys, but
> each database has its own unique set of those dependent tables, and
> their data is unique per database: the other schemas and tables
> therefore differ from one slave to the next.
> Updates to the shared tables are infrequent (a few records once a day),
> and I can enforce that ids (pkeys) are never changed or deleted and
> that all records are timestamped on creation.
> Each database has its own maintainer(s) and is owned by a different
> organisation, and each organisation has a few hundred users.
> 
> To keep the 4 tables consistent between all the databases, I am
> considering Slony to replicate them from a single source to the
> slaves (the slaves are not allowed to change those tables; only the
> master may).
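
A minimal slonik sketch of that kind of setup might look like the
following; the cluster name, conninfo strings and table names are
placeholders, not anything taken from this thread:

slonik <<'EOF'
cluster name = shared_cluster;

node 1 admin conninfo = 'dbname=masterdb host=master user=slony';
node 2 admin conninfo = 'dbname=orgdb1 host=slave1 user=slony';

init cluster (id = 1, comment = 'master - origin of the shared tables');
store node (id = 2, comment = 'organisation database 1');
store path (server = 1, client = 2, conninfo = 'dbname=masterdb host=master user=slony');
store path (server = 2, client = 1, conninfo = 'dbname=orgdb1 host=slave1 user=slony');

create set (id = 1, origin = 1, comment = 'shared reference tables');
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'shared.table_a');
set add table (set id = 1, origin = 1, id = 2, fully qualified name = 'shared.table_b');
# ... add the remaining shared tables the same way

subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
EOF

(Slony requires a primary key on every replicated table, which the
stable pkeys mentioned above already provide.)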
> 
> Now the catch:
> How do I back up the clients? Each night we run pg_dump on all the
> databases, and in case of trouble we need to be able to restore a
> single database separately, sometimes even to a version from a few
> weeks ago.
> 
> How do I get the shared tables in sync with the master, considering
> that emptying those tables for a rebuild is not a workable option
> because of the foreign key constraints?

If you keep the shared tables in one schema and the "local" tables in 
another, you can create a pg_dump of only the local tables. In case you 
need to rebuild a node, you drop it completely, recreate the "shared" 
part and let it subscribe. When the subscription is done and the shared 
part is back in sync, you restore the dump of the local part.
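
A rough sketch of what that could look like, assuming the shared tables
live in a schema called "shared" and everything node-local lives in a
schema called "local"; the schema and file names below are placeholders:

# Nightly backup: dump only the node-local schema; the Slony-maintained
# shared tables (and Slony's own cluster schema) are deliberately skipped.
pg_dump --schema=local --format=custom --file=orgdb1_local.dump orgdb1

# Rebuilding a node later:
# 1. Recreate the database and the DDL of the shared schema
#    (tables only, no data - Slony fills them during the initial copy).
createdb orgdb1
psql orgdb1 -f shared_schema_ddl.sql

# 2. Re-subscribe the node to the shared set (store node / store path /
#    subscribe set in a slonik script) and wait for the initial copy.
slonik resubscribe_orgdb1.slonik

# 3. Only now restore the local schema; its foreign keys into the shared
#    tables will find their referenced rows already in place.
pg_restore --dbname=orgdb1 orgdb1_local.dump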


Jan


> 
> Is Slony the way to go? Or is dbmirror a better solution in this case,
> since it doesn't require starting from an empty table, only a table
> with identical contents, which, although a pain, could be arranged
> manually after the dump is restored and before bringing replication
> back online? I'm not even sure dbmirror is stable enough; it does not
> seem to be used much.
> 
> A third alternative is creating a trigger that pushes changes to all
> databases directly through dblink, or simply copying all the SQL
> statements run on the master to all clients via a script. If I log
> those statements with a timestamp in a file, I can recreate a database
> up to any point in time and reissue that SQL against the shared tables
> to make them consistent again. But that sounds a lot like a manual
> replication system, and I'd rather use something automated.
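
For comparison, a very rough sketch of that trigger-plus-dblink idea,
with made-up connection strings, table and column names, and with no
error handling, retries or conflict resolution (exactly the things a
real replication system handles for you); it assumes the dblink contrib
module is installed on the master:

psql masterdb <<'EOF'
CREATE OR REPLACE FUNCTION shared.push_table_a() RETURNS trigger AS $$
BEGIN
    -- Push the new row to one slave; a real setup would loop over all slaves.
    PERFORM dblink_exec(
        'dbname=orgdb1 host=slave1 user=replicator',
        'INSERT INTO shared.table_a (id, name, created_at) VALUES ('
            || NEW.id::text || ', '
            || quote_literal(NEW.name) || ', '
            || quote_literal(NEW.created_at::text) || ')'
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER push_table_a
    AFTER INSERT ON shared.table_a
    FOR EACH ROW EXECUTE PROCEDURE shared.push_table_a();
EOF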
> 
> Any answers, thoughts and comments are welcome!
> 
> Floris
> _______________________________________________
> Slony1-general mailing list
> Slony1-general at gborg.postgresql.org
> http://gborg.postgresql.org/mailman/listinfo/slony1-general


-- 
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck at Yahoo.com #


