Bill Moran wmoran at collaborativefusion.com
Tue Jan 22 06:09:26 PST 2008
In response to "Diego Algorta Casamayou" <diego.algorta at gmail.com>:

> On Jan 11, 2008 7:32 PM, Christopher Browne <cbbrowne at ca.afilias.info> wrote:
> > Tim Goodaire <tgoodair at ca.afilias.info> writes:
> > > You could take your backup from the origin instead of the subscriber.
> >
> > You can do considerably better than that...
> >
> > You need to take the *SCHEMA DUMP* from the origin, which isn't hugely
> > expensive.
> >
> > You can then take a data-only dump from a subscriber.
> >
> > You can then meld them together, to give you a Proper Schema, and a
> > dump that *didn't* have to open a 4-hour-long transaction against the
> > origin node.
> 
> OK. So just to summarize:
> 
> On the master:
> $ pg_dump -Fc -s -f database_schema.sql my_replicated_database
> 
> On the slave:
> $ pg_dump -Fc -a -f database_data.sql my_replicated_database

I recommend the addition of --disable-triggers for the second one.  Note
that since these are custom-format (-Fc) dumps, pg_dump only honors that
flag for plain-text data-only dumps, so you pass it to pg_restore when
loading the data instead.
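With the custom-format dumps and the database/file names from the example
above, that would look something like:

```shell
# Load the data-only dump with triggers disabled, so foreign-key and
# other triggers don't fire for every restored row.
pg_restore -d my_restored_database --disable-triggers database_data.sql
```

Keep in mind that disabling triggers this way generally requires
connecting as a superuser.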

But more important is to test the restore process periodically to ensure
that it works.
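A minimal sketch of such a periodic check, run from cron against a
scratch database (the script, the scratch database name, and the
"some_important_table" sanity query are all hypothetical, adjust to
taste):

```shell
#!/bin/sh
# Hypothetical periodic restore test: rebuild a scratch database from
# the latest dumps and run a basic sanity query.  Exits non-zero on
# any failure so cron can alert you.
set -e

SCRATCH=restore_test_db

# Drop any leftover scratch database from a previous run, then rebuild.
dropdb "$SCRATCH" 2>/dev/null || true
createdb "$SCRATCH"
pg_restore -d "$SCRATCH" database_schema.sql
pg_restore -d "$SCRATCH" --disable-triggers database_data.sql

# Sanity check: make sure at least one expected table actually has rows.
ROWS=$(psql -At -d "$SCRATCH" -c "SELECT count(*) FROM some_important_table")
[ "$ROWS" -gt 0 ] || { echo "restore test failed: no rows" >&2; exit 1; }
```

A backup you have never restored is only a hope, not a backup.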

> To restore the backup:
> $ createdb my_restored_database
> $ pg_restore -d my_restored_database database_schema.sql
> $ pg_restore -d my_restored_database database_data.sql
> 
> And that's it? Any option/switch I'm missing?
> 
> What could I use this restored database for? Could I use it as the
> master if the master died? In that case, I think a failover to the
> running slave would be faster.
> What should I do to use it in a non-replicated environment, such as a
> staging environment for testing, where I need production-like data?

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran at collaborativefusion.com
Phone: 412-422-3463x4023
