Fri Apr 16 09:00:26 PDT 2010
- Previous message: [Slony1-general] Slony-L: alterTableRestore() table with id not found
- Next message: [Slony1-general] recreating a cluster when the master dies
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Greetings all,

I have a master-slave setup and am trying to automate a recovery scenario in which the master fails and is recreated from scratch using a dump of the slave's database. Here is the flow of events I am using to test the transition:

1. The cluster is registered, the master and slave are in sync, all good.

2. The master dies. The master database is recreated from scratch using a dump of the slave's database.

3. The master-slave replication cluster is deleted using the following slonik snippet:

       cluster name = my_cluster;
       node 1 admin conninfo = 'dbname=replica_test_master host=localhost user=postgres';
       node 2 admin conninfo = 'dbname=replica_test_slave host=localhost user=postgres';
       uninstall node ( id = 1 );
       uninstall node ( id = 2 );

4. The Slony cluster is recreated from scratch using the exact same commands used in step 1.

5. Data is inserted into the master database, but IT IS NOT propagated to the slave.

The last lines logged by the slon process running against the slave are the following:

    2010-04-16 11:39:42 AST CONFIG version for "dbname=replica_test_slave user=postgres" is 80401
    2010-04-16 11:39:42 AST CONFIG remoteWorkerThread_1: update provider configuration
    2010-04-16 11:39:42 AST CONFIG version for "dbname=replica_test_master host=localhost user=postgres" is 80401
    TODO: ********** remoteWorkerThread: node 1 - EVENT 1,27 STORE_NODE - unknown event type
    2010-04-16 11:39:42 AST CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
    TODO: ********** remoteWorkerThread: node 1 - EVENT 1,28 ENABLE_NODE - unknown event type
    2010-04-16 11:39:42 AST CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
    2010-04-16 11:39:42 AST CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
    2010-04-16 11:39:42 AST CONFIG remoteWorkerThread_1: update provider configuration

These log events are the same as when the cluster is working flawlessly (although more events are logged after these, of course).
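For reference, the setup used in steps 1 and 4 is essentially the standard slonik init sequence, along these lines (the set contents and table name below are illustrative placeholders, not my exact commands):

    cluster name = my_cluster;
    node 1 admin conninfo = 'dbname=replica_test_master host=localhost user=postgres';
    node 2 admin conninfo = 'dbname=replica_test_slave host=localhost user=postgres';

    # create the cluster on the master and register the slave
    init cluster ( id = 1, comment = 'master' );
    store node ( id = 2, comment = 'slave', event node = 1 );

    # connection paths in both directions
    store path ( server = 1, client = 2, conninfo = 'dbname=replica_test_master host=localhost user=postgres' );
    store path ( server = 2, client = 1, conninfo = 'dbname=replica_test_slave host=localhost user=postgres' );

    # replication set with the tables to replicate (placeholder table)
    create set ( id = 1, origin = 1, comment = 'replicated tables' );
    set add table ( set id = 1, origin = 1, id = 1, fully qualified name = 'public.my_table' );

    # subscribe the slave to the set
    subscribe set ( id = 1, provider = 1, receiver = 2, forward = no );

After this, one slon daemon is started per node against its respective database.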
It looks as though the replication silently stops working for no apparent reason. Could anyone please help me understand what might be going wrong?

Thanks,
Albert