Wed Oct 17 15:35:00 PDT 2012
- Previous message: [Slony1-hackers] Failover never completes
- Next message: [Slony1-hackers] Failover never completes
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On 10/16/2012 12:56 PM, Joe Conway wrote:
> On 10/16/2012 05:50 AM, Steve Singer wrote:
>> On 12-10-15 11:20 PM, Joe Conway wrote:
>>> We are using 2.1.0. We tried upgrading to 2.1.2 but got stuck
>>> because we cannot have a mixed 2.1.0/2.1.2 cluster. We have
>>> constraints that do not allow for upgrade-in-place of existing
>>> nodes, which is why we want to add a new node and fail over to it
>>> (to facilitate upgrades of components other than Slony, e.g.
>>> PostgreSQL itself).

Please elaborate on those constraints. Does that mean you cannot deploy
any binaries on an existing, running master? If that is the case, you
could deploy the 2.1.2 binaries on all replicas without using them yet.
Switch over to one of them (still using 2.1.0) to deploy 2.1.2 on the
previous master (now a replica), then use the regular Slony upgrade
mechanism from there.

>> So you are:
>> 1. Adding a new node
>> 2. Stopping the old node
>> 3. Running UPDATE FUNCTIONS on the new node
>> 4. Starting up the new slon and running FAILOVER?

> No, as I understand it from
> http://slony.info/documentation/slonyupgrade.html
> we would need to:
>
> 1) Stop the slon processes on all nodes (i.e. the old version of
>    slon).
> 2) Install the new version of the slon software on all nodes.
> 3) Execute a slonik script containing the command
>    update functions (id = [whatever]); for each node in the cluster.
>
> We are trying to avoid #1, and in any case cannot easily do #2 (no
> upgrade in place).
>
> At the moment we are testing with clusters that are all running
> 2.1.0. It is in this configuration that failover is failing.

People need to stop using FAILOVER when there is no actual physical
problem with the existing master node. What you probably want to do
instead is a controlled MOVE SET.

> We could possibly test a cluster with all 2.1.2, which might be
> instructive, especially if it turns out that the problem we are
> running into is solved in 2.1.2. However, we would still have the
> challenge of getting from existing 2.1.0 clusters to 2.1.2 clusters
> without excessive downtime.

Stopping the slon processes, running the update functions slonik
script, and starting the slon processes again doesn't take hours.

Jan

--
Anyone who trades liberty for security deserves neither liberty nor
security.  -- Benjamin Franklin
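The controlled switchover Jan recommends can be sketched as a slonik
script along these lines (the cluster name, node IDs, set ID, and
conninfo strings are hypothetical placeholders, not taken from the
thread):

```
# switchover.slonik -- a sketch of MOVE SET, assuming node 1 is the
# current origin of set 1 and node 2 is the intended new origin.
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=oldmaster user=slony';
node 2 admin conninfo = 'dbname=mydb host=newmaster user=slony';

# Lock the set on the current origin so no new updates are accepted.
lock set (id = 1, origin = 1);

# Wait until node 2 has confirmed everything from node 1.
wait for event (origin = 1, confirmed = 2, wait on = 1);

# Transfer the origin; node 1 becomes a subscriber of node 2.
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2, wait on = 1);
```

Unlike FAILOVER, MOVE SET is lossless: the old origin stays in the
cluster as a subscriber, which is why it is preferred whenever the
existing master is still healthy.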
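The three-step upgrade procedure quoted above (stop slons, install the
new binaries, run update functions per node) might look like the
following for step 3, again with a hypothetical cluster name, node IDs,
and conninfo strings:

```
# upgrade.slonik -- a sketch, run after installing the new slon
# binaries and before restarting the slon processes.
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

# Reload the new version's stored procedures on every node.
update functions (id = 1);
update functions (id = 2);
```

This is the step Jan refers to when he notes that stopping the slons,
running the update functions script, and restarting them "doesn't take
hours".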