Andy Dale andy.dale at gmail.com
Thu Dec 17 06:00:22 PST 2009
Hi Filip,

I managed to figure out that I was missing the event node part of the
store node command; this was what made me think it was not possible to do
it from the old master.
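
For the archive, a minimal slonik sketch of the command in question (the
cluster name, host names and conninfo strings are placeholders, not the
real setup):

    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=oldmaster user=slony';
    node 2 admin conninfo = 'dbname=mydb host=newmaster user=slony';

    # re-register the old master as node 1; the event node clause names an
    # existing node (here the current master, node 2) on which the
    # STORE_NODE configuration event is generated
    store node (id = 1, comment = 'previous master', event node = 2);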

Having multiple applications and hosts sounds like it makes failover pretty
difficult.

Cheers,

Andy

2009/12/17 Filip Rembiałkowski <plk.zuber at gmail.com>

>
>
> On 17 December 2009 at 12:48, Andy Dale <andy.dale at gmail.com> wrote:
>
> Hi Filip,
>>
>> Thanks for the quick response on this.  The good news is that the only
>> application that writes data back to the previous master DB will not be
>> running during the failover process (handled via resource ordering "no
>> application with DB" in Heartbeat).
>>
>
> That's lucky. Good to hear that someone has such a clean setup :-)
>
> Most of my production environments consisted of several applications
> connecting to the database from different hosts, sometimes via a
> load balancer, sometimes directly...
>
>
>
>>
>> The steps for adding the "previous master" must be performed on the
>> current master - is this correct?
>>
>>
> All slonik tasks can be done from the shell of any node participating in
> the cluster.
> But I'd perform the procedure on the "previous master", since there you
> will have direct access to the slon processes / logs.
>
>
>
>> Cheers,
>>
>> Andy
>>
>> 2009/12/17 Filip Rembiałkowski <plk.zuber at gmail.com>
>>
>>
>>> 2009/12/17 Andy Dale <andy.dale at gmail.com>
>>>
>>> Hi,
>>>>
>>>> For a project I am working on we are using slony (1.2-16) to replicate
>>>> data from a master to 2 slaves.
>>>>
>>>> The controlled failover (move set) works really well; however, to do this
>>>> all nodes must be reachable.  There is a requirement to perform a failover
>>>> when the master site is no longer reachable, and to do this I am currently
>>>> using the failover command followed by the drop node command.
>>>>
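>>>> For reference, a minimal slonik sketch of this emergency path (the node
>>>> ids, cluster name and conninfo strings are placeholders, not the exact
>>>> script used here; node 1 is the unreachable master, node 2 the surviving
>>>> backup):
>>>>
>>>>     cluster name = mycluster;
>>>>     node 1 admin conninfo = 'dbname=mydb host=oldmaster user=slony';
>>>>     node 2 admin conninfo = 'dbname=mydb host=slave1 user=slony';
>>>>     node 3 admin conninfo = 'dbname=mydb host=slave2 user=slony';
>>>>
>>>>     # promote node 2 to origin even though node 1 cannot be reached
>>>>     failover (id = 1, backup node = 2);
>>>>     # then remove the dead node from the cluster configuration
>>>>     drop node (id = 1, event node = 2);
>>>>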
>>>> The problem now is that it is very likely that the old master will be
>>>> started up in the same state as before the failover was performed (with
>>>> all the old replication settings), and this site should then become the
>>>> master again (with any new data that was added while it was offline).
>>>> What is the best approach to this problem?  The only solution I can think
>>>> of is:
>>>>
>>>> 1) Stop any slon process on the previous master site.
>>>> 2) Drop "replication schema" at the previous master.
>>>> 3) Add the node and paths for the previous master to the currently running
>>>> slony cluster.
>>>> 4) Subscribe the previous master to the replication set(s).
>>>> 5) Move set(s) back to the previous master.
>>>>
>>>> Is there a better approach than the one above?
>>>>
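>>>> A rough slonik sketch of steps 3-5 above, with placeholder names (node 1
>>>> is the previous master being re-added, node 2 the current master, set 1
>>>> the replication set; step 2 would typically amount to a
>>>> DROP SCHEMA "_mycluster" CASCADE in psql on the previous master):
>>>>
>>>>     cluster name = mycluster;
>>>>     node 1 admin conninfo = 'dbname=mydb host=oldmaster user=slony';
>>>>     node 2 admin conninfo = 'dbname=mydb host=newmaster user=slony';
>>>>
>>>>     # step 3: re-add the node and the paths between it and the current master
>>>>     store node (id = 1, comment = 'previous master', event node = 2);
>>>>     store path (server = 1, client = 2,
>>>>                 conninfo = 'dbname=mydb host=oldmaster user=slony');
>>>>     store path (server = 2, client = 1,
>>>>                 conninfo = 'dbname=mydb host=newmaster user=slony');
>>>>
>>>>     # step 4: subscribe the re-added node (its slon daemon must be running)
>>>>     subscribe set (id = 1, provider = 2, receiver = 1, forward = yes);
>>>>
>>>>     # step 5: once the subscription has caught up, hand the origin back
>>>>     lock set (id = 1, origin = 2);
>>>>     move set (id = 1, old origin = 2, new origin = 1);
>>>>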
>>>>
>>> Sounds very sensible.
>>>
>>> Comments:
>>>
>>> - you will need to bring the slon process up again (or maybe skip point 1)
>>>
>>> - you definitely need some "STONITH" mechanism, enabled simultaneously
>>> with the failover.
>>>
>>> That is, make 100% sure that no application connects to the "previous
>>> master" thinking it is still the master and issues data manipulation
>>> there.
>>>
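>>> One database-level illustration of that kind of fencing (just an example,
>>> not something from this setup, and not a substitute for real STONITH): have
>>> pg_hba.conf on the old master reject the application role before PostgreSQL
>>> is allowed to start there, while still letting the Slony user in for the
>>> re-subscription.  The first matching line wins, so the reject entry must
>>> come before any broader allow rule:
>>>
>>>     # pg_hba.conf on the fenced "previous master" (names/subnets are made up)
>>>     host    mydb    appuser    10.0.0.0/24    reject
>>>     host    mydb    slony      10.0.1.0/24    md5
>>>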
>>> You understand it would be definitely uncool - ranging from "just a mess"
>>> to "apocalypse" - depending on how sensitive your transactions are from a
>>> business point of view.
>>>
>>>
>>>
>>> --
>>> Filip Rembiałkowski
>>> JID,mailto:filip.rembialkowski at gmail.com
>>> http://filip.rembialkowski.net/
>>>
>>
>>
>
>
> --
> Filip Rembiałkowski
> JID,mailto:filip.rembialkowski at gmail.com
> http://filip.rembialkowski.net/
>