Dave Cramer davecramer at gmail.com
Mon Aug 24 14:29:42 PDT 2015
On 24 August 2015 at 17:23, Steve Singer <ssinger at ca.afilias.info> wrote:

> On 08/24/2015 05:17 PM, Dave Cramer wrote:
>
>>
>> On 24 August 2015 at 17:13, Steve Singer <ssinger at ca.afilias.info>
>> wrote:
>>
>>     On 08/24/2015 05:06 PM, Dave Cramer wrote:
>>
>>              and a slonik script like
>>              ---------------
>>              cluster name=mycluster;
>>              node 1 admin conninfo='...';
>>              node 4 admin conninfo='...';
>>
>>              resubscribe node(origin=1,provider=1,receiver=4);
>>              ----------
>>
>>              is waiting?
>>
>>              If so, it should tell you which node it is
>>              waiting for events from.
>>
>>
>>         Close enough. I went from 2 to 4 and this is the output:
>>
>>         waiting for events  (4,5002587214) only at (4,5002579907) to be
>>         confirmed on node 2
>>
>
> Now I assume the "only at (2,5003478340)" number is staying the same and
> isn't going up. If it is going up, then you actually have progress being
> made, but I find that unlikely if node 2 isn't the origin of any sets
> going to node 1 (you would be caught up quickly).
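
To see whether that confirmation number is actually moving, the counters
can be polled directly. A minimal sketch, assuming the cluster is named
"mycluster" (so its schema is "_mycluster"); run it a few times and watch
whether last_confirmed rises for the pair in question:

  -- highest confirmed event sequence per (origin, receiver) pair
  SELECT con_origin, con_received, max(con_seqno) AS last_confirmed
    FROM _mycluster.sl_confirm
   GROUP BY con_origin, con_received
   ORDER BY con_origin, con_received;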
>
>
> Another option would be
>
>   cluster name=mycluster;
>   node 1 admin conninfo='...';
>   node 2 admin conninfo='...';
>   node 3 admin conninfo='...';
>   node 4 admin conninfo='...';
>   failover (id = 3, backup node = 2);
>
>
> Per the failover documentation
>
> Nodes that are forwarding providers can also be passed to the failover
> command as a failed node. The failover process will redirect the
> subscriptions from these nodes to the backup node.


The failover command produced no feedback and doesn't appear to have done anything.
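
One way to check whether the failover actually redirected anything is to
look at the subscriptions afterwards. A minimal sketch, again assuming the
"_mycluster" schema; after a successful failover of node 3, no row should
still name node 3 as provider or receiver:

  -- current subscriptions as slony sees them
  SELECT sub_set, sub_provider, sub_receiver, sub_active
    FROM _mycluster.sl_subscribe
   ORDER BY sub_set, sub_receiver;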

>
>>     are there paths between node 2 and 4?
>>
>> There are, but I thought I would try your suggestion, which produces a
>> different error message:
>>
>> waiting for events  (2,5003485579) only at (2,5003478340),
>> (4,5002587214) only at (4,5002579907) to be confirmed on node 1
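
The paths themselves can be verified the same way. A minimal sketch,
assuming the "_mycluster" schema; both directions should come back:

  -- paths registered between nodes 2 and 4
  SELECT pa_server, pa_client, pa_conninfo
    FROM _mycluster.sl_path
   WHERE (pa_server = 2 AND pa_client = 4)
      OR (pa_server = 4 AND pa_client = 2);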
>>
>>                           Dave Cramer
>>
>>                           On 24 August 2015 at 15:38, Scott Marlowe
>>                           <scott.marlowe at gmail.com> wrote:
>>
>>                                Note that the node will still show up in
>>                                sl_nodes and sl_status for a while, until
>>                                slony does a cleanup event / log switch
>>                                (can't remember which right now). This is
>>                                normal. Don't freak out.
>>
>
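
Following up on Scott's note above: whether the node has actually been
cleaned out yet can be checked directly. A minimal sketch, assuming the
"_mycluster" schema; the dropped node should disappear from this list
after the next cleanup/log-switch cycle:

  -- nodes slony still knows about
  SELECT no_id, no_active, no_comment
    FROM _mycluster.sl_node
   ORDER BY no_id;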