Steve Singer ssinger at ca.afilias.info
Mon Aug 24 14:35:36 PDT 2015
On 08/24/2015 05:29 PM, Dave Cramer wrote:
>
>     Now I assume the "only at (2,5003478340)" number is staying the same
>     and isn't going up. If it is going up then you actually have progress
>     being made, but I find that unlikely if node 2 isn't the origin of any
>     sets to node 1 (you would be caught up quickly).
>
>
>     Another option would be
>
>        cluster name=mycluster;
>        node 1 admin conninfo='...';
>        node 2 admin conninfo='...';
>        node 3 admin conninfo='...';
>        node 4 admin conninfo='...';
>        failover(id=3, backup node = 2);
>
>
>     Per the failover documentation
>
>     Nodes that are forwarding providers can also be passed to the
>     failover command as a failed node. The failover process will
>     redirect the subscriptions from these nodes to the backup node.
>
>
> failover provided no feedback and doesn't appear to have done anything.

Most slonik commands don't produce feedback when they work.

On your origin, run:

   select * FROM _mycluster.sl_subscribe;

Does it show node 3 as a provider?
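
If you want to narrow that down, something like this should work (a sketch;
the cluster name is assumed to be mycluster, matching the scripts here):

   -- any rows returned mean node 3 is still the provider for some receiver
   select sub_set, sub_provider, sub_receiver, sub_active
     from _mycluster.sl_subscribe
    where sub_provider = 3;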

If node 3 is not listed as a provider anywhere, you can then drop it:

--
cluster name=mycluster;
node 1 admin conninfo='';
node 2 admin conninfo='';
node 4 admin conninfo='';
drop node (id=3, event node=1);
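
If you save that as a file (the name drop_node3.slonik here is just a
placeholder), you can run it the usual way:

   slonik drop_node3.slonik

As above, expect no output on success; problems are reported as error
messages.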


>
>              are there paths between node 2 and 4?
>
>         There are but I thought I would try your suggestion which evokes a
>         different error message
>
>         waiting for events (2,5003485579) only at (2,5003478340),
>         (4,5002587214) only at (4,5002579907) to be confirmed on node 1
>
>                                    Dave Cramer
>
>                                    On 24 August 2015 at 15:38, Scott Marlowe
>                                    <scott.marlowe at gmail.com> wrote:
>
>                                         Note that the node will still show up
>                                         in sl_nodes and sl_status for a while,
>                                         until slony does a cleanup event / log
>                                         switch (can't remember which right now).
>                                         This is normal. Don't freak out.
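
To see when the dropped node is actually gone, you can watch for its row to
disappear on the remaining nodes. A sketch, assuming the same mycluster schema
as above (sl_node and these columns are the Slony-I catalog names):

   -- node 3's row goes away once the cleanup / log switch has run
   select no_id, no_active, no_comment from _mycluster.sl_node;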