Chris Browne cbbrowne at lists.slony.info
Wed Feb 27 11:37:05 PST 2008
Update of /home/cvsd/slony1/slony1-engine/doc/adminguide
In directory main.slony.info:/tmp/cvs-serv16748

Modified Files:
	adminscripts.sgml bestpractices.sgml failover.sgml 
	slonik_ref.sgml 
Log Message:
Modifications to address things noted as unclear this week on the 
Slony-I list:

1.  Why is FORWARDING=yes typically needed on a subscriber node?

[because otherwise, you can't fail over to that node!]

2.  RESTART NODE is a bit too strongly worded in claiming it is
useless after Slony-I 1.0.5.


Index: failover.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/failover.sgml,v
retrieving revision 1.25
retrieving revision 1.26
diff -C2 -d -r1.25 -r1.26
*** failover.sgml	25 Feb 2008 15:37:58 -0000	1.25
--- failover.sgml	27 Feb 2008 19:37:03 -0000	1.26
***************
*** 184,187 ****
--- 184,195 ----
  will receive anything from node1 any more.</para>
  
+ <note><para> In order for node 2 to be considered a candidate
+ for failover, it must have been set up with the <xref
+ linkend="stmtsubscribeset"> option <command>forward =
+ yes</command>, which has the effect that replication log data is
+ collected in &sllog1;/&sllog2; on node 2.  If replication log data is
+ <emphasis>not</emphasis> being collected, then failover to that node
+ is not possible. </para></note>
+ 
  </listitem>
  

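For reference, a minimal slonik fragment that sets up such a subscription with forwarding enabled might look like the following (cluster name and conninfo strings are illustrative, not taken from any particular installation):

```
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

-- forward = yes makes node 2 collect replication log data,
-- which keeps it viable as a failover target
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
```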
Index: adminscripts.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.49
retrieving revision 1.50
diff -C2 -d -r1.49 -r1.50
*** adminscripts.sgml	24 Sep 2007 21:07:45 -0000	1.49
--- adminscripts.sgml	27 Feb 2008 19:37:03 -0000	1.50
***************
*** 135,147 ****
  replicated.</para>
  </sect3>
! <sect3><title>slonik_drop_node</title>
  
- <para>Generates Slonik script to drop a node from a &slony1; cluster.</para>
  </sect3>
! <sect3><title>slonik_drop_set</title>
  
  <para>Generates Slonik script to drop a replication set
  (<emphasis>e.g.</emphasis> - set of tables and sequences) from a
  &slony1; cluster.</para>
  </sect3>
  
--- 135,162 ----
  replicated.</para>
  </sect3>
! <sect3 id="slonik-drop-node"><title>slonik_drop_node</title>
! 
! <para>Generates Slonik script to drop a node from a &slony1;
! cluster.</para>
  
  </sect3>
! <sect3 id="slonik-drop-set"><title>slonik_drop_set</title>
  
  <para>Generates Slonik script to drop a replication set
  (<emphasis>e.g.</emphasis> - set of tables and sequences) from a
  &slony1; cluster.</para>
+ 
+ <para> This represents a sizable potential <quote>foot gun</quote>,
+ as it eliminates an entire replication set at once.  A typo that
+ points it at the wrong set could be quite damaging.  Compare to <xref
+ linkend="slonik-unsubscribe-set"> and <xref
+ linkend="slonik-drop-node">; with both of those, attempting to drop a
+ subscription or a node that is vital to your operations will be
+ blocked (via a foreign key constraint violation) if there exists a
+ downstream subscriber that would be adversely affected.  In contrast,
+ there will be no warnings or errors if you drop a set; the set will
+ simply disappear from replication.
+ </para>
+ 
  </sect3>
  
***************
*** 232,239 ****
  
  <para>This goes through and drops the &slony1; schema from each node;
! use this if you want to destroy replication throughout a cluster.
! This is a <emphasis>VERY</emphasis> unsafe script!</para>
  
! </sect3><sect3><title>slonik_unsubscribe_set</title>
  
  <para>Generates Slonik script to unsubscribe a node from a replication set.</para>
--- 247,257 ----
  
  <para>This goes through and drops the &slony1; schema from each node;
! use this if you want to destroy replication throughout a cluster.  As
! its effects are necessarily destructive, it should be treated as a
! decidedly unsafe script.</para>
  
! </sect3>
! 
! <sect3 id="slonik-unsubscribe-set"><title>slonik_unsubscribe_set</title>
  
  <para>Generates Slonik script to unsubscribe a node from a replication set.</para>

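Given the warning above about dropping sets, it is worth generating the slonik script and reviewing it before applying it. A hypothetical shell session follows; the config path, set number, and option spelling are illustrative and should be checked against your altperl installation:

```
# Generate the slonik script first and inspect it, rather than
# piping the output straight into slonik.
slonik_drop_set --config /etc/slony1/slon_tools.conf 1 > drop_set_1.slonik
cat drop_set_1.slonik     # verify the set number is the intended one
slonik < drop_set_1.slonik
```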
Index: slonik_ref.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.80
retrieving revision 1.81
diff -C2 -d -r1.80 -r1.81
*** slonik_ref.sgml	25 Feb 2008 15:37:58 -0000	1.80
--- slonik_ref.sgml	27 Feb 2008 19:37:03 -0000	1.81
***************
*** 718,727 ****
      <title>Description</title>
      
!     <para> Causes an eventually running replication daemon on the
!     specified node to shutdown and restart itself.  Theoretically,
!     this command should be obsolete. In practice, TCP timeouts can
!     delay critical configuration changes to actually happen in the
!     case where a former forwarding node failed and needs to be
!     bypassed by subscribers.
       
       <variablelist>
--- 718,728 ----
      <title>Description</title>
      
!     <para> Causes any running replication daemon
!     (<application>slon</application> process) on the specified node to
!     shut down and restart itself.  In theory, this command should be
!     obsolete. In practice, TCP timeouts can prevent critical
!     configuration changes from taking effect promptly in the case
!     where a former forwarding node has failed and needs to be bypassed
!     by subscribers.
       
       <variablelist>
***************
*** 742,747 ****
      <para> No application-visible locking should take place. </para>
     </refsect1>
!    <refsect1> <title> Version Information </title>
!     <para> This command was introduced in &slony1; 1.0; its use should be unnecessary as of version 1.0.5. </para>
     </refsect1>
    </refentry>
--- 743,751 ----
      <para> No application-visible locking should take place. </para>
     </refsect1>
!    <refsect1> <title> Version Information </title> <para> This command
!    was introduced in &slony1; 1.0; frequent use became unnecessary as
!    of version 1.0.5.  There are, however, occasional cases where it is
!    necessary to interrupt a <application>slon</application> process,
!    and this command allows that to be scripted via slonik. </para>
     </refsect1>
    </refentry>
***************
*** 2075,2081 ****
         
         <listitem><para> Flag whether or not the new subscriber should
! 	 store the log information during replication to make it
! 	 possible candidate for the provider role for future
! 	 nodes.</para></listitem>
  
        </varlistentry>
--- 2079,2087 ----
         
         <listitem><para> Flag whether or not the new subscriber should
!        store the log information during replication to make it a
!        possible candidate for the provider role for future nodes.  Any
!        node that is intended to be a candidate for FAILOVER
!        <emphasis>must</emphasis> have <command>FORWARD =
!        yes</command>.</para></listitem>
  
        </varlistentry>

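As a concrete sketch of scripting a slon restart via slonik, a short script along these lines should suffice (cluster name and conninfo are illustrative):

```
cluster name = mycluster;
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

-- ask the slon process for node 2 to shut down and restart itself
restart node 2;
```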
Index: bestpractices.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/bestpractices.sgml,v
retrieving revision 1.30
retrieving revision 1.31
diff -C2 -d -r1.30 -r1.31
*** bestpractices.sgml	25 Feb 2008 15:37:58 -0000	1.30
--- bestpractices.sgml	27 Feb 2008 19:37:03 -0000	1.31
***************
*** 114,117 ****
--- 114,122 ----
  should be planned for ahead of time.  </para>
  
+ <para> Most pointedly, any node that is expected to be a failover
+ target must have its subscription(s) set up with the option
+ <command>FORWARD = YES</command>.  Otherwise, that node is not a
+ candidate for being promoted to origin node. </para>
+ 
  <para> This may simply involve thinking about what the priority lists
  should be of what should fail to what, as opposed to trying to


