CVS User Account cvsuser
Fri Oct 20 09:01:55 PDT 2006
Log Message:
-----------
Add in docs for new shell scripts, reshape discussion of admin scripts

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        adminscripts.sgml (r1.40 -> r1.41)
        help.sgml (r1.18 -> r1.19)
        monitoring.sgml (r1.29 -> r1.30)

Index: monitoring.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v
retrieving revision 1.29
retrieving revision 1.30
diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.29 -r1.30
--- doc/adminguide/monitoring.sgml
+++ doc/adminguide/monitoring.sgml
@@ -100,8 +100,8 @@
 
 <indexterm><primary>script test_slony_state to test replication state</primary></indexterm>
 
-<para> This script is in preliminary stages, and may be used to do
-some analysis of the state of a &slony1; cluster.</para>
+<para> This script does various sorts of analysis of the state of a
+&slony1; cluster.</para>
 
 <para> You specify arguments including <option>database</option>,
 <option>host</option>, <option>user</option>,
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.40
retrieving revision 1.41
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.40 -r1.41
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -1,9 +1,18 @@
 <!-- $Id$ -->
-<sect1 id="altperl">
+<sect1 id="adminscripts">
 <title>&slony1; Administration Scripts</title>
 
 <indexterm><primary>administration scripts for &slony1;</primary></indexterm>
 
+<para> A number of tools have grown up over the history of &slony1;
+to help users manage their clusters.  This section, along with the
+ones on <xref linkend="monitoring"> and <xref linkend="maintenance">,
+discusses them. </para>
+
+<sect2 id="altperl"> <title>altperl Scripts</title>
+
+<indexterm><primary>altperl scripts for &slony1;</primary></indexterm>
+
 <para>In the <filename>altperl</filename> directory in the
 <application>CVS</application> tree, there is a sizable set of
 <application>Perl</application> scripts that may be used to administer
@@ -22,7 +31,7 @@
 <emphasis>before</emphasis> submitting it to <xref
 linkend="slonik">.</para>
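+
+<para> For illustration, the usual pattern (sketched here on the
+assumption that <envar>SLONYNODES</envar> and related variables are
+already set up) is to capture the generated script in a file, review
+it, and then feed it to <xref linkend="slonik">:</para>
+
+<programlisting>
+# generate a slonik script and review it by hand
+slonik_init_cluster > /tmp/init_cluster.slonik
+
+# once satisfied with it, submit it to slonik
+slonik /tmp/init_cluster.slonik
+</programlisting>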
 
-<sect2><title>Node/Cluster Configuration - cluster.nodes</title>
+<sect3><title>Node/Cluster Configuration - cluster.nodes</title>
 <indexterm><primary>cluster.nodes - node/cluster configuration for Perl tools</primary></indexterm>
 
 <para>The UNIX environment variable <envar>SLONYNODES</envar> is used
@@ -74,8 +83,8 @@
                                         # = disable,allow,prefer,require
 );
 </programlisting>
-</sect2>
-<sect2><title>Set configuration - cluster.set1, cluster.set2</title>
+</sect3>
+<sect3><title>Set configuration - cluster.set1, cluster.set2</title>
 <indexterm><primary>cluster.set1 - replication set configuration for Perl tools</primary></indexterm>
 
 <para>The UNIX environment variable <envar>SLONYSET</envar> is used to
@@ -117,8 +126,8 @@
 <para> An array of names of sequences that are to be replicated</para>
 </listitem>
 </itemizedlist>
-</sect2>
-<sect2><title>slonik_build_env</title>
+</sect3>
+<sect3><title>slonik_build_env</title>
 <indexterm><primary>slonik_build_env</primary></indexterm>
 
 <para>Queries a database, generating output hopefully suitable for
@@ -129,99 +138,99 @@
 <listitem><para> The arrays <envar>@KEYEDTABLES</envar>,
 <envar>@SERIALTABLES</envar>, and <envar>@SEQUENCES</envar></para></listitem>
 </itemizedlist>
-</sect2>
-<sect2><title>slonik_print_preamble</title>
+</sect3>
+<sect3><title>slonik_print_preamble</title>
 
 <para>This generates just the <quote>preamble</quote> that is required
 by all slonik scripts.  In effect, this provides a
 <quote>skeleton</quote> slonik script that does not do
 anything.</para>
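+
+<para> For illustration, the preamble it emits looks something like
+the following (the cluster name and connection strings here are just
+placeholders):</para>
+
+<programlisting>
+cluster name = mycluster;
+node 1 admin conninfo = 'dbname=mydb host=db1 user=slony';
+node 2 admin conninfo = 'dbname=mydb host=db2 user=slony';
+</programlisting>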
-</sect2>
-<sect2><title>slonik_create_set</title>
+</sect3>
+<sect3><title>slonik_create_set</title>
 
 <para>This requires <envar>SLONYSET</envar> to be set as well as
 <envar>SLONYNODES</envar>; it is used to generate the
 <command>slonik</command> script to set up a replication set
 consisting of a set of tables and sequences that are to be
 replicated.</para>
-</sect2>
-<sect2><title>slonik_drop_node</title>
+</sect3>
+<sect3><title>slonik_drop_node</title>
 
 <para>Generates Slonik script to drop a node from a &slony1; cluster.</para>
-</sect2>
-<sect2><title>slonik_drop_set</title>
+</sect3>
+<sect3><title>slonik_drop_set</title>
 
 <para>Generates Slonik script to drop a replication set
 (<emphasis>e.g.</emphasis> - set of tables and sequences) from a
 &slony1; cluster.</para>
-</sect2>
+</sect3>
 
-<sect2><title>slonik_drop_table</title>
+<sect3><title>slonik_drop_table</title>
 
 <para>Generates Slonik script to drop a table from replication.
 Requires, as input, the ID number of the table (available from table
 <envar>sl_table</envar>) that is to be dropped. </para>
-</sect2>
+</sect3>
 
-<sect2><title>slonik_execute_script</title>
+<sect3><title>slonik_execute_script</title>
 
 <para>Generates Slonik script to push DDL changes to a replication set.</para>
-</sect2>
-<sect2><title>slonik_failover</title>
+</sect3>
+<sect3><title>slonik_failover</title>
 
 <para>Generates Slonik script to request failover from a dead node to some new origin</para>
-</sect2>
-<sect2><title>slonik_init_cluster</title>
+</sect3>
+<sect3><title>slonik_init_cluster</title>
 
 <para>Generates Slonik script to initialize a whole &slony1; cluster,
 including setting up the nodes, communications paths, and the listener
 routing.</para>
-</sect2>
-<sect2><title>slonik_merge_sets</title>
+</sect3>
+<sect3><title>slonik_merge_sets</title>
 
 <para>Generates Slonik script to merge two replication sets together.</para>
-</sect2>
-<sect2><title>slonik_move_set</title>
+</sect3>
+<sect3><title>slonik_move_set</title>
 
 <para>Generates Slonik script to move the origin of a particular set to a different node.</para>
-</sect2>
-<sect2><title>replication_test</title>
+</sect3>
+<sect3><title>replication_test</title>
 
 <para>Script to test whether &slony1; is successfully replicating
 data.</para>
-</sect2>
-<sect2><title>slonik_restart_node</title>
+</sect3>
+<sect3><title>slonik_restart_node</title>
 
 <para>Generates Slonik script to request the restart of a node.  This was
 particularly useful pre-1.0.5 when nodes could get snarled up when
 slon daemons died.</para>
-</sect2>
-<sect2><title>slonik_restart_nodes</title>
+</sect3>
+<sect3><title>slonik_restart_nodes</title>
 
 <para>Generates Slonik script to restart all nodes in the cluster.  Not
 particularly useful.</para>
-</sect2>
-<sect2><title>slony_show_configuration</title>
+</sect3>
+<sect3><title>slony_show_configuration</title>
 
 <para>Displays an overview of how the environment (e.g. - <envar>SLONYNODES</envar>) is set
 to configure things.</para>
-</sect2>
-<sect2><title>slon_kill</title>
+</sect3>
+<sect3><title>slon_kill</title>
 
 <para>Kills slony watchdog and all slon daemons for the specified set.  It
 only works if those processes are running on the local host, of
 course!</para>
-</sect2>
-<sect2><title>slon_start</title>
+</sect3>
+<sect3><title>slon_start</title>
 
 <para>This starts a slon daemon for the specified cluster and node, and uses
 slon_watchdog to keep it running.</para>
-</sect2>
-<sect2><title>slon_watchdog</title>
+</sect3>
+<sect3><title>slon_watchdog</title>
 
 <para>Used by <command>slon_start</command>.</para>
 
-</sect2><sect2><title>slon_watchdog2</title>
+</sect3><sect3><title>slon_watchdog2</title>
 
 <para>This is a somewhat smarter watchdog; it monitors a particular
 &slony1; node, and restarts the slon process if it hasn't seen updates
@@ -230,36 +239,37 @@
 <para>This is helpful if there is an unreliable network connection such that
 the slon sometimes stops working without becoming aware of it.</para>
 
-</sect2>
-<sect2><title>slonik_store_node</title>
+</sect3>
+<sect3><title>slonik_store_node</title>
 
 <para>Adds a node to an existing cluster.</para>
-</sect2>
-<sect2><title>slonik_subscribe_set</title>
+</sect3>
+<sect3><title>slonik_subscribe_set</title>
 
 <para>Generates Slonik script to subscribe a particular node to a particular replication set.</para>
 
-</sect2><sect2><title>slonik_uninstall_nodes</title>
+</sect3><sect3><title>slonik_uninstall_nodes</title>
 
 <para>This goes through and drops the &slony1; schema from each node;
 use this if you want to destroy replication throughout a cluster.
 This is a <emphasis>VERY</emphasis> unsafe script!</para>
 
-</sect2><sect2><title>slonik_unsubscribe_set</title>
+</sect3><sect3><title>slonik_unsubscribe_set</title>
 
 <para>Generates Slonik script to unsubscribe a node from a replication set.</para>
 
-</sect2>
-<sect2><title>slonik_update_nodes</title>
+</sect3>
+<sect3><title>slonik_update_nodes</title>
 
 <para>Generates Slonik script to tell all the nodes to update the
 &slony1; functions.  This will typically be needed when you upgrade
 from one version of &slony1; to another.</para>
+</sect3>
 </sect2>
 
 <sect2 id="mkslonconf"><title>mkslonconf.sh</title>
 
-<indexterm><primary>script - mkslonconf.sh</primary></indexterm>
+<indexterm><primary>generating slon.conf files for &slony1;</primary></indexterm>
 
 <para> This is a shell script designed to rummage through a &slony1;
 cluster and generate a set of <filename>slon.conf</filename> files
@@ -355,8 +365,7 @@
 
 <sect2 id="launchclusters"><title> launch_clusters.sh </title>
 
-<indexterm><primary>script - launch_clusters.sh</primary></indexterm>
-
+<indexterm><primary>launching &slony1; cluster using slon.conf files</primary></indexterm>
 
 <para> This is another shell script which uses the configuration as
 set up by <filename>mkslonconf.sh</filename> and is intended to either
@@ -473,6 +482,171 @@
 </itemizedlist>
 
 </sect2>
+
+<sect2 id="configurereplication"> <title> Generating slonik scripts
+using <filename>configure-replication.sh</filename> </title>
+
+<indexterm><primary> generate slonik scripts for a cluster </primary></indexterm>
+
+<para> The <filename>tools</filename> script
+<filename>configure-replication.sh</filename> is intended to automate
+generating slonik scripts to configure replication.  This script is
+based on the configuration approach taken by the <xref
+linkend="testbed">.</para>
+
+<para> This script uses a number of environment variables (possibly a
+large number, if your configuration is particularly complex) to
+determine the shape of the configuration of a cluster.  It uses
+default values extensively, and in many cases relatively few
+environment values need to be set in order to get a viable
+configuration. </para>
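+
+<para> As a minimal sketch, the script can be run with no environment
+variables set at all, in which case everything falls back to the
+built-in defaults described below:</para>
+
+<programlisting>
+# all configuration comes from built-in defaults; override them via the
+# environment variables described in the following sections
+sh tools/configure-replication.sh
+</programlisting>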
+
+<sect3><title>Global Values</title>
+
+<para> There are some values that will be used universally across a
+cluster: </para>
+
+<variablelist>
+<varlistentry><term><envar>  CLUSTER </envar></term>
+<listitem><para> Name of the &slony1; cluster</para></listitem></varlistentry>
+<varlistentry><term><envar>  NUMNODES </envar></term>
+<listitem><para> Number of nodes to set up</para></listitem></varlistentry>
+
+<varlistentry><term><envar>  PGUSER </envar></term>
+<listitem><para> name of PostgreSQL superuser controlling replication</para></listitem></varlistentry>
+<varlistentry><term><envar>  PGPORT </envar></term>
+<listitem><para> default port number</para></listitem></varlistentry>
+<varlistentry><term><envar>  PGDATABASE </envar></term>
+<listitem><para> default database name</para></listitem></varlistentry>
+
+<varlistentry><term><envar>  TABLES </envar></term>
+<listitem><para> a list of fully qualified table names (<emphasis>e.g.</emphasis> - complete with
+           namespace, such as <command>public.my_table</command>)</para></listitem></varlistentry>
+<varlistentry><term><envar>  SEQUENCES </envar></term>
+<listitem><para> a list of fully qualified sequence names (<emphasis>e.g.</emphasis> - complete with
+           namespace, such as <command>public.my_sequence</command>)</para></listitem></varlistentry>
+
+</variablelist>
+
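+<para> For example, the global values for a small cluster might be
+sketched out as follows (all of the values shown, and the use of
+space-separated lists, are merely illustrative):</para>
+
+<programlisting>
+export CLUSTER=mycluster
+export NUMNODES=2
+export PGUSER=postgres
+export PGPORT=5432
+export PGDATABASE=mydb
+export TABLES="public.accounts public.orders"
+export SEQUENCES="public.order_id_seq"
+</programlisting>
+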
+<para>Defaults are provided for <emphasis>all</emphasis> of these
+values, so that if you run
+<filename>configure-replication.sh</filename> without setting any
+environment variables, you will get a set of slonik scripts.  They may
+not correspond, of course, to any database you actually want to
+use...</para>
+</sect3>
+
+<sect3><title>Node-Specific Values</title>
+
+<para>For each node, there are also four environment variables; for node 1: </para>
+<variablelist>
+<varlistentry><term><envar>  DB1 </envar></term>
+<listitem><para> database to connect to</para></listitem></varlistentry>
+<varlistentry><term><envar>  USER1 </envar></term>
+<listitem><para> superuser to connect as</para></listitem></varlistentry>
+<varlistentry><term><envar>  PORT1 </envar></term>
+<listitem><para> port</para></listitem></varlistentry>
+<varlistentry><term><envar>  HOST1 </envar></term>
+<listitem><para> host</para></listitem></varlistentry>
+</variablelist>
+
+<para> It is quite likely that <envar>DB*</envar>,
+<envar>USER*</envar>, and <envar>PORT*</envar> should be drawn from
+the global <envar>PGDATABASE</envar>, <envar>PGUSER</envar>, and
+<envar>PGPORT</envar> values above; having the discipline of that sort
+of uniformity is usually a good thing.</para>
+
+<para> In contrast, <envar>HOST*</envar> values should be set
+explicitly for <envar>HOST1</envar>, <envar>HOST2</envar>, ..., as you
+don't get much benefit from the redundancy replication provides if all
+your databases are on the same server!</para>
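+
+<para> Continuing the sketch above, a two-node configuration might set
+the following (again, all of the values are placeholders):</para>
+
+<programlisting>
+# node 1 - the origin
+export DB1=mydb USER1=postgres PORT1=5432 HOST1=db1.example.com
+
+# node 2 - a subscriber on a different server
+export DB2=mydb USER2=postgres PORT2=5432 HOST2=db2.example.com
+</programlisting>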
+
+</sect3>
+
+<sect3><title>Resulting slonik scripts</title>
+
+<para> The slonik configuration files are generated in a temporary
+directory under <filename>/tmp</filename>.  They are intended to be
+used as follows (a sketch of feeding them to <xref linkend="slonik">
+appears after the list):</para>
+
+<itemizedlist>
+
+<listitem> <para><filename>preamble.slonik</filename> is a
+<quote>preamble</quote> containing connection info used by the other
+scripts.</para>
+
+<para> Verify the connection info in this one closely; you may want to
+keep it around permanently for any future maintenance you do on the
+cluster.</para></listitem>
+
+<listitem><para> <filename>create_nodes.slonik</filename></para>
+
+<para>This is the first script to run; it sets up the requested nodes
+as being &slony1; nodes, adding in some &slony1;-specific config
+tables and such.</para>
+
+<para>You can/should start slon processes any time after this step has
+run.</para></listitem>
+
+<listitem><para><filename>store_paths.slonik</filename></para>
+
+<para> This is the second script to run; it indicates how the &lslon;s
+should intercommunicate.  It assumes that all &lslon;s can talk to all
+nodes, which may not be a valid assumption in a heavily firewalled
+environment.  If that assumption is untrue, you will need to modify
+the script to fix the paths.</para></listitem>
+
+<listitem><para><filename>create_set.slonik</filename></para>
+
+<para> This sets up the replication set consisting of the whole bunch
+of tables and sequences that make up your application's database
+schema.</para>
+
+<para> When you run this script, all that happens is that triggers are
+added on the origin node (node #1) that start collecting updates;
+replication won't actually start until the
+<filename>subscribe_set</filename> scripts in the final step are
+run.</para>
+
+<para>There are two assumptions in this script that could be
+invalidated by circumstances:</para>
+
+<itemizedlist>
+     <listitem><para> That all of the tables and sequences have been
+     included.</para>
+
+     <para> This becomes invalid if new tables get added to your
+     schema and don't get added to the <envar>TABLES</envar>
+     list.</para> </listitem>
+
+     <listitem><para> That all tables have been defined with primary
+     keys.</para>
+
+     <para> Best practice is to always have and use true primary keys.
+     If you have tables that require choosing a candidate primary key
+     or that require creating a surrogate key using <xref
+     linkend="stmttableaddkey">, you will have to modify this script
+     by hand to accommodate that. </para></listitem>
+
+</itemizedlist>
+</listitem>
+
+<listitem><para> <filename>subscribe_set_2.slonik</filename> </para>
+
+  <para> And 3, and 4, and 5, if you set the number of nodes
+  higher... </para>
+
+  <para> This is the step that <quote>fires up</quote>
+  replication.</para>
+
+  <para> The assumption that the script generator makes is that all
+  the subscriber nodes will want to subscribe directly to the origin
+  node.  If you plan to have <quote>sub-clusters,</quote> perhaps
+  where there is something of a <quote>master</quote> location at each
+  data centre, you may need to revise that.</para>
+
+  <para> The slon processes really ought to be running by the time you
+  attempt running this step.  To do otherwise would be rather
+  foolish.</para> </listitem>
+</itemizedlist>
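+
+<para> Putting it together, the scripts are applied in the order just
+described.  As a sketch (check the generated files first; if they do
+not already include the preamble, it will need to be prepended to each
+of them):</para>
+
+<programlisting>
+slonik create_nodes.slonik       # register the nodes
+# ... start a slon process for each node here ...
+slonik store_paths.slonik        # set up the communication paths
+slonik create_set.slonik         # define the replication set
+slonik subscribe_set_2.slonik    # fire up replication to node 2
+</programlisting>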
+
+</sect3>
+</sect2>
 </sect1>
 <!-- Keep this comment at the end of the file
 Local variables:
Index: help.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v
retrieving revision 1.18
retrieving revision 1.19
diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.18 -r1.19
--- doc/adminguide/help.sgml
+++ doc/adminguide/help.sgml
@@ -8,6 +8,13 @@
 
 <itemizedlist>
 
+<listitem><para> Before submitting questions to any public forum as to
+why <quote>something mysterious</quote> has happened to your
+replication cluster, please run the <xref linkend="testslonystate">
+tool.  It may give some clues as to what is wrong, and the results are
+likely to be of some assistance in analyzing the problem. </para>
+</listitem>
+
 <listitem><para> <ulink url="http://slony.info/">http://slony.info/</ulink> - the official
 <quote>home</quote> of &slony1;</para></listitem>
 


