CVS User Account cvsuser
Fri Dec 10 23:54:08 PST 2004
Log Message:
-----------
Fix up formatting of inclusions in documentation set

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        adminscripts.sgml (r1.2 -> r1.3)
        cluster.sgml (r1.2 -> r1.3)
        concepts.sgml (r1.2 -> r1.3)
        defineset.sgml (r1.2 -> r1.3)
        dropthings.sgml (r1.2 -> r1.3)
        failover.sgml (r1.2 -> r1.3)
        firstdb.sgml (r1.2 -> r1.3)
        help.sgml (r1.2 -> r1.3)
        installation.sgml (r1.2 -> r1.3)
        listenpaths.sgml (r1.2 -> r1.3)
        maintenance.sgml (r1.2 -> r1.3)
        monitoring.sgml (r1.2 -> r1.3)
        reshape.sgml (r1.2 -> r1.3)
        slonconfig.sgml (r1.2 -> r1.3)
        subscribenodes.sgml (r1.2 -> r1.3)

-------------- next part --------------
Index: defineset.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/defineset.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/defineset.sgml -Ldoc/adminguide/defineset.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/defineset.sgml
+++ doc/adminguide/defineset.sgml
@@ -32,22 +32,27 @@
 primary key, and use it.
 
 <listitem><para> If the table hasn't got a primary key, but has some
-<emphasis>candidate</emphasis> primary key, that is, some index on a combination
-of fields that is UNIQUE and NOT NULL, then you can specify the key,
-as in
-
-<para><command>SET ADD TABLE (set id = 1, origin = 1, id = 42, full qualified name = 'public.this_table', key = 'this_by_that', comment='this_table has this_by_that as a candidate primary key');
-</command>
-
-<para>	  Notice that while you need to specify the namespace for the table, you must <emphasis>not</emphasis> specify the namespace for the key, as it infers the namespace from the table.
+<emphasis>candidate</emphasis> primary key, that is, some index on a
+combination of fields that is UNIQUE and NOT NULL, then you can
+specify the key, as in
+
+<programlisting>
+SET ADD TABLE (set id = 1, origin = 1, id = 42, 
+               full qualified name = 'public.this_table', 
+               key = 'this_by_that', 
+     comment='this_table has this_by_that as a candidate primary key');
+</programlisting>
+
+<para> Notice that while you need to specify the namespace for the
+table, you must <emphasis>not</emphasis> specify the namespace for the
+key, as it infers the namespace from the table.
 
 <listitem><para> If the table hasn't even got a candidate primary key,
 you can ask Slony-I to provide one.  This is done by first using
 <command/TABLE ADD KEY/ to add a column populated using a Slony-I
 sequence, and then having the <command/SET ADD TABLE/ include the
 directive <option/key=serial/, to indicate that
-<application/Slony-I/'s own column should be
-used.</para></listitem>
+<application/Slony-I/'s own column should be used.</para></listitem>
 
 </itemizedlist>
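[For the key=serial case in the last list item, a sketch of what the pair of
slonik statements might look like; the table name, IDs and comment are
hypothetical, and the exact option spellings should be checked against the
slonik reference:

    table add key (node id = 1, full qualified name = 'public.history');
    set add table (set id = 1, origin = 1, id = 43,
                   full qualified name = 'public.history',
                   key = serial,
                   comment = 'history table keyed by Slony-I serial column');
]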
 
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -25,9 +25,17 @@
 
 <para>The set of parameters for <function/add_node()/ are thus:
 
-<para> <literallayout><literal remap="tt"><inlinegraphic
-fileref="params.txt"
-format="linespecific"></literal></literallayout></para>
+<programlisting>
+my %PARAMS =   (host=> undef,		# Host name
+	   	dbname => 'template1',	# database name
+		port => 5432,		# Port number
+		user => 'postgres',	# user to connect as
+		node => undef,		# node number
+		password => undef,	# password for user
+		parent => 1,		# which node is parent to this node
+		noforward => undef	# shall this node be set up to forward results?
+);
+</programlisting>
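[As a rough illustration only: if add_node() is called with these parameters
as a hash in the usual Perl fashion (an assumption based on the defaults
listed above, not on the altperl sources), a node definition might look like:

    # Hypothetical call; only values differing from the defaults are given.
    add_node(host => 'server2', dbname => 'mydb', port => 5432,
             user => 'postgres', node => 2, parent => 1);
]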
      
 <sect2><title> Set configuration - cluster.set1, cluster.set2</title>
 
Index: help.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/help.sgml
+++ doc/adminguide/help.sgml
@@ -1,19 +1,46 @@
 <sect1 id="help"> <title/ More Slony-I Help /
 <para>If you are having problems with Slony-I, you have several options for help:
+
 <itemizedlist>
-<listitem><Para> <ulink url="http://slony.info/">http://slony.info/</ulink> - the official "home" of Slony
-<listitem><Para> Documentation on the Slony-I Site- Check the documentation on the Slony website: <ulink url="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx">Howto </ulink>
-<listitem><Para> Other Documentation - There are several articles here <ulink url="http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication"> Varlena GeneralBits </ulink> that may be helpful.
-<listitem><Para> IRC - There are usually some people on #slony on irc.freenode.net who may be able to answer some of your questions. There is also a bot named "rtfm_please" that you may want to chat with.
-<listitem><Para> Mailing lists - The answer to your problem may exist in the Slony1-general mailing list archives, or you may choose to ask your question on the Slony1-general mailing list. The mailing list archives, and instructions for joining the list may be found <ulink url="http://gborg.postgresql.org/mailman/listinfo/slony1">here. </ulink>
 
-<listitem><Para> If your Russian is much better than your English, then <ulink url="http://kirov.lug.ru/wiki/Slony"> KirovOpenSourceCommunity:  Slony</ulink>may be the place to go
+<listitem><Para> <ulink
+url="http://slony.info/">http://slony.info/</ulink> - the official
+"home" of Slony
+
+<listitem><Para> Documentation on the Slony-I Site- Check the
+documentation on the Slony website: <ulink
+url="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx">Howto
+</ulink>
+
+<listitem><Para> Other Documentation - There are several articles here
+<ulink url=
+"http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php#Replication">
+Varlena GeneralBits </ulink> that may be helpful.
+
+<listitem><Para> IRC - There are usually some people on #slony on
+irc.freenode.net who may be able to answer some of your
+questions. There is also a bot named "rtfm_please" that you may want
+to chat with.
+
+<listitem><Para> Mailing lists - The answer to your problem may exist
+in the Slony1-general mailing list archives, or you may choose to ask
+your question on the Slony1-general mailing list. The mailing list
+archives, and instructions for joining the list may be found <ulink
+url="http://gborg.postgresql.org/mailman/listinfo/slony1">here. </ulink>
+
+<listitem><Para> If your Russian is much better than your English,
+then <ulink url="http://kirov.lug.ru/wiki/Slony">
+KirovOpenSourceCommunity: Slony</ulink> may be the place to go
 </itemizedlist>
 
 <sect1><title/ Other Information Sources/
 <itemizedlist>
 
-<listitem><Para> <ulink url="http://comstar.dotgeek.org/postgres/slony-config/"> slony-config</ulink>  - A Perl tool for configuring Slony nodes using config files in an XML-based format that the tool transforms into a Slonik script
+<listitem><Para> <ulink
+url="http://comstar.dotgeek.org/postgres/slony-config/">
+slony-config</ulink> - A Perl tool for configuring Slony nodes using
+config files in an XML-based format that the tool transforms into a
+Slonik script
 
 </itemizedlist>
 
Index: cluster.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/cluster.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/cluster.sgml -Ldoc/adminguide/cluster.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/cluster.sgml
+++ doc/adminguide/cluster.sgml
@@ -1,4 +1,4 @@
-<sect1 id="cluster"> <title/Defining Slony-I Clusters/
+<sect1 id="cluster"> <title>Defining Slony-I Clusters</title>
 
 <para>A Slony-I cluster is the basic grouping of database instances in
 which replication takes place.  It consists of a set of PostgreSQL
@@ -11,12 +11,17 @@
 <para>For a simple install, it may be reasonable for the "master" to
 be node #1, and for the "slave" to be node #2.
 
-<para>Some planning should be done, in more complex cases, to ensure that the numbering system is kept sane, lest the administrators be driven insane.  The node numbers should be chosen to somehow correspond to the shape of the environment, as opposed to (say) the order in which nodes were initialized.
+<para>Some planning should be done, in more complex cases, to ensure
+that the numbering system is kept sane, lest the administrators be
+driven insane.  The node numbers should be chosen to somehow
+correspond to the shape of the environment, as opposed to (say) the
+order in which nodes were initialized.
 
 <para>It may be that in version 1.1, nodes will also have a "name"
 attribute, so that they may be given more mnemonic names.  In that
 case, the node numbers can be cryptic; it will be the node name that
 is used to organize the cluster.
+</sect1>
 
 <!-- Keep this comment at the end of the file
 Local variables:
Index: failover.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/failover.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/failover.sgml -Ldoc/adminguide/failover.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/failover.sgml
+++ doc/adminguide/failover.sgml
@@ -2,121 +2,124 @@
 
 <sect2><title/Foreword/
 
-<para>	 Slony-I is an asynchronous replication system.  Because of that, it
-	 is almost certain that at the moment the current origin of a set
-	 fails, the last transactions committed have not propagated to the
-	 subscribers yet.  They always fail under heavy load, and you know
-	 it.  Thus the goal is to prevent the main server from failing.
-	 The best way to do that is frequent maintenance.
-<para>
-	 Opening the case of a running server is not exactly what we
-	 all consider professional system maintenance.  And interestingly,
-	 those users who use replication for backup and failover
-	 purposes are usually the ones that have a very low tolerance for
-	 words like "downtime".  To meet these requirements, Slony-I has
-	 not only failover capabilities, but controlled master role transfer
-	 features too.
+<para> Slony-I is an asynchronous replication system.  Because of
+that, it is almost certain that at the moment the current origin of a
+set fails, the last transactions committed have not propagated to the
+subscribers yet.  They always fail under heavy load, and you know it.
+Thus the goal is to prevent the main server from failing.  The best
+way to do that is frequent maintenance.
+
+<para> Opening the case of a running server is not exactly what we all
+consider professional system maintenance.  And interestingly, those
+users who use replication for backup and failover purposes are usually
+the ones that have a very low tolerance for words like "downtime".  To
+meet these requirements, Slony-I has not only failover capabilities,
+but controlled master role transfer features too.
 
 <para>	 It is assumed in this document that the reader is familiar with
-	 the slonik utility and knows at least how to set up a simple
-	 2 node replication system with Slony-I.
+the slonik utility and knows at least how to set up a simple 2 node
+replication system with Slony-I.
 
-<sect2><title/ Switchover
+<sect2><title/ Switchover/
 
-<para>	 We assume a current "origin" as node1 (AKA master) with one 
-	 "subscriber" as node2 (AKA slave).  A web application on a third
-	 server is accessing the database on node1.  Both databases are
+<para> We assume a current <quote/origin/ as node1 (AKA master) with
+one <quote/subscriber/ as node2 (AKA slave).  A web application on a
+third server is accessing the database on node1.  Both databases are
 	 up and running and replication is more or less in sync.
 
 <itemizedlist>
 
-		<listitem><para>  At the time of this writing switchover to another server requires the application to reconnect to the database.  So in order to avoid	 any complications, we simply shut down the web server.  Users who use pg_pool for the applications database connections can shutdown	  the pool only.
-
+<listitem><para> At the time of this writing switchover to another
+server requires the application to reconnect to the database.  So in
+order to avoid any complications, we simply shut down the web server.
+Users who use <application/pg_pool/ for the application's database
+connections merely have to shut down the pool.
 
 		<listitem><para> A small slonik script executes the following commands:
-<para><command>
+
+<programlisting>
 	lock set (id = 1, origin = 1);
 	wait for event (origin = 1, confirmed = 2);
 	move set (id = 1, old origin = 1, new origin = 2);
 	wait for event (origin = 1, confirmed = 2);
-</command>
-
-<para>	 After these commands, the origin (master role) of data set 1
-	 is now on node2.  It is not simply transferred.  It is done
-	 in a fashion so that node1 is now a fully synchronized subscriber
-	 actively replicating the set.  So the two nodes completely switched
-	 roles.
+</programlisting>
 
-<listitem><Para>	 After reconfiguring the web application (or pgpool) to connect to	 the database on node2 instead, the web server is restarted and	 resumes normal operation.
+<para> After these commands, the origin (master role) of data set 1 is
+now on node2.  It is not simply transferred.  It is done in a fashion
+so that node1 is now a fully synchronized subscriber actively
+replicating the set.  So the two nodes completely switched roles.
+
+<listitem><Para> After reconfiguring the web application (or pgpool)
+to connect to the database on node2 instead, the web server is
+restarted and resumes normal operation.
 
 <para>	 Done in one shell script, that does the shutdown, slonik, move
-	 config files and startup all together, this entire procedure
-	 takes less than 10 seconds.
+config files and startup all together, this entire procedure takes
+less than 10 seconds.
 
 </itemizedlist>
+
 <para>	 It is now possible to simply shutdown node1 and do whatever is
 	 required.  When node1 is restarted later, it will start replicating
-	 again and eventually catch up after a while.  At this point the
-	 whole procedure is executed with exchanged node IDs and the
-	 original configuration is restored.
+again and eventually catch up after a while.  At this point the whole
+procedure is executed with exchanged node IDs and the original
+configuration is restored.
 
 <sect2><title/ Failover/
 
 <para>	 Because of the possibility of missing not-yet-replicated
-	 transactions that are committed, failover is the worst thing
-	 that can happen in a master-slave replication scenario.  If there
-	 is any possibility to bring back the failed server even if only
-	 for a few minutes, we strongly recommended that you follow the
-	 switchover procedure above.
-
-<para>	 Slony does not provide any automatic detection for failed systems.
-	 Abandoning committed transactions is a business decision that
-	 cannot be made by a database.  If someone wants to put the
-	 commands below into a script executed automatically from the
-	 network monitoring system, well ... its your data.
+transactions that are committed, failover is the worst thing that can
+happen in a master-slave replication scenario.  If there is any
+possibility to bring back the failed server even if only for a few
+minutes, we strongly recommend that you follow the switchover
+procedure above.
+
+<para> Slony does not provide any automatic detection for failed
+systems.  Abandoning committed transactions is a business decision
+that cannot be made by a database.  If someone wants to put the
+commands below into a script executed automatically from the network
+monitoring system, well ... it's your data.
+
 <itemizedlist>
 <listitem><para>
 	The slonik command
-<para><command>
+<programlisting>
 	failover (id = 1, backup node = 2);
-</command>
+</programlisting>
 
 <para>	 causes node2 to assume the ownership (origin) of all sets that
-	 have node1 as their current origin.  In the case there would be
-	 more nodes, All direct subscribers of node1 are instructed that
-	 this is happening.  Slonik would also query all direct subscribers
-	 to figure out which node has the highest replication status
-	 (latest committed transaction) for each set, and the configuration
-	 would be changed in a way that node2 first applies those last
-	 minute changes before actually allowing write access to the
-	 tables.
+have node1 as their current origin.  If there are more nodes, all
+direct subscribers of node1 are instructed that this is
+happening.  Slonik would also query all direct subscribers to figure
+out which node has the highest replication status (latest committed
+transaction) for each set, and the configuration would be changed in a
+way that node2 first applies those last minute changes before actually
+allowing write access to the tables.
 
 <para>	 In addition, all nodes that subscribed directly from node1 will
-	 now use node2 as data provider for the set.  This means that
-	 after the failover command succeeded, no node in the entire
-	 replication setup will receive anything from node1 any more.
-<listitem><para>
-
-	 Reconfigure and restart the application (or pgpool) to cause it
-	 to reconnect to node2.
+now use node2 as data provider for the set.  This means that after the
+failover command succeeded, no node in the entire replication setup
+will receive anything from node1 any more.  
+
+<listitem><para> Reconfigure and restart the application (or pgpool)
+to cause it to reconnect to node2.
+
+<listitem><para> After the failover is complete and node2 accepts
+write operations against the tables, remove all remnants of node1's
+configuration information with the slonik command
 
-<listitem><para>
-	 After the failover is complete and node2 accepts write operations
-	 against the tables, remove all remnants of node1's configuration
-	 information with the slonik command
-
-<para><command>
+<programlisting>
 	drop node (id = 1, event node = 2);
-</command>
+</programlisting>
 </itemizedlist>
 
 <sect2><title/After failover, getting back node1/
-<para>
-	 After the above failover, the data stored on node1 must be
+
+<para> After the above failover, the data stored on node1 must be
 	 considered out of sync with the rest of the nodes.  Therefore, the
-	 only way to get node1 back and transfer the master role to it is
-	 to rebuild it from scratch as a slave, let it catch up and then
-	 follow the switchover procedure.
+only way to get node1 back and transfer the master role to it is to
+rebuild it from scratch as a slave, let it catch up and then follow
+the switchover procedure.
 
 <!-- Keep this comment at the end of the file
 Local variables:
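[The switchover list above mentions doing the shutdown, the slonik commands,
the config file move and the restart in one shell script.  A minimal sketch
of such a wrapper, reusing the environment variables from the firstdb
example; the service names and config paths are hypothetical:

#!/bin/sh
# Hypothetical switchover wrapper: stop the application, move the set to
# node2, repoint the application at node2, start the application again.
/etc/init.d/apache stop

slonik <<_EOF_
cluster name = $CLUSTERNAME;
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
lock set (id = 1, origin = 1);
wait for event (origin = 1, confirmed = 2);
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2);
_EOF_

cp /etc/webapp/db.conf.node2 /etc/webapp/db.conf
/etc/init.d/apache start
]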
Index: concepts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/concepts.sgml
+++ doc/adminguide/concepts.sgml
@@ -1,56 +1,61 @@
-<sect1 id="concepts"> <title/Slony-I Concepts/
+<sect1 id="concepts"> <title>Slony-I Concepts</title>
 
 
 <para>In order to set up a set of Slony-I replicas, it is necessary to understand the following major abstractions that it uses:
 
 <itemizedlist>
-	<listitem><Para> Cluster
-	<listitem><Para> Node
-	<listitem><Para> Replication Set
-	<listitem><Para> Provider and Subscriber
+	<listitem><para> Cluster
+	<listitem><para> Node
+	<listitem><para> Replication Set
+	<listitem><para> Provider and Subscriber
 </itemizedlist>
 
-<sect2><title/Cluster/
+<sect2><title>Cluster</title>
 
 <para>In Slony-I terms, a Cluster is a named set of PostgreSQL database instances; replication takes place between those databases.
 
 <para>The cluster name is specified in each and every Slonik script via the directive:
-<command>
+<programlisting>
 cluster name = 'something';
-</command>
+</programlisting>
 
-<para>If the Cluster name is 'something', then Slony-I will create, in each database instance in the cluster, the namespace/schema '_something'.
+<para>If the Cluster name is <envar/something/, then Slony-I will
+create, in each database instance in the cluster, the namespace/schema
+<envar/_something/.
 
-<sect2><title/ Node/
+<sect2><title> Node</title>
 
 <para>A Slony-I Node is a named PostgreSQL database that will be participating in replication.  
 
 <para>It is defined, near the beginning of each Slonik script, using the directive:
-<command>
+<programlisting>
  NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';
-</command>
+</programlisting>
 
-<para>The CONNINFO information indicates a string argument that will ultimately be passed to the <function/PQconnectdb()/ libpq function. 
+<para>The CONNINFO information indicates a string argument that will ultimately be passed to the <function>PQconnectdb()</function> libpq function. 
 
 <para>Thus, a Slony-I cluster consists of:
 <itemizedlist>
 	<listitem><para> A cluster name
-	<listitem><Para> A set of Slony-I nodes, each of which has a namespace based on that cluster name
+	<listitem><para> A set of Slony-I nodes, each of which has a namespace based on that cluster name
 </itemizedlist>
 
-<sect2><title/ Replication Set/
+<sect2><title> Replication Set</title>
 
-<para>A replication set is defined as a set of tables and sequences that are to be replicated between nodes in a Slony-I cluster.
+<para>A replication set is defined as a set of tables and sequences
+that are to be replicated between nodes in a Slony-I cluster.
 
-<para>You may have several sets, and the "flow" of replication does not need to be identical between those sets.
+<para>You may have several sets, and the <quote/flow/ of replication does
+not need to be identical between those sets.
 
-<sect2><title/ Provider and Subscriber/
-
-<para>Each replication set has some "master" node, which winds up
-being the <emphasis/only/ place where user applications are permitted
-to modify data in the tables that are being replicated.  That "master"
-may be considered the master "provider node;" it is the main place
-from which data is provided.
+<sect2><title> Provider and Subscriber</title>
+
+<para>Each replication set has some <quote>master</quote> node, which
+winds up being the <emphasis>only</emphasis> place where user
+applications are permitted to modify data in the tables that are being
+replicated.  That <quote>master</quote> may be considered the
+originating <quote>provider node;</quote> it is the main place from
+which data is provided.
 
 <para>Other nodes in the cluster will subscribe to the replication
 set, indicating that they want to receive the data.
@@ -60,6 +65,8 @@
 that is subscribed to the "master" may also behave as a "provider" to
 other nodes in the cluster.
 
+</sect1>
+
 <!-- Keep this comment at the end of the file
 Local variables:
 mode:sgml
Index: monitoring.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/monitoring.sgml
+++ doc/adminguide/monitoring.sgml
@@ -8,7 +8,7 @@
 
 <para>Here are some typical entries that you will probably run into in your logs:
 
-<para><command>
+<screen>
 CONFIG main: local node id = 1
 CONFIG main: loading current cluster configuration
 CONFIG storeNode: no_id=3 no_comment='Node 3'
@@ -16,19 +16,19 @@
 CONFIG storeListen: li_origin=3 li_receiver=1 li_provider=3
 CONFIG storeSet: set_id=1 set_origin=1 set_comment='Set 1'
 CONFIG main: configuration complete - starting threads
-</command>
+</screen>
 
 <sect2><title/DEBUG Notices/
 
 <para>Debug notices are always prefaced by the name of the thread that the notice originates from. You will see messages from the following threads:
 
-<para><command>
+<screen>
 localListenThread: This is the local thread that listens for events on the local node.
 remoteWorkerThread_X: The thread processing remote events.
 remoteListenThread_X: Listens for events on a remote node database.
 cleanupThread: Takes care of things like vacuuming, cleaning out the confirm and event tables, and deleting logs.
 syncThread: Generates sync events.
-</command>
+</screen>
 
 <para> WriteMe: I can't decide the format for the rest of this. I
 think maybe there should be a "how it works" page, explaining more
Index: maintenance.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/maintenance.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/maintenance.sgml -Ldoc/adminguide/maintenance.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/maintenance.sgml
+++ doc/adminguide/maintenance.sgml
@@ -4,49 +4,79 @@
 
 <itemizedlist>
 
-	<Listitem><para> Deletes old data from various tables in the
-	Slony-I cluster's namespace, notably entries in sl_log_1,
-	sl_log_2 (not yet used), and sl_seqlog.
-
-	<listitem><Para> Vacuum certain tables used by Slony-I.  As of
-	1.0.5, this includes pg_listener; in earlier versions, you
-	must vacuum that table heavily, otherwise you'll find
-	replication slowing down because Slony-I raises plenty of
-	events, which leads to that table having plenty of dead
-	tuples.
-
-	<para> In some versions (1.1, for sure; possibly 1.0.5) there is the option of not bothering to vacuum any of these tables if you are using something like pg_autovacuum to handle vacuuming of these tables.  Unfortunately, it has been quite possible for pg_autovacuum to not vacuum quite frequently enough, so you probably want to use the internal vacuums.  Vacuuming pg_listener "too often" isn't nearly as hazardous as not vacuuming it frequently enough.
-
-	<para>Unfortunately, if you have long-running transactions, vacuums cannot clear out dead tuples that are newer than the eldest transaction that is still running.  This will most notably lead to pg_listener growing large and will slow replication.
+<Listitem><para> Deletes old data from various tables in the Slony-I
+cluster's namespace, notably entries in sl_log_1, sl_log_2 (not yet
+used), and sl_seqlog.
+
+<listitem><Para> Vacuum certain tables used by Slony-I.  As of 1.0.5,
+this includes pg_listener; in earlier versions, you must vacuum that
+table heavily, otherwise you'll find replication slowing down because
+Slony-I raises plenty of events, which leads to that table having
+plenty of dead tuples.
+
+<para> In some versions (1.1, for sure; possibly 1.0.5) there is the
+option of not bothering to vacuum any of these tables if you are using
+something like pg_autovacuum to handle vacuuming of these tables.
+Unfortunately, it has been quite possible for pg_autovacuum to not
+vacuum quite frequently enough, so you probably want to use the
+internal vacuums.  Vacuuming pg_listener "too often" isn't nearly as
+hazardous as not vacuuming it frequently enough.
+
+<para>Unfortunately, if you have long-running transactions, vacuums
+cannot clear out dead tuples that are newer than the eldest
+transaction that is still running.  This will most notably lead to
+pg_listener growing large and will slow replication.
 
 </itemizedlist>
+
 <sect2><title/ Watchdogs: Keeping Slons Running/
 
-<para>There are a couple of "watchdog" scripts available that monitor things, and restart the slon processes should they happen to die for some reason, such as a network "glitch" that causes loss of connectivity.
+<para>There are a couple of "watchdog" scripts available that monitor
+things, and restart the slon processes should they happen to die for
+some reason, such as a network "glitch" that causes loss of
+connectivity.
 
 <para>You might want to run them...
 
 <sect2><title/Alternative to Watchdog: generate_syncs.sh/
 
-<para>A new script for Slony-I 1.1 is "generate_syncs.sh", which addresses the following kind of situation.
-
-<para>Supposing you have some possibly-flakey slon daemon that might not run all the time, you might return from a weekend away only to discover the following situation...
-
-<para>On Friday night, something went "bump" and while the database came back up, none of the slon daemons survived.  Your online application then saw nearly three days worth of heavy transactions.
+<para>A new script for Slony-I 1.1 is "generate_syncs.sh", which
+addresses the following kind of situation.
 
-<para>When you restart slon on Monday, it hasn't done a SYNC on the master since Friday, so that the next "SYNC set" comprises all of the updates between Friday and Monday.  Yuck.
+<para>Supposing you have some possibly-flakey slon daemon that might
+not run all the time, you might return from a weekend away only to
+discover the following situation...
+
+<para>On Friday night, something went "bump" and while the database
+came back up, none of the slon daemons survived.  Your online
+application then saw nearly three days worth of heavy transactions.
+
+<para>When you restart slon on Monday, it hasn't done a SYNC on the
+master since Friday, so that the next "SYNC set" comprises all of the
+updates between Friday and Monday.  Yuck.
+
+<para>If you run generate_syncs.sh as a cron job every 20 minutes, it
+will force in a periodic SYNC on the "master" server, which means that
+between Friday and Monday, the numerous updates are split into more
+than 100 syncs, which can be applied incrementally, making the cleanup
+a lot less unpleasant.
 
-<para>If you run generate_syncs.sh as a cron job every 20 minutes, it will force in a periodic SYNC on the "master" server, which means that between Friday and Monday, the numerous updates are split into more than 100 syncs, which can be applied incrementally, making the cleanup a lot less unpleasant.
-
-<para>Note that if SYNCs <emphasis/are/ running regularly, this script won't bother doing anything.
+<para>Note that if SYNCs <emphasis/are/ running regularly, this script
+won't bother doing anything.
 
 <sect2><title/ Log Files/
 
-<para>Slon daemons generate some more-or-less verbose log files, depending on what debugging level is turned on.  You might assortedly wish to:
+<para>Slon daemons generate some more-or-less verbose log files,
+depending on what debugging level is turned on.  You might assortedly
+wish to:
+
 <itemizedlist>
-	<listitem><Para> Use a log rotator like Apache rotatelogs to have a sequence of log files so that no one of them gets too big;
+
+<listitem><Para> Use a log rotator like Apache rotatelogs to have a
+sequence of log files so that no one of them gets too big;
 
 	<listitem><Para> Purge out old log files, periodically.
+
 </itemizedlist>
 
 <!-- Keep this comment at the end of the file
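[If generate_syncs.sh is run from cron every 20 minutes, as suggested above,
the crontab entry might look something like the following; the install path
and log location are assumptions:

    # min      hour dom mon dow  command
    0,20,40    *    *   *   *    /usr/local/slony1/tools/generate_syncs.sh >> /var/log/slony/generate_syncs.log 2>&1
]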
Index: installation.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/installation.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/installation.sgml -Ldoc/adminguide/installation.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/installation.sgml
+++ doc/adminguide/installation.sgml
@@ -2,10 +2,10 @@
 
 <para>You should have obtained the Slony-I source from the previous step. Unpack it.</para>
 
-<para><command>
+<screen>
 gunzip slony.tar.gz;
 tar xf slony.tar
-</command></para>
+</screen>
 
 <para> This will create a directory Slony-I under the current
 directory with the Slony-I sources.  Head into that that directory for
@@ -14,8 +14,10 @@
 <sect2><title> Short Version</title>
 
 <para>
-<command>./configure --with-pgsourcetree=/whereever/the/source/is </command></para>
-<para> <command> gmake all; gmake install </command></para></sect2>
+<screen>
+./configure --with-pgsourcetree=/whereever/the/source/is 
+gmake all; gmake install 
+</screen>
 
 <sect2><title> Configuration</title>
 
@@ -26,9 +28,9 @@
 
 <sect2><title> Example</title>
 
-<para> <command>
+<screen>
 ./configure --with-pgsourcetree=/usr/local/src/postgresql-7.4.3
-</command></para>
+</screen>
 
 <para>This script will run a number of tests to guess values for
 various dependent variables and try to detect some quirks of your
@@ -42,15 +44,18 @@
 
 <para>To start the build process, type
 
-<command>
+<screen>
 gmake all
-</command></para>
+</screen></para>
 
-<para> Be sure to use GNU make; on BSD systems, it is called gmake; on Linux, GNU make is typically the native "make", so the name of the command you type in may vary somewhat. The build may take anywhere from a few seconds to 2 minutes depending on how fast your hardware is at compiling things.  The last line displayed should be</para>
+<para> Be sure to use GNU make; on BSD systems, it is called gmake; on
+Linux, GNU make is typically the native "make", so the name of the
+command you type in may vary somewhat. The build may take anywhere
+from a few seconds to 2 minutes depending on how fast your hardware is
+at compiling things.  The last line displayed should be</para>
 
-<para> <command>
-All of Slony-I is successfully made.  Ready to install.
-</command></para></sect2>
+<para> <command> All of Slony-I is successfully made.  Ready to
+install.  </command></para></sect2>
 
 <sect2><title> Installing Slony-I</title>
 
Index: slonconfig.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonconfig.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/slonconfig.sgml -Ldoc/adminguide/slonconfig.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/slonconfig.sgml
+++ doc/adminguide/slonconfig.sgml
@@ -2,10 +2,9 @@
 
 <para>Slon parameters:
 
-<para> usage: slon [options] clustername conninfo
+<screen>
+usage: slon [options] clustername conninfo
 
-<para>
-<command>
 Options:
 -d debuglevel		 verbosity of logging (1..8)
 -s milliseconds	  SYNC check interval (default 10000)
@@ -14,7 +13,7 @@
 -c num				  how often to vacuum in cleanup cycles
 -p filename			slon pid file
 -f filename			slon configuration file
-</command>
+</screen>
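[Putting a few of those options together, a typical invocation might look
like this; the pid file path is a placeholder, and the connection details
reuse the variables from the firstdb example:

    # debug level 2, SYNC check every 10000 ms, explicit pid file
    slon -d 2 -s 10000 -p /var/run/slon.node1.pid $CLUSTERNAME \
         "dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER"
]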
 
 <itemizedlist>
 
Index: firstdb.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/firstdb.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/firstdb.sgml -Ldoc/adminguide/firstdb.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/firstdb.sgml
+++ doc/adminguide/firstdb.sgml
@@ -60,28 +60,28 @@
 
 <sect2><title/ Preparing the databases/
 
-<para><command>
+<programlisting>
 createdb -O $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
 createdb -O $PGBENCHUSER -h $SLAVEHOST $SLAVEDBNAME
 pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
-</command>
+</programlisting>
 
 <para>Because Slony-I depends on the databases having the pl/pgSQL procedural
 language installed, we better install it now.  It is possible that you have
 installed pl/pgSQL into the template1 database in which case you can skip this
 step because it's already installed into the $MASTERDBNAME.
 
-<command>
+<programlisting>
 createlang plpgsql -h $MASTERHOST $MASTERDBNAME
-</command>
+</programlisting>
 
 <para>Slony-I does not yet automatically copy table definitions from a master when a
 slave subscribes to it, so we need to import this data.  We do this with
 pg_dump.
 
-<para><command>
+<programlisting>
 pg_dump -s -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME | psql -U $REPLICATIONUSER -h $SLAVEHOST $SLAVEDBNAME
-</command>
+</programlisting>
 
 <para>To illustrate how Slony-I allows for on the fly replication subscription, lets
 start up pgbench.  If you run the pgbench application in the foreground of a
@@ -91,9 +91,9 @@
 
 <para>The typical command to run pgbench would look like:
 
-<para><command>
+<programlisting>
 pgbench -s 1 -c 5 -t 1000 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME
-</command>
+</programlisting>
 
 <para>This will run pgbench with 5 concurrent clients each processing 1000
 transactions against the pgbench database running on localhost as the pgbench
@@ -107,7 +107,7 @@
 databases.  The script to create the initial configuration for the simple
 master-slave setup of our pgbench database looks like this:
 
-<para><command>
+<programlisting>
 #!/bin/sh
 
 slonik <<_EOF_
@@ -166,36 +166,37 @@
 	store listen (origin=1, provider = 1, receiver =2);
 	store listen (origin=2, provider = 2, receiver =1);
 _EOF_
-</command>
-
+</programlisting>
 
 <para>Is the pgbench still running?  If not start it again.
 
-<para>At this point we have 2 databases that are fully prepared.  One is the master
-database in which bgbench is busy accessing and changing rows.  It's now time
-to start the replication daemons.
+<para>At this point we have 2 databases that are fully prepared.  One
+is the master database in which pgbench is busy accessing and changing
+rows.  It's now time to start the replication daemons.
 
 <para>On $MASTERHOST the command to start the replication engine is
 
-<para><command>
+<programlisting>
 slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST"
-</command>
+</programlisting>
 
 <para>Likewise we start the replication system on node 2 (the slave)
 
-<para><command>
+<programlisting>
 slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST"
-</command>
-
-<para>Even though we have the slon running on both the master and slave and they are
-both spitting out diagnostics and other messages, we aren't replicating any
-data yet.  The notices you are seeing is the synchronization of cluster
-configurations between the 2 slon processes.
+</programlisting>
 
-<para>To start replicating the 4 pgbench tables (set 1) from the master (node id 1)
-the the slave (node id 2), execute the following script.
+<para>Even though we have the <application/slon/ running on both the
+master and slave, and they are both spitting out diagnostics and other
+messages, we aren't replicating any data yet.  The notices you are
+seeing reflect the synchronization of cluster configurations between the 2
+<application/slon/ processes.
+
+<para>To start replicating the 4 pgbench tables (set 1) from the
+master (node id 1) to the slave (node id 2), execute the following
+script.
 
-<para><command>
+<programlisting>
 #!/bin/sh
 slonik <<_EOF_
 	 # ----
@@ -217,29 +218,31 @@
 	 # ----
 	 subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
 _EOF_
-</command>
+</programlisting>
 
-<para>Any second here, the replication daemon on $SLAVEHOST will start to copy the
-current content of all 4 replicated tables.  While doing so, of course, the
-pgbench application will continue to modify the database.  When the copy
-process is finished, the replication daemon on $SLAVEHOST will start to catch
-up by applying the accumulated replication log.  It will do this in little
-steps, 10 seconds worth of application work at a time.  Depending on the
-performance of the two systems involved, the sizing of the two databases, the
-actual transaction load and how well the two databases are tuned and
-maintained, this catchup process can be a matter of minutes, hours, or
-eons.
-
-<para>You have now successfully set up your first basic master/slave replication
-system, and the 2 databases once the slave has caught up contain identical
-data.  That's the theory.  In practice, it's good to check that the datasets
+<para>Any second now, the replication daemon on $SLAVEHOST will start
+to copy the current content of all 4 replicated tables.  While doing
+so, of course, the pgbench application will continue to modify the
+database.  When the copy process is finished, the replication daemon
+on <envar/$SLAVEHOST/ will start to catch up by applying the
+accumulated replication log.  It will do this in little steps, 10
+seconds worth of application work at a time.  Depending on the
+performance of the two systems involved, the sizing of the two
+databases, the actual transaction load and how well the two databases
+are tuned and maintained, this catchup process can be a matter of
+minutes, hours, or eons.
+
+<para>You have now successfully set up your first basic master/slave
+replication system, and the 2 databases should, once the slave has
+caught up, contain identical data.  That's the theory, at least.  In
+practice, it's good to build confidence by verifying that the datasets
 are in fact the same.
 
 <para>The following script will create ordered dumps of the 2 databases and compare
 them.  Make sure that pgbench has completed it's testing, and that your slon
 sessions have caught up.
 
-<para><command>
+<programlisting>
 #!/bin/sh
 echo -n "**** comparing sample1 ... "
 psql -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME >dump.tmp.1.$$ <<_EOF_
@@ -272,9 +275,11 @@
 else
 	 echo "FAILED - see $CLUSTERNAME.diff for database differences"
 fi
-</command>
+</programlisting>
 
-<para>Note that there is somewhat more sophisticated documentation of the process in the Slony-I source code tree in a file called slony-I-basic-mstr-slv.txt.
+<para>Note that there is somewhat more sophisticated documentation of
+the process in the Slony-I source code tree in a file called
+slony-I-basic-mstr-slv.txt.
 
 <para>If this script returns "FAILED" please contact the developers at
 <ulink url="http://slony.org/"> http://slony.org/</ulink>
Index: subscribenodes.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/subscribenodes.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/subscribenodes.sgml -Ldoc/adminguide/subscribenodes.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/subscribenodes.sgml
+++ doc/adminguide/subscribenodes.sgml
@@ -1,10 +1,16 @@
 <sect1 id="subscribenodes"> <title/ Subscribing Nodes/
 
-<para>Before you subscribe a node to a set, be sure that you have slons running for both the master and the new subscribing node. If you don't have slons running, nothing will happen, and you'll beat your head against a wall trying to figure out what's going on.
+<para>Before you subscribe a node to a set, be sure that you have
+<application/slon/s running for both the master and the new
+subscribing node. If you don't have slons running, nothing will
+happen, and you'll beat your head against a wall trying to figure out
+what is going on.
+
+<para>Subscribing a node to a set is done by issuing the slonik
+command <command/subscribe set/. It may seem tempting to try to
+subscribe several nodes to a set within a single try block like this:
 
-<para>Subscribing a node to a set is done by issuing the slonik command "subscribe set". It may seem tempting to try to subscribe several nodes to a set within the same try block like this:
-
-<para> <command>
+<programlisting>
 try {
 		  echo 'Subscribing sets';
 		  subscribe set (id = 1, provider=1, receiver=2, forward=yes);
@@ -14,42 +20,57 @@
 		  echo 'Could not subscribe the sets!';
 		  exit -1;
 }
-</command>
+</programlisting>
+
 
-<para> You are just asking for trouble if you try to subscribe sets like that. The proper procedure is to subscribe one node at a time, and to check the logs and databases before you move onto subscribing the next node to the set. It is also worth noting that success within the above slonik try block does not imply that nodes 2, 3, and 4 have all been successfully subscribed. It merely guarantees that the slonik commands were received by the slon running on the master node.
+<para> You are just asking for trouble if you try to subscribe sets in
+that fashion. The proper procedure is to subscribe one node at a time,
+and to check the logs and databases before you move onto subscribing
+the next node to the set. It is also worth noting that the
+<quote/success/ within the above slonik try block does not imply that
+nodes 2, 3, and 4 have all been successfully subscribed. It merely
+indicates that the slonik commands were successfully received by the
+<application/slon/ running on the master node.
 
 <para>A typical sort of problem that will arise is that a cascaded
 subscriber is looking for a provider that is not ready yet.  In that
 failure case, that subscriber node will <emphasis/never/ pick up the
-subscriber.  It will get "stuck" waiting for a past event to take
-place.  The other nodes will be convinced that it is successfully
+subscription.  It will get <quote/stuck/ waiting for a past event to
+take place.  The other nodes will be convinced that it is successfully
 subscribed (because no error report ever made it back to them); a
-request to unsubscribe the node will be "blocked" because the node is
-stuck on the attempt to subscribe it.
+request to unsubscribe the node will be <quote/blocked/ because the
+node is stuck on the attempt to subscribe it.
 
-<para>When you subscribe a node to a set, you should see something like this in your slony logs for the master node:
+<para>When you subscribe a node to a set, you should see something
+like this in your slony logs for the master node:
 
-<para> <command>
+<screen>
 DEBUG2 remoteWorkerThread_3: Received event 3,1059 SUBSCRIBE_SET
-</command>
+</screen>
 
 <para>You should also start seeing log entries like this in the slony logs for the subscribing node:
 
-<para><command>
+<screen>
 DEBUG2 remoteWorkerThread_1: copy table public.my_table
-</command>
+</screen>
 
-<para>It may take some time for larger tables to be copied from the master node to the new subscriber. If you check the pg_stat_activity table on the master node, you should see a query that is copying the table to stdout.
+<para>It may take some time for larger tables to be copied from the
+master node to the new subscriber. If you check the pg_stat_activity
+table on the master node, you should see a query that is copying the
+table to stdout.
 
-<para>The table sl_subscribe on both the master, and the new subscriber should have entries for the new subscription:
+<para>The table <envar/sl_subscribe/ on both the master, and the new
+subscriber should contain entries for the new subscription:
 
-<para><Command>
+<screen>
  sub_set | sub_provider | sub_receiver | sub_forward | sub_active
 ---------+--------------+--------------+-------------+------------
        1 |            1 |            2 | t           | t
-</command>
+</screen>
 
-<para>A final test is to insert a row into a table on the master node, and to see if the row is copied to the new subscriber. 
+<para>A final test is to insert a row into one of the replicated
+tables on the master node, and verify that the row is copied to the
+new subscriber.
 
 <!-- Keep this comment at the end of the file
 Local variables:
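[One way to check for the copy query mentioned above is to query
pg_stat_activity from a shell on the master.  This assumes
stats_command_string is enabled and uses the column names of PostgreSQL
releases of this era, which may differ in other versions:

    # look for the COPY that slon runs during the initial subscription
    psql -U $REPLICATIONUSER -h $MASTERHOST $MASTERDBNAME -c \
      "select procpid, current_query from pg_stat_activity where current_query ilike '%copy%';"
]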
Index: listenpaths.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/listenpaths.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/listenpaths.sgml -Ldoc/adminguide/listenpaths.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/listenpaths.sgml
+++ doc/adminguide/listenpaths.sgml
@@ -1,28 +1,60 @@
 <sect1 id="listenpaths"> <title/ Slony Listen Paths/
 
-<para>If you have more than two or three nodes, and any degree of usage of cascaded subscribers (_e.g._ - subscribers that are subscribing through a subscriber node), you will have to be fairly careful about the configuration of "listen paths" via the Slonik STORE LISTEN and DROP LISTEN statements that control the contents of the table sl_listen.
-
-<para>The "listener" entries in this table control where each node expects to listen in order to get events propagated from other nodes.  You might think that nodes only need to listen to the "parent" from whom they are getting updates, but in reality, they need to be able to receive messages from _all_ nodes in order to be able to conclude that SYNCs have been received everywhere, and that, therefore, entries in sl_log_1 and sl_log_2 have been applied everywhere, and can therefore be purged.
+<para>If you have more than two or three nodes, and any degree of
+usage of cascaded subscribers (_e.g._ - subscribers that are
+subscribing through a subscriber node), you will have to be fairly
+careful about the configuration of "listen paths" via the Slonik STORE
+LISTEN and DROP LISTEN statements that control the contents of the
+table sl_listen.
+
+<para>The "listener" entries in this table control where each node
+expects to listen in order to get events propagated from other nodes.
+You might think that nodes only need to listen to the "parent" from
+whom they are getting updates, but in reality, they need to be able to
+receive messages from _all_ nodes in order to be able to conclude that
+SYNCs have been received everywhere, and that, therefore, entries in
+sl_log_1 and sl_log_2 have been applied everywhere, and can therefore
+be purged.
 
 <sect2><title/ How Listening Can Break/
 
-<para>On one occasion, I had a need to drop a subscriber node (#2) and recreate it.  That node was the data provider for another subscriber (#3) that was, in effect, a "cascaded slave."  Dropping the subscriber node initially didn't work, as slonik informed me that there was a dependant node.  I repointed the dependant node to the "master" node for the subscription set, which, for a while, replicated without difficulties.
-
-<para>I then dropped the subscription on "node 2," and started resubscribing it.  That raised the Slony-I SET_SUBSCRIPTION event, which started copying tables.  At that point in time, events stopped propagating to "node 3," and while it was in perfectly OK shape, no events were making it to it.
-
-<para>The problem was that node #3 was expecting to receive events from node #2, which was busy processing the SET_SUBSCRIPTION event, and was not passing anything else on.
-
-<para>We dropped the listener rules that caused node #3 to listen to node 2, replacing them with rules where it expected its events to come from node  #1 (the "master" provider node for the replication set).  At that moment, "as if by magic," node #3 started replicating again, as it discovered a place to get SYNC events.
+<para>On one occasion, I had a need to drop a subscriber node (#2) and
+recreate it.  That node was the data provider for another subscriber
+(#3) that was, in effect, a "cascaded slave."  Dropping the subscriber
+node initially didn't work, as slonik informed me that there was a
+dependent node.  I repointed the dependent node to the "master" node
+for the subscription set, which, for a while, replicated without
+difficulties.
+
+<para>I then dropped the subscription on "node 2," and started
+resubscribing it.  That raised the Slony-I <command/SET_SUBSCRIPTION/
+event, which started copying tables.  At that point in time, events
+stopped propagating to "node 3," and while it was in perfectly OK
+shape, no events were making it to it.
+
+<para>The problem was that node #3 was expecting to receive events
+from node #2, which was busy processing the <command/SET_SUBSCRIPTION/ event,
+and was not passing anything else on.
+
+<para>We dropped the listener rules that caused node #3 to listen to
+node 2, replacing them with rules where it expected its events to come
+from node #1 (the "master" provider node for the replication set).  At
+that moment, "as if by magic," node #3 started replicating again, as
+it discovered a place to get <command/SYNC/ events.
 
 <sect2><title/How The Listen Configuration Should Look/
 
-<para>The simple cases tend to be simple to cope with.  We'll look at a fairly complex set of nodes.
+<para>The simple cases tend to be simple to cope with.  We'll look at
+a fairly complex set of nodes.
 
-<para>Consider a set of nodes, 1 thru 6, where 1 is the "master," where 2-4 subscribe directly to the master, and where 5 subscribes to 2, and 6 subscribes to 5.
+<para>Consider a set of nodes, 1 thru 6, where 1 is the "master,"
+where 2-4 subscribe directly to the master, and where 5 subscribes to
+2, and 6 subscribes to 5.
 
-<para>Here is a "listener network" that indicates where each node should listen for messages coming from each other node:
+<para>Here is a "listener network" that indicates where each node
+should listen for messages coming from each other node:
 
-<para><command>
+<screen>
 		 1|	2|	3|	4|	5|	6|
 --------------------------------------------
 	1	0	 2	 3	 4	 2	 2 
@@ -31,15 +63,19 @@
 	4	1	 1	 1	 0	 1	 1 
 	5	2	 2	 2	 2	 0	 6 
 	6	5	 5	 5	 5	 5	 0 
-</command>
+</screen>
 
-<para>Row 2 indicates all of the listen rules for node 2; it gets events for nodes 1, 3, and 4 throw node 1, and gets events for nodes 5 and 6 from node 5.
+<para>Row 2 indicates all of the listen rules for node 2; it gets
+events for nodes 1, 3, and 4 through node 1, and gets events for nodes 5
+and 6 from node 5.
 
-<para>The row of 5's at the bottom, for node 6, indicate that node 6 listens to node 5 to get events from nodes 1-5.
+<para>The row of 5's at the bottom, for node 6, indicates that node 6
+listens to node 5 to get events from nodes 1-5.
 
-<para>The set of slonik SET LISTEN statements to express this "listener network" are as follows:
+<para>The set of slonik <command/STORE LISTEN/ statements to express
+this <quote/listener network/ are as follows:
 
-<para><command>
+<programlisting>
 		store listen (origin = 1, receiver = 2, provider = 1);
 		store listen (origin = 1, receiver = 3, provider = 1);
 		store listen (origin = 1, receiver = 4, provider = 1);
@@ -70,40 +106,86 @@
 		store listen (origin = 6, receiver = 3, provider = 1);
 		store listen (origin = 6, receiver = 4, provider = 1);
 		store listen (origin = 6, receiver = 5, provider = 6);
-</command>
+</programlisting>
 
 <para>How we read these listen statements is thus...
 
-<para>When on the "receiver" node, look to the "provider" node to provide events coming from the "origin" node.
+<para>When on the "receiver" node, look to the "provider" node to
+provide events coming from the "origin" node.
 
-<para>The tool "init_cluster.pl" in the "altperl" scripts produces optimized listener networks in both the tabular form shown above as well as in the form of Slonik statements.
+<para>The tool <filename/init_cluster.pl/ in the <filename/altperl/
+scripts produces optimized listener networks in both the tabular form
+shown above as well as in the form of Slonik statements.
 
 <para>There are three "thorns" in this set of roses:
 <itemizedlist>
-<listitem><para> If you change the shape of the node set, so that the nodes subscribe differently to things, you need to drop sl_listen entries and create new ones to indicate the new preferred paths between nodes.  There is no automated way at this point to do this "reshaping."
-
-<listitem><para> If you _don't_ change the sl_listen entries, events will likely continue to propagate so long as all of the nodes continue to run well.  The problem will only be noticed when a node is taken down, "orphaning" any nodes that are listening through it.
 
-<listitem><para> You might have multiple replication sets that have _different_ shapes for their respective trees of subscribers.  There won't be a single "best" listener configuration in that case.
+<listitem><para> If you change the shape of the node set, so that the
+nodes subscribe differently to things, you need to drop sl_listen
+entries and create new ones to indicate the new preferred paths
+between nodes.  There is no automated way at this point to do this
+"reshaping."
+
+<listitem><para> If you <emphasis/don't/ change the sl_listen entries,
+events will likely continue to propagate so long as all of the nodes
+continue to run well.  The problem will only be noticed when a node is
+taken down, "orphaning" any nodes that are listening through it.
+
+<listitem><para> You might have multiple replication sets that have
+<emphasis/different/ shapes for their respective trees of subscribers.  There
+won't be a single "best" listener configuration in that case.
+
+<listitem><para> In order for there to be an sl_listen path, there
+<emphasis/must/ be a series of sl_path entries connecting the origin
+to the receiver.  This means that if the contents of sl_path do not
+express a "connected" network of nodes, then some nodes will not be
+reachable.  In practice, this typically happens when you have two
+sets of nodes in different subnets, with only a couple of "firewall"
+nodes able to talk between the subnets.  Cut out those nodes and the
+subnets stop communicating.
 
-	<listitem><para> In order for there to be an sl_listen path, there _must_ be a series of sl_path entries connecting the origin to the receiver.  This means that if the contents of sl_path do not express a "connected" network of nodes, then some nodes will not be reachable.  This would typically happen, in practice, when you have two sets of nodes, one in one subnet, and another in another subnet, where there are only a couple of "firewall" nodes that can talk between the subnets.  Cut out those nodes and the subnets stop communicating.
 </itemizedlist>
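+
+<para> To make the first of these concrete, here is a minimal sketch
+of such a manual "reshaping," using the example above where node 4
+listens directly to node 1 and supposing, hypothetically, that it
+should now listen via node 3 instead.  The usual slonik preamble
+(cluster name and admin conninfo lines) is assumed:
+
+<programlisting>
+# hypothetical reshaping: node 4 now gets node 1's events via node 3
+drop listen  (origin = 1, receiver = 4, provider = 1);
+store listen (origin = 1, receiver = 4, provider = 3);
+</programlisting>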
 
 <sect2><title/Open Question/
 
-<para>I am not certain what happens if you have multiple listen path entries for one path, that is, if you set up entries allowing a node to listen to multiple receivers to get events from a particular origin.  Further commentary on that would be appreciated!
+<para>I am not certain what happens if you have multiple listen path
+entries for one path, that is, if you set up entries allowing a node
+to listen to multiple providers to get events from a particular
+origin.  Further commentary on that would be appreciated!
+
+<note><para> Actually, I do have answers to this; the remainder of
+this document should be revised to reflect the fact that Slony-I 1.1
+will include a "heuristic" to generate the listener paths
+automatically. </note>
 
 <sect2><title/ Generating listener entries via heuristics/
 
-<para>It ought to be possible to generate sl_listen entries dynamically, based on the following heuristics.  Hopefully this will take place in version 1.1, eliminating the need to configure this by hand.
+<para>It ought to be possible to generate sl_listen entries
+dynamically, based on the following heuristics.  Hopefully this will
+take place in version 1.1, eliminating the need to configure this by
+hand.
+
+<para>Configuration will (tentatively) be controlled based on two data
+sources:
 
-<para>Configuration will (tentatively) be controlled based on two data sources:
 <itemizedlist>
-<listitem><para> sl_subscribe entries are the first, most vital control as to what listens to what; we know there must be a "listen" entry for a subscriber node to listen to its provider for events from the provider, and there should be direct "listening" taking place between subscriber and provider.
 
-<listitem><para> sl_path entries are the second indicator; if sl_subscribe has not already indicated "how to listen," then a node may listen directly to the event's origin if there is a suitable sl_path entry
+<listitem><para> sl_subscribe entries are the first, most vital
+control as to what listens to what; we know there must be a "listen"
+entry for a subscriber node to listen to its provider for events from
+the provider, and there should be direct "listening" taking place
+between subscriber and provider (see the sketch after this list).
+
+<listitem><para> sl_path entries are the second indicator; if
+sl_subscribe has not already indicated "how to listen," then a node
+may listen directly to the event's origin if there is a suitable
+sl_path entry.
+
+<listitem><para> If there is no guidance thus far based on the above
+data sources, then nodes can listen indirectly if there is an sl_path
+entry that points to a suitable sl_listen entry...
 
-<listitem><para> If there is no guidance thus far based on the above data sources, then nodes can listen indirectly if there is an sl_path entry that points to a suitable sl_listen entry...
 </itemizedlist>
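+
+<para> Purely as an illustration of the first heuristic, and not the
+mechanism planned for 1.1, the "listen directly to the provider"
+entries implied by the subscriptions could be derived with a query
+along these lines, assuming the sl_subscribe and sl_listen layouts
+and a cluster schema named _slonyschema:
+
+<programlisting>
+-- sketch only: add the listens implied by sl_subscribe
+insert into _slonyschema.sl_listen (li_origin, li_provider, li_receiver)
+select sub_provider, sub_provider, sub_receiver
+  from _slonyschema.sl_subscribe
+except
+select li_origin, li_provider, li_receiver
+  from _slonyschema.sl_listen;
+</programlisting>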
 
 <para> A stored procedure would run on each node, rewriting sl_listen
Index: dropthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/dropthings.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/dropthings.sgml -Ldoc/adminguide/dropthings.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/dropthings.sgml
+++ doc/adminguide/dropthings.sgml
@@ -62,11 +62,11 @@
 table you want to get rid of, which you can find in sl_table, and then
 run the following three queries, on each host:
 
-<para><command>
+<programlisting>
   select _slonyschema.alterTableRestore(40);
   select _slonyschema.tableDropKey(40);
   delete from _slonyschema.sl_table where tab_id = 40;
-</command>
+</programlisting>
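+
+<para> The ID used above (40) would be found with a query along these
+lines; this is a sketch, assuming the sl_table layout, where
+tab_reloid holds the table's pg_class OID, and a hypothetical table
+named public.history:
+
+<programlisting>
+-- sketch: find the Slony-I ID of the table being dropped
+select tab_id from _slonyschema.sl_table
+ where tab_reloid = 'public.history'::regclass;
+</programlisting>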
 
 <para>The schema will obviously depend on how you defined the Slony-I
 cluster.  The table ID, in this case, 40, will need to change to the
@@ -91,10 +91,10 @@
 to replicate the two sequences identified with Sequence IDs 93 and 59
 are thus:
 
-<para><command>
+<programlisting>
 delete from _oxrsorg.sl_seqlog where seql_seqid in (93, 59);
 delete from _oxrsorg.sl_sequence where seq_id in (93,59);
-</command>
+</programlisting>
 
 <para> Those two queries could be submitted to all of the nodes via
 <function/ddlscript()/ / <command/EXECUTE SCRIPT/, thus eliminating
Index: reshape.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reshape.sgml,v
retrieving revision 1.2
retrieving revision 1.3
diff -Ldoc/adminguide/reshape.sgml -Ldoc/adminguide/reshape.sgml -u -w -r1.2 -r1.3
--- doc/adminguide/reshape.sgml
+++ doc/adminguide/reshape.sgml
@@ -1,18 +1,39 @@
 <sect1 id="reshape"> <title/Reshaping a Cluster/
 
-<para>If you rearrange the nodes so that they serve different purposes, this will likely lead to the subscribers changing a bit.
+<para>If you rearrange the nodes so that they serve different
+purposes, the subscriptions will likely need to change as well.
 
 <para>This will require doing several things:
 <itemizedlist>
 
-<listitem><para> If you want a node that is a subscriber to become the "master" provider for a particular replication set, you will have to issue the slonik MOVE SET operation to change that "master" provider node.
-
-<listitem><para> You may subsequently, or instead, wish to modify the subscriptions of other nodes.  You might want to modify a node to get its data from a different provider, or to change it to turn forwarding on or off.  This can be accomplished by issuing the slonik SUBSCRIBE SET operation with the new subscription information for the node; Slony-I will change the configuration.
-
-<listitem><para> If the directions of data flows have changed, it is doubtless appropriate to issue a set of DROP LISTEN operations to drop out obsolete paths between nodes and SET LISTEN to add the new ones.  At present, this is not changed automatically; at some point, MOVE SET and SUBSCRIBE SET might change the paths as a side-effect.  See SlonyListenPaths for more information about this.  In version 1.1 and later, it is likely that the generation of sl_listen entries will be entirely automated, where they will be regenerated when changes are made to sl_path or sl_subscribe, thereby making it unnecessary to even think about SET LISTEN.
+<listitem><para> If you want a node that is a subscriber to become the
+"master" provider for a particular replication set, you will have to
+issue the slonik MOVE SET operation to change that "master" provider
+node; see the sketch following this list.
+
+<listitem><para> You may subsequently, or instead, wish to modify the
+subscriptions of other nodes.  You might want to modify a node to get
+its data from a different provider, or to change it to turn forwarding
+on or off.  This can be accomplished by issuing the slonik SUBSCRIBE
+SET operation with the new subscription information for the node;
+Slony-I will change the configuration.
+
+<listitem><para> If the directions of data flows have changed, it is
+doubtless appropriate to issue a set of DROP LISTEN operations to drop
+out obsolete paths between nodes and STORE LISTEN to add the new ones.
+At present, this is not changed automatically; at some point, MOVE SET
+and SUBSCRIBE SET might change the paths as a side-effect.  See
+SlonyListenPaths for more information about this.  In version 1.1 and
+later, it is likely that the generation of sl_listen entries will be
+entirely automated, with entries regenerated whenever changes are made
+to sl_path or sl_subscribe, thereby making it unnecessary to even
+think about STORE LISTEN.
 
 </itemizedlist>
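+
+<para> As a rough, hypothetical sketch of the first two items,
+supposing set 1 is to move from node 1 to node 3, after which node 4
+should feed from node 3 (the usual slonik preamble is assumed):
+
+<programlisting>
+# MOVE SET requires the set to be locked on the old origin first
+lock set (id = 1, origin = 1);
+move set (id = 1, old origin = 1, new origin = 3);
+
+# repoint node 4's subscription at the new origin
+subscribe set (id = 1, provider = 3, receiver = 4, forward = yes);
+</programlisting>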
-<para> The "altperl" toolset includes a "init_cluster.pl" script that is quite up to the task of creating the new SET LISTEN commands; it isn't smart enough to know what listener paths should be dropped.
+
+<para> The "altperl" toolset includes a "init_cluster.pl" script that
+is quite up to the task of creating the new SET LISTEN commands; it
+isn't smart enough to know what listener paths should be dropped.
 
 <!-- Keep this comment at the end of the file
 Local variables:

