CVS User Account cvsuser
Fri Nov 19 23:41:49 PST 2004
Log Message:
-----------
Numerous changes to FAQ and to other documentation in the "admin guide"

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        SlonyDDLChanges.txt (r1.1 -> r1.2)
        SlonyIAdministration.txt (r1.1 -> r1.2)
        SlonyMaintenance.txt (r1.1 -> r1.2)
        SlonyStartSlons.txt (r1.1 -> r1.2)

-------------- next part --------------
Index: SlonyIAdministration.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyIAdministration.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyIAdministration.txt -Ldoc/adminguide/SlonyIAdministration.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyIAdministration.txt
+++ doc/adminguide/SlonyIAdministration.txt
@@ -1,4 +1,4 @@
-%META:TOPICINFO{author="guest" date="1098325777" format="1.0" version="1.6"}%
+%META:TOPICINFO{author="guest" date="1100276070" format="1.0" version="1.7"}%
 %META:TOPICPARENT{name="WebHome"}%
 ---++ Slony-I Replication
 
@@ -12,7 +12,7 @@
 	* SlonyDefineCluster - defining the network of nodes (includes some "best practices" on numbering them)
 	* SlonyDefineSet - defining sets of tables/sequences to be replicated (make sure YOU define a primary key!)
 	* SlonyAdministrationScripts - the "altperl" tools
-	* SlonyStartSlons - watchdog script; where should slon run?
+	* SlonyStartSlons - about the slon daemons; where should slon run?
 	* SlonySlonConfiguration - what are the options, how should they be chosen
 	* SlonySubscribeNodes 
 	* SlonyMonitoring - what to expect in the logs, scripts to monitor things
Index: SlonyDDLChanges.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyDDLChanges.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyDDLChanges.txt -Ldoc/adminguide/SlonyDDLChanges.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyDDLChanges.txt
+++ doc/adminguide/SlonyDDLChanges.txt
@@ -1,10 +1,10 @@
-%META:TOPICINFO{author="guest" date="1097252100" format="1.0" version="1.2"}%
+%META:TOPICINFO{author="guest" date="1100558820" format="1.0" version="1.3"}%
 %META:TOPICPARENT{name="SlonyIAdministration"}%
 ---+++ Database Schema Changes (DDL)
 
 When changes are made to the database schema, e.g. adding fields to a table, it is necessary for this to be handled rather carefully, otherwise different nodes may get rather deranged because they disagree on how particular tables are built.
 
-If you pass the changes through Slony-I via the EXECUTE SCRIPT (slonik) / ddlscript(set,script,node) (stored function), this allows you to be certain that the changes take effect at the same point in the transaction streams on all of the nodes.
+If you pass the changes through Slony-I via the EXECUTE SCRIPT (slonik) / ddlscript(set,script,node) (stored function), you can be certain that the changes take effect at the same point in the transaction streams on all of the nodes.  That may not matter much if you can take an outage to do schema changes, but it is essential if you want to do upgrades while transactions are still flowing through your systems.
 
 It's worth making a couple of comments on "special things" about EXECUTE SCRIPT:
 
@@ -17,3 +17,4 @@
 Unfortunately, this nonetheless implies that the use of the DDL facility is somewhat fragile and dangerous.  Making DDL changes should not be done in a sloppy or cavalier manner.  If your applications do not have fairly stable SQL schemas, then using Slony-I for replication is likely to be fraught with trouble and frustration.
 
 There is an article on how to manage Slony schema changes here: [[http://www.varlena.com/varlena/GeneralBits/88.php][http://www.varlena.com/varlena/GeneralBits/88.php]]
+
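
For reference, a minimal sketch of pushing a DDL change through EXECUTE SCRIPT with slonik might look like the shell snippet below.  The cluster name, admin conninfo strings, set id, event node, and SQL file path are all made-up placeholders, not values taken from the documentation:

#!/bin/sh
# Hypothetical example: apply add_column.sql at the same point in the
# replication stream on every node.  Names and paths are placeholders.
slonik <<_EOF_
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';
execute script (
	set id = 1,
	filename = '/tmp/add_column.sql',
	event node = 1
);
_EOF_
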
Index: SlonyMaintenance.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyMaintenance.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyMaintenance.txt -Ldoc/adminguide/SlonyMaintenance.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyMaintenance.txt
+++ doc/adminguide/SlonyMaintenance.txt
@@ -1,4 +1,4 @@
-%META:TOPICINFO{author="guest" date="1099541824" format="1.0" version="1.4"}%
+%META:TOPICINFO{author="guest" date="1100558890" format="1.0" version="1.5"}%
 %META:TOPICPARENT{name="SlonyIAdministration"}%
 ---++ Slony-I Maintenance
 
@@ -18,6 +18,20 @@
 
 You might want to run them...
 
+---++ Alternative to Watchdog: generate_syncs.sh
+
+A new script for Slony-I 1.1 is "generate_syncs.sh", which addresses the following kind of situation.
+
+Suppose you have a possibly-flaky slon daemon that might not run all the time.  You might return from a weekend away only to discover the following situation...
+
+On Friday night, something went "bump" and while the database came back up, none of the slon daemons survived.  Your online application then saw nearly three days' worth of heavy transactions.
+
+When you restart slon on Monday, it hasn't done a SYNC on the master since Friday, so the next "SYNC set" comprises all of the updates between Friday and Monday.  Yuck.
+
+If you run generate_syncs.sh as a cron job every 20 minutes, it forces a periodic SYNC on the "master" server, which means that the updates between Friday and Monday are split into more than 100 SYNCs that can be applied incrementally, making the cleanup a lot less unpleasant.
+
+Note that if SYNCs _are_ running regularly, this script won't bother doing anything.
+
 ---++ Log Files
 
 Slon daemons generate some more-or-less verbose log files, depending on what debugging level is turned on.  You might assortedly wish to:
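
To illustrate the cron-driven setup described for generate_syncs.sh above, a crontab entry along the following lines would run it every 20 minutes.  The install path and log destination are assumptions; substitute wherever the script lives in your Slony-I 1.1 tree:

# run generate_syncs.sh every 20 minutes to force periodic SYNCs
0,20,40 * * * *  /usr/local/slony1/tools/generate_syncs.sh >> /var/log/slony1/generate_syncs.log 2>&1
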
Index: SlonyStartSlons.txt
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/SlonyStartSlons.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -Ldoc/adminguide/SlonyStartSlons.txt -Ldoc/adminguide/SlonyStartSlons.txt -u -w -r1.1 -r1.2
--- doc/adminguide/SlonyStartSlons.txt
+++ doc/adminguide/SlonyStartSlons.txt
@@ -1,12 +1,12 @@
-%META:TOPICINFO{author="guest" date="1097257140" format="1.0" version="1.1"}%
+%META:TOPICINFO{author="guest" date="1100276940" format="1.0" version="1.2"}%
 %META:TOPICPARENT{name="SlonyIAdministration"}%
 ---++ Slon Daemons
 
 The programs that actually perform Slony-I replication are the "slon" daemons.
 
-You need to run one "slon" instance for each node in a Slony-I cluster.  It is not essential that these daemons run on any particular host, but there are some principles worth considering:
+You need to run one "slon" instance for each node in a Slony-I cluster, whether you consider that node a "master" or a "slave."  Since a MOVE SET or FAILOVER can switch the roles of nodes, slon needs to be able to function for both providers and subscribers.  It is not essential that these daemons run on any particular host, but there are some principles worth considering:
 
-	* Each slon needs to be able to communicate quickly with the database whose "node controller" it is.  Therefore, if a Slony-I cluster runs across some form of Wide Area Network, the slon processes should run on or nearby the databases each is controlling.  If you break this rule, there is no immediate disaster, but the added latency introduced to monitoring events on the slon's "own node" will cause it to replicate in a somewhat less timely manner.
+	* Each slon needs to be able to communicate quickly with the database whose "node controller" it is.  Therefore, if a Slony-I cluster runs across some form of Wide Area Network, each slon process should run on or near the database it is controlling.  If you break this rule, no particular disaster should ensue, but the added latency introduced to monitoring events on the slon's "own node" will cause it to replicate in a _somewhat_ less timely manner.
 
 	* The fastest results would be achieved by having each slon run on the database server that it is servicing.  If it runs somewhere within a fast local network, performance will not be noticeably degraded.
 
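
As a concrete illustration of the "one slon per node" rule discussed in the hunk above, starting the daemons by hand might look like the following.  The cluster name, conninfo strings, debug level, and log locations are all assumptions:

# one slon per node, each ideally run close to the database it controls
slon -d2 mycluster "dbname=mydb host=node1 user=slony" >> /var/log/slony1/node1.log 2>&1 &
slon -d2 mycluster "dbname=mydb host=node2 user=slony" >> /var/log/slony1/node2.log 2>&1 &
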
@@ -14,6 +14,7 @@
 
 There are two "watchdog" scripts currently available:
 
-	* tools/altperl/slon_watchdog.pl  - an "early" version that basically wraps a loop around the invocation of slon, restarting any time it may fail
+	* tools/altperl/slon_watchdog.pl  - an "early" version that basically wraps a loop around the invocation of slon, restarting any time it falls over
 	* tools/altperl/slon_watchdog2.pl - a somewhat more intelligent version that periodically polls the database, checking to see if a SYNC has taken place recently.  We have had VPN connections that occasionally fall over without signalling the application, so that the slon stops working, but doesn't actually die; this polling accounts for that...
 
+The "slon_watchdog2.pl" script is probably _usually_ the preferable thing to run.  It is not preferable to run it whilst subscribing a very large replication set where it is expected to take many hours to do the initial COPY SET.  The problem that will come up in that case is that it will figure that since it hasn't done a SYNC in 2 hours, something's broken and it needs to restart slon, thereby restarting the COPY SET event.

