Mon Jan 24 17:35:40 PST 2005
- Previous message: [Slony1-commit] By cbbrowne: Added FAQ entry, revised intro (textual revisions), fixed
- Next message: [Slony1-commit] By cbbrowne: Apply changes from CVS HEAD admin guide into the stable
Log Message: ----------- Plenty of modifications to documentation; lots of textual changes. Modified Files: -------------- slony1-engine/doc/adminguide: addthings.sgml (r1.6 -> r1.7) adminscripts.sgml (r1.10 -> r1.11) concepts.sgml (r1.7 -> r1.8) faq.sgml (r1.12 -> r1.13) help.sgml (r1.7 -> r1.8) intro.sgml (r1.6 -> r1.7) monitoring.sgml (r1.8 -> r1.9) prerequisites.sgml (r1.7 -> r1.8) reference.sgml (r1.5 -> r1.6) slonik_ref.sgml (r1.8 -> r1.9) slony.sgml (r1.8 -> r1.9) -------------- next part -------------- Index: addthings.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v retrieving revision 1.6 retrieving revision 1.7 diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.6 -r1.7 --- doc/adminguide/addthings.sgml +++ doc/adminguide/addthings.sgml @@ -20,15 +20,19 @@ linkend="stmtmergeset">MERGE SET</link></command>.</para> <para>Up to and including 1.0.2, there was a potential problem where -<<<<<<< addthings.sgml -if <command><link linkend="stmtmergeset"> MERGE SET</link></command> -======= -if <command><link linkend="stmtmergeset">MERGE SET</link></command> ->>>>>>> 1.5 -is issued while other subscription-related events are pending, it is +if <command><link linkend="stmtmergeset">MERGE SET</link></command> is +issued while other subscription-related events are pending, it is possible for things to get pretty confused on the nodes where other things were pending. This problem was resolved in 1.0.5.</para> +<para> Note that if you add nodes, you will need to add both <link +linkend="stmtstorepath">STORE PATH</link> statements to indicate how +nodes communicate with one another, and <link +linkend="stmtstorelisten">STORE LISTEN</link> statements to +configure the <quote>communications network</quote> that results +from that. See the section on <link linkend="listenpaths"> Listen +Paths </link> for more details on the latter.</para>
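As a rough sketch of the paragraph above (the cluster name, node numbers, and conninfo strings here are invented purely for illustration), the STORE PATH and STORE LISTEN statements for a newly added node might look like this in slonik:

```
# Hypothetical: node 3 is being added to a cluster whose origin is node 1.
cluster name = testcluster;
node 1 admin conninfo = 'dbname=testdb host=node1.example.info user=slony';
node 3 admin conninfo = 'dbname=testdb host=node3.example.info user=slony';

# STORE PATH: how each node connects to the other
store path (server = 1, client = 3,
            conninfo = 'dbname=testdb host=node1.example.info user=slony');
store path (server = 3, client = 1,
            conninfo = 'dbname=testdb host=node3.example.info user=slony');

# STORE LISTEN: the communications network that results from those paths
store listen (origin = 1, receiver = 3, provider = 1);
store listen (origin = 3, receiver = 1, provider = 3);
```

With only two nodes the listen network is trivial; with more nodes, the Listen Paths material governs which provider each receiver should listen to for each origin's events.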
+ <para>It is suggested that you be very deliberate when adding such things. For instance, submitting multiple subscription requests for a particular set in one <link linkend="slonik"> slonik </link> script Index: adminscripts.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v retrieving revision 1.10 retrieving revision 1.11 diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.10 -r1.11 --- doc/adminguide/adminscripts.sgml +++ doc/adminguide/adminscripts.sgml @@ -22,9 +22,10 @@ <sect2><title>Node/Cluster Configuration - cluster.nodes</title> -<para>The UNIX environment variable <envar>SLONYNODES</envar> is used to -determine what Perl configuration file will be used to control the -shape of the nodes in a <productname>Slony-I</productname> cluster.</para> +<para>The UNIX environment variable <envar>SLONYNODES</envar> is used +to determine what Perl configuration file will be used to control the +shape of the nodes in a <productname>Slony-I</productname> +cluster.</para> <para>What variables are set up. <itemizedlist> @@ -55,6 +56,7 @@ password => undef, # password for user parent => 1, # which node is parent to this node noforward => undef # shall this node be set up to forward results? 
+ sslmode => undef # SSL mode argument - determine priority of SSL usage = disable,allow,prefer,require ); </programlisting> </sect2> @@ -65,15 +67,20 @@ objects will be contained in a particular replication set.</para> <para>Unlike <envar>SLONYNODES</envar>, which is essential for -<emphasis>all</emphasis> of the <link linkend="slonik">slonik</link>-generating scripts, this only needs to be set when running -<filename>create_set.pl</filename>, as that is the only script used to -control what tables will be in a particular replication set.</para> +<emphasis>all</emphasis> of the <link +linkend="slonik">slonik</link>-generating scripts, this only needs to +be set when running <filename>create_set.pl</filename>, as that is the +only script used to control what tables will be in a particular +replication set.</para> <para>What variables are set up.</para> <itemizedlist> <listitem><para>$TABLE_ID = 44;</para> <para> Each table must be identified by a unique number; this variable controls where numbering starts</para> </listitem> +<listitem><para>$SEQUENCE_ID = 17;</para> +<para> Each sequence must be identified by a unique number; this variable controls where numbering starts</para> +</listitem> <listitem><para>@PKEYEDTABLES</para> <para> An array of names of tables to be replicated that have a @@ -221,6 +228,8 @@ Listen Path Generation</link> for more details on how this works.</para> </sect2> + + </sect1> </article> <!-- Keep this comment at the end of the file Index: slonik_ref.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/slonik_ref.sgml -Ldoc/adminguide/slonik_ref.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/slonik_ref.sgml +++ doc/adminguide/slonik_ref.sgml @@ -85,7 +85,7 @@ <para> Those commands are grouped together into one transaction per participating node. 
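As a hedged illustration of that transaction grouping (cluster name, node, and identifiers here are invented), a slonik try block bundles its enclosed statements so that they commit or roll back together on each participating node:

```
cluster name = testcluster;
node 1 admin conninfo = 'dbname=testdb host=node1.example.info user=slony';

try {
    create set (id = 2, origin = 1, comment = 'a second replication set');
    set add table (set id = 2, origin = 1, id = 10,
                   fully qualified name = 'public.mytable');
} on error {
    echo 'set 2 could not be created; nothing was committed';
    exit 1;
}
```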
</para> -<!-- ************************************************************ --></sect3></sect2></sect1></article> +<!-- ************************************************************ --> <reference id="hdrcmds"> <title>Slonik Preamble Commands</title> @@ -1814,6 +1814,10 @@ </application> process for the duration of the SQL script execution.</para> + <para> If a table's columns are modified, it is very important + that the triggers be regenerated, otherwise they may be + inappropriate for the new form of the table schema.</para> + </refsect1> <refsect1><title>Example</title> <programlisting> Index: intro.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/intro.sgml,v retrieving revision 1.6 retrieving revision 1.7 diff -Ldoc/adminguide/intro.sgml -Ldoc/adminguide/intro.sgml -u -w -r1.6 -r1.7 --- doc/adminguide/intro.sgml +++ doc/adminguide/intro.sgml @@ -1,15 +1,15 @@ <!-- $Id$ --> <article id="introduction"> <title>Introduction to <productname>Slony-I</productname></title> -<sect1> -<title>Why yet another replication system?</title> +<sect1><title>Introduction to <productname>Slony-I</productname></title> +<sect2><title>Why yet another replication system?</title> <para><productname>Slony-I</productname> was born from an idea to create a replication system that was not tied to a specific version of PostgreSQL, which is allowed to be started and stopped on an existing -database with out the need for a dump/reload cycle.</para> </sect1> +database without the need for a dump/reload cycle.</para> </sect2> -<sect1> <title>What <productname>Slony-I</productname> is</title> +<sect2> <title>What <productname>Slony-I</productname> is</title> <para><productname>Slony-I</productname> is a <quote>master to multiple slaves</quote> replication system supporting cascading and @@ -53,45 +53,46 @@ <quote>log shipping.</quote></para> <para> But <productname>Slony-I</productname>, by only having a
single -origin for each set, is quite unsuitable for <emphasis> really -</emphasis> asynchronous multiway replication. For those that could -use some sort of <quote>asynchronous multimaster replication with -conflict resolution</quote> akin to what is provided by <productname> -Lotus <trademark>Notes</trademark></productname> or the -<quote>syncing</quote> protocols found on PalmOS systems, you will +origin for each set, is quite unsuitable for +<emphasis>really</emphasis> asynchronous multiway replication. For +those that could use some sort of <quote>asynchronous multimaster +replication with conflict resolution</quote> akin to what is provided +by <productname> Lotus <trademark>Notes</trademark></productname> or +the <quote>syncing</quote> protocols found on PalmOS systems, you will really need to look elsewhere. These sorts of replication models are -not without merit, but they represent problems that -<productname>Slony-I</productname> does not attempt to address.</para> - -</sect1> +not without merit, but they represent <emphasis>different</emphasis> +replication scenarios that <productname>Slony-I</productname> does not +attempt to address.</para> +</sect2> -<sect1><title> <productname>Slony-I</productname> is not</title> +<sect2><title> What <productname>Slony-I</productname> is not</title> <para><productname>Slony-I</productname> is not a network management system.</para> <para> <productname>Slony-I</productname> does not have any -functionality within it to detect a node failure, or to automatically +functionality within it to detect a node failure, nor to automatically promote a node to a master or other data origin. It is quite possible that you may need to do that; that will require that you combine some network tools that evaluate <emphasis> to your satisfaction -</emphasis> which nodes are <quote>live</quote> and which nodes are -<quote>dead</quote> along with some local policy to determine what to -do under those circumstances. 
<productname>Slony-I</productname> does -not dictate policy to you.</para> - -<para><productname>Slony-I</productname> is not multi-master; it's not -a connection broker, and it doesn't make you coffee and toast in the -morning.</para> +</emphasis> which nodes you consider <quote>live</quote> and which +nodes you consider <quote>dead</quote> along with some local policy to +determine what to do under those circumstances. +<productname>Slony-I</productname> does not dictate any of that policy +to you.</para> + +<para><productname>Slony-I</productname> is not multi-master; it is +not a connection broker, and it doesn't make you coffee and toast in +the morning.</para> <para>That being said, the plan is for a subsequent system, <productname>Slony-II</productname>, to provide <quote>multimaster</quote> capabilities. But that is a separate project, and expectations for <productname>Slony-I</productname> -should not be based on hopes for future projects.</para></sect1> +should not be based on hopes for future projects.</para></sect2> -<sect1><title> Why doesn't <productname>Slony-I</productname> do +<sect2><title> Why doesn't <productname>Slony-I</productname> do automatic fail-over/promotion? </title> @@ -111,9 +112,9 @@ <productname>Slony-I</productname> an unacceptable choice.</para> <para>As a result, let <productname>Slony-I</productname> do what it -does best: provide database replication.</para></sect1> +does best: provide database replication.</para></sect2> -<sect1><title> Current Limitations</title> +<sect2><title> Current Limitations</title> <para><productname>Slony-I</productname> does not automatically propagate schema changes, nor does it have any ability to replicate @@ -125,9 +126,10 @@ <para>There is a capability for <productname>Slony-I</productname> to propagate DDL changes if you submit them as scripts via the -<application>slonik</application> <command>EXECUTE SCRIPT</command> -operation. 
That is not <quote>automatic;</quote> you have to -construct an SQL DDL script and submit it.</para> +<application>slonik</application> <command> <link +linkend="stmtddlscript"> EXECUTE SCRIPT </link></command> operation. +That is not <quote>automatic;</quote> you have to construct an SQL DDL +script and submit it.</para> <para>If you have those sorts of requirements, it may be worth exploring the use of <application>PostgreSQL</application> 8.0 @@ -151,10 +153,10 @@ <para>There are a number of distinct models for database replication; it is impossible for one replication system to be all things to all -people.</para></sect1> +people.</para></sect2> -<sect1 id="slonylistenercosts"><title> <productname>Slony-I</productname> Communications -Costs</title> +<sect1 id="slonylistenercosts"><title> +<productname>Slony-I</productname> Communications Costs</title> <para>The cost of communications grows in a quadratic fashion in several directions as the number of replication nodes in a cluster Index: help.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/help.sgml,v retrieving revision 1.7 retrieving revision 1.8 diff -Ldoc/adminguide/help.sgml -Ldoc/adminguide/help.sgml -u -w -r1.7 -r1.8 --- doc/adminguide/help.sgml +++ doc/adminguide/help.sgml @@ -10,11 +10,10 @@ <listitem><para> <ulink url="http://slony.info/">http://slony.info/</ulink> - the official -"home" of Slony</para></listitem> +<quote>home</quote> of Slony</para></listitem> <listitem><para> Documentation on the Slony-I Site- Check the -documentation on the Slony website: <ulink -url="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx">Howto +documentation on the Slony website: <ulink url="http://gborg.postgresql.org/project/slony1/genpage.php?howto_idx">Howto </ulink></para></listitem> <listitem><para> Other Documentation - There are several articles here @@ -34,14 +33,14 @@ <listitem><para> If your Russian is 
much better than your English, then <ulink url="http://kirov.lug.ru/wiki/Slony"> -KirovOpenSourceCommunity: Slony</ulink> may be the place to go</para></listitem> -</itemizedlist> +KirovOpenSourceCommunity: Slony</ulink> may be the place to +go.</para></listitem> </itemizedlist></para> <sect2><title> Other Information Sources</title> <itemizedlist> -<listitem><para> <ulink url= -"http://comstar.dotgeek.org/postgres/slony-config/"> +<listitem><para> <ulink +url="http://comstar.dotgeek.org/postgres/slony-config/"> slony-config</ulink> - A Perl tool for configuring Slony nodes using config files in an XML-based format that the tool transforms into a Slonik script</para></listitem> Index: reference.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/reference.sgml,v retrieving revision 1.5 retrieving revision 1.6 diff -Ldoc/adminguide/reference.sgml -Ldoc/adminguide/reference.sgml -u -w -r1.5 -r1.6 --- doc/adminguide/reference.sgml +++ doc/adminguide/reference.sgml @@ -1,7 +1,2 @@ <!-- $Id$ --> -<chapter> - <title><productname>Slony-I</productname> binaries</title> - &slon; - &slonik; - &slonikref; Index: concepts.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/concepts.sgml,v retrieving revision 1.7 retrieving revision 1.8 diff -Ldoc/adminguide/concepts.sgml -Ldoc/adminguide/concepts.sgml -u -w -r1.7 -r1.8 --- doc/adminguide/concepts.sgml +++ doc/adminguide/concepts.sgml @@ -49,27 +49,31 @@ <sect2><title> Replication Set</title> <para>A replication set is defined as a set of tables and sequences -that are to be replicated between nodes in a <productname>Slony-I</productname> cluster.</para> +that are to be replicated between nodes in a +<productname>Slony-I</productname> cluster.</para> -<para>You may have several sets, and the <quote>flow</quote> of replication does -not need to be identical 
between those sets.</para> +<para>You may have several sets, and the <quote>flow</quote> of +replication does not need to be identical between those sets.</para> </sect2> + <sect2><title> Origin, Providers and Subscribers</title> <para>Each replication set has some origin node, which is the <emphasis>only</emphasis> place where user applications are permitted to modify data in the tables that are being replicated. This might -also be termed the <quote>master provider</quote>; it is the main place from -which data is provided.</para> +also be termed the <quote>master provider</quote>; it is the main +place from which data is provided.</para> -<para>Other nodes in the cluster will subscribe to the replication -set, indicating that they want to receive the data.</para> +<para>Other nodes in the cluster subscribe to the replication set, +indicating that they want to receive the data.</para> -<para>The origin node will never be considered a <quote>subscriber.</quote> -(Ignoring the case where the cluster is reshaped, and the origin is -moved to another node.) But <productname>Slony-I</productname> supports the notion of cascaded -subscriptions, that is, a node that is subscribed to the origin may -also behave as a <quote>provider</quote> to other nodes in the cluster.</para> +<para>The origin node will never be considered a +<quote>subscriber.</quote> (Ignoring the case where the cluster is +reshaped, and the origin is expressly shifted to another node.) 
But +<productname>Slony-I</productname> supports the notion of cascaded +subscriptions, that is, a node that is subscribed to some set may also +behave as a <quote>provider</quote> to other nodes in the cluster for +that replication set.</para> </sect2> </sect1> Index: monitoring.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/monitoring.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/monitoring.sgml -Ldoc/adminguide/monitoring.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/monitoring.sgml +++ doc/adminguide/monitoring.sgml @@ -109,7 +109,7 @@ <para> If you have broken applications that hold connections open, that has several unsalutary effects as <link -linkend="longtxnsareevil"> described in the FAQ</ulink>. +linkend="longtxnsareevil"> described in the FAQ</link>. </itemizedlist> Index: prerequisites.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/prerequisites.sgml,v retrieving revision 1.7 retrieving revision 1.8 diff -Ldoc/adminguide/prerequisites.sgml -Ldoc/adminguide/prerequisites.sgml -u -w -r1.7 -r1.8 --- doc/adminguide/prerequisites.sgml +++ doc/adminguide/prerequisites.sgml @@ -1,4 +1,4 @@ -<!-- $Id$ --> +<!-- $Id$ --> <article id="requirements"> <title>System Requirements</title> @@ -36,8 +36,15 @@ interested party from volunteering to do the port.</para> <sect2> -<title> Software needed</title> -<para> +<title> Slony-I Software Dependencies</title> + +<para> At present, <productname>Slony-I</productname> <emphasis>as +well as <productname>PostgreSQL</productname></emphasis> need to be +able to be compiled from source at your site.</para> + +<para> In order to compile <productname>Slony-I</productname>, you +need to have the following tools.</para> + <itemizedlist> <listitem><para> GNU make. Other make programs will not work.
GNU make is often installed under the name <command>gmake</command>; this @@ -66,17 +73,29 @@ source code at your local GNU mirror (see <ulink url="http://www.gnu.org/order/ftp.html"> http://www.gnu.org/order/ftp.html </ulink> for a list) or at <ulink -url="ftp://ftp.gnu.org/gnu"> ftp://ftp.gnu.org/gnu </ulink> .)</para></listitem> +url="ftp://ftp.gnu.org/gnu"> ftp://ftp.gnu.org/gnu </ulink> +.)</para></listitem> <listitem><para> If you need to obtain PostgreSQL source, you can download it from your favorite PostgreSQL mirror. See <ulink url="http://www.postgresql.org/mirrors-www.html"> -http://www.postgresql.org/mirrors-www.html </ulink> for a list.</para></listitem> -</itemizedlist> -</para> +http://www.postgresql.org/mirrors-www.html </ulink> for a +list.</para></listitem> </itemizedlist> </para> + <para>Also check to make sure you have sufficient disk space. You will need approximately 5MB for the source tree during build and -installation.</para></sect2> +installation.</para> + +<note><para>There are changes afoot for version 1.1 that ought to make +it possible to compile <productname>Slony-I</productname> separately +from <productname>PostgreSQL</productname>, which should make it +practical for the makers of distributions of +<productname>Linux</productname> and +<productname>FreeBSD</productname> to include precompiled binary +packages for <productname>Slony-I</productname>, but until that +happens, you need to be prepared to use versions of all this software +that you compile yourself.</para></note> +</sect2> <sect2> <title> Getting <productname>Slony-I</productname>Source</title> @@ -93,10 +112,11 @@ <para> All the servers used within the replication cluster need to have their Real Time Clocks in sync. This is to ensure that slon -doesn't error with messages indicating that slave is already ahead of -the master during replication. 
We recommend you have ntpd running on -all nodes, with subscriber nodes using the <quote>master</quote> -provider node as their time server.</para> +doesn't generate errors with messages indicating that a subscriber is +already ahead of its provider during replication. We recommend you +have <application>ntpd</application> running on all nodes, where +subscriber nodes using the <quote>master</quote> provider host as +their time server.</para> <para> It is possible for <productname>Slony-I</productname> to function even in the face of there being some time discrepancies, but @@ -106,6 +126,37 @@ <para> See <ulink url="http://www.ntp.org/"> www.ntp.org </ulink> for more details about NTP (Network Time Protocol).</para> +<para> Some users have reported problems that have been traced to +their locales indicating the use of some time zone that +<productname>PostgreSQL</productname> did not recognize. + +<itemizedlist> + +<listitem><para> On <productname>AIX</productname>, +<command><envar>TZ</envar>=CUT0</command> was unrecognized, leading to +timestamps pulled from system calls causing it to +break.</para> + +<para> <command>CUT0</command> is a variant way of describing +<command>UTC</command> +</listitem> + +<listitem><para> Some countries' timezones are not yet included in +<productname> PostgreSQL </productname>. </para></listitem> + +</itemizedlist> + +<para> In any case, what commonly seems to be the <quote>best +practice</quote> with <productname>Slony-I</productname> (and, for +that matter, <productname> PostgreSQL </productname>) is for the +postmaster user and/or the user under which +<application>slon</application> runs to use +<command><envar>TZ</envar>=UTC</command> or +<command><envar>TZ</envar>=GMT</command>. 
Those timezones are +<emphasis>sure</emphasis> to be supported on any platform, and have +the merit over <quote>local</quote> timezones that times never wind up +leaping around due to Daylight Savings Time.</para> + </sect2> <sect2><title> Network Connectivity</title> @@ -114,54 +165,63 @@ another have <emphasis>bidirectional</emphasis> network communications to the PostgreSQL instances. That is, if node B is replicating data from node A, it is necessary that there be a path from A to B and from -B to A. It is recommended that all nodes in a +B to A. It is recommended that, as much as possible, all nodes in a <productname>Slony-I</productname> cluster allow this sort of bidirection communications from any node in the cluster to any other node in the cluster.</para> -<para>Note that the network addresses need to be consistent across all -of the nodes. Thus, if there is any need to use a -<quote>public</quote> address for a node, to allow remote/VPN access, -that <quote>public</quote> address needs to be able to be used -consistently throughout the <productname>Slony-I</productname> -cluster, as the address is propagated throughout the cluster in table +<para>Note that the network addresses must be consistent across all of +the nodes. 
Thus, if there is any need to use a <quote>public</quote> +address for a node, to allow remote/VPN access, that +<quote>public</quote> address needs to be able to be used consistently +throughout the <productname>Slony-I</productname> cluster, as the +address is propagated throughout the cluster in table <envar>sl_path</envar>.</para> <para>A possible workaround for this, in environments where firewall -rules are particularly difficult to implement, may be to establish SSH -Tunnels that are created on each host that allow remote access through -IP address 127.0.0.1, with a different port for each destination.</para> +rules are particularly difficult to implement, may be to establish +<ulink url="http://www.brandonhutchinson.com/ssh_tunnelling.html"> SSH +Tunnels </ulink> that are created on each host that allow remote +access through a local IP address such as 127.0.0.1, using a different +port for each destination.</para> <para> Note that <application>slonik</application> and the <application>slon</application> instances need no special connections -or protocols to communicate with one another; they just need to be -able to get access to the <application>PostgreSQL</application> -databases, connecting as a <quote>superuser</quote>.</para> +or protocols to communicate with one another; they merely need access +to the <application>PostgreSQL</application> databases, connecting as +a <quote>superuser</quote>.</para> -<para> An implication of the communications model is that the entire +<para> An implication of this communications model is that the entire extended network in which a <productname>Slony-I</productname> cluster operates must be able to be treated as being secure. If there is a -remote location where you cannot trust the +remote location where you cannot trust one of the databases that is a <productname>Slony-I</productname> node to be considered -<quote>secured,</quote> this represents a vulnerability that adversely -the security of the entire cluster. 
In effect, the security policies -throughout the cluster can only be considered as stringent as those -applied at the <emphasis>weakest</emphasis> link. Running a -full-blown <productname>Slony-I</productname> node at a branch -location that can't be kept secure compromises security for the -cluster.</para> +<quote>secure,</quote> this represents a vulnerability that can +adversely affect the security of the entire cluster. As a +<quote>peer-to-peer</quote> system, <emphasis>any</emphasis> of the +hosts is able to introduce replication events that will affect the +entire cluster. Therefore, the security policies throughout the +cluster can only be considered as stringent as those applied at the +<emphasis>weakest</emphasis> link. Running a +<productname>Slony-I</productname> node at a branch location that +can't be kept secure compromises security for the cluster as a +whole.</para> <para>In the future plans is a feature whereby updates for a particular replication set would be serialized via a scheme called -<quote>log shipping.</quote> The data stored in sl_log_1 and sl_log_2 -would be written out to log files on disk. These files could be -transmitted in any manner desired, whether via scp, FTP, burning them -onto DVD-ROMs and mailing them, or even by recording them on a USB -<quote>flash device</quote> and attaching them to birds, allowing a -sort of <quote>avian transmission protocol.</quote> This will allow -one way communications so that <quote>subscribers</quote> that use log -shipping would have no need for access to other +<quote>log shipping.</quote> The data stored in +<envar>sl_log_1</envar> and <envar>sl_log_2</envar> would be written +out to log files on disk. 
These files could be transmitted in any +manner desired, whether via scp, FTP, burning them onto DVD-ROMs and +mailing them, or, at the frivolous end of the spectrum, by recording +them on a USB <quote>flash device</quote> and attaching them to birds, +allowing some equivalent to <ulink +url="http://www.faqs.org/rfcs/rfc1149.html"> transmission of IP +datagrams on avian carriers - RFC 1149.</ulink> But whatever the +transmission mechanism, this will allow one way communications such +that subscribers that use log shipping have no need of access to other <productname>Slony-I</productname> nodes.</para> + </sect2> </sect1> </article> Index: faq.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v retrieving revision 1.12 retrieving revision 1.13 diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.12 -r1.13 --- doc/adminguide/faq.sgml +++ doc/adminguide/faq.sgml @@ -406,9 +406,9 @@ avoided, even on "slave" nodes.</para> </answer></qandaentry> -<qandaentry> -<question><para>I started doing a backup using <application>pg_dump</application>, -and suddenly Slony stops</para></question> +<qandaentry> <question><para>I started doing a backup using +<application>pg_dump</application>, and suddenly Slony +stops</para></question> <answer><para>Ouch. What happens here is a conflict between: <itemizedlist> @@ -619,12 +619,12 @@ DELETE 6 </screen></para> -<para>General <quote>due diligance</quote> dictates starting with a -<command>BEGIN</command>, looking at the contents of sl_confirm -before, ensuring that only the expected records are purged, and then, -only after that, confirming the change with a -<command>COMMIT</command>. 
If you delete confirm entries for the -wrong node, that could ruin your whole day.</para> +<para>General <quote>due diligence</quote> dictates starting with a +<command>BEGIN</command>, looking at the contents of +<envar>sl_confirm</envar> before, ensuring that only the expected +records are purged, and then, only after that, confirming the change +with a <command>COMMIT</command>. If you delete confirm entries for +the wrong node, that could ruin your whole day.</para> <para>You'll need to run this on each node that remains...</para> @@ -835,6 +835,17 @@ table's OID into the <envar>pg_trigger</envar> or <envar>pg_rewrite</envar> <envar>tgrelid</envar> column on the affected node.</para></answer> + +<answer><para> This implies that if you plan to draw backups from a +subscriber node, you will need to draw the schema from the origin +node. It is straightforward to do this: </para> + +<screen> +% pg_dump -h originnode.example.info -p 5432 --schema-only --schema=public ourdb > schema_backup.sql +% pg_dump -h subscribernode.example.info -p 5432 --data-only --schema=public ourdb > data_backup.sql +</screen> + +</answer> </qandaentry> <qandaentry> <question><para> After notification of a subscription on @@ -979,10 +990,12 @@ </qandaentry> -<qandaentry> +<qandaentry id="neededexecddl"> -<question> -<para> Behaviour - all the nodes fall behind, and all the logs have the following error message repeating in them (when I encountered it, there was a nice long sql statement above each entry): +<question> <para> Behaviour - all the subscriber nodes start to fall +behind the origin, and all the logs on the subscriber nodes have the +following error message repeating in them (when I encountered it, +there was a nice long SQL statement above each entry): <screen> ERROR remoteWorkerThread_1: helper 1 finished with error @@ -990,34 +1003,35 @@ </screen> </question> -<answer> -<para> -Cause: you have likely issued alter table statements directly on the -databases instead of using the 
slonik <link linkend="stmtddlscript"> -execute script </link> command. +<answer> <para> Cause: you have likely issued <command>alter +table</command> statements directly on the databases instead of using +the slonik <link linkend="stmtddlscript"> <command>EXECUTE +SCRIPT</command> </link> command. <para>The solution is to rebuild the trigger on the affected table and fix the entries in <envar>sl_log_1 </envar> by hand. <itemizedlist> -<listitem><para> -You'll need to identify from either the slony logs, or the postgres db logs exactly what the statement is that is causing the error. -<listitem><para> -You need to fix the Slony-defined triggers on the table in question. This is done with the following procedure. +<listitem><para> You'll need to identify from either the slon logs, or +the PostgreSQL database logs exactly which statement it is that is +causing the error. + +<listitem><para> You need to fix the Slony-defined triggers on the +table in question. This is done with the following procedure. <screen> BEGIN; -LOCK TABLE <table_name>; +LOCK TABLE table_name; SELECT _oxrsorg.altertablerestore(tab_id);--tab_id is _slony_schema.sl_table.tab_id SELECT _oxrsorg.altertableforreplication(tab_id);--tab_id is _slony_schema.sl_table.tab_id COMMIT; </screen> <para>You then need to find the rows in <envar> sl_log_1 </envar> that -have bad entries and fix them. You may want to take down the slons -for all nodes except the master, that way if you make a mistake, it -won't propgate through to the slave. +have bad entries and fix them. You may want to take down the slon +daemons for all nodes except the master; that way, if you make a +mistake, it won't immediately propagate through to the subscribers. 
<para> Here is an example: @@ -1059,6 +1073,92 @@ </qandaentry> +<qandaentry> <question><para> After notification of a subscription on +<emphasis>another</emphasis> node, replication falls over on one of +the subscribers, with the following error message: + +<screen> +ERROR remoteWorkerThread_1: "begin transaction; set transaction isolation level serializable; lock table "_livesystem".sl_config_lock; select "_livesystem".enableSubscription(25506, 1, 501); notify "_livesystem_Event"; notify "_livesystem_Confirm"; insert into "_livesystem".sl_event (ev_origin, ev_seqno, ev_timestamp, ev_minxid, ev_maxxid, ev_xip, ev_type , ev_data1, ev_data2, ev_data3, ev_data4 ) values ('1', '4896546', '2005-01-23 16:08:55.037395', '1745281261', '1745281262', '', 'ENABLE_SUBSCRIPTION', '25506', '1', '501', 't'); insert into "_livesystem".sl_confirm (con_origin, con_received, con_seqno, con_timestamp) values (1, 4, '4896546', CURRENT_TIMESTAMP); commit transaction;" PGRES_FATAL_ERROR ERROR: insert or update on table "sl_subscribe" violates foreign key constraint "sl_subscribe-sl_path-ref" +DETAIL: Key (sub_provider,sub_receiver)=(1,501) is not present in table "sl_path". 
+</screen>
+
+<para> This is then followed by a series of failed syncs as the
+<application> <link linkend="slon"> slon </link> </application> shuts
+down:
+
+<screen>
+DEBUG2 remoteListenThread_1: queue event 1,4897517 SYNC
+DEBUG2 remoteListenThread_1: queue event 1,4897518 SYNC
+DEBUG2 remoteListenThread_1: queue event 1,4897519 SYNC
+DEBUG2 remoteListenThread_1: queue event 1,4897520 SYNC
+DEBUG2 remoteWorker_event: ignore new events due to shutdown
+DEBUG2 remoteListenThread_1: queue event 1,4897521 SYNC
+DEBUG2 remoteWorker_event: ignore new events due to shutdown
+DEBUG2 remoteListenThread_1: queue event 1,4897522 SYNC
+DEBUG2 remoteWorker_event: ignore new events due to shutdown
+DEBUG2 remoteListenThread_1: queue event 1,4897523 SYNC
+</screen>
+
+</para></question>
+
+<answer><para> If you see a <application> <link linkend="slon"> slon
+</link> </application> shutting down with <emphasis>ignore new events
+due to shutdown</emphasis> log entries, you typically need to step
+back in the log to <emphasis>before</emphasis> they started failing to
+see an indication of the root cause of the problem.
+</para></answer>
+
+<answer><para> In this particular case, the problem was that some of
+the <link linkend="stmtstorepath"> <command>STORE PATH </command>
+</link> commands had not yet made it to node 4 before the <link
+linkend="stmtsubscribeset"> <command>SUBSCRIBE SET </command> </link>
+command propagated. </para>
+
+<para>This is yet another example of the need not to do things in a
+rush; you need to be sure things are working right
+<emphasis>before</emphasis> making further configuration changes.
+</para></answer>
+
+</qandaentry>
+
+<qandaentry> <question><para> I can do a <command>pg_dump</command>
+and load the data back in much faster than the <command>SUBSCRIBE
+SET</command> runs. Why is that?
</para></question>
+
+<answer><para> <productname>Slony-I</productname> depends on there
+being an already existent index on the primary key, and leaves all
+indexes alone whilst using the <productname>PostgreSQL</productname>
+<command>COPY</command> command to load the data. Further hurting
+performance, the <command>COPY SET</command> event starts by deleting
+the contents of tables, which potentially leaves a lot of dead tuples.
+</para>
+
+<para> When you use <command>pg_dump</command> to dump the contents of
+a database, and then load that, creation of indexes is deferred until
+the very end. It is <emphasis>much</emphasis> more efficient to
+create indexes against the entire table, at the end, than it is to
+build up the index incrementally as each row is added to the
+table.</para>
+
+<para> Unfortunately, dropping and recreating indexes <quote>on the
+fly,</quote> as it were, has proven thorny. Doing it automatically
+hasn't been implemented. </para></answer>
+
+<answer><para> If you can drop unnecessary indices while the
+<command>COPY</command> takes place, that will improve performance
+quite a bit. If you can <command>TRUNCATE</command> tables that
+contain data that is about to be eliminated, that will improve
+performance <emphasis>a lot.</emphasis> </para></answer>
+
+<answer><para> There is a TODO item for implementation in
+<productname>PostgreSQL</productname> that adds a new option,
+something like <option>BULKLOAD</option>, which would defer revising
+indexes until the end of the load, regenerating them in bulk. That
+will likely not be available until
+<productname>PostgreSQL</productname> 8.1, but it should substantially
+improve performance once available.
+</para></answer> +</qandaentry> + </qandaset> <!-- Keep this comment at the end of the file Local variables: Index: slony.sgml =================================================================== RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/slony.sgml,v retrieving revision 1.8 retrieving revision 1.9 diff -Ldoc/adminguide/slony.sgml -Ldoc/adminguide/slony.sgml -u -w -r1.8 -r1.9 --- doc/adminguide/slony.sgml +++ doc/adminguide/slony.sgml @@ -29,11 +29,6 @@ &defineset; </part> - <part id="commandreferencec"> - <title>Core <productname>Slony-I</productname> Programs</title> - &reference; - </part> - <part id="slonyadmin"> <title><productname>Slony-I</productname> Administration</title> @@ -63,6 +58,16 @@ </article> </part> + <part id="commandreferencec"> + <title>Core <productname>Slony-I</productname> Programs</title> + <chapter> + <title><productname>Slony-I</productname> binaries</title> + &slon; + &slonik; + </chapter> + &slonikref; + </part> + <part id="developer"> <title>Slony-I Internals</title> &schemadoc;
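A note on the STORE PATH entry added above: a minimal slonik sketch of
doing things in the right order might look like the following (node
numbers, the set id, and the conninfo strings are hypothetical; adjust
them to your own cluster):

    store path (server = 1, client = 4,
        conninfo = 'dbname=livesystem host=originnode.example.info');
    store path (server = 4, client = 1,
        conninfo = 'dbname=livesystem host=subscribernode.example.info');
    store listen (origin = 1, receiver = 4, provider = 1);

    # Only once the path and listen events have been confirmed on all
    # nodes should the subscription itself be issued:
    subscribe set (id = 1, provider = 1, receiver = 4, forward = no);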