CVS User Account cvsuser
Fri Oct 6 13:19:44 PDT 2006
Log Message:
-----------
Add documentation for slony1_extract_schema.sh tool

Modified Files:
--------------
    slony1-engine/doc/adminguide:
        addthings.sgml (r1.22 -> r1.23)
        adminscripts.sgml (r1.39 -> r1.40)
        ddlchanges.sgml (r1.28 -> r1.29)
        faq.sgml (r1.65 -> r1.66)

-------------- next part --------------
Index: addthings.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/addthings.sgml,v
retrieving revision 1.22
retrieving revision 1.23
diff -Ldoc/adminguide/addthings.sgml -Ldoc/adminguide/addthings.sgml -u -w -r1.22 -r1.23
--- doc/adminguide/addthings.sgml
+++ doc/adminguide/addthings.sgml
@@ -217,10 +217,10 @@
 </para>
 
 <para> If you do not have a perfectly clean SQL script to add in the
-tables, then run the tool <command> slony1_extract_schema.sh</command>
-from the <filename>tools</filename> directory to get the user schema
-from the origin node with all &slony1; <quote>cruft</quote>
-removed.  </para>
+tables, then run the tool <link linkend="extractschema"> <command>
+slony1_extract_schema.sh</command> </link> from the
+<filename>tools</filename> directory to get the user schema from the
+origin node with all &slony1; <quote>cruft</quote> removed.  </para>
 </listitem>
 
 <listitem><para> If the node had been a failed node, you may need to
Index: adminscripts.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.39
retrieving revision 1.40
diff -Ldoc/adminguide/adminscripts.sgml -Ldoc/adminguide/adminscripts.sgml -u -w -r1.39 -r1.40
--- doc/adminguide/adminscripts.sgml
+++ doc/adminguide/adminscripts.sgml
@@ -402,11 +402,39 @@
 <para> In effect, you could run this every five minutes, and it would
 launch any missing &lslon; processes. </para>
 </sect2>
+
+<sect2 id="extractschema"><title> <filename> slony1_extract_schema.sh </filename> </title>
+
+<indexterm><primary>script - slony1_extract_schema.sh</primary></indexterm>
+
+<para> You may find that you wish to create a new node well after
+creating a cluster.  The script <filename>
+slony1_extract_schema.sh </filename> will help you with this.</para>
+
+<para> A command line might look like the following:</para>
+
+<para><command> PGPORT=5881 PGHOST=master.int.example.info ./slony1_extract_schema.sh payroll payroll temppayroll </command> </para>
+
+<para> It performs the following:</para>
+
+<itemizedlist>
+<listitem><para> It dumps the origin node's schema, including the data in the &slony1; cluster schema. </para>
+
+<para> Note that the extra environment variables <envar>PGPORT</envar>
+and <envar>PGHOST</envar> indicate additional information about where
+the database resides. </para></listitem>
+
+<listitem><para> This data is loaded into the freshly created temporary database, <envar>temppayroll</envar>. </para> </listitem>
+<listitem><para> The table and sequence OIDs in &slony1; tables are corrected to point to the temporary database's configuration. </para> </listitem>
+<listitem><para> A slonik script is run to perform <xref linkend="stmtuninstallnode"> on the temporary database.  This removes all the special &slony1; tables and schema, and drops &slony1; triggers from replicated tables. </para> </listitem>
+<listitem><para> Finally, <application>pg_dump</application> is run against the temporary database, delivering a copy of the cleaned up schema to standard output. </para> </listitem>
+</itemizedlist>
+
+</sect2>
 <sect2><title> slony-cluster-analysis </title>
 
 <indexterm><primary>script - slony-cluster-analysis</primary></indexterm>
 
-
 <para> If you are running a lot of replicated databases, where there
 are numerous &slony1; clusters, it can get painful to track and
 document this.  The following tools may be of some assistance in this.</para>
Index: ddlchanges.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/ddlchanges.sgml,v
retrieving revision 1.28
retrieving revision 1.29
diff -Ldoc/adminguide/ddlchanges.sgml -Ldoc/adminguide/ddlchanges.sgml -u -w -r1.28 -r1.29
--- doc/adminguide/ddlchanges.sgml
+++ doc/adminguide/ddlchanges.sgml
@@ -251,6 +251,13 @@
 should catch up quickly enough once the index is
 created.</para></listitem>
 
+<listitem><para> &slony1; stores the <quote>primary index</quote> name
+in <xref linkend="table.sl-table">, and uses that name to control what
+columns are considered the <quote>key columns</quote> when the log
+trigger is attached.  It would be plausible to drop that index and
+replace it with another primary key candidate, but renaming that
+candidate would break replication, as the stored name would no longer match. </para> </listitem>
+
 </itemizedlist></para></sect2>
 
 <sect2><title> Testing DDL Changes </title>
Index: faq.sgml
===================================================================
RCS file: /usr/local/cvsroot/slony1/slony1-engine/doc/adminguide/faq.sgml,v
retrieving revision 1.65
retrieving revision 1.66
diff -Ldoc/adminguide/faq.sgml -Ldoc/adminguide/faq.sgml -u -w -r1.65 -r1.66
--- doc/adminguide/faq.sgml
+++ doc/adminguide/faq.sgml
@@ -1416,8 +1416,9 @@
 <emphasis>essential functionality of <xref linkend="stmtsetdroptable">
 involves the functionality in <function>droptable_int()</function>.
 You can fiddle this by hand by finding the table ID for the table you
-want to get rid of, which you can find in <xref linkend="table.sl-table">, and then run the
-following three queries, on each host:</emphasis>
+want to get rid of, which you can find in <xref
+linkend="table.sl-table">, and then run the following three queries,
+on each host:</emphasis>
 
 <programlisting>
   select _slonyschema.alterTableRestore(40);



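For reference, the sequence of steps described in the new adminscripts.sgml section could be sketched as shell commands along the following lines. This is a hedged sketch of the documented behavior, not the tool's actual source; the database names and cluster name follow the payroll example above, while the intermediate file names are illustrative placeholders, and a reachable PostgreSQL server is assumed.

```shell
#!/bin/sh
# Sketch of the steps slony1_extract_schema.sh is documented to perform.
ORIGINDB=payroll        # origin database (from the example invocation)
CLUSTER=payroll         # Slony-I cluster name
TEMPDB=temppayroll      # scratch database for the cleaned-up schema

# 1. Dump the origin node's schema, plus the data in the cluster schema
pg_dump --schema-only "$ORIGINDB" > schema.sql
pg_dump --data-only --schema="_$CLUSTER" "$ORIGINDB" > clusterdata.sql

# 2. Load both into a freshly created temporary database
createdb "$TEMPDB"
psql -d "$TEMPDB" -f schema.sql
psql -d "$TEMPDB" -f clusterdata.sql

# 3. (The real tool corrects table and sequence OIDs in the Slony-I
#    configuration tables here so they point at $TEMPDB's objects.)

# 4. Remove the special Slony-I tables, schema, and triggers via slonik
slonik <<EOF
cluster name = $CLUSTER;
node 1 admin conninfo = 'dbname=$TEMPDB';
uninstall node (id = 1);
EOF

# 5. Deliver the cleaned-up schema on standard output
pg_dump --schema-only "$TEMPDB"

dropdb "$TEMPDB"
```

As in the documented example, PGPORT and PGHOST can be set in the environment to point these commands at the origin node's server.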