From cbbrowne at lists.slony.info  Fri Jun  5 10:54:49 2009
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Fri Jun  5 10:54:52 2009
Subject: [Slony1-commit] slony1-www/content frontpage.txt
Message-ID: <20090605175449.EA0B7290439@main.slony.info>

Update of /home/cvsd/slony1/slony1-www/content
In directory main.slony.info:/tmp/cvs-serv3544/content

Modified Files:
	frontpage.txt 
Log Message:
Add in link to RPMs

Index: frontpage.txt
===================================================================
RCS file: /home/cvsd/slony1/slony1-www/content/frontpage.txt,v
retrieving revision 1.34
retrieving revision 1.35
diff -C2 -d -r1.34 -r1.35
*** frontpage.txt	11 May 2009 19:57:24 -0000	1.34
--- frontpage.txt	5 Jun 2009 17:54:47 -0000	1.35
***************
*** 5,13 ****
  release notes.  This version fixes quite a number of issues found in
  early use of version 2.0.
- ---
- Slony-I 1.2.16 Released

  See the "news" area for more details, including a copy of the
  release notes.  This version fixes issues relating to FAILOVER.
  ---
  Slony-I 2.0.1 Released
--- 5,15 ----
  release notes.  This version fixes quite a number of issues found in
  early use of version 2.0.

  See the "news" area for more details, including a copy of the
  release notes.  This version fixes issues relating to FAILOVER.
+ 
+ Source RPMs (SRPMs) are available
+ here
  ---
  Slony-I 1.2.16 Released
  ---
  Slony-I 2.0.1 Released

From cbbrowne at lists.slony.info  Fri Jun  5 12:10:26 2009
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Fri Jun  5 12:10:27 2009
Subject: [Slony1-commit] slony1-engine/tests/testseqnames README generate_dml.sh init_add_tables.ik init_schema.sql
Message-ID: <20090605191026.887AF290BF1@main.slony.info>

Update of /home/cvsd/slony1/slony1-engine/tests/testseqnames
In directory main.slony.info:/tmp/cvs-serv11036

Modified Files:
      Tag: REL_2_0_STABLE
	README generate_dml.sh init_add_tables.ik init_schema.sql 
Log Message:
Update sequence test to validate that big ID #'s do not cause grief

Index: README
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/tests/testseqnames/README,v
retrieving revision 1.1
retrieving revision 1.1.6.1
diff -C2 -d -r1.1 -r1.1.6.1
*** README	15 Nov 2005 21:25:34 -0000	1.1
--- README	5 Jun 2009 19:10:24 -0000	1.1.6.1
***************
*** 3,4 ****
--- 3,8 ----
  This test involves creating some sequences with wacky names involving
  StudlyCaps, spaces, and ".".
+ 
+ It also creates a Large Number of sequences, to validate that
+ we don't break down with either large quantities of them, or
+ if the IDs are large numbers

Index: generate_dml.sh
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/tests/testseqnames/generate_dml.sh,v
retrieving revision 1.5.2.1
retrieving revision 1.5.2.2
diff -C2 -d -r1.5.2.1 -r1.5.2.2
*** generate_dml.sh	28 Apr 2009 21:48:20 -0000	1.5.2.1
--- generate_dml.sh	5 Jun 2009 19:10:24 -0000	1.5.2.2
***************
*** 25,29 ****
  GENDATA="$mktmp/generate.data"
  echo "" > ${GENDATA}
! numrows=$(random_number 50 1000)
  i=0;
  trippoint=`expr $numrows / 20`
--- 25,29 ----
  GENDATA="$mktmp/generate.data"
  echo "" > ${GENDATA}
! numrows=$(random_number 25 35)
  i=0;
  trippoint=`expr $numrows / 20`
***************
*** 45,48 ****
--- 45,57 ----
  echo "select nextval('\"Schema.name\".\"a.periodic.sequence\"');" >> $GENDATA
  echo "select nextval('\"Studly Spacey Schema\".\"user\"');" >> $GENDATA
+ for d4 in 8 3 9 0 6 7 1 4 5 2; do
+   for d2 in 0 2 1 3 9 5 6 4 8 7; do
+     for d1 in 0 1; do
+       for d3 in 5 2 1 6 4 8 3 9 0 7 ; do
+         echo "select nextval('public.seq40${d1}${d2}${d3}${d4}');" >> $GENDATA
+       done
+     done
+   done
+ done
  if [ ${i} -ge ${numrows} ]; then
    break;

Index: init_add_tables.ik
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/tests/testseqnames/init_add_tables.ik,v
retrieving revision 1.2
retrieving revision 1.2.2.1
diff -C2 -d -r1.2 -r1.2.2.1
*** init_add_tables.ik	18 Apr 2007 19:26:54 -0000	1.2
--- init_add_tables.ik	5 Jun 2009 19:10:24 -0000	1.2.2.1
***************
*** 7,8 ****
--- 7,2009 ----
  set add sequence (set id = 1, origin = 1, id = 3, fully qualified name = '"Schema.name"."a.periodic.sequence"');
+ set add sequence (set id = 1, origin = 1, id = 23400000, fully qualified name = 'public.seq400000');
+ set add sequence (set id = 1, origin = 1, id = 23400001, fully qualified name = 'public.seq400001');
+ set add sequence (set id = 1, origin = 1, id = 23400002, fully qualified name = 'public.seq400002');
+ set add sequence (set id = 1, origin = 1, id = 23400003, fully qualified name = 'public.seq400003');
+ set add sequence (set id = 1, origin = 1, id = 23400004, fully qualified name = 'public.seq400004');
+ set add sequence (set id = 1, origin = 1, id = 23400005, fully qualified name = 'public.seq400005');
+ set add sequence (set id = 1, origin = 1, id = 23400006, fully qualified name = 'public.seq400006');
[...1974 lines suppressed...]
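[Editorial note: the four shuffled digit loops added to generate_dml.sh above are meant to touch every sequence once. A standalone sketch (independent of the test harness) confirming they emit 2000 distinct names, public.seq400000 through public.seq401999:]

```shell
# Sketch: replay the digit loops from generate_dml.sh and count the
# distinct sequence names they produce.  10 * 10 * 2 * 10 digit
# combinations, each distinct, so the count printed is 2000.
for d4 in 8 3 9 0 6 7 1 4 5 2; do
  for d2 in 0 2 1 3 9 5 6 4 8 7; do
    for d1 in 0 1; do
      for d3 in 5 2 1 6 4 8 3 9 0 7; do
        echo "public.seq40${d1}${d2}${d3}${d4}"
      done
    done
  done
done | sort -u | wc -l
```

The shuffled digit orders only change the sequence in which names appear; coverage is unaffected.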
+ set add sequence (set id = 1, origin = 1, id = 23401981, fully qualified name = 'public.seq401981');
+ set add sequence (set id = 1, origin = 1, id = 23401982, fully qualified name = 'public.seq401982');
+ set add sequence (set id = 1, origin = 1, id = 23401983, fully qualified name = 'public.seq401983');
+ set add sequence (set id = 1, origin = 1, id = 23401984, fully qualified name = 'public.seq401984');
+ set add sequence (set id = 1, origin = 1, id = 23401985, fully qualified name = 'public.seq401985');
+ set add sequence (set id = 1, origin = 1, id = 23401986, fully qualified name = 'public.seq401986');
+ set add sequence (set id = 1, origin = 1, id = 23401987, fully qualified name = 'public.seq401987');
+ set add sequence (set id = 1, origin = 1, id = 23401988, fully qualified name = 'public.seq401988');
+ set add sequence (set id = 1, origin = 1, id = 23401989, fully qualified name = 'public.seq401989');
+ set add sequence (set id = 1, origin = 1, id = 23401990, fully qualified name = 'public.seq401990');
+ set add sequence (set id = 1, origin = 1, id = 23401991, fully qualified name = 'public.seq401991');
+ set add sequence (set id = 1, origin = 1, id = 23401992, fully qualified name = 'public.seq401992');
+ set add sequence (set id = 1, origin = 1, id = 23401993, fully qualified name = 'public.seq401993');
+ set add sequence (set id = 1, origin = 1, id = 23401994, fully qualified name = 'public.seq401994');
+ set add sequence (set id = 1, origin = 1, id = 23401995, fully qualified name = 'public.seq401995');
+ set add sequence (set id = 1, origin = 1, id = 23401996, fully qualified name = 'public.seq401996');
+ set add sequence (set id = 1, origin = 1, id = 23401997, fully qualified name = 'public.seq401997');
+ set add sequence (set id = 1, origin = 1, id = 23401998, fully qualified name = 'public.seq401998');
+ set add sequence (set id = 1, origin = 1, id = 23401999, fully qualified name = 'public.seq401999');
+ set add sequence (set id = 1, origin = 1, id = 23402000, fully qualified name = 'public.seq402000');

Index: init_schema.sql
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/tests/testseqnames/init_schema.sql,v
retrieving revision 1.2
retrieving revision 1.2.2.1
diff -C2 -d -r1.2 -r1.2.2.1
*** init_schema.sql	18 Apr 2007 19:26:54 -0000	1.2
--- init_schema.sql	5 Jun 2009 19:10:24 -0000	1.2.2.1
***************
*** 27,28 ****
--- 27,2029 ----
  create sequence "Studly Spacey Schema"."user";
  create sequence "Schema.name"."a.periodic.sequence";
+ create sequence public.seq400000;
+ create sequence public.seq400001;
+ create sequence public.seq400002;
+ create sequence public.seq400003;
+ create sequence public.seq400004;
+ create sequence public.seq400005;
+ create sequence public.seq400006;
[...1974 lines suppressed...]
+ create sequence public.seq401981;
+ create sequence public.seq401982;
+ create sequence public.seq401983;
+ create sequence public.seq401984;
+ create sequence public.seq401985;
+ create sequence public.seq401986;
+ create sequence public.seq401987;
+ create sequence public.seq401988;
+ create sequence public.seq401989;
+ create sequence public.seq401990;
+ create sequence public.seq401991;
+ create sequence public.seq401992;
+ create sequence public.seq401993;
+ create sequence public.seq401994;
+ create sequence public.seq401995;
+ create sequence public.seq401996;
+ create sequence public.seq401997;
+ create sequence public.seq401998;
+ create sequence public.seq401999;
+ create sequence public.seq402000;

From cbbrowne at lists.slony.info  Tue Jun  9 14:38:29 2009
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Tue Jun  9 14:38:31 2009
Subject: [Slony1-commit] slony1-engine/doc/adminguide adminscripts.sgml
Message-ID: <20090609213829.E3411148228@main.slony.info>

Update of /home/cvsd/slony1/slony1-engine/doc/adminguide
In directory main.slony.info:/tmp/cvs-serv4394/doc/adminguide

Modified Files:
      Tag: REL_2_0_STABLE
	adminscripts.sgml 
Log Message:
Add in a slonik configuration dump tool, that will be helpful when
doing upgrades from 1.2 to 2.0, along with documentation.

Index: adminscripts.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v
retrieving revision 1.52.2.1
retrieving revision 1.52.2.2
diff -C2 -d -r1.52.2.1 -r1.52.2.2
*** adminscripts.sgml	15 Dec 2008 23:30:58 -0000	1.52.2.1
--- adminscripts.sgml	9 Jun 2009 21:38:27 -0000	1.52.2.2
***************
*** 759,762 ****
--- 759,812 ----
+ slonikconfdump.sh
+ 
+ slonik configuration dump
+ 
+ The tool tools/slonikconfdump.sh was
+ created to help dump out a &lslonik; script to duplicate the
+ configuration of a functioning &slony1; cluster.
+ 
+ It dumps out:
+ 
+   Cluster name
+   Node connection information  Note that it uses the first value it
+   finds (e.g. - for the lowest numbered client node).
+   Nodes
+   Sets
+   Tables
+   Sequences
+   Subscriptions
+ 
+ It may be run as follows:
+ 
+ chris@dba2:Slony-I/CMD/slony1-2.0/tools> SLONYCLUSTER=slony_regress1 PGDATABASE=slonyregress1 bash slonikconfdump.sh
+ # building slonik config files for cluster slony_regress1
+ # generated by: slonikconfdump.sh
+ # Generated on: Tue Jun 9 17:34:12 EDT 2009
+ cluster name=slony_regress1;
+ include ; # Draw in ADMIN CONNINFO lines
+ node 1 admin conninfo='dbname=slonyregress1 host=localhost user=chris port=7083';
+ node 2 admin conninfo='dbname=slonyregress2 host=localhost user=chris port=7083';
+ init cluster (id=1, comment='Regress test node');
+ store node (id=2, comment='node 2');
+ store path (server=1, client=2, conninfo='dbname=slonyregress1 host=localhost user=chris port=7083', connretry=10);
+ store path (server=2, client=1, conninfo='dbname=slonyregress2 host=localhost user=chris port=7083', connretry=10);
+ create set (id=1, origin=1, comment='All test1 tables');
+ set add table (id=1, set id=1, origin=1, fully qualified name='"public"."table1"', comment='accounts table', key='table1_pkey');
+ set add table (id=2, set id=1, origin=1, fully qualified name='"public"."table2"', comment='public.table2', key='table2_id_key');
+ set add table (id=4, set id=1, origin=1, fully qualified name='"public"."table4"', comment='a table of many types', key='table4_pkey');
+ set add table (id=5, set id=1, origin=1, fully qualified name='"public"."table5"', comment='a table with composite PK strewn across the table', key='table5_pkey');
+ subscribe set (id=1, provider=1, receiver=2, forward=YES);
+ chris@dba2:Slony-I/CMD/slony1-2.0/tools>
+ 
+ The output should be reviewed before it is applied elsewhere;
+ particular attention should be paid to the ADMIN
+ CONNINFO statements, as it picks the first value that it
+ sees for each node; in a complex environment, it may not pull out the
+ right value.

Using per-directory sticky tag `REL_2_0_STABLE'

From cbbrowne at lists.slony.info  Fri Jun 12 13:34:01 2009
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Fri Jun 12 13:34:02 2009
Subject: [Slony1-commit] slony1-engine/tests/testomitcopy README gen_weak_user.sh generate_dml.sh init_add_tables.ik init_cluster.ik init_create_set.ik init_data.sql init_schema.sql init_subscribe_set.ik schema.diff settings.ik
Message-ID: <20090612203401.92DA9290BF5@main.slony.info>

Update of /home/cvsd/slony1/slony1-engine/tests/testomitcopy
In directory main.slony.info:/tmp/cvs-serv24740

Added Files:
      Tag: REL_2_0_STABLE
	README gen_weak_user.sh generate_dml.sh init_add_tables.ik 
	init_cluster.ik init_create_set.ik init_data.sql init_schema.sql 
	init_subscribe_set.ik schema.diff settings.ik 
Log Message:
Add OMIT COPY test to 2.0

--- NEW FILE: settings.ik ---
NUMCLUSTERS=${NUMCLUSTERS:-"1"}
NUMNODES=${NUMNODES:-"2"}
ORIGINNODE=1
WORKERS=${WORKERS:-"1"}

--- NEW FILE: init_cluster.ik ---
init cluster (id=1, comment = 'Regress test node');

echo 'update functions on node 1 after initializing it';
update functions (id=1);

---
NEW FILE: gen_weak_user.sh ---
weakuser=$1;
for i in 1 2 3 4 5; do
  echo "grant select on table public.table${i} to ${weakuser};"
  echo "grant select on table public.table${i}_id_seq to ${weakuser};"
done

--- NEW FILE: generate_dml.sh ---
. support_funcs.sh

init_dml()
{
  echo "init_dml()"
}

begin()
{
  echo "begin()"
}

rollback()
{
  echo "rollback()"
}

commit()
{
  echo "commit()"
}

generate_initdata()
{
  numrows=$(random_number 50 1000)
  i=0;
  trippoint=`expr $numrows / 20`
  j=0;
  percent=0
  status "generating ${numrows} transactions of random data"
  percent=`expr $j \* 5`
  status "$percent %"
  GENDATA="$mktmp/generate.data"
  echo "" > ${GENDATA}
  while : ; do
    txtalen=$(random_number 1 100)
    txta=$(random_string ${txtalen})
    txta=`echo ${txta} | sed -e "s/\\\\\\\/\\\\\\\\\\\\\\/g" -e "s/'/''/g"`
    txtblen=$(random_number 1 100)
    txtb=$(random_string ${txtblen})
    txtb=`echo ${txtb} | sed -e "s/\\\\\\\/\\\\\\\\\\\\\\/g" -e "s/'/''/g"`
    ra=$(random_number 1 9)
    rb=$(random_number 1 9)
    rc=$(random_number 1 9)
    echo "INSERT INTO table1(data) VALUES ('${txta}');" >> $GENDATA
    echo "INSERT INTO table2(table1_id,data) SELECT id, '${txtb}' FROM table1 WHERE data='${txta}';" >> $GENDATA
    echo "INSERT INTO table3(table2_id) SELECT id FROM table2 WHERE data ='${txtb}';" >> $GENDATA
    echo "INSERT INTO table4(numcol,realcol,ptcol,pathcol,polycol,circcol,ipcol,maccol,bitcol) values ('${ra}${rb}.${rc}','${ra}.${rb}${rc}','(${ra},${rb})','((${ra},${ra}),(${rb},${rb}),(${rc},${rc}),(${ra},${rc}))','((${ra},${rb}),(${rc},${ra}),(${rb},${rc}),(${rc},${rb}))','<(${ra},${rb}),${rc}>','192.168.${ra}.${rb}${rc}','08:00:2d:0${ra}:0${rb}:0${rc}',X'${ra}${rb}${rc}');" >> $GENDATA
    echo "INSERT INTO table5(d1,d2,d3,d4,d5,d6,d7,d8,d9,d10,d11) values ('${txta}${ra}','${txta}${rb}','${txta}${rc}','${txtb}${ra}','${txtb}${rb}','${txtb}${rc}','${txtb}${ra}','${txtb}${rb}','${txtb}${rc}','${txtb}${ra}','${txtb}${rb}');" >> $GENDATA
    if [ ${i} -ge ${numrows} ]; then
      break;
    else
      i=$((${i} +1))
      working=`expr $i % $trippoint`
      if [ $working -eq 0 ]; then
        j=`expr $j + 1`
        percent=`expr $j \* 5`
        status "$percent %"
      fi
    fi
  done
  status "done"
}

do_initdata()
{
  originnode=${ORIGINNODE:-"1"}
  eval db=\$DB${originnode}
  eval host=\$HOST${originnode}
  eval user=\$USER${originnode}
  eval port=\$PORT${originnode}
  generate_initdata
  status "run updateReloid() - equivalent to REPAIR NODE"
  $pgbindir/psql -h $host -p $port -d $db -U $user -c "select \"_${CLUSTER1}\".updateReloid(1, 0);" 1> $mktmp/reloidtest.log 2> $mktmp/reloidtest.log
  status "loading data"
  $pgbindir/psql -h $host -p $port -d $db -U $user < $mktmp/generate.data 1> $mktmp/initdata.log 2> $mktmp/initdata.log
  if [ $? -ne 0 ]; then
    warn 3 "do_initdata failed, see $mktmp/initdata.log for details"
  fi
  status "data load complete"
  wait_for_catchup
  status "done"
}

--- NEW FILE: init_add_tables.ik ---
set add table (id=1, set id=1, origin=1, fully qualified name = 'public.table1', comment='accounts table');
set add table (id=2, set id=1, origin=1, fully qualified name = 'public.table2', key='table2_id_key');
set add table (id=3, set id=1, origin=1, fully qualified name = 'public.table4', comment='a table of many types');
set add table (id=4, set id=1, origin=1, fully qualified name = 'public.table5', comment='a table with composite PK strewn across the table');

--- NEW FILE: init_create_set.ik ---
create set (id=1, origin=1, comment='All test1 tables');

--- NEW FILE: init_data.sql ---
INSERT INTO table1(data) VALUES ('placeholder 1');
INSERT INTO table1(data) VALUES ('placeholder 2');
INSERT INTO table2(table1_id,data) VALUES (1,'placeholder 1');
INSERT INTO table2(table1_id,data) VALUES (2,'placeholder 2');
INSERT INTO table4(numcol,realcol,ptcol,pathcol,polycol,circcol,ipcol,maccol,bitcol) values ('74.0','7.40','(7,4)','((7,7),(4,4),(0,0),(7,0))','((7,4),(0,7),(4,0),(0,4))','<(7,4),0>','192.168.7.40','08:00:2d:07:04:00',X'740');
INSERT INTO table4(numcol,realcol,ptcol,pathcol,polycol,circcol,ipcol,maccol,bitcol) values ('93.1','9.31','(9,3)','((9,9),(3,3),(1,1),(9,1))','((9,3),(1,9),(3,1),(1,3))','<(9,3),1>','192.168.9.31','08:00:2d:09:03:01',X'931');

--- NEW FILE: init_schema.sql ---
CREATE TABLE table1(
  id SERIAL PRIMARY KEY,
  data TEXT
);

CREATE TABLE table2(
  id SERIAL UNIQUE NOT NULL,
  table1_id INT4 REFERENCES table1(id) ON UPDATE CASCADE ON DELETE CASCADE,
  data TEXT
);

create table table3 (
  id serial NOT NULL,
  id2 integer
);
create unique index no_good_candidate_pk on table3 (id, id2);

create table table4 (
  id serial primary key,
  numcol numeric(12,4),    -- 1.23
  realcol real,            -- (1.23)
  ptcol point,             -- (1,2)
  pathcol path,            -- ((1,1),(2,2),(3,3),(4,4))
  polycol polygon,         -- ((1,1),(2,2),(3,3),(4,4))
  circcol circle,          -- <(1,2>,3>
  ipcol inet,              -- "192.168.1.1"
  maccol macaddr,          -- "04:05:06:07:08:09"
  bitcol bit varying(20)   -- X'123'
);

create table table5 (
  id serial,
  d1 text,
  d2 text,
  id2 serial,
  d3 text,
  d4 text,
  d5 text,
  d6 text,
  id3 serial,
  d7 text,
  d8 text,
  d9 text,
  d10 text,
  d11 text,
  primary key(id, id2, id3)
);

--- NEW FILE: schema.diff ---
SELECT id,data FROM table1 ORDER BY id
SELECT id,table1_id,data FROM table2 ORDER BY id
SELECT id,numcol,realcol,ptcol,pathcol,polycol,circcol,ipcol,maccol, bitcol from table4 order by id
SELECT id,d1,d2,id2,d3,d4,d5,d6,id3,d7,d8,d9,d10,d11 from table5 order by id,id2,id3

--- NEW FILE: README ---
$Id: README,v 1.1.2.1 2009-06-12 20:33:59 cbbrowne Exp $

This test validates the OMIT COPY functionality added to SUBSCRIBE SET

It creates three simple tables as one replication set, and replicates
them from one database to another.

The tables are of the several interesting types:

1. table1 has a formal primary key
2. table2 lacks a formal primary key, but has a candidate primary key
3. table4 which has columns of all sorts of vaguely esoteric types to
   exercise that points, paths, bitmaps, mac addresses, and inet types
   replicate properly.
4. table5 has a composite primary key (on id1,id2,id3) where the
   primary key attributes are strewn throughout the table.  This is to
   make sure we have a case that exercises the logic that changed with
   bug #18.

--- NEW FILE: init_subscribe_set.ik ---
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes, omit copy=true);
echo 'sleep a couple of seconds...';
sleep (seconds = 2);
echo 'done sleeping...';

From cbbrowne at lists.slony.info  Fri Jun 12 13:34:46 2009
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Fri Jun 12 13:34:48 2009
Subject: [Slony1-commit] slony1-engine/tests/testomitcopy README gen_weak_user.sh generate_dml.sh init_add_tables.ik init_cluster.ik init_create_set.ik init_data.sql init_schema.sql init_subscribe_set.ik schema.diff settings.ik
Message-ID: <20090612203446.A9973290BF5@main.slony.info>

Update of /home/cvsd/slony1/slony1-engine/tests/testomitcopy
In directory main.slony.info:/tmp/cvs-serv24772

Added Files:
	README gen_weak_user.sh generate_dml.sh init_add_tables.ik 
	init_cluster.ik init_create_set.ik init_data.sql init_schema.sql 
	init_subscribe_set.ik schema.diff settings.ik 
Log Message:
Add OMIT COPY tests to HEAD

--- NEW FILE: settings.ik ---
NUMCLUSTERS=${NUMCLUSTERS:-"1"}
NUMNODES=${NUMNODES:-"2"}
ORIGINNODE=1
WORKERS=${WORKERS:-"1"}

--- NEW FILE: init_cluster.ik ---
init cluster (id=1, comment = 'Regress test node');

echo 'update functions on node 1 after initializing it';
update functions (id=1);

--- NEW FILE: gen_weak_user.sh ---
weakuser=$1;
for i in 1 2 3 4 5; do
  echo "grant select on table public.table${i} to ${weakuser};"
  echo "grant select on table public.table${i}_id_seq to ${weakuser};"
done

--- NEW FILE: generate_dml.sh ---
. support_funcs.sh

init_dml()
{
  echo "init_dml()"
}

begin()
{
  echo "begin()"
}

rollback()
{
  echo "rollback()"
}

commit()
{
  echo "commit()"
}

generate_initdata()
{
  numrows=$(random_number 50 1000)
  i=0;
  trippoint=`expr $numrows / 20`
  j=0;
  percent=0
  status "generating ${numrows} transactions of random data"
  percent=`expr $j \* 5`
  status "$percent %"
  GENDATA="$mktmp/generate.data"
  echo "" > ${GENDATA}
  while : ; do
    txtalen=$(random_number 1 100)
    txta=$(random_string ${txtalen})
    txta=`echo ${txta} | sed -e "s/\\\\\\\/\\\\\\\\\\\\\\/g" -e "s/'/''/g"`
    txtblen=$(random_number 1 100)
    txtb=$(random_string ${txtblen})
    txtb=`echo ${txtb} | sed -e "s/\\\\\\\/\\\\\\\\\\\\\\/g" -e "s/'/''/g"`
    ra=$(random_number 1 9)
    rb=$(random_number 1 9)
    rc=$(random_number 1 9)
    echo "INSERT INTO table1(data) VALUES ('${txta}');" >> $GENDATA
    echo "INSERT INTO table2(table1_id,data) SELECT id, '${txtb}' FROM table1 WHERE data='${txta}';" >> $GENDATA
    echo "INSERT INTO table3(table2_id) SELECT id FROM table2 WHERE data ='${txtb}';" >> $GENDATA
    echo "INSERT INTO table4(numcol,realcol,ptcol,pathcol,polycol,circcol,ipcol,maccol,bitcol) values ('${ra}${rb}.${rc}','${ra}.${rb}${rc}','(${ra},${rb})','((${ra},${ra}),(${rb},${rb}),(${rc},${rc}),(${ra},${rc}))','((${ra},${rb}),(${rc},${ra}),(${rb},${rc}),(${rc},${rb}))','<(${ra},${rb}),${rc}>','192.168.${ra}.${rb}${rc}','08:00:2d:0${ra}:0${rb}:0${rc}',X'${ra}${rb}${rc}');" >> $GENDATA
    echo "INSERT INTO table5(d1,d2,d3,d4,d5,d6,d7,d8,d9,d10,d11) values ('${txta}${ra}','${txta}${rb}','${txta}${rc}','${txtb}${ra}','${txtb}${rb}','${txtb}${rc}','${txtb}${ra}','${txtb}${rb}','${txtb}${rc}','${txtb}${ra}','${txtb}${rb}');" >> $GENDATA
    if [ ${i} -ge ${numrows} ]; then
      break;
    else
      i=$((${i} +1))
      working=`expr $i % $trippoint`
      if [ $working -eq 0 ]; then
        j=`expr $j + 1`
        percent=`expr $j \* 5`
        status "$percent %"
      fi
    fi
  done
  status "done"
}

do_initdata()
{
  originnode=${ORIGINNODE:-"1"}
  eval db=\$DB${originnode}
  eval host=\$HOST${originnode}
  eval user=\$USER${originnode}
  eval port=\$PORT${originnode}
  generate_initdata
  status "run updateReloid() - equivalent to REPAIR NODE"
  $pgbindir/psql -h $host -p $port -d $db -U $user -c "select \"_${CLUSTER1}\".updateReloid(1, 0);" 1> $mktmp/reloidtest.log 2> $mktmp/reloidtest.log
  status "loading data"
  $pgbindir/psql -h $host -p $port -d $db -U $user < $mktmp/generate.data 1> $mktmp/initdata.log 2> $mktmp/initdata.log
  if [ $? -ne 0 ]; then
    warn 3 "do_initdata failed, see $mktmp/initdata.log for details"
  fi
  status "data load complete"
  wait_for_catchup
  status "done"
}

--- NEW FILE: init_add_tables.ik ---
set add table (id=1, set id=1, origin=1, fully qualified name = 'public.table1', comment='accounts table');
set add table (id=2, set id=1, origin=1, fully qualified name = 'public.table2', key='table2_id_key');
set add table (id=3, set id=1, origin=1, fully qualified name = 'public.table4', comment='a table of many types');
set add table (id=4, set id=1, origin=1, fully qualified name = 'public.table5', comment='a table with composite PK strewn across the table');

--- NEW FILE: init_create_set.ik ---
create set (id=1, origin=1, comment='All test1 tables');

--- NEW FILE: init_data.sql ---
INSERT INTO table1(data) VALUES ('placeholder 1');
INSERT INTO table1(data) VALUES ('placeholder 2');
INSERT INTO table2(table1_id,data) VALUES (1,'placeholder 1');
INSERT INTO table2(table1_id,data) VALUES (2,'placeholder 2');
INSERT INTO table4(numcol,realcol,ptcol,pathcol,polycol,circcol,ipcol,maccol,bitcol) values ('74.0','7.40','(7,4)','((7,7),(4,4),(0,0),(7,0))','((7,4),(0,7),(4,0),(0,4))','<(7,4),0>','192.168.7.40','08:00:2d:07:04:00',X'740');
INSERT INTO table4(numcol,realcol,ptcol,pathcol,polycol,circcol,ipcol,maccol,bitcol) values ('93.1','9.31','(9,3)','((9,9),(3,3),(1,1),(9,1))','((9,3),(1,9),(3,1),(1,3))','<(9,3),1>','192.168.9.31','08:00:2d:09:03:01',X'931');

--- NEW FILE: init_schema.sql ---
CREATE TABLE table1(
  id SERIAL PRIMARY KEY,
  data TEXT
);

CREATE TABLE table2(
  id SERIAL UNIQUE NOT NULL,
  table1_id INT4 REFERENCES table1(id) ON UPDATE CASCADE ON DELETE CASCADE,
  data TEXT
);

create table table3 (
  id serial NOT NULL,
  id2 integer
);
create unique index no_good_candidate_pk on table3 (id, id2);

create table table4 (
  id serial primary key,
  numcol numeric(12,4),    -- 1.23
  realcol real,            -- (1.23)
  ptcol point,             -- (1,2)
  pathcol path,            -- ((1,1),(2,2),(3,3),(4,4))
  polycol polygon,         -- ((1,1),(2,2),(3,3),(4,4))
  circcol circle,          -- <(1,2>,3>
  ipcol inet,              -- "192.168.1.1"
  maccol macaddr,          -- "04:05:06:07:08:09"
  bitcol bit varying(20)   -- X'123'
);

create table table5 (
  id serial,
  d1 text,
  d2 text,
  id2 serial,
  d3 text,
  d4 text,
  d5 text,
  d6 text,
  id3 serial,
  d7 text,
  d8 text,
  d9 text,
  d10 text,
  d11 text,
  primary key(id, id2, id3)
);

--- NEW FILE: schema.diff ---
SELECT id,data FROM table1 ORDER BY id
SELECT id,table1_id,data FROM table2 ORDER BY id
SELECT id,numcol,realcol,ptcol,pathcol,polycol,circcol,ipcol,maccol, bitcol from table4 order by id
SELECT id,d1,d2,id2,d3,d4,d5,d6,id3,d7,d8,d9,d10,d11 from table5 order by id,id2,id3

--- NEW FILE: README ---
$Id: README,v 1.2 2009-06-12 20:34:44 cbbrowne Exp $

This test validates the OMIT COPY functionality added to SUBSCRIBE SET

It creates three simple tables as one replication set, and replicates
them from one database to another.

The tables are of the several interesting types:

1. table1 has a formal primary key
2. table2 lacks a formal primary key, but has a candidate primary key
3. table4 which has columns of all sorts of vaguely esoteric types to
   exercise that points, paths, bitmaps, mac addresses, and inet types
   replicate properly.
4. table5 has a composite primary key (on id1,id2,id3) where the
   primary key attributes are strewn throughout the table.  This is to
   make sure we have a case that exercises the logic that changed with
   bug #18.
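[Editorial note: the `sed -e "s/'/''/g"` step in generate_dml.sh above doubles single quotes so the random text stays a legal SQL string literal inside the generated INSERT statements. A minimal standalone sketch of that step; the sample string is hypothetical:]

```shell
# Sketch: escape embedded single quotes before splicing text into a
# SQL string literal, as generate_dml.sh does for txta/txtb.
txta="it's a test"
txta=$(echo "${txta}" | sed -e "s/'/''/g")
echo "INSERT INTO table1(data) VALUES ('${txta}');"
# INSERT INTO table1(data) VALUES ('it''s a test');
```

Doubling the quote is standard SQL escaping, so the generated file loads cleanly through psql even when random_string happens to emit quote characters.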
--- NEW FILE: init_subscribe_set.ik --- subscribe set (id = 1, provider = 1, receiver = 2, forward = yes, omit copy=true); echo 'sleep a couple of seconds...'; sleep (seconds = 2); echo 'done sleeping...'; From cbbrowne at lists.slony.info Fri Jun 12 15:42:15 2009 From: cbbrowne at lists.slony.info (Chris Browne) Date: Fri Jun 12 15:42:18 2009 Subject: [Slony1-commit] slony1-engine/tools slonikconfdump.sh Message-ID: <20090612224215.AAB362902D3@main.slony.info> Update of /home/cvsd/slony1/slony1-engine/tools In directory main.slony.info:/tmp/cvs-serv1425/tools Modified Files: Tag: REL_2_0_STABLE slonikconfdump.sh Log Message: Revised slonik dump script based on input from our DBA group... It now uses tsort to determine the subscription ordering, so should work even with fairly sophisticated cascaded subscriptions Index: slonikconfdump.sh =================================================================== RCS file: /home/cvsd/slony1/slony1-engine/tools/Attic/slonikconfdump.sh,v retrieving revision 1.1.2.2 retrieving revision 1.1.2.3 diff -C2 -d -r1.1.2.2 -r1.1.2.3 *** slonikconfdump.sh 10 Jun 2009 21:11:56 -0000 1.1.2.2 --- slonikconfdump.sh 12 Jun 2009 22:42:13 -0000 1.1.2.3 *************** *** 12,21 **** echo "cluster name=${SLONYCLUSTER};" ! echo "include ; # Draw in ADMIN CONNINFO lines" Q="select distinct pa_server from ${SS}.sl_path order by pa_server;" ! PATHS=`psql -qtA -F ":" -c "${Q}"` for svr in `echo ${PATHS}`; do SQ="select pa_conninfo from ${SS}.sl_path where pa_server=${svr} order by pa_client asc limit 1;" ! conninfo=`psql -qtA -F ":" -c "${SQ}"` echo "node ${svr} admin conninfo='${conninfo}';" done --- 12,42 ---- echo "cluster name=${SLONYCLUSTER};" ! function RQ () { ! local QUERY=$1 ! RESULTSET=`psql -qtA -F ":" -R " " -c "${QUERY}"` ! echo ${RESULTSET} ! } ! function argn () { ! local V=$1 ! local n=$2 ! local res=`echo ${V} | cut -d : -f ${n}` ! echo $res ! } ! ! function arg1 () { ! echo `argn "$1" 1` ! } ! function arg2 () { ! 
echo `argn "$1" 2` ! } ! function arg3 () { ! echo `argn "$1" 3` ! } ! Q="select distinct pa_server from ${SS}.sl_path order by pa_server;" ! PATHS=`RQ "${Q}"` for svr in `echo ${PATHS}`; do SQ="select pa_conninfo from ${SS}.sl_path where pa_server=${svr} order by pa_client asc limit 1;" ! conninfo=`RQ "${SQ}"` echo "node ${svr} admin conninfo='${conninfo}';" done *************** *** 23,81 **** Q="select no_id, no_comment from ${SS}.sl_node order by no_id limit 1;" ! NODE1=`psql -qtA -F ":" -c "${Q}"` ! nn=`echo ${NODE1} | cut -d : -f 1` ! comment=`echo ${NODE1} | cut -d : -f 2-` echo "init cluster (id=${nn}, comment='${comment}');" Q="select no_id from ${SS}.sl_node order by no_id offset 1;" ! NODES=`psql -qtA -F ":" -c "${Q}"` for node in `echo ${NODES}`; do CQ="select no_comment from ${SS}.sl_node where no_id = ${node};" ! comment=`psql -qtA -c "${CQ}"` echo "store node (id=${node}, comment='${comment}');" done - #slonyregress1=# select * from sl_path; - # pa_server | pa_client | pa_conninfo | pa_connretry - #-----------+-----------+----------------------------------------------------------+-------------- - # 2 | 1 | dbname=slonyregress2 host=localhost user=chris port=7083 | 10 - # 1 | 2 | dbname=slonyregress1 host=localhost user=chris port=7083 | 10 - #(2 rows) - Q="select pa_server, pa_client, pa_connretry from ${SS}.sl_path order by pa_server, pa_client;" ! PATHS=`psql -qtA -F ":" -R " " -c "${Q}"` for sc in `echo $PATHS`; do ! server=`echo $sc | cut -d : -f 1` ! client=`echo $sc | cut -d : -f 2` ! retry=`echo $sc | cut -d : -f 3` Q2="select pa_conninfo from ${SS}.sl_path where pa_server=${server} and pa_client=${client};" ! conninfo=`psql -qtA -c "${Q2}"` echo "store path (server=${server}, client=${client}, conninfo='${conninfo}', connretry=${retry});" done Q="select set_id, set_origin from ${SS}.sl_set order by set_id;" ! SETS=`psql -qtA -F ":" -R " " -c "${Q}"` for sc in `echo ${SETS}`; do ! set=`echo ${sc} | cut -d : -f 1` ! 
origin=`echo ${sc} | cut -d : -f 2` Q2="select set_comment from ${SS}.sl_set where set_id=${set};" ! comment=`psql -qtA -c "${Q2}"` echo "create set (id=${set}, origin=${origin}, comment='${comment}');" done Q="select tab_id,tab_set, set_origin from ${SS}.sl_table, ${SS}.sl_set where tab_set = set_id order by tab_id;" ! TABS=`psql -qtA -F ":" -R " " -c "${Q}"` for tb in `echo ${TABS}`; do ! tab=`echo ${tb} | cut -d : -f 1` ! set=`echo ${tb} | cut -d : -f 2` ! origin=`echo ${tb} | cut -d : -f 3` RQ="select tab_relname from ${SS}.sl_table where tab_id = ${tab};" ! relname=`psql -qtA -c "${RQ}"` NSQ="select tab_nspname from ${SS}.sl_table where tab_id = ${tab};" ! nsp=`psql -qtA -c "${NSQ}"` IDX="select tab_idxname from ${SS}.sl_table where tab_id = ${tab};" ! idx=`psql -qtA -c "${IDX}"` COM="select tab_comment from ${SS}.sl_table where tab_id = ${tab};" ! comment=`psql -qtA -c "${COM}"` echo "set add table (id=${tab}, set id=${set}, origin=${origin}, fully qualified name='\"${nsp}\".\"${relname}\"', comment='${comment}, key='${idx}');" done --- 44,95 ---- Q="select no_id, no_comment from ${SS}.sl_node order by no_id limit 1;" ! NODE1=`RQ "${Q}"` ! nn=`arg1 "${NODE1}"` ! comment=`arg2 ${NODE1}` echo "init cluster (id=${nn}, comment='${comment}');" Q="select no_id from ${SS}.sl_node order by no_id offset 1;" ! NODES=`RQ "${Q}"` for node in `echo ${NODES}`; do CQ="select no_comment from ${SS}.sl_node where no_id = ${node};" ! comment=`RQ "${CQ}"` echo "store node (id=${node}, comment='${comment}');" done Q="select pa_server, pa_client, pa_connretry from ${SS}.sl_path order by pa_server, pa_client;" ! PATHS=`RQ "${Q}"` for sc in `echo $PATHS`; do ! server=`arg1 $sc` ! client=`arg2 $sc` ! retry=`arg3 $sc` Q2="select pa_conninfo from ${SS}.sl_path where pa_server=${server} and pa_client=${client};" ! 
conninfo=`RQ "${Q2}"`
echo "store path (server=${server}, client=${client}, conninfo='${conninfo}', connretry=${retry});"
done
Q="select set_id, set_origin from ${SS}.sl_set order by set_id;"
! SETS=`RQ "${Q}"`
for sc in `echo ${SETS}`; do
! set=`arg1 ${sc}`
! origin=`arg2 ${sc}`
Q2="select set_comment from ${SS}.sl_set where set_id=${set};"
! comment=`RQ "${Q2}"`
echo "create set (id=${set}, origin=${origin}, comment='${comment}');"
done
Q="select tab_id,tab_set, set_origin from ${SS}.sl_table, ${SS}.sl_set where tab_set = set_id order by tab_id;"
! TABS=`RQ "${Q}"`
for tb in `echo ${TABS}`; do
! tab=`arg1 ${tb}`
! set=`arg2 ${tb}`
! origin=`arg3 ${tb}`
RQ="select tab_relname from ${SS}.sl_table where tab_id = ${tab};"
! relname=`RQ "${RQ}"`
NSQ="select tab_nspname from ${SS}.sl_table where tab_id = ${tab};"
! nsp=`RQ "${NSQ}"`
IDX="select tab_idxname from ${SS}.sl_table where tab_id = ${tab};"
! idx=`RQ "${IDX}"`
COM="select tab_comment from ${SS}.sl_table where tab_id = ${tab};"
! comment=`RQ "${COM}"`
echo "set add table (id=${tab}, set id=${set}, origin=${origin}, fully qualified name='\"${nsp}\".\"${relname}\"', comment='${comment}', key='${idx}');"
done
***************
*** 83,107 ****
Q="select seq_id,seq_set,set_origin from ${SS}.sl_sequence, ${SS}.sl_set where seq_set = set_id order by seq_id;"
! SEQS=`psql -qtA -F ":" -R " " -c "${Q}"`
for sq in `echo ${SEQS}`; do
! seq=`echo ${sq} | cut -d : -f 1`
! set=`echo ${sq} | cut -d : -f 2`
! origin=`echo ${sq} | cut -d : -f 3`
! RQ="select seq_relname from ${SS}.sl_sequence where seq_id = ${seq};"
! relname=`psql -qtA -c "${RQ}"`
NSQ="select seq_nspname from ${SS}.sl_sequence where seq_id = ${seq};"
! nsp=`psql -qtA -c "${NSQ}"`
COM="select seq_comment from ${SS}.sl_sequence where seq_id = ${seq};"
! comment=`psql -qtA -c "${COM}"`
echo "set add sequence(id=${seq}, set id=${set}, origin=${origin}, fully qualified name='\"${nsp}\".\"${relname}\"', comment='${comment}');"
done
!
Q="select sub_set,sub_provider,sub_receiver,case when sub_forward then 'YES' else 'NO' end from ${SS}.sl_subscribe order by sub_set, sub_provider, sub_receiver;"
! SUBS=`psql -qtA -F ":" -R " " -c "${Q}"`
! for sb in `echo ${SUBS}`; do
! set=`echo ${sb} | cut -d : -f 1`
! prov=`echo ${sb} | cut -d : -f 2`
! recv=`echo ${sb} | cut -d : -f 3`
! forw=`echo ${sb} | cut -d : -f 4`
! echo "subscribe set (id=${set}, provider=${prov}, receiver=${recv}, forward=${forw});"
done
--- 97,133 ----
Q="select seq_id,seq_set,set_origin from ${SS}.sl_sequence, ${SS}.sl_set where seq_set = set_id order by seq_id;"
! SEQS=`RQ "${Q}"`
for sq in `echo ${SEQS}`; do
! seq=`arg1 ${sq}`
! set=`arg2 ${sq}`
! origin=`arg3 ${sq}`
! RELQ="select seq_relname from ${SS}.sl_sequence where seq_id = ${seq};"
! relname=`RQ "${RELQ}"`
NSQ="select seq_nspname from ${SS}.sl_sequence where seq_id = ${seq};"
! nsp=`RQ "${NSQ}"`
COM="select seq_comment from ${SS}.sl_sequence where seq_id = ${seq};"
! comment=`RQ "${COM}"`
echo "set add sequence(id=${seq}, set id=${set}, origin=${origin}, fully qualified name='\"${nsp}\".\"${relname}\"', comment='${comment}');"
done
! Q="select set_id, set_origin from ${SS}.sl_set;"
! SETS=`RQ "${Q}"`
! for seti in `echo ${SETS}`; do
! set=`arg1 ${seti}`
! origin=`arg2 ${seti}`
! SUBQ="select sub_provider, sub_receiver from ${SS}.sl_subscribe where sub_set=${set};"
! # We use tsort to determine a feasible ordering for subscriptions
! SUBRES=`psql -qtA -F " " -c "${SUBQ}" | tsort | egrep -v "^${origin}\$"`
! for recv in `echo ${SUBRES}`; do
! SF="select sub_provider, sub_forward from ${SS}.sl_subscribe where sub_set=${set} and sub_receiver=${recv};"
! SR=`RQ "${SF}"`
! prov=`arg1 ${SR}`
! forw=`arg2 ${SR}`
! echo "subscribe set (id=${set}, provider=${prov}, receiver=${recv}, forward=${forw}, omit copy=true);"
! if [ $prov != $origin ]; then
! echo "wait for event (origin=$prov, confirmed=$origin, wait on=$prov, timeout=0);"
! fi
! echo "sync (id=$origin);"
!
echo "wait for event (origin=$origin, confirmed=ALL, wait on=$origin, timeout=0);" ! done done From cbbrowne at lists.slony.info Fri Jun 12 15:42:15 2009 From: cbbrowne at lists.slony.info (Chris Browne) Date: Fri Jun 12 15:42:18 2009 Subject: [Slony1-commit] slony1-engine/doc/adminguide adminscripts.sgml Message-ID: <20090612224215.B6CFC290BE9@main.slony.info> Update of /home/cvsd/slony1/slony1-engine/doc/adminguide In directory main.slony.info:/tmp/cvs-serv1425/doc/adminguide Modified Files: Tag: REL_2_0_STABLE adminscripts.sgml Log Message: Revised slonik dump script based on input from our DBA group... It now uses tsort to determine the subscription ordering, so should work even with fairly sophisticated cascaded subscriptions Index: adminscripts.sgml =================================================================== RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/adminscripts.sgml,v retrieving revision 1.52.2.3 retrieving revision 1.52.2.4 diff -C2 -d -r1.52.2.3 -r1.52.2.4 *** adminscripts.sgml 10 Jun 2009 21:11:56 -0000 1.52.2.3 --- adminscripts.sgml 12 Jun 2009 22:42:13 -0000 1.52.2.4 *************** *** 778,785 **** Subscriptions ! Note that the subscriptions are ordered by set, then by ! provider, then by receiver. This ordering does not necessarily ! indicate the order in which subscriptions need to be ! applied. --- 778,783 ---- Subscriptions ! Note that the subscriptions are ordered topologically, using ! tsort *************** *** 811,817 **** CONNINFO, as it picks the first value that it sees for each node; in a complex environment, where visibility of nodes may vary ! from subnet to subnet, it may not pick the right value. In addition, ! SUBSCRIBE SET statements do not necessarily ! indicate the order in which subscriptions need to be applied. --- 809,813 ---- CONNINFO, as it picks the first value that it sees for each node; in a complex environment, where visibility of nodes may vary ! from subnet to subnet, it may not pick the right value. 
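The revised dump script above determines a feasible subscription order by piping `provider receiver` pairs from `sl_subscribe` through `tsort`. A minimal standalone sketch of that trick, with hypothetical node IDs in place of the `psql -qtA -F " "` query output:

```shell
# Feed "provider receiver" pairs (as sl_subscribe rows would supply) to tsort,
# which emits node IDs such that every provider appears before the receivers
# that depend on it.  Node IDs here are made up for illustration; the real
# script pulls them from psql.
pairs="1 2
2 3
2 4
3 5"
order=`printf '%s\n' "$pairs" | tsort`
echo "feasible subscribe order:" $order
```

Since the origin node (1 here) has no provider, it always sorts first, which is why the real script can simply `egrep -v` it out of the result.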
From cbbrowne at lists.slony.info Wed Jun 17 14:37:40 2009 From: cbbrowne at lists.slony.info (Chris Browne) Date: Wed Jun 17 14:37:44 2009 Subject: [Slony1-commit] slony1-engine/src/slon remote_worker.c Message-ID: <20090617213740.7B65D290363@main.slony.info> Update of /home/cvsd/slony1/slony1-engine/src/slon In directory main.slony.info:/tmp/cvs-serv2472/src/slon Modified Files: Tag: REL_2_0_STABLE remote_worker.c Log Message: Add in OMIT COPY option to SUBSCRIBE SET in support of upgrading from elder Slony-I versions. Index: remote_worker.c =================================================================== RCS file: /home/cvsd/slony1/slony1-engine/src/slon/remote_worker.c,v retrieving revision 1.176 retrieving revision 1.176.2.1 diff -C2 -d -r1.176 -r1.176.2.1 *** remote_worker.c 29 Aug 2008 21:06:45 -0000 1.176 --- remote_worker.c 17 Jun 2009 21:37:38 -0000 1.176.2.1 *************** *** 1181,1184 **** --- 1181,1185 ---- int sub_receiver = (int) strtol(event->ev_data3, NULL, 10); char *sub_forward = event->ev_data4; + char *omit_copy = event->ev_data5; if (sub_receiver == rtcfg_nodeid) *************** *** 1186,1192 **** slon_appendquery(&query1, ! "select %s.subscribeSet_int(%d, %d, %d, '%q'); ", rtcfg_namespace, ! sub_set, sub_provider, sub_receiver, sub_forward); need_reloadListen = true; } --- 1187,1193 ---- slon_appendquery(&query1, ! "select %s.subscribeSet_int(%d, %d, %d, '%q', '%q'); ", rtcfg_namespace, ! 
sub_set, sub_provider, sub_receiver, sub_forward, omit_copy); need_reloadListen = true; } *************** *** 2430,2440 **** char seqbuf[64]; char *copydata = NULL; struct timeval tv_start; struct timeval tv_start2; struct timeval tv_now; - slon_log(SLON_INFO, "copy_set %d\n", set_id); gettimeofday(&tv_start, NULL); /* * Lookup the provider nodes conninfo --- 2431,2459 ---- char seqbuf[64]; char *copydata = NULL; + bool omit_copy = false; + char *v_omit_copy = event->ev_data5; struct timeval tv_start; struct timeval tv_start2; struct timeval tv_now; gettimeofday(&tv_start, NULL); + if (strcmp(v_omit_copy, "f") == 0) { + omit_copy = false; + } else { + if (strcmp(v_omit_copy, "t") == 0) { + omit_copy = true; + } else { + slon_log(SLON_ERROR, "copy_set %d - omit_copy not in (t,f)- [%s]\n", set_id, v_omit_copy); + } + } + slon_log(SLON_INFO, "copy_set %d - omit=%s - bool=%d\n", set_id, v_omit_copy, omit_copy); + + if (omit_copy) { + slon_log(SLON_INFO, "omit is TRUE\n"); + } else { + slon_log(SLON_INFO, "omit is FALSE\n"); + } + /* * Lookup the provider nodes conninfo *************** *** 2859,2862 **** --- 2878,2886 ---- * Begin a COPY from stdin for the table on the local DB */ + if (omit_copy) { + slon_log(SLON_CONFIG, "remoteWorkerThread_%d: " + "COPY of table %s suppressed due to OMIT COPY option\n", + node->no_id, tab_fqname); + } else { slon_log(SLON_CONFIG, "remoteWorkerThread_%d: " "Begin COPY of table %s\n", *************** *** 3148,3152 **** } } ! gettimeofday(&tv_now, NULL); slon_log(SLON_CONFIG, "remoteWorkerThread_%d: " --- 3172,3176 ---- } } ! 
} gettimeofday(&tv_now, NULL); slon_log(SLON_CONFIG, "remoteWorkerThread_%d: " From cbbrowne at lists.slony.info Wed Jun 17 14:37:40 2009 From: cbbrowne at lists.slony.info (Chris Browne) Date: Wed Jun 17 14:37:44 2009 Subject: [Slony1-commit] slony1-engine/src/backend slony1_funcs.sql Message-ID: <20090617213740.63D71290250@main.slony.info> Update of /home/cvsd/slony1/slony1-engine/src/backend In directory main.slony.info:/tmp/cvs-serv2472/src/backend Modified Files: Tag: REL_2_0_STABLE slony1_funcs.sql Log Message: Add in OMIT COPY option to SUBSCRIBE SET in support of upgrading from elder Slony-I versions. Index: slony1_funcs.sql =================================================================== RCS file: /home/cvsd/slony1/slony1-engine/src/backend/slony1_funcs.sql,v retrieving revision 1.145.2.10 retrieving revision 1.145.2.11 diff -C2 -d -r1.145.2.10 -r1.145.2.11 *** slony1_funcs.sql 28 Apr 2009 19:25:36 -0000 1.145.2.10 --- slony1_funcs.sql 17 Jun 2009 21:37:38 -0000 1.145.2.11 *************** *** 4100,4106 **** -- ---------------------------------------------------------------------- ! -- FUNCTION subscribeSet (sub_set, sub_provider, sub_receiver, sub_forward) -- ---------------------------------------------------------------------- ! create or replace function @NAMESPACE@.subscribeSet (int4, int4, int4, bool) returns bigint as $$ --- 4100,4106 ---- -- ---------------------------------------------------------------------- ! -- FUNCTION subscribeSet (sub_set, sub_provider, sub_receiver, sub_forward, omit_copy) -- ---------------------------------------------------------------------- ! 
create or replace function @NAMESPACE@.subscribeSet (int4, int4, int4, bool, bool) returns bigint as $$ *************** *** 4110,4113 **** --- 4110,4114 ---- p_sub_receiver alias for $3; p_sub_forward alias for $4; + p_omit_copy alias for $5; v_set_origin int4; v_ev_seqno int8; *************** *** 4119,4122 **** --- 4120,4125 ---- lock table @NAMESPACE@.sl_config_lock; + raise notice 'subscribe set: omit_copy=%', p_omit_copy; + -- ---- -- Check that this is called on the provider node *************** *** 4162,4166 **** v_ev_seqno := @NAMESPACE@.createEvent('_@CLUSTERNAME@', 'SUBSCRIBE_SET', p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, ! case p_sub_forward when true then 't' else 'f' end); -- ---- --- 4165,4171 ---- v_ev_seqno := @NAMESPACE@.createEvent('_@CLUSTERNAME@', 'SUBSCRIBE_SET', p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, ! case p_sub_forward when true then 't' else 'f' end, ! case p_omit_copy when true then 't' else 'f' end ! ); -- ---- *************** *** 4168,4186 **** -- ---- perform @NAMESPACE@.subscribeSet_int(p_sub_set, p_sub_provider, ! p_sub_receiver, p_sub_forward); return v_ev_seqno; end; $$ language plpgsql; ! comment on function @NAMESPACE@.subscribeSet (int4, int4, int4, bool) is ! 'subscribeSet (sub_set, sub_provider, sub_receiver, sub_forward) Makes sure that the receiver is not the provider, then stores the ! subscription, and publishes the SUBSCRIBE_SET event to other nodes.'; ! -- ---------------------------------------------------------------------- ! -- FUNCTION subscribeSet_int (sub_set, sub_provider, sub_receiver, sub_forward) ! -- ---------------------------------------------------------------------- ! create or replace function @NAMESPACE@.subscribeSet_int (int4, int4, int4, bool) returns int4 as $$ --- 4173,4194 ---- -- ---- perform @NAMESPACE@.subscribeSet_int(p_sub_set, p_sub_provider, ! p_sub_receiver, p_sub_forward, p_omit_copy); return v_ev_seqno; end; $$ language plpgsql; ! 
comment on function @NAMESPACE@.subscribeSet (int4, int4, int4, bool, bool) is ! 'subscribeSet (sub_set, sub_provider, sub_receiver, sub_forward, omit_copy) Makes sure that the receiver is not the provider, then stores the ! subscription, and publishes the SUBSCRIBE_SET event to other nodes. ! If omit_copy is true, then no data copy will be done. ! '; ! ! -- ------------------------------------------------------------------------------------------- ! -- FUNCTION subscribeSet_int (sub_set, sub_provider, sub_receiver, sub_forward, omit_copy) ! -- ------------------------------------------------------------------------------------------- ! create or replace function @NAMESPACE@.subscribeSet_int (int4, int4, int4, bool, bool) returns int4 as $$ *************** *** 4190,4193 **** --- 4198,4202 ---- p_sub_receiver alias for $3; p_sub_forward alias for $4; + p_omit_copy alias for $5; v_set_origin int4; v_sub_row record; *************** *** 4198,4201 **** --- 4207,4212 ---- lock table @NAMESPACE@.sl_config_lock; + raise notice 'subscribe set: omit_copy=%', p_omit_copy; + -- ---- -- Provider change is only allowed for active sets *************** *** 4261,4265 **** perform @NAMESPACE@.createEvent('_@CLUSTERNAME@', 'ENABLE_SUBSCRIPTION', p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, ! case p_sub_forward when true then 't' else 'f' end); perform @NAMESPACE@.enableSubscription(p_sub_set, p_sub_provider, p_sub_receiver); --- 4272,4278 ---- perform @NAMESPACE@.createEvent('_@CLUSTERNAME@', 'ENABLE_SUBSCRIPTION', p_sub_set::text, p_sub_provider::text, p_sub_receiver::text, ! case p_sub_forward when true then 't' else 'f' end, ! case p_omit_copy when true then 't' else 'f' end ! ); perform @NAMESPACE@.enableSubscription(p_sub_set, p_sub_provider, p_sub_receiver); *************** *** 4275,4280 **** $$ language plpgsql; ! comment on function @NAMESPACE@.subscribeSet_int (int4, int4, int4, bool) is ! 
'subscribeSet_int (sub_set, sub_provider, sub_receiver, sub_forward) Internal actions for subscribing receiver sub_receiver to subscription --- 4288,4293 ---- $$ language plpgsql; ! comment on function @NAMESPACE@.subscribeSet_int (int4, int4, int4, bool, bool) is ! 'subscribeSet_int (sub_set, sub_provider, sub_receiver, sub_forward, omit_copy) Internal actions for subscribing receiver sub_receiver to subscription *************** *** 4405,4409 **** -- ---------------------------------------------------------------------- ! -- FUNCTION enableSubscription (sub_set, sub_provider, sub_receiver) -- ---------------------------------------------------------------------- create or replace function @NAMESPACE@.enableSubscription (int4, int4, int4) --- 4418,4422 ---- -- ---------------------------------------------------------------------- ! -- FUNCTION enableSubscription (sub_set, sub_provider, sub_receiver, omit_copy) -- ---------------------------------------------------------------------- create or replace function @NAMESPACE@.enableSubscription (int4, int4, int4) *************** *** 4427,4433 **** enableSubscription_int (sub_set, sub_provider, sub_receiver).'; ! -- ---------------------------------------------------------------------- -- FUNCTION enableSubscription_int (sub_set, sub_provider, sub_receiver) ! -- ---------------------------------------------------------------------- create or replace function @NAMESPACE@.enableSubscription_int (int4, int4, int4) returns int4 --- 4440,4446 ---- enableSubscription_int (sub_set, sub_provider, sub_receiver).'; ! -- ----------------------------------------------------------------------------------- -- FUNCTION enableSubscription_int (sub_set, sub_provider, sub_receiver) ! 
-- ----------------------------------------------------------------------------------- create or replace function @NAMESPACE@.enableSubscription_int (int4, int4, int4) returns int4 From cbbrowne at lists.slony.info Wed Jun 17 14:37:40 2009 From: cbbrowne at lists.slony.info (Chris Browne) Date: Wed Jun 17 14:37:44 2009 Subject: [Slony1-commit] slony1-engine/doc/adminguide adminscripts.sgml intro.sgml slonik_ref.sgml slonyupgrade.sgml Message-ID: <20090617213740.99AD82903B6@main.slony.info> Update of /home/cvsd/slony1/slony1-engine/doc/adminguide In directory main.slony.info:/tmp/cvs-serv2472/doc/adminguide Modified Files: Tag: REL_2_0_STABLE adminscripts.sgml intro.sgml slonik_ref.sgml slonyupgrade.sgml Log Message: Add in OMIT COPY option to SUBSCRIBE SET in support of upgrading from elder Slony-I versions. Index: slonyupgrade.sgml =================================================================== RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/slonyupgrade.sgml,v retrieving revision 1.10 retrieving revision 1.10.2.1 diff -C2 -d -r1.10 -r1.10.2.1 *** slonyupgrade.sgml 15 Oct 2008 21:51:54 -0000 1.10 --- slonyupgrade.sgml 17 Jun 2009 21:37:38 -0000 1.10.2.1 *************** *** 254,257 **** --- 254,364 ---- + + Upgrading to &slony1; version 2 + + The version 2 branch is substantially + different from earlier releases, dropping support for versions of + &postgres; prior to 8.3, as in version 8.3, support for a + session replication role was added, thereby eliminating + the need for system catalog hacks as well as the + not-entirely-well-supported xxid data type. + + As a result of the replacement of the xxid type + with a (native-to-8.3) &postgres; transaction XID type, the &lslonik; + command is quite inadequate to + the process of upgrading earlier versions of &slony1; to version + 2. 
+
+ In version 2.0.2, we have added a new option to SUBSCRIBE SET, OMIT COPY, which
+ allows taking an alternative approach to upgrade which amounts to:
+
+
+ Uninstall old version of &slony1;
+ When &slony1; uninstalls itself, catalog corruptions are fixed back up.
+ Install &slony1; version 2
+ Resubscribe, with OMIT COPY
+
+
+ There is a large foot gun here: during
+ part of the process, &slony1; is not installed in any form, and if an
+ application updates one or another of the databases, the
+ resubscription, omitting copying data, will be left with data
+ out of sync.
+
+ The administrator must take care; &slony1;
+ has no way to help ensure the integrity of the data during this
+ process.
+
+
+ The following process is suggested to help make the upgrade
+ process as safe as possible, given the above risks.
+
+
+
+ Use to generate a
+ &lslonik; script to recreate the replication cluster.
+
+ Be sure to verify the statements,
+ as the values are drawn from the PATH configuration, which
+ may not necessarily be suitable for running &lslonik;.
+
+ This step may be done before the application outage.
+
+
+ Determine what triggers have configuration on subscriber nodes.
+
+
+ As discussed in , the handling has
+ fundamentally changed between &slony1; 1.2 and 2.0.
+
+ Generally speaking, what needs to happen is to query
+ sl_table on each node, and, for any triggers found in
+ sl_table, it is likely to be appropriate to set up a
+ script indicating either ENABLE REPLICA TRIGGER or
+ ENABLE ALWAYS TRIGGER for these triggers.
+
+ This step may be done before the application outage.
+
+
+ Begin an application outage during which updates should no longer be applied to the database.
+
+ To ensure that applications cease to make changes, it would be appropriate to lock them out via modifications to pg_hba.conf.
+
+ Ensure replication is entirely caught up, via examination of the sl_status view, and any application data that may seem appropriate.
+
+ Shut down &lslon; processes.
+
+ Uninstall the old version of &slony1; from the database.
+
+ This involves running a &lslonik; script that runs against each node in the cluster.
+
+
+
+ Ensure new &slony1; binaries are in place.
+
+ A convenient way to handle this is to have old and new in different directories alongside two &postgres; builds, stop the postmaster, repoint to the new directory, and restart the postmaster.
+
+
+ Run the script that reconfigures replication as generated earlier.
+
+ This script should probably be split into two portions to be run separately:
+
+ Firstly, set up nodes, paths, sets, and such
+ At this point, start up &lslon; processes
+ Then, run the portion which runs
+
+
+ Splitting the script as described above is left as an exercise for the reader.
+
+
+ If there were triggers that needed to be activated on subscriber nodes, this is the time to activate them.
+ At this point, the cluster should be back up and running, ready to be reconfigured so that applications may access it again.
+
+
+
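The resubscription step of the upgrade boils down to emitting SUBSCRIBE SET statements with the new OMIT COPY option. A minimal sketch of what that generated slonik fragment looks like, using a hypothetical cluster layout (one set, origin node 1, receivers 2 and 3) rather than values pulled from sl_set/sl_subscribe as the real dump script does:

```shell
# Emit the resubscription commands for the "omit copy" upgrade path.
# The set/node IDs below are illustrative only; a real run would query
# the cluster, as the generated dump script does.
set_id=1
origin=1
for recv in 2 3; do
  echo "subscribe set (id=${set_id}, provider=${origin}, receiver=${recv}, forward=YES, omit copy=true);"
  echo "sync (id=${origin});"
  echo "wait for event (origin=${origin}, confirmed=ALL, wait on=${origin}, timeout=0);"
done
```

Because OMIT COPY skips the initial data copy entirely, this only yields a consistent cluster if the databases were identical when the old &slony1; was uninstalled, which is exactly why the outage described above is mandatory.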