Steve Singer ssinger at ca.afilias.info
Wed Oct 16 07:48:38 PDT 2013
On 10/16/2013 10:06 AM, Sebastien Marchand wrote:
>> CLUSTER NAME = repli_local110 ;
>> NODE 101 ADMIN CONNINFO = 'dbname=RPP2 host=192.168.0.101 port=5432
>> user=slony password=password';
>> NODE 110 ADMIN CONNINFO = 'dbname=RPP2 host=192.168.0.110 port=5432
>> user=slony password=password';
>
> It's the file repli_local110.preamble, but for the email I copied that
> file inline into my script :)

Sometimes details are important.

Copy the preamble file inline into your script and it will work (like
you did in your email).
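
For example, a minimal workaround sketch for the subscribe step, with the
preamble pasted inline instead of the INCLUDE (connection details taken
from the preamble you posted; adjust as needed):

/usr/pgsql-9.3/bin/slonik <<_EOF_
      # Preamble pasted inline as a workaround for the INCLUDE issue
      CLUSTER NAME = repli_local110 ;
      NODE 101 ADMIN CONNINFO = 'dbname=RPP2 host=192.168.0.101 port=5432 user=slony password=password';
      NODE 110 ADMIN CONNINFO = 'dbname=RPP2 host=192.168.0.110 port=5432 user=slony password=password';

      # Node 101 subscribes set 1
      try {
          subscribe set ( id = 1, provider = 110, receiver = 101, forward = yes);
      }
      on error { echo 'Souscription ko'; exit 1; }
      on success { echo 'Souscription OK'; }
_EOF_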

I can reproduce your issue with the preamble moved to a different file.
This is a bug; it shouldn't be hard to fix, thanks for reporting it. I
suspect the fix will be included in 2.2.1, due in the next few weeks
(since this has been a busy week for 2.2.0 bugs, i.e. #318 and #320).
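
In the meantime, a quick way to double-check which Slony schemas actually
exist on each node (the error you pasted refers to
"_sloncluster.sl_local_node_id", while your script should create
_repli_local110) is something like this sketch, using the connection
details from your preamble (run it against both nodes):

psql -h 192.168.0.101 -p 5432 -U slony -d RPP2 \
     -c "select nspname from pg_namespace where nspname like '\_%';"
psql -h 192.168.0.101 -p 5432 -U slony -d RPP2 \
     -c "select last_value from \"_repli_local110\".sl_local_node_id;"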




>
> -----Original Message-----
> From: Steve Singer [mailto:ssinger at ca.afilias.info]
> Sent: Wednesday, October 16, 2013 16:03
> To: Sebastien Marchand
> Cc: slony1-general at lists.slony.info
> Subject: Re: [Slony1-general] Problem slony 2.2 and pg 9.3 double replication
>
> On 10/16/2013 09:55 AM, Sebastien Marchand wrote:
>> Sorry, I forgot to tell you that I have 2 different replications.
>>
>>> SERVER 1			SERVER 2
>>> SCHEMA A	------->	SCHEMA A  (CLUSTER NAME = repli_general)
>>> SCHEMA B	<-------	SCHEMA B  (CLUSTER NAME = repli_local110)
>>
>> Here is one of the scripts I use (slony-ctl):
>
>
> One of your scripts below references
> INCLUDE </home/slony1-ctl/slony-ctl/etc/repli_local110.preamble>
>
> but you don't include that file.
>
> Since the error you pasted references "_sloncluster.sl_local_node_id"
> not "repli_local110.sl_local_node_id", I'm wondering if the error is that
> you're setting the clustername to the wrong value?
>
>
>
>
>>
>> #!/bin/bash
>> ## EN: Slonik script for initialising replication
>> /usr/pgsql-9.3/bin/slonik <<_EOF_
>>       ## EN: Preamble file contains all useful connection information
>> 	
>> CLUSTER NAME = repli_local110 ;
>> NODE 101 ADMIN CONNINFO = 'dbname=RPP2 host=192.168.0.101 port=5432
>> user=slony password=password';
>> NODE 110 ADMIN CONNINFO = 'dbname=RPP2 host=192.168.0.110 port=5432
>> user=slony password=password';
>>
>>       #--
>>       # Init the first node.  This creates the schema _repli_local110
>>       # containing all replication system specific database objects.
>>       #--
>>       try{
>>       	init cluster ( id=101, comment = 'repli_local110 : 101 - DATABASE -
>> 192.168.0.101');
>>       }
>>       on error {
>> 	echo 'Cluster 101 : ko' ;
>> 	exit 1;
>> 	}
>>       on success { echo 'Cluster 101 : OK' ; }
>>
>>       # The other nodes.
>>
>>      #--
>>      # Create the node 110 (slave) and tell the 2 nodes how to connect to
>>      # each other and how they should listen for events.
>>      #--
>>
>>      try{
>>          store node (id = 110,
>>           EVENT NODE = 101,
>>           comment = 'DB DATABASE - Host 192.168.0.110 - Cluster
>> repli_local110');
>>       }
>>       on error {
>> 	echo 'Store node 110 ko';
>> 	exit 1;
>> 	}
>>       on success {
>> 	echo 'Store node 110 OK';
>> 	}
>>
>>       store path (server = 101, client = 110, conninfo='dbname=DATABASE
>> host=192.168.0.101 port=5432 user=slony password=123');
>>
>>       store path (server = 110, client = 101, conninfo='dbname=DATABASE
>> host=192.168.0.110 port=5432 user=slony password=123');
>>
>> 	
>>       #--
>>       # Slony-I organizes tables into sets.  The smallest unit a node can
>>       # subscribe is a set. The master or origin of the set is node 110.
>>       #--
>>       try{
>>           create set (id=1, origin=110, comment='Tables in DATABASE');
>> 	# Tables
>>
>> 		set add table (set id=1, origin=101, id=67, fully qualified
>> name = 'local110.test_tata2', comment='table local110.test_tata2');
>>
>> 	# Sequences
>>       }
>>       on error {
>> 	echo 'Creation Set : ko';
>> 	exit 1;
>> 	}
>>       on success {
>> 	echo 'Creation Set : OK';
>> 	}
>>       exit 0;
>> _EOF_
>>
>> if [ $? -ne 0 ]; then
>>       error "CREATE SET" 3
>>       exit 1
>> fi
>>
>> ## EN: We start daemons. All on the same machine, which is
>> # the one on which that script is executed.
>> /usr/pgsql-9.3/bin/slon -f /home/slony1-ctl/slony-ctl/etc/slon.cfg repli_local110 "dbname=DATABASE host=192.168.0.101 port=5432 user=slony password=123" > /home/slony1-ctl/slony-ctl/logs/repli_local110_DATABASE_101.log 2>&1 &
>> echo "daemon slon pour DATABASE - 192.168.0.101 demarré sur national101"
>> ssh/usr/pgsql-9.3/bin/slon -f /home/slony1-ctl/slony-ctl/etc/slon.cfg repli_local110 "dbname=DATABASE host=192.168.0.110 port=5432 user=slony password=123" >/home/slony/logs/repli_local110_DATABASE_110.log 2>&1 &
>> echo "daemon slon pour DATABASE - 192.168.0.110 demarré sur national101"
>>
>> ## EN: And now, subscription
>>
>> echo "Souscription Set 1 par 101"
>> /usr/pgsql-9.3/bin/slonik <<_EOF_
>>       # ----
>>       # <preamble> file
>>       # ----
>>       INCLUDE </home/slony1-ctl/slony-ctl/etc/repli_local110.preamble>
>>
>>       # ----
>>       # Node 101 subscribes set 1
>>       # ----
>>       try{
>>       	subscribe set ( id = 1, provider = 110, receiver = 101, forward =
>> yes);
>>       }
>>       on error {
>> 	echo 'Souscription ko';
>> 	exit 1;
>> 	}
>>       on success {
>> 	echo 'Souscription OK';
>> 	}
>> _EOF_
>>
>>
>> if [ $? -ne 0 ]; then
>>       error "SUBSCRIBE SET" 3
>>       exit 1
>> fi
>> exit 0
>>
>>
>>
>>
>> -----Original Message-----
>> From: Steve Singer [mailto:ssinger at ca.afilias.info]
>> Sent: Wednesday, October 16, 2013 15:29
>> To: Sebastien Marchand
>> Cc: slony1-general at lists.slony.info
>> Subject: Re: [Slony1-general] Problem slony 2.2 and pg 9.3 double replication
>>
>> On 10/16/2013 09:17 AM, Sebastien Marchand wrote:
>>> Hi,
>>>
>>> I want to do this:
>>>
>>> SERVER 1			SERVER 2
>>> SCHEMA A	------->		SCHEMA A
>>> SCHEMA B	<-------		SCHEMA B
>>>
>>> That works in slony 2.0.7/pg 9.0
>>>
>>> But with slony 2.2 and postgresql 9.3:
>>>
>>> SERVER 1			SERVER 2	
>>> SCHEMA A	------->		SCHEMA A	=> OK
>>> SCHEMA B	<-------		SCHEMA B	=> NOK
>>>
>>> I've got this error:
>>>
>>> <stdin>:55: waiting for event (101,5000000004) to be  confirmed on
>>> node 110
>>>
>>> Pg error:
>>>
>>> 2013-10-15 10:55:10.950 CEST >ERROR:  relation
>>> "_sloncluster.sl_local_node_id" does not exist at character 30
>>> 2013-10-15 10:55:10.950 CEST >STATEMENT:  select last_value::int4 from
>>> "_sloncluster".sl_local_node_id
>>>
>>> Sorry for my english... :(
>>>
>>
>> The way you would do this is with two replication sets:
>>
>> cluster name = sloncluster;
>> node 1 admin conninfo=.....
>> node 2 admin conninfo=.....
>>
>> init cluster(id=1);
>> store node(id=2, event node=1);
>> store path(server=1, client=2, conninfo=....);
>> store path(server=2, client=1, conninfo=.....);
>>
>> create set(id=1, origin=1);
>> set add table(set id=1, tables='schemaA\.*');
>> create set(id=2, origin=2);
>> set add table(set id=2, tables='schemaB\.*');
>>
>> subscribe set(set id=1, provider=1, receiver=2);
>> sync(id=1);
>> wait for event(origin=1, wait on=1, confirmed=2);
>> subscribe set(set id=2, provider=2, receiver=1);
>> sync(id=2);
>> wait for event(origin=2, wait on=2, confirmed=1);
>>
>>
>> Something similar to the above should work the same in 2.1 and 2.2 (in
>> 2.0 you would need explicit 'set add table' statements for each table and
>> maybe a few more 'wait for event' statements in the middle of the script).
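>>
>> For illustration only, a rough sketch of that 2.0-style per-table form
>> (the table names here are just placeholders):
>>
>> set add table(set id=1, origin=1, id=1, fully qualified name='schemaA.some_table');
>> set add table(set id=1, origin=1, id=2, fully qualified name='schemaA.another_table');
>> subscribe set(set id=1, provider=1, receiver=2);
>> wait for event(origin=1, wait on=1, confirmed=2);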
>>
>> Are you doing something significantly different?
>>
>>
>>
>


