Melvin Davidson mdavidson
Tue Oct 11 21:48:11 PDT 2005
Thanks to all.
Problem 1 solved by dropping the schema as suggested.

Problem 2 solved by actually creating the database on the slave. DUH :(
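
For the archives, the two fixes boil down to something like this
(database and host names as in the thread below):

  -- on the master, remove the half-created Slony schema (but see Jan's
  -- warning below before doing this on a subscribed node)
  drop schema _slony_rep cascade;

  # then, on the slave, create the target database before re-running slonik
  createdb -U postgres important_db
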
Melvin

Jan Wieck wrote:

> On 10/11/2005 12:15 AM, Zahid Iqbal wrote:
>
>> Regarding the first problem, the Slony control schema, i.e. "_slony_rep",
>> has already been created in the master database. So first remove the
>> schema by issuing the following command via pgAdmin/psql:
>>
>> drop schema _slony_rep cascade;
>
>
> Note that doing this on a node that is subscribed to any set will 
> leave you with a corrupted database catalog. Please refer to the 
> slonik documentation for how to properly uninstall Slony from a database.
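>
> For example, the proper teardown is slonik's "uninstall node" command
> (a minimal sketch, reusing the cluster name and conninfo from the
> scripts below):
>
>     cluster name = slony_rep;
>     node 1 admin conninfo = 'dbname=important_db host=mas_serv.ournet.com user=postgres';
>     # removes the _slony_rep schema and restores the catalog properly
>     uninstall node (id = 1);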
>
>
> Jan
>
>
>
>>
>> Then execute the script again; it should work.
>>
>> In a replication configuration, the "init cluster" command is executed
>> only against the master node, and this marks that node as the master.
>> After that, you add the other nodes as slaves using the slonik
>> "store node" command. So you don't need to run a separate script on the
>> slave: node 2 is already joined as a slave by slony_test.sh, which is
>> run against the master node.
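>>
>> In other words, the only node-creation commands you need, all run
>> from a single slonik script against the master, are roughly (a
>> minimal sketch):
>>
>>     init cluster (id = 1, comment = 'Master Node');  # node 1 only
>>     store node (id = 2, comment = 'Slave node');     # adds node 2
>>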
>> Thanks,
>> Zahid
>>
>> -----Original Message-----
>> From: slony1-general-bounces at gborg.postgresql.org
>> [mailto:slony1-general-bounces at gborg.postgresql.org] On Behalf Of Melvin
>> Davidson
>> Sent: Saturday, October 01, 2005 12:36 AM
>> To: slony1-general at gborg.postgresql.org
>> Subject: [Slony1-general] Slony-I implementation problem
>>
>> I am having two problems implementing Slony-I across a network and
>> was hoping someone could help me out.
>>
>> The first problem is that slonik on the master complains that there
>> is no admin conninfo for the slave node.
>>
>> I have the following env variables set on both master and slave:
>>
>> CLUSTERNAME=slony_rep
>> MASTERDBNAME=important_db
>> SLAVEDBNAME=important_db
>> MASTERHOST=mas_serv.ournet.com
>> SLAVEHOST=slv_serv.ournet.com
>> REPLICATIONUSER=postgres
>>
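>> (Presumably these need to be exported so the /bin/sh heredoc scripts
>> below can see them, e.g.:
>>
>>   export CLUSTERNAME=slony_rep
>>   export MASTERDBNAME=important_db
>>   # ...and so on for the others.)
>>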
>> The following is the script for the Master:
>> =======================================================================
>> #!/bin/sh
>>
>> slonik <<_EOF_
>>     # Define the namespace the replication system uses; in this example
>>     # it is slony_rep.
>>         cluster name = $CLUSTERNAME;
>>
>>     # Admin conninfo's are used by slonik to connect to the nodes, one
>>     # for each node in the cluster; the syntax is that of PQconnectdb
>>     # in the C API.
>> node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
>> node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
>>
>>     # init the first node.  Its id MUST be 1.  This creates the schema
>>     # _$CLUSTERNAME containing all replication system specific database
>>     # objects.
>>
>>         init cluster ( id=1, comment = 'Master Node');
>>
>>     # Because the slony_test table does not have a primary key or other
>>     # unique constraint that could be used to identify a row, we need to
>>     # add one.  The following command adds a bigint column named
>>     # _Slony-I_$CLUSTERNAME_rowID to the table.  It will have a default
>>     # value of nextval('_$CLUSTERNAME.s1_rowid_seq'), and have UNIQUE and
>>     # NOT NULL constraints applied.  All existing rows will be
>>     # initialized with a number.
>>  table add key (node id = 1, fully qualified name = 'important_db.slony_test');
>>  table add key (node id = 2, fully qualified name = 'important_db.slony_test');
>>
>>     # Slony-I organizes tables into sets.  The smallest unit a node can
>>     # subscribe to is a set.  The following commands create one set
>>     # containing the slony_test table.  The master or origin of the set
>>     # is node 1.
>> create set (id=1, origin=1, comment='All pgbench tables');
>> set add table (set id=1, origin=1, id=1, fully qualified name = 'radius.slony_test', comment='slony test table');
>>
>>     # Create the second node (the slave), tell the 2 nodes how to
>>     # connect to each other, and how they should listen for events.
>>
>> store node (id=2, comment = 'Slave node');
>> store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER port=$PGPORT');
>> store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER port=$PGPORT');
>> store listen (origin=1, provider = 1, receiver =2);
>> store listen (origin=2, provider = 2, receiver =1);
>>
>> _EOF_
>> =======================================================================
>>
>> Problem 1. The script fails with:
>>
>> [postgres at tts postgres]$ ./slony_test.sh
>> <stdin>:15: Error: namespace "_slony_rep" already exists in database 
>> of node 1
>> <stdin>:15: ERROR: no admin conninfo for node 152637712
>>
>> I do not understand why the connect info does not work, since the
>> full DNS name is specified.
>>
>>
>> On the Slave, I have the following script:
>> =======================================================================
>>
>> #!/bin/sh
>>     slonik <<_EOF_
>>          # ----
>>          # This defines which namespace the replication system uses
>>          # ----
>>          cluster name = $CLUSTERNAME;
>>
>>          # ----
>>          # Admin conninfo's are used by the slonik program to connect
>>          # to the node databases.  So these are the PQconnectdb
>>          # arguments that connect from the administrator's workstation
>>          # (where slonik is executed).
>>          # ----
>>  node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
>>  node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
>>
>>          init cluster ( id=2, comment = 'Slave Node');
>>          # ----
>>          # Add the key to be identical to the master table
>>          # ----
>>  table add key (node id = 2, fully qualified name = 'important_db.slony_test');
>>
>>         # --
>>         # Create the second node (the slave), tell the 2 nodes how to
>>         # connect to each other, and how they should listen for events.
>>         # --
>> store node (id=1, comment = 'Master node');
>> store node (id=2, comment = 'Slave node');
>> store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
>> store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
>> store listen (origin=1, provider = 1, receiver =2);
>>
>>          # ----
>>          # Node 2 subscribes set 1
>>          # ----
>>          subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
>>
>> _EOF_
>>
>> =======================================================================
>>
>> Problem 2. The script fails with:
>>
>> [postgres at nas postgres]$ ./slony_slv_tst.sh
>> <stdin>:25: Error: ev_origin for store_node cannot be the new node
>> <stdin>:15: loading of file /usr/local/pgsql/share//xxid.v73.sql: 
>> PGRES_FATAL_ERROR ERROR:  current transaction is aborted, queries 
>> ignored until end of transaction block
>> ERROR:  current transaction is aborted, queries ignored until end of 
>> transaction block
>> <stdin>:15: ERROR: no admin conninfo for node 156827920
>>
>> I'm not really sure what I am doing wrong. Your input would be
>> greatly appreciated.
>>
>> _______________________________________________
>> Slony1-general mailing list
>> Slony1-general at gborg.postgresql.org
>> http://gborg.postgresql.org/mailman/listinfo/slony1-general
>>