hugoksouza at aim.com hugoksouza
Fri Oct 27 06:27:38 PDT 2006
Hi all,

 I cannot get my first test Slony replication to work.
This is Slony-I 1.1.5 (RPM) on CentOS 4.4 Linux.
  I am just trying to follow the Slony FIRSTDB quickstart:
http://linuxfinances.info/info/slonyadmin.html#FIRSTDB

  This is what I get when I try to start slon on the nodes (a simple two-node master-slave environment). Note that both daemons exit almost immediately ("slon: done") instead of staying up:

 [root@SERVER_MASTER etc]# slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST"
 2006-10-26 18:39:54 EDT CONFIG main: slon version 1.1.5 starting up
 2006-10-26 18:39:54 EDT CONFIG main: local node id = 1
 2006-10-26 18:39:54 EDT CONFIG main: launching sched_start_mainloop
 2006-10-26 18:39:54 EDT CONFIG main: loading current cluster configuration
 2006-10-26 18:39:54 EDT CONFIG storeNode: no_id=2 no_comment='Slave node'
 2006-10-26 18:39:54 EDT CONFIG storePath: pa_server=2 pa_client=1 pa_conninfo="dbname=slonytest host=SERVER_SLAVE user=slony" pa_connretry=10
 2006-10-26 18:39:54 EDT CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
 2006-10-26 18:39:54 EDT CONFIG storeSet: set_id=1 set_origin=1 set_comment='All slonytest tables'
 2006-10-26 18:39:54 EDT CONFIG main: configuration complete - starting threads
 2006-10-26 18:39:54 EDT DEBUG1 localListenThread: thread starts
 NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=13186
 NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=13197
 2006-10-26 18:39:54 EDT CONFIG enableNode: no_id=2
 2006-10-26 18:39:54 EDT DEBUG1 main: running scheduler mainloop
 2006-10-26 18:39:54 EDT DEBUG1 remoteWorkerThread_2: thread starts
 2006-10-26 18:39:54 EDT DEBUG1 remoteListenThread_2: thread starts
 2006-10-26 18:39:54 EDT DEBUG1 syncThread: thread starts
 2006-10-26 18:39:54 EDT DEBUG1 cleanupThread: thread starts
 2006-10-26 18:39:54 EDT DEBUG1 remoteListenThread_2: connected to 'dbname=slonytest host=SERVER_SLAVE user=slony'
 2006-10-26 18:39:54 EDT DEBUG1 slon: done

 On the slave node:

 [root@SERVER_SLAVE etc]# slon $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST"
 2006-10-26 18:45:05 EDT CONFIG main: slon version 1.1.5 starting up
 2006-10-26 18:45:05 EDT CONFIG main: local node id = 2
 2006-10-26 18:45:05 EDT CONFIG main: launching sched_start_mainloop
 2006-10-26 18:45:05 EDT CONFIG main: loading current cluster configuration
 2006-10-26 18:45:05 EDT CONFIG storeNode: no_id=1 no_comment='Master Node'
 2006-10-26 18:45:05 EDT CONFIG storePath: pa_server=1 pa_client=2 pa_conninfo="dbname=slonytest host=SERVER_MASTER user=slony" pa_connretry=10
 2006-10-26 18:45:05 EDT CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
 2006-10-26 18:45:05 EDT CONFIG storeSet: set_id=1 set_origin=1 set_comment='All slonytest tables'
 2006-10-26 18:45:05 EDT WARN remoteWorker_wakeup: node 1 - no worker thread
 2006-10-26 18:45:05 EDT CONFIG main: configuration complete - starting threads
 2006-10-26 18:45:05 EDT DEBUG1 localListenThread: thread starts
 NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=13027
 2006-10-26 18:45:05 EDT CONFIG enableNode: no_id=1
 2006-10-26 18:45:05 EDT DEBUG1 main: running scheduler mainloop
 2006-10-26 18:45:05 EDT DEBUG1 remoteWorkerThread_1: thread starts
 2006-10-26 18:45:05 EDT DEBUG1 syncThread: thread starts
 2006-10-26 18:45:05 EDT DEBUG1 remoteListenThread_1: thread starts
 2006-10-26 18:45:05 EDT DEBUG1 cleanupThread: thread starts
 2006-10-26 18:45:05 EDT DEBUG1 remoteListenThread_1: connected to 'dbname=slonytest host=SERVER_MASTER user=slony'
 TODO: ********** remoteWorkerThread: node 1 - EVENT 1,4 - unknown event type
 TODO: ********** remoteWorkerThread: node 1 - EVENT 1,5 - unknown event type
 2006-10-26 18:45:05 EDT DEBUG1 slon: done
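
 (In case it is useful: the two events the slave flags as unknown sit in the master's event queue, which should be queryable with something like the following; I am assuming the default schema name of _$CLUSTERNAME, i.e. _slonytest here:

 psql -h $MASTERHOST -U $REPLICATIONUSER -d $MASTERDBNAME -c "select ev_seqno, ev_type from _slonytest.sl_event order by ev_seqno;"
 )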

  On the master node, you can see this in the PostgreSQL log during the slon command execution:

 LOG: connection received: host=205.252.5.188 port=33705
 LOG: connection authorized: user=slony database=slonytest
 LOG: connection received: host=205.252.5.188 port=33708
 LOG: connection authorized: user=slony database=slonytest
 LOG: connection received: host=205.252.5.188 port=33711
 LOG: connection authorized: user=slony database=slonytest
 LOG: connection received: host=205.252.5.188 port=33715
 LOG: SSL SYSCALL error: EOF detected
 LOG: SSL SYSCALL error: EOF detected
 LOG: could not receive data from client: Connection reset by peer
 LOG: unexpected EOF on client connection
 LOG: could not receive data from client: Connection reset by peer
 LOG: unexpected EOF on client connection
 LOG: could not accept SSL connection: EOF detected

 And here are the logs during the slave slon command execution:

 LOG: connection received: host=63.218.82.30 port=33481
 LOG: connection authorized: user=slony database=slonytest
 LOG: SSL SYSCALL error: EOF detected
 LOG: could not receive data from client: Connection reset by peer
 LOG: unexpected EOF on client connection
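
 (If those SSL EOF lines are relevant, I can retest with SSL disabled at the libpq level by adding the standard sslmode parameter to the conninfo, e.g.:

 slon $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST sslmode=disable"
 )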

 Any kind of help or advice is more than welcome.

 Anybody?

 Thanks in advance.

 PS: More details follow.

 [root@SERVER_MASTER slony]# cat slony.env
 CLUSTERNAME=slonytest
 MASTERDBNAME=slonytest
 SLAVEDBNAME=slonytest
 MASTERHOST=SERVER_MASTER
 SLAVEHOST=SERVER_SLAVE
 REPLICATIONUSER=slony
 PGBENCHUSER=slony

 export CLUSTERNAME MASTERDBNAME SLAVEDBNAME MASTERHOST SLAVEHOST REPLICATIONUSER PGBENCHUSER
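
 (This file is sourced into each shell before the commands below are run, e.g.:

 [root@SERVER_MASTER slony]# . ./slony.env
 [root@SERVER_MASTER slony]# echo $CLUSTERNAME
 slonytest
 )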

 [root@SERVER_MASTER slony]# cat slonytest.sql
 --drop database slonytest;

 create database slonytest owner slony;

 \connect slonytest slony

 create table test (
 id integer PRIMARY KEY,
 name varchar(32)
 );
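
 (Slony-I can only replicate tables that have a primary key or a unique candidate key, and the test table above has one. To confirm the table looks the same on both nodes:

 psql -h $MASTERHOST -U $REPLICATIONUSER -d $MASTERDBNAME -c '\d test'
 psql -h $SLAVEHOST -U $REPLICATIONUSER -d $SLAVEDBNAME -c '\d test'
 )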

 [admin@SERVER_MASTER slony]$ createlang -h $MASTERHOST plpgsql $MASTERDBNAME -U $REPLICATIONUSER
 [admin@SERVER_MASTER slony]$ createlang -h $SLAVEHOST plpgsql $SLAVEDBNAME -U $REPLICATIONUSER
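
 (To double-check that plpgsql really got installed in both databases:

 psql -h $MASTERHOST -U $REPLICATIONUSER -d $MASTERDBNAME -c "select lanname from pg_language where lanname = 'plpgsql';"
 psql -h $SLAVEHOST -U $REPLICATIONUSER -d $SLAVEDBNAME -c "select lanname from pg_language where lanname = 'plpgsql';"
 )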

 [root@SERVER_MASTER slony]# cat slony_init_slonytest.sh | grep -v \#

 slonik << _EOF_

 cluster name = $CLUSTERNAME;

 node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
 node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';

 init cluster ( id=1, comment = 'Master Node');

 create set (id=1, origin=1, comment='All slonytest tables');
 set add table (set id=1, origin=1, id=1, fully qualified name = 'public.test', comment='test table');

 store node (id=2, comment = 'Slave node');
 store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
 store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
 store listen (origin=1, provider = 1, receiver =2);
 store listen (origin=2, provider = 2, receiver =1);
 _EOF_
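
 (For completeness, the quickstart's next step, which subscribes node 2 to the set, would look roughly like this with the same variables:

 slonik << _EOF_
 cluster name = $CLUSTERNAME;
 node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER';
 node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER';
 subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
 _EOF_
 )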

 [root@SERVER_MASTER slony]# slon -d 4 $CLUSTERNAME "dbname=$MASTERDBNAME user=$REPLICATIONUSER host=$MASTERHOST"
 2006-10-26 18:46:57 EDT CONFIG main: slon version 1.1.5 starting up
 2006-10-26 18:46:57 EDT CONFIG main: local node id = 1
 2006-10-26 18:46:57 EDT DEBUG2 main: main process started
 2006-10-26 18:46:57 EDT DEBUG2 slon: watchdog process started
 2006-10-26 18:46:57 EDT DEBUG2 slon: begin signal handler setup
 2006-10-26 18:46:57 EDT DEBUG2 slon: end signal handler setup
 2006-10-26 18:46:57 EDT DEBUG2 slon: wait for main child process
 2006-10-26 18:46:57 EDT DEBUG2 main: begin signal handler setup
 2006-10-26 18:46:57 EDT DEBUG2 main: end signal handler setup
 2006-10-26 18:46:57 EDT CONFIG main: launching sched_start_mainloop
 2006-10-26 18:46:57 EDT CONFIG main: loading current cluster configuration
 2006-10-26 18:46:57 EDT CONFIG storeNode: no_id=2 no_comment='Slave node'
 2006-10-26 18:46:57 EDT DEBUG2 setNodeLastEvent: no_id=2 event_seq=0
 2006-10-26 18:46:57 EDT CONFIG storePath: pa_server=2 pa_client=1 pa_conninfo="dbname=slonytest host=SERVER_SLAVE user=slony" pa_connretry=10
 2006-10-26 18:46:57 EDT CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
 2006-10-26 18:46:57 EDT CONFIG storeSet: set_id=1 set_origin=1 set_comment='All slonytest tables'
 2006-10-26 18:46:57 EDT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
 2006-10-26 18:46:57 EDT DEBUG2 main: last local event sequence = 6
 2006-10-26 18:46:57 EDT CONFIG main: configuration complete - starting threads
 2006-10-26 18:46:57 EDT DEBUG1 localListenThread: thread starts
 NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=13243
 NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=13254
 2006-10-26 18:46:58 EDT CONFIG enableNode: no_id=2
 2006-10-26 18:46:58 EDT DEBUG1 main: running scheduler mainloop
 2006-10-26 18:46:58 EDT DEBUG1 remoteWorkerThread_2: thread starts
 2006-10-26 18:46:58 EDT DEBUG1 remoteListenThread_2: thread starts
 2006-10-26 18:46:58 EDT DEBUG2 remoteListenThread_2: start listening for event origin 2
 2006-10-26 18:46:58 EDT DEBUG1 syncThread: thread starts
 2006-10-26 18:46:58 EDT DEBUG1 cleanupThread: thread starts
 2006-10-26 18:46:58 EDT DEBUG4 cleanupThread: bias = 35383
 2006-10-26 18:46:58 EDT DEBUG4 remoteWorkerThread_2: update provider configuration
 2006-10-26 18:46:58 EDT DEBUG1 remoteListenThread_2: connected to 'dbname=slonytest host=SERVER_SLAVE user=slony'
 2006-10-26 18:46:58 EDT DEBUG2 remoteListenThread_2: queue event 2,1 STORE_PATH
 2006-10-26 18:46:58 EDT DEBUG2 remoteWorkerThread_2: Received event 2,1 STORE_PATH
 2006-10-26 18:46:58 EDT DEBUG1 slon: done
 2006-10-26 18:46:58 EDT DEBUG2 slon: exit(0)

 [root@SERVER_SLAVE etc]# slon -d 4 $CLUSTERNAME "dbname=$SLAVEDBNAME user=$REPLICATIONUSER host=$SLAVEHOST"
 2006-10-26 18:48:29 EDT CONFIG main: slon version 1.1.5 starting up
 2006-10-26 18:48:29 EDT CONFIG main: local node id = 2
 2006-10-26 18:48:29 EDT DEBUG2 main: main process started
 2006-10-26 18:48:29 EDT DEBUG2 slon: watchdog process started
 2006-10-26 18:48:29 EDT DEBUG2 slon: begin signal handler setup
 2006-10-26 18:48:29 EDT DEBUG2 slon: end signal handler setup
 2006-10-26 18:48:29 EDT DEBUG2 slon: wait for main child process
 2006-10-26 18:48:29 EDT DEBUG2 main: begin signal handler setup
 2006-10-26 18:48:29 EDT DEBUG2 main: end signal handler setup
 2006-10-26 18:48:29 EDT CONFIG main: launching sched_start_mainloop
 2006-10-26 18:48:29 EDT CONFIG main: loading current cluster configuration
 2006-10-26 18:48:29 EDT CONFIG storeNode: no_id=1 no_comment='Master Node'
 2006-10-26 18:48:29 EDT DEBUG2 setNodeLastEvent: no_id=1 event_seq=5
 2006-10-26 18:48:29 EDT CONFIG storePath: pa_server=1 pa_client=2 pa_conninfo="dbname=slonytest host=SERVER_MASTER user=slony" pa_connretry=10
 2006-10-26 18:48:29 EDT CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
 2006-10-26 18:48:29 EDT CONFIG storeSet: set_id=1 set_origin=1 set_comment='All slonytest tables'
 2006-10-26 18:48:29 EDT WARN remoteWorker_wakeup: node 1 - no worker thread
 2006-10-26 18:48:29 EDT DEBUG2 sched_wakeup_node(): no_id=1 (0 threads + worker signaled)
 2006-10-26 18:48:29 EDT DEBUG2 main: last local event sequence = 1
 2006-10-26 18:48:29 EDT CONFIG main: configuration complete - starting threads
 2006-10-26 18:48:29 EDT DEBUG1 localListenThread: thread starts
 NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=13037
 NOTICE: Slony-I: cleanup stale sl_nodelock entry for pid=13051
 2006-10-26 18:48:29 EDT CONFIG enableNode: no_id=1
 2006-10-26 18:48:29 EDT DEBUG1 main: running scheduler mainloop
 2006-10-26 18:48:29 EDT DEBUG1 remoteWorkerThread_1: thread starts
 2006-10-26 18:48:29 EDT DEBUG1 remoteListenThread_1: thread starts
 2006-10-26 18:48:29 EDT DEBUG2 remoteListenThread_1: start listening for event origin 1
 2006-10-26 18:48:29 EDT DEBUG1 cleanupThread: thread starts
 2006-10-26 18:48:29 EDT DEBUG4 cleanupThread: bias = 35383
 2006-10-26 18:48:29 EDT DEBUG1 syncThread: thread starts
 2006-10-26 18:48:29 EDT DEBUG4 remoteWorkerThread_1: update provider configuration
 2006-10-26 18:48:29 EDT DEBUG1 remoteListenThread_1: connected to 'dbname=slonytest host=SERVER_MASTER user=slony'
 2006-10-26 18:48:29 EDT DEBUG2 remoteListenThread_1: queue event 1,6 STORE_PATH
 2006-10-26 18:48:29 EDT DEBUG2 remoteWorkerThread_1: Received event 1,6 STORE_PATH
 2006-10-26 18:48:29 EDT DEBUG1 slon: done
 2006-10-26 18:48:29 EDT DEBUG2 slon: exit(0)