Tue Jun 2 11:02:46 PDT 2009
- Previous message: [Slony1-general] Another problem with slony1-ctl
- Next message: [Slony1-general] Initial replication of sequences is failing
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
> Date: Tue, 2 Jun 2009 18:17:17 +0500
> From: "owais" <owais at preceptglobalaccess.com>
>
> 1) What is the procedure to stop replication process? Can anyone
> present me a sample shell script?
>
> 2) Is it possible to toggle master and slave? I only have 1 master
> database and 1 slave. Can I change slave to master?

First, a disclaimer: I am not a Slony expert, but I did stay in a Holiday Inn Express last night.

#1 To start and stop the slon processes, since I run on SUSE Linux, I use a SUSE-style init script. I cannot take credit for most of this because I got it from someone else (I don't remember who). It works great. Below, I'll post the contents of the init script along with a "slony_vars" file. You'll need to read through both to figure out the parts you need to change. Also, I pipe slon logs through cronolog to handle rotation at midnight (one log file per day). You don't have to do this, but it's simple to use and configure.

======================================================================
FILENAME: /path/to/your/slony/scripts/slony_vars.sh

I stripped this file down for generic purposes here. In my application, it has a case statement that outputs slightly different files based on the host machine. Its purpose is to define constants that are used in all slonik scripts and in the init script.
======================================================================
#!/bin/sh

if [ -z "$SLONY_DIR" ]; then
    SLONY_DIR="/path/to/your/slony/scripts"
fi

# This script may get included more than once, so create these variables
# only if not already initialized.
if [ -z "$CLUSTERNAME" ]; then
    RET=0
    CLUSTERNAME=slony
    SLONY_USER=slony
    SLONY_LOGDIR=/path/to/your/logfiles

    HOST1=your_master_host_name
    HOST1_DBNAME=your_master_db_name
    HOST1_DSN="host=$HOST1 dbname=$HOST1_DBNAME user=$SLONY_USER"

    HOST2=your_slave_host_name
    HOST2_DBNAME=your_slave_db_name    # Perhaps same as master name.
    HOST2_DSN="host=$HOST2 dbname=$HOST2_DBNAME user=$SLONY_USER"

    # Create the slonik preamble: the common stuff you have to do in every script.
    echo "# This file is written by slony_vars.sh - DO NOT EDIT."      >  $SLONY_DIR/slonik_preamble.txt
    echo "# Cluster definition common to all slonik scripts."          >> $SLONY_DIR/slonik_preamble.txt
    echo "cluster name = $CLUSTERNAME;"                                >> $SLONY_DIR/slonik_preamble.txt
    echo "node 1 admin conninfo = '$HOST1_DSN';"                       >> $SLONY_DIR/slonik_preamble.txt
    echo "node 2 admin conninfo = '$HOST2_DSN';"                       >> $SLONY_DIR/slonik_preamble.txt
    chmod 666 "$SLONY_DIR/slonik_preamble.txt"

    # Generate the slon conf file used by the slon process on startup.
    SLON_CONFIG="$SLONY_DIR/slon.conf"
    echo "# This file is generated by the slony init script - DO NOT EDIT!" >  $SLON_CONFIG
    echo "# The slon process uses these options on startup."                >> $SLON_CONFIG
    echo "cluster_name=\"$CLUSTERNAME\""                                    >> $SLON_CONFIG
    echo "conn_info=\"$HOST1_DSN\""                                         >> $SLON_CONFIG
    echo "pid_file=\"$SLONY_DIR/slon.pid\""                                 >> $SLON_CONFIG
    echo "sync_group_maxsize=1000"                                          >> $SLON_CONFIG
    echo "log_level=1"                                                      >> $SLON_CONFIG
    chmod 666 "$SLON_CONFIG"
fi
======================================================================
FILENAME: /etc/init.d/slony

This is a SUSE-style init script. I got it from someone else and modified it for my use.
Once in place, and with SUSE-style rc aliases, you can run as root:

# rcslony start
# rcslony stop

======================================================================
#!/bin/sh
#
# /etc/init.d/slony
#
### BEGIN INIT INFO
# Provides:          slony
# Required-Start:    $syslog $local_fs $network $time postgresql
# Should-Start:      sendmail
# Required-Stop:     $syslog $local_fs
# Should-Stop:
# Default-Start:     3 5
# Default-Stop:      0 1 2 6
# Short-Description: Slony database replication
# Description:       Start the slon listener to enable
#                    Slony to communicate with other nodes in the cluster
### END INIT INFO

# Location of the slony scripts.
SLONY_DIR="/path/to/your/slony/scripts"

# I created slony_vars.sh to set the global variables shared by all Slony scripts.
test -r $SLONY_DIR/slony_vars.sh || {
    echo "$SLONY_DIR/slony_vars.sh does not exist"
    if [ "$1" = "stop" ]; then exit 0; else exit 6; fi
}

# Read the slony variables.
. $SLONY_DIR/slony_vars.sh

SLON_CONFIG="$SLONY_DIR/slon.conf"
SLON_LOGFILE="$SLONY_LOGDIR/slon-%Y-%m-%d.log"

# Make sure the config file is readable.
test -r $SLON_CONFIG || {
    echo "$SLON_CONFIG does not exist"
    if [ "$1" = "stop" ]; then exit 0; else exit 6; fi
}

# Check for missing binaries (stale symlinks should not happen).
# Note: Special treatment of stop for LSB conformance.
SLON_BIN="/path/to/slon/binary"
test -x $SLON_BIN || {
    echo "$SLON_BIN not installed"
    if [ "$1" = "stop" ]; then exit 0; else exit 5; fi
}

# I use cronolog to handle logging and log file rotation.
CRONOLOG_BIN="/opt/pg/bin/cronolog"
test -x $CRONOLOG_BIN || {
    echo "$CRONOLOG_BIN not installed"
    if [ "$1" = "stop" ]; then exit 0; else exit 5; fi
}

. /etc/rc.status

# Reset status of this service.
rc_reset

case "$1" in
    start)
        echo -n "Starting slon "
        ## Start daemon with startproc(8). If this fails,
        ## the return value is set appropriately by startproc.
        /sbin/startproc -v -u postgres $SLON_BIN -f $SLON_CONFIG 2>&1 | $CRONOLOG_BIN $SLON_LOGFILE &
        # Remember status and be verbose.
        rc_status -v
        ;;
    stop)
        echo -n "Shutting down slon "
        ## Stop daemon with killproc(8); if this fails,
        ## killproc sets the return value according to LSB.
        /sbin/killproc -TERM $SLON_BIN
        # Remember status and be verbose.
        rc_status -v
        ;;
    restart)
        ## Stop the service and, regardless of whether it was
        ## running or not, start it again.
        $0 stop
        $0 start
        # Remember status and be quiet.
        rc_status
        ;;
    status)
        echo -n "Checking for service slon "
        ## Check status with checkproc(8); if the process is running,
        ## checkproc will return with exit status 0.
        # Return value is slightly different for the status command:
        # 0 - service up and running
        # 1 - service dead, but /var/run/ pid file exists
        # 2 - service dead, but /var/lock/ lock file exists
        # 3 - service not running (unused)
        # 4 - service status unknown :-(
        # 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
        # NOTE: checkproc returns LSB compliant status values.
        /sbin/checkproc $SLON_BIN
        # NOTE: rc_status knows that we called this init script with the
        # "status" option and adapts its messages accordingly.
        rc_status -v
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
        exit 1
        ;;
esac
rc_exit
======================================================================

#2 There are two main ways to make the slave into the master:
http://www.slony.info/adminguide/slony1-1.2.6/doc/adminguide/failover.html

1. Switchover
   http://www.slony.info/documentation/stmtmoveset.html
2. Failover
   http://www.slony.info/documentation/stmtfailover.html

A "failover" is a concept as well as an actual slonik command. It is an irreversible action; that is, it is assumed the master has crashed.
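As an illustration only, not something to run casually: assuming the replication set has id 1, node 1 is the failed master, and node 2 is the surviving slave (set and node ids here are assumptions; see the stmtfailover page linked above), the failover command in slonik looks roughly like this, using the preamble file generated by slony_vars.sh:

```
include </path/to/your/slony/scripts/slonik_preamble.txt>;
failover (id = 1, backup node = 2);
```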
Once you've repaired the master, you'd have to set Slony up on it again from scratch (which is not really that difficult once you've figured it out and scripted your table set creation).

A "switchover" is really just "moving" a replication set from one node to another; that is, making a different node the "originator" of the data. The magical part is that all the Slony triggers are dropped and re-created appropriately on the nodes, so you can now insert on the new "master" db while the new slave(s) are protected from manual updates. We use switchover in testing our disaster recovery strategy. It is safe and fast to do.
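For reference, here is a hypothetical helper in the style of slony_vars.sh above: it generates a slonik switchover script that moves replication set 1 from node 1 to node 2. The set id, node ids, and file names are assumptions; adjust them for your cluster, and check the stmtmoveset page linked above for the exact syntax for your Slony version.

```shell
#!/bin/sh
# Hypothetical sketch: write a slonik script that switches the origin of
# replication set 1 from node 1 to node 2. Assumes slonik_preamble.txt
# was generated by slony_vars.sh in the given directory.
write_switchover_script() {
    dir="$1"    # directory holding slonik_preamble.txt
    out="$dir/slonik_switchover.txt"
    echo "include <$dir/slonik_preamble.txt>;"                 >  "$out"
    echo "lock set (id = 1, origin = 1);"                      >> "$out"
    echo "move set (id = 1, old origin = 1, new origin = 2);"  >> "$out"
}
# Usage (as the slony user):
#   write_switchover_script /path/to/your/slony/scripts
#   slonik < /path/to/your/slony/scripts/slonik_switchover.txt
```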
More information about the Slony1-general mailing list