From cbbrowne at lists.slony.info Mon Nov 12 15:01:49 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Mon Nov 12 15:01:49 2007
Subject: [Slony1-commit] slony1-www/content news.txt
Message-ID: <20071112230149.4EF43290265@main.slony.info>
Update of /home/cvsd/slony1/slony1-www/content
In directory main.slony.info:/tmp/cvs-serv7338/content
Modified Files:
news.txt
Log Message:
Fix title
Index: news.txt
===================================================================
RCS file: /home/cvsd/slony1/slony1-www/content/news.txt,v
retrieving revision 1.37
retrieving revision 1.38
diff -C2 -d -r1.37 -r1.38
*** news.txt 12 Nov 2007 23:00:55 -0000 1.37
--- news.txt 12 Nov 2007 23:01:47 -0000 1.38
***************
*** 11,15 ****
---
! Slony-I 1.2.12 pre-1 available
http://slony.info/downloads/1.2/source/slony1-1.2.12.tar.bz2
2007-10-30
--- 11,15 ----
---
! Slony-I 1.2.12 available
http://slony.info/downloads/1.2/source/slony1-1.2.12.tar.bz2
2007-10-30
From cbbrowne at lists.slony.info Tue Nov 13 12:55:30 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Tue Nov 13 12:55:31 2007
Subject: [Slony1-commit] slony1-www layout.php
Message-ID: <20071113205530.28074290C1B@main.slony.info>
Update of /home/cvsd/slony1/slony1-www
In directory main.slony.info:/tmp/cvs-serv4185
Modified Files:
layout.php
Log Message:
Add in bugzilla link
Index: layout.php
===================================================================
RCS file: /home/cvsd/slony1/slony1-www/layout.php,v
retrieving revision 1.8
retrieving revision 1.9
diff -C2 -d -r1.8 -r1.9
*** layout.php 12 Feb 2007 19:04:46 -0000 1.8
--- layout.php 13 Nov 2007 20:55:28 -0000 1.9
***************
*** 21,24 ****
--- 21,25 ----
Documentation
Download
+ Bugs
From cbbrowne at lists.slony.info Tue Nov 13 13:00:31 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Tue Nov 13 13:00:32 2007
Subject: [Slony1-commit] slony1-www layout.php
Message-ID: <20071113210031.6CA03290C59@main.slony.info>
Update of /home/cvsd/slony1/slony1-www
In directory main.slony.info:/tmp/cvs-serv4307
Modified Files:
layout.php
Log Message:
Add in news about Bugzilla, and prefer the "index.cgi"-less suffix...
Index: layout.php
===================================================================
RCS file: /home/cvsd/slony1/slony1-www/layout.php,v
retrieving revision 1.9
retrieving revision 1.10
diff -C2 -d -r1.9 -r1.10
*** layout.php 13 Nov 2007 20:55:28 -0000 1.9
--- layout.php 13 Nov 2007 21:00:29 -0000 1.10
***************
*** 21,25 ****
Documentation
Download
! Bugs
--- 21,25 ----
Documentation
Download
! Bugs
From cbbrowne at lists.slony.info Tue Nov 13 13:00:31 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Tue Nov 13 13:00:33 2007
Subject: [Slony1-commit] slony1-www/content news.txt
Message-ID: <20071113210031.7BA7F290D59@main.slony.info>
Update of /home/cvsd/slony1/slony1-www/content
In directory main.slony.info:/tmp/cvs-serv4307/content
Modified Files:
news.txt
Log Message:
Add in news about Bugzilla, and prefer the "index.cgi"-less suffix...
Index: news.txt
===================================================================
RCS file: /home/cvsd/slony1/slony1-www/content/news.txt,v
retrieving revision 1.38
retrieving revision 1.39
diff -C2 -d -r1.38 -r1.39
*** news.txt 12 Nov 2007 23:01:47 -0000 1.38
--- news.txt 13 Nov 2007 21:00:29 -0000 1.39
***************
*** 11,17 ****
---
Slony-I 1.2.12 available
http://slony.info/downloads/1.2/source/slony1-1.2.12.tar.bz2
! 2007-10-30
Christopher Browne
--- 11,30 ----
---
+ Slony-I Bugzilla available
+ http://bugs.slony.info/bugzilla/
+ 2007-11-13
+ Christopher Browne
+
+ Many thanks to Command Prompt staff for installing and configuring
+ a Bugzilla instance for use in
+ tracking Slony-I bugs and feature requests.
+
+
+ Feature and bug lists will be migrating into this over the next
+ little while...
+
+ ---
Slony-I 1.2.12 available
http://slony.info/downloads/1.2/source/slony1-1.2.12.tar.bz2
! 2007-11-12
Christopher Browne
From cbbrowne at lists.slony.info Thu Nov 22 09:47:05 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 22 09:47:07 2007
Subject: [Slony1-commit] slony1-engine/tools configure-replication.sh
configure-replication.txt
Message-ID: <20071122174705.D8C52290CC1@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine/tools
In directory main.slony.info:/tmp/cvs-serv16051
Modified Files:
configure-replication.sh configure-replication.txt
Log Message:
Add in SLONIKCONFDIR to allow the gentle user to specify a directory
in which to place the slonik files. This will enable using this tool
for automated processes.
Index: configure-replication.sh
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/tools/configure-replication.sh,v
retrieving revision 1.3
retrieving revision 1.4
diff -C2 -d -r1.3 -r1.4
*** configure-replication.sh 2 Nov 2007 15:31:43 -0000 1.3
--- configure-replication.sh 22 Nov 2007 17:47:03 -0000 1.4
***************
*** 36,39 ****
--- 36,47 ----
PORT5=${PORT5:-${PGPORT:-"5432"}}
+
+ tmpdir=`mktemp -d -t slonytest-temp.XXXXXX`
+ if [ $MY_MKTEMP_IS_DECREPIT ] ; then
+ tmpdir=`mktemp -d /tmp/slonytest-temp.XXXXXX`
+ fi
+ mktmp=${SLONIKCONFDIR:-${tmpdir}}
+ mkdir -p ${mktmp}
+
store_path()
{
***************
*** 78,86 ****
}
- mktmp=`mktemp -d -t slonytest-temp.XXXXXX`
- if [ $MY_MKTEMP_IS_DECREPIT ] ; then
- mktmp=`mktemp -d /tmp/slonytest-temp.XXXXXX`
- fi
-
PREAMBLE=${mktmp}/preamble.slonik
--- 86,89 ----
Index: configure-replication.txt
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/tools/configure-replication.txt,v
retrieving revision 1.1
retrieving revision 1.2
diff -C2 -d -r1.1 -r1.2
*** configure-replication.txt 20 Oct 2006 16:01:01 -0000 1.1
--- configure-replication.txt 22 Nov 2007 17:47:03 -0000 1.2
***************
*** 33,36 ****
--- 33,40 ----
namespace, such as public.my_sequence)
+ SLONIKCONFDIR - a directory in which to place the config files. If
+ not set, a unique directory under /tmp will be created for this
+ purpose.
+
Defaults are provided for ALL of these values, so that if you run
configure-replication.sh without setting any environment variables,
***************
*** 52,57 ****
provides if all your databases are on the same server!
! slonik config files are generated in a temp directory under /tmp. The
! usage is thus:
1. preamble.slonik is a "preamble" containing connection info used by
--- 56,62 ----
provides if all your databases are on the same server!
! slonik config files are generated in a temp directory under /tmp.
! (Or, if you set SLONIKCONFDIR, the directory you specify in that
! environment variable.) The usage is thus:
1. preamble.slonik is a "preamble" containing connection info used by
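By way of illustration, here is a minimal sketch of driving configure-replication.sh
from an automated job with the new SLONIKCONFDIR knob. The directory, cluster, and
table names below are placeholders; the other environment variables are the ones the
tool already documents (NUMNODES, CLUSTER, DB1/DB2, HOST1/HOST2, TABLES, SEQUENCES):

# Minimal sketch: generate the slonik config files into a known directory
# instead of a freshly created /tmp directory. All values are examples.
SLONIKCONFDIR=/var/lib/slony/slonik-conf
export SLONIKCONFDIR
NUMNODES=2 CLUSTER=testcluster DB1=app DB2=app_replica \
  HOST1=localhost HOST2=localhost \
  TABLES="public.accounts" SEQUENCES="public.accounts_id_seq" \
  ./configure-replication.sh
# The script creates SLONIKCONFDIR if needed; the generated files
# (preamble.slonik, create_nodes.slonik, and so on) land there.
ls ${SLONIKCONFDIR}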
From cbbrowne at lists.slony.info Thu Nov 22 09:50:09 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 22 09:50:10 2007
Subject: [Slony1-commit] slony1-engine RELEASE-2.0
Message-ID: <20071122175009.5EC73290D22@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine
In directory main.slony.info:/tmp/cvs-serv16233
Modified Files:
RELEASE-2.0
Log Message:
Note change to tools/configure-replication.sh
Index: RELEASE-2.0
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/RELEASE-2.0,v
retrieving revision 1.6
retrieving revision 1.7
diff -C2 -d -r1.6 -r1.7
*** RELEASE-2.0 11 Jul 2007 17:20:18 -0000 1.6
--- RELEASE-2.0 22 Nov 2007 17:50:07 -0000 1.7
***************
*** 76,77 ****
--- 76,86 ----
attributes of each test, including system/platform information,
versions, and whether or not the test succeeded or failed.
+
+ - Revised functions that generate listen paths
+
+ - tools/configure-replication.sh script permits specifying a
+ destination path for generated config files. This enables using
+ it within automated processes, and makes it possible to use it to
+ generate Slonik scripts for tests in the "test bed," which has
+ the further merit of making tools/configure-replication.sh a
+ regularly-regression-tested tool.
From cbbrowne at lists.slony.info Thu Nov 22 14:51:06 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 22 14:51:10 2007
Subject: [Slony1-commit] slony1-engine/config acx_libpq.m4
Message-ID: <20071122225106.A3D0B290BEA@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine/config
In directory main.slony.info:/tmp/cvs-serv26068/config
Modified Files:
Tag: REL_1_2_STABLE
acx_libpq.m4
Log Message:
Bug #16 - Change in function typenameTypeId() in PG 8.3 to have 3 args
http://www.slony.info/bugzilla/show_bug.cgi?id=16
Patch provided by Dave Page - thanks!
Index: acx_libpq.m4
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/config/acx_libpq.m4,v
retrieving revision 1.24.2.3
retrieving revision 1.24.2.4
diff -C2 -d -r1.24.2.3 -r1.24.2.4
*** acx_libpq.m4 29 Nov 2006 18:52:14 -0000 1.24.2.3
--- acx_libpq.m4 22 Nov 2007 22:51:04 -0000 1.24.2.4
***************
*** 352,356 ****
fi
!
AC_MSG_CHECKING(for typenameTypeId)
if test -z "$ac_cv_typenameTypeId_args"; then
--- 352,363 ----
fi
! AC_MSG_CHECKING(for typenameTypeId)
! if test -z "$ac_cv_typenameTypeId_args"; then
! AC_TRY_COMPILE(
! [#include "postgres.h"
! #include "parser/parse_type.h"],
! [typenameTypeId(NULL, NULL, NULL); ],
! ac_cv_typenameTypeId_args=3)
! fi
AC_MSG_CHECKING(for typenameTypeId)
if test -z "$ac_cv_typenameTypeId_args"; then
***************
*** 371,375 ****
AC_MSG_RESULT(no)
else
! if test "$ac_cv_typenameTypeId_args" = 2; then
AC_DEFINE(HAVE_TYPENAMETYPEID_2)
elif test "$ac_cv_typenameTypeId_args" = 1; then
--- 378,384 ----
AC_MSG_RESULT(no)
else
! if test "$ac_cv_typenameTypeId_args" = 3; then
! AC_DEFINE(HAVE_TYPENAMETYPEID_3)
! elif test "$ac_cv_typenameTypeId_args" = 2; then
AC_DEFINE(HAVE_TYPENAMETYPEID_2)
elif test "$ac_cv_typenameTypeId_args" = 1; then
From cbbrowne at lists.slony.info Thu Nov 22 14:51:06 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 22 14:51:10 2007
Subject: [Slony1-commit] slony1-engine/src/backend slony1_funcs.c
Message-ID: <20071122225106.AFEE0290C1B@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine/src/backend
In directory main.slony.info:/tmp/cvs-serv26068/src/backend
Modified Files:
Tag: REL_1_2_STABLE
slony1_funcs.c
Log Message:
Bug #16 - Change in function typenameTypeId() in PG 8.3 to have 3 args
http://www.slony.info/bugzilla/show_bug.cgi?id=16
Patch provided by Dave Page - thanks!
Index: slony1_funcs.c
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/src/backend/slony1_funcs.c,v
retrieving revision 1.53.2.2
retrieving revision 1.53.2.3
diff -C2 -d -r1.53.2.2 -r1.53.2.3
*** slony1_funcs.c 2 May 2007 21:37:07 -0000 1.53.2.2
--- slony1_funcs.c 22 Nov 2007 22:51:04 -0000 1.53.2.3
***************
*** 1352,1355 ****
--- 1352,1358 ----
makeString("xxid"));
+ #ifdef HAVE_TYPENAMETYPEID_3
+ xxid_typid = typenameTypeId(NULL,xxid_typename,NULL);
+ #else
#ifdef HAVE_TYPENAMETYPEID_2
xxid_typid = typenameTypeId(NULL,xxid_typename);
***************
*** 1357,1360 ****
--- 1360,1364 ----
xxid_typid = typenameTypeId(xxid_typename);
#endif
+ #endif
plan_types[0] = INT4OID;
***************
*** 1435,1438 ****
--- 1439,1445 ----
lappend(lappend(NIL, makeString(NameStr(cs->clustername))),
makeString("xxid"));
+ #ifdef HAVE_TYPENAMETYPEID_3
+ xxid_typid = typenameTypeId(NULL, xxid_typename,NULL);
+ #else
#ifdef HAVE_TYPENAMETYPEID_2
xxid_typid = typenameTypeId(NULL, xxid_typename);
***************
*** 1440,1443 ****
--- 1447,1451 ----
xxid_typid = typenameTypeId(xxid_typename);
#endif
+ #endif
/*
From cbbrowne at lists.slony.info Thu Nov 22 14:51:06 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 22 14:51:10 2007
Subject: [Slony1-commit] slony1-engine RELEASE config.h.in configure
Message-ID: <20071122225107.CC03B290CFB@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine
In directory main.slony.info:/tmp/cvs-serv26068
Modified Files:
Tag: REL_1_2_STABLE
RELEASE config.h.in configure
Log Message:
Bug #16 - Change in function typenameTypeId() in PG 8.3 to have 3 args
http://www.slony.info/bugzilla/show_bug.cgi?id=16
Patch provided by Dave Page - thanks!
Index: RELEASE
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/RELEASE,v
retrieving revision 1.1.2.21
retrieving revision 1.1.2.22
diff -C2 -d -r1.1.2.21 -r1.1.2.22
*** RELEASE 30 Oct 2007 19:30:05 -0000 1.1.2.21
--- RELEASE 22 Nov 2007 22:51:04 -0000 1.1.2.22
***************
*** 1,4 ****
--- 1,9 ----
$Id$
+ RELEASE 1.2.13
+ - Fixed problem with compatibility with PostgreSQL 8.3; function
+ typenameTypeId() has 3 arguments as of 8.3.
+
+
RELEASE 1.2.12
- Fixed problem with DDL SCRIPT parser where C-style comments were
Index: config.h.in
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/config.h.in,v
retrieving revision 1.17.2.9
retrieving revision 1.17.2.10
diff -C2 -d -r1.17.2.9 -r1.17.2.10
*** config.h.in 4 Sep 2007 20:42:14 -0000 1.17.2.9
--- config.h.in 22 Nov 2007 22:51:04 -0000 1.17.2.10
***************
*** 88,91 ****
--- 88,94 ----
#undef HAVE_TYPENAMETYPEID_2
+ /* Set to 1 if typenameTypeId() takes 3 args */
+ #undef HAVE_TYPENAMETYPEID_3
+
/* Set to 1 if standard_conforming_strings available */
#undef HAVE_STANDARDCONFORMINGSTRINGS
Index: configure
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/configure,v
retrieving revision 1.70.2.8
retrieving revision 1.70.2.9
diff -C2 -d -r1.70.2.8 -r1.70.2.9
*** configure 4 Oct 2007 15:27:59 -0000 1.70.2.8
--- configure 22 Nov 2007 22:51:04 -0000 1.70.2.9
***************
*** 7652,7656 ****
--- 7652,7701 ----
fi
+ { echo "$as_me:$LINENO: checking for typenameTypeId" >&5
+ echo $ECHO_N "checking for typenameTypeId... $ECHO_C" >&6; }
+ if test -z "$ac_cv_typenameTypeId_args"; then
+ cat >conftest.$ac_ext <<_ACEOF
+ /* confdefs.h. */
+ _ACEOF
+ cat confdefs.h >>conftest.$ac_ext
+ cat >>conftest.$ac_ext <<_ACEOF
+ /* end confdefs.h. */
+ #include "postgres.h"
+ #include "parser/parse_type.h"
+ int
+ main ()
+ {
+ typenameTypeId(NULL, NULL, NULL);
+ ;
+ return 0;
+ }
+ _ACEOF
+ rm -f conftest.$ac_objext
+ if { (ac_try="$ac_compile"
+ case "(($ac_try" in
+ *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;;
+ *) ac_try_echo=$ac_try;;
+ esac
+ eval "echo \"\$as_me:$LINENO: $ac_try_echo\"") >&5
+ (eval "$ac_compile") 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && {
+ test -z "$ac_c_werror_flag" ||
+ test ! -s conftest.err
+ } && test -s conftest.$ac_objext; then
+ ac_cv_typenameTypeId_args=3
+ else
+ echo "$as_me: failed program was:" >&5
+ sed 's/^/| /' conftest.$ac_ext >&5
+
+
+ fi
+ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ fi
{ echo "$as_me:$LINENO: checking for typenameTypeId" >&5
echo $ECHO_N "checking for typenameTypeId... $ECHO_C" >&6; }
***************
*** 7747,7751 ****
echo "${ECHO_T}no" >&6; }
else
! if test "$ac_cv_typenameTypeId_args" = 2; then
cat >>confdefs.h <<\_ACEOF
#define HAVE_TYPENAMETYPEID_2 1
--- 7792,7801 ----
echo "${ECHO_T}no" >&6; }
else
! if test "$ac_cv_typenameTypeId_args" = 3; then
! cat >>confdefs.h <<\_ACEOF
! #define HAVE_TYPENAMETYPEID_3 1
! _ACEOF
!
! elif test "$ac_cv_typenameTypeId_args" = 2; then
cat >>confdefs.h <<\_ACEOF
#define HAVE_TYPENAMETYPEID_2 1
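For anyone building against 8.3, a quick (hypothetical) way to confirm which variant
the probe settled on is to inspect the generated config.h after configure finishes,
assuming it is written to the top of the build tree:

# After ./configure has run against a PostgreSQL 8.3 installation,
# exactly one of the typenameTypeId defines should be set to 1.
grep HAVE_TYPENAMETYPEID config.h
# With 8.3 the expected output, per the patch above, includes:
#   #define HAVE_TYPENAMETYPEID_3 1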
From cbbrowne at lists.slony.info Thu Nov 22 14:56:23 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 22 14:56:24 2007
Subject: [Slony1-commit] slony1-engine TODO
Message-ID: <20071122225623.15F48290CC1@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine
In directory main.slony.info:/tmp/cvs-serv26338
Modified Files:
TODO
Log Message:
Remove TODO item tracked as bug #10 in Bugzilla
Index: TODO
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/TODO,v
retrieving revision 1.12
retrieving revision 1.13
diff -C2 -d -r1.12 -r1.13
*** TODO 21 Sep 2007 22:11:14 -0000 1.12
--- TODO 22 Nov 2007 22:56:21 -0000 1.13
***************
*** 62,65 ****
--- 62,67 ----
- Have a test that does a bunch of subtransactions
+ - Need upgrade path
+
Longer Term Items
---------------------------
From cbbrowne at lists.slony.info Thu Nov 22 14:58:44 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 22 14:58:45 2007
Subject: [Slony1-commit] slony1-engine TODO
Message-ID: <20071122225844.49758290275@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine
In directory main.slony.info:/tmp/cvs-serv26447
Modified Files:
TODO
Log Message:
Remove items that are reflected in bugzilla bugs #9 and 10
Index: TODO
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/TODO,v
retrieving revision 1.13
retrieving revision 1.14
diff -C2 -d -r1.13 -r1.14
*** TODO 22 Nov 2007 22:56:21 -0000 1.13
--- TODO 22 Nov 2007 22:58:42 -0000 1.14
***************
*** 4,23 ****
$Id$
- Documentation Improvements
- --------------------------------------------
-
- - Removed all support for STORE/DROP TRIGGER commands. Users are
- supposed to use the ALTER TABLE [ENABLE|DISABLE] TRIGGER
- functionality in Postgres from now on.
-
- There is now mention of this in docs/adminguide/slonyupgrade.sgml;
- need to enhance it further after discussion with Jan Wieck.
-
Short Term Items
---------------------------
- CANCEL SUBSCRIPTION
-
-
Improve script that tries to run UPDATE FUNCTIONS across versions to
verify that upgrades work properly.
--- 4,10 ----
From cbbrowne at lists.slony.info Thu Nov 22 15:02:08 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 22 15:02:10 2007
Subject: [Slony1-commit] slony1-engine TODO
Message-ID: <20071122230208.68B84290C1B@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine
In directory main.slony.info:/tmp/cvs-serv26552
Modified Files:
TODO
Log Message:
Remove TODO items now in Bugzilla items 11, 12, 13
Index: TODO
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/TODO,v
retrieving revision 1.14
retrieving revision 1.15
diff -C2 -d -r1.14 -r1.15
*** TODO 22 Nov 2007 22:58:42 -0000 1.14
--- TODO 22 Nov 2007 23:02:06 -0000 1.15
***************
*** 24,40 ****
NODE and UPDATE FUNCTIONS.
- - Script to "duplicate node"; create a new node that looks just like
- an existing node, that is, with the same replication sets.
-
- Overview:
- - Run slony1_extract_schema.sh against the origin node for the sets
- subscribed to; this gives an SQL script.
-
- - Generate slonik script to do:
- * STORE NODE for new node
- * STORE PATH to get the new node to communicate with the node it's
- duplicating
- * SUBSCRIBE SET for all the relevant sets
-
- Need to draw some "ducttape" tests into NG tests
--- 24,27 ----
***************
*** 54,78 ****
---------------------------
- - Add more tests (what???) to test_slony_state script(s).
-
- e.g. - add a warning if there exist tables with generated PK.
-
- Arguably, that isn't really a good thing to do; if there is a table
- with column generated via TABLE ADD KEY, then we have the
- undesirable result that there will be an error/warning reported
- every time test_slony_state is run.
-
- Perhaps there should be a second script that looks for "static"
- problems, so we can leave test_slony_state to look for "dynamic"
- problems.
-
- - Partitioning Support
-
- Add stored procedures to support adding in subscriptions to empty
- tables; verify emptiness on origin, TRUNCATE on subscribers. The
- stored proc(s) would be run as part of EXECUTE SCRIPT...
-
- - Use PGXS
-
- Windows-compatible version of tools/slony1_dump.sh
--- 41,44 ----
From cbbrowne at lists.slony.info Tue Nov 27 13:42:16 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Tue Nov 27 13:42:18 2007
Subject: [Slony1-commit] slony1-engine/tests/one-offs/multi-clusters - New
directory
Message-ID: <20071127214216.A6A5E148BF3@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine/tests/one-offs/multi-clusters
In directory main.slony.info:/tmp/cvs-serv1627/multi-clusters
Log Message:
Directory /home/cvsd/slony1/slony1-engine/tests/one-offs/multi-clusters added to the repository
From cbbrowne at lists.slony.info Tue Nov 27 13:43:48 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Tue Nov 27 13:43:50 2007
Subject: [Slony1-commit] slony1-engine/tests/one-offs/multi-clusters README
build-inheriting-clusters.sh
Message-ID: <20071127214348.B0EEC148BF3@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine/tests/one-offs/multi-clusters
In directory main.slony.info:/tmp/cvs-serv1665
Added Files:
README build-inheriting-clusters.sh
Log Message:
Add in a fairly complex replication test involving multiple coexisting
clusters: 3 clusters across 4 distinct databases.
This provides some verification that the tools function properly.
--- NEW FILE: README ---
build-inheriting-clusters
--------------------------------------
$Id: README,v 1.1 2007-11-27 21:43:46 cbbrowne Exp $
This is an example that validates interactions across multiple
replication clusters.
- It creates three clusters:
- Two "OLTP" clusters that (very) vaguely resemble primitive domain
name registries
- An "Aggregation" cluster where the origin node aggregates data from
the two "registries"
The script "tools/configure-replication.sh" is used to automate
generation of the slonik scripts required to set up replication.
The script "tools/mkslonconf.sh" is used to generate .conf files for
each of the slon processes.
[Implicit result: This test provides some assurance that those two
tools work as expected for some not entirely trivial configurations.]
- On the "OLTP" cluster origins, stored functions are set up to ease creation
of sorta-random data.
- On the "aggregation" cluster" origin, there are two sorts of stored
functions set up:
1. A trigger function is put on the OLTP transaction table to
detect every time a new transaction comes in, and to add the
transaction to a "work queue."
2. A function is also set up to "work the queue," copying data from
the registries' individual transaction tables into a single
central transaction table.
Processing takes place thus:
- Set up all databases and schemas
- Create all tables that are required
- Create all stored procedures
- Generate Slonik scripts for the three clusters
- Run the slonik scripts
- Generate further "STORE TRIGGER" slonik scripts to allow the trigger
functions to be visible on a subscriber node.
- We briefly run a "slon" process against each node in order to get
node creation and STORE PATH requests to propagate across the
clusters.
Without this, the use of "tools/mkslonconf.sh" picks up dodgy
configuration; this is a script that needs to be run against a
"live" cluster that can already be consider to be functioning.
- Then, we use tools/mkslonconf.sh to generate a .conf file for each
slon.
- All 6 slon processes are launched to manage the 3 clusters.
- Queries are run on the "TLD origins" to inject transactions to that
cluster.
The trigger functions on the "centralized billing" subscriber node
collect a queue of transactions to be copied into the "data
warehouse" schema.
Periodically, a process works on that queue.
--- NEW FILE: build-inheriting-clusters.sh ---
#!/bin/bash
# $Id: build-inheriting-clusters.sh,v 1.1 2007-11-27 21:43:46 cbbrowne Exp $
# Here is a test which sets up 3 clusters:
# TLD1 - tld1 + billing - origin node receiving data, and "billing" DB doing aggregation
# TLD2 - tld2 + billing - origin node receiving data, and "billing" DB doing aggregation
# DW - billing + dw - cluster that processes aggregated data
# SLONYTOOLS needs to be set
SLONYTOOLS=${HOME}/Slony-I/CMD/slony1-HEAD
for db in tld1 tld2 billing dw; do
dropdb $db
createdb $db
createlang plpgsql $db
done
TESTHOME=/tmp/inhcltst
rm -rf ${TESTHOME}
mkdir -p ${TESTHOME}
for TLD in tld1 tld2; do
NTABLES=""
for t in domains hosts contacts txns; do
NTABLES="${NTABLES} ${TLD}.${t}"
done
for t in host contact; do
NTABLES="${NTABLES} ${TLD}.domain_${t}"
done
NSEQUENCES=""
for s in domains hosts contacts; do
NSEQUENCES="${NSEQUENCES} ${TLD}.${s}_id_seq"
done
SLONIKCONFDIR="${TESTHOME}/${TLD}" DB2=billing DB1="${TLD}" NUMNODES=2 CLUSTER=${TLD} HOST1=localhost HOST2=localhost TABLES="${NTABLES}" SEQUENCES="${NSEQUENCES}" ${SLONYTOOLS}/tools/configure-replication.sh
done
NTABLES=""
for t in txns; do
NTABLES="${NTABLES} billing.${t}"
done
NSEQUENCES=""
for s in txns; do
NSEQUENCES="${NSEQUENCES} billing.${s}_id_seq"
done
SLONIKCONFDIR=${TESTHOME}/dw CLUSTER=dw NUMNODES=2 DB1=billing DB2=dw HOST1=localhost HOST2=localhost TABLES="${NTABLES}" SEQUENCES="${NSEQUENCES}" ${SLONYTOOLS}/tools/configure-replication.sh
for DB in tld1 billing; do
for schema in tld1 billing; do
psql -d ${DB} -c "create schema ${schema};"
done
done
for DB in tld2 billing; do
for schema in tld2 billing; do
psql -d ${DB} -c "create schema ${schema};"
done
done
psql -d dw -c "create schema billing;"
for TLD in tld1 tld2; do
for DB in ${TLD} billing; do
psql -d ${DB} -c "create table ${TLD}.txns (id serial primary key, domain_id integer not null, created_on timestamptz default now(), txn_type text);"
done
done
for DB in billing dw; do
psql -d ${DB} -c "create table billing.txns (id serial, tld text, domain_id integer not null, created_on timestamptz default now(), txn_type text, primary key(id, tld));"
done
for TLD in tld1 tld2; do
for DB in ${TLD} billing; do
for table in domains hosts contacts; do
psql -d ${DB} -c "create table ${TLD}.${table} (id serial primary key, name text not null unique, created_on timestamptz default now());"
done
for sub in host contact; do
psql -d ${DB} -c "create table ${TLD}.domain_${sub} (domain_id integer not null, ${sub}_id integer not null, primary key (domain_id, ${sub}_id));"
done
done
done
MKRAND="
create or replace function public.randomtext (integer) returns text as \$\$
declare
i_id alias for \$1;
c_res text;
c_fc integer;
c_str text;
c_base integer;
c_bgen integer;
c_rand integer;
c_i integer;
c_j integer;
begin
c_res := 'PF';
c_str := '1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ';
c_base := length(c_str);
c_bgen := c_base * c_base * c_base * c_base;
for c_i in 1..3 loop
c_rand := floor(random() * c_bgen + i_id + random())::integer % c_bgen;
for c_j in 1..4 loop
c_fc := c_rand % c_base;
c_res := c_res || substr(c_str, c_fc, 1);
c_rand := c_rand / c_base;
end loop;
end loop;
return lower(c_res);
end;\$\$ language 'plpgsql';
"
for db in tld1 tld2; do
psql -d ${db} -c "${MKRAND}"
done
for tld in tld1 tld2; do
MKDOMAIN="
create or replace function ${tld}.mkdomain (text) returns integer as \$\$
declare
i_tld alias for \$1;
c_domain_name text;
c_domain_id integer;
c_contact_name text;
c_contact_id integer;
c_host_name text;
c_host_id integer;
begin
c_domain_name := public.randomtext(nextval('${tld}.domains_id_seq')::integer) || '.' || i_tld;
insert into ${tld}.domains (name) values (c_domain_name);
select currval('${tld}.domains_id_seq') into c_domain_id;
c_contact_name := public.randomtext(nextval('${tld}.contacts_id_seq')::integer);
insert into ${tld}.contacts (name) values (c_contact_name);
select currval('${tld}.contacts_id_seq') into c_contact_id;
insert into ${tld}.domain_contact (domain_id, contact_id) values (c_domain_id, c_contact_id);
insert into ${tld}.hosts (name) values ('ns1.' || public.randomtext(nextval('${tld}.hosts_id_seq')::integer) || '.net');
select currval('${tld}.hosts_id_seq') into c_host_id;
insert into ${tld}.domain_host (domain_id, host_id) values (c_domain_id, c_host_id);
insert into ${tld}.domain_host (domain_id, host_id) select c_domain_id, id from ${tld}.hosts order by random() limit 1;
insert into ${tld}.txns (domain_id, txn_type) values (c_domain_id, 'domain:create');
return c_domain_id;
end; \$\$ language plpgsql;
"
psql -d ${tld} -c "${MKDOMAIN}"
done
psql -d billing -c "
create table billing.work_queue (tld text, txn_id integer);
create or replace function tld1.add_to_queue () returns trigger as \$\$
begin
insert into billing.work_queue (tld, txn_id) values ('tld1', NEW.id);
return new;
end; \$\$ language plpgsql;
create or replace function tld2.add_to_queue () returns trigger as \$\$
begin
insert into billing.work_queue (tld, txn_id) values ('tld2', NEW.id);
return new;
end; \$\$ language plpgsql;
create or replace function billing.work_on_queue(integer) returns integer as \$\$
declare
c_prec record;
c_count integer;
begin
c_count := 0;
for c_prec in select * from billing.work_queue limit \$1 loop
if c_prec.tld = 'tld1' then
insert into billing.txns (id, tld, domain_id, created_on, txn_type) select id, 'tld1', domain_id, created_on, txn_type from tld1.txns where id = c_prec.txn_id;
delete from billing.work_queue where tld = c_prec.tld and txn_id = c_prec.txn_id;
c_count := c_count + 1;
end if;
if c_prec.tld = 'tld2' then
insert into billing.txns (id, tld, domain_id, created_on, txn_type) select id, 'tld2', domain_id, created_on, txn_type from tld2.txns where id = c_prec.txn_id;
delete from billing.work_queue where tld = c_prec.tld and txn_id = c_prec.txn_id;
c_count := c_count + 1;
end if;
end loop;
return c_count;
end; \$\$ language plpgsql;
create or replace function billing.work_on_queue() returns integer as \$\$
begin
return billing.work_on_queue(97);
end; \$\$ language plpgsql;
create trigger tld1_queue after insert on tld1.txns for each row execute procedure tld1.add_to_queue ();
create trigger tld2_queue after insert on tld2.txns for each row execute procedure tld2.add_to_queue ();
"
for i in create_nodes create_set store_paths subscribe_set_2; do
for j in `find ${TESTHOME} -name "${i}.slonik"`; do
echo "Executing slonik script: ${j}"
slonik ${j}
done
done
# Add in "store trigger" to activate the above triggers
for tld in tld1 tld2; do
TFILE=${TESTHOME}/${tld}/store-billing-trigger.slonik
echo "include <${TESTHOME}/${tld}/preamble.slonik>;" > ${TFILE}
echo "store trigger (table id = 4, trigger name = 'tld1_queue', event node = 1);" >> ${TFILE}
slonik ${TFILE}
done
for cluster in tld1 tld2; do
for nodetld2 in 1:${cluster} 2:billing; do
node=`echo $nodetld2 | cut -d ":" -f 1`
dbname=`echo $nodetld2 | cut -d ":" -f 2`
echo "Briefly fire up slon for ${nodetld2}"
(sleep 3; killall slon)&
slon -q 5 ${cluster} "dbname=${dbname}" > /dev/null 2> /dev/null
done
DESTDIR=${TESTHOME}/${cluster}
SLONYCLUSTER=${cluster} MKDESTINATION=${DESTDIR} LOGHOME=${DESTDIR}/logs PGDATABASE=${cluster} ${SLONYTOOLS}/tools/mkslonconf.sh
done
for cluster in dw; do
for nodetld2 in 1:billing 2:dw; do
node=`echo $nodetld2 | cut -d ":" -f 1`
dbname=`echo $nodetld2 | cut -d ":" -f 2`
echo "Briefly fire up slon for ${nodetld2}"
(sleep 3; killall slon)&
slon -q 5 ${cluster} "dbname=${dbname}" > /dev/null 2> /dev/null
done
DESTDIR=${TESTHOME}/${cluster}
SLONYCLUSTER=${cluster} MKDESTINATION=${DESTDIR} LOGHOME=${DESTDIR}/logs PGDATABASE=${cluster} ${SLONYTOOLS}/tools/mkslonconf.sh
done
for cluster in tld1 tld2 dw; do
DESTDIR=${TESTHOME}/${cluster}
for i in 1 2; do
echo "Start up node ${i} for real..."
CONF="${DESTDIR}/${cluster}/conf/node${i}.conf"
LOG="${DESTDIR}/logs/node${i}.log"
slon -f ${CONF} > ${LOG} 2>&1 &
done
done
# Add in "store trigger" to activate the billing triggers
echo "Generate STORE TRIGGER scripts to activate billing triggers"
for tld in tld1 tld2; do
TFILE=${TESTHOME}/${tld}/store-billing-trigger.slonik
echo "include <${TESTHOME}/${tld}/preamble.slonik>;" > ${TFILE}
echo "store trigger (table id = 4, trigger name = '${tld}_queue', event node = 1);" >> ${TFILE}
slonik ${TFILE}
echo "Activated trigger ${tld}_queue"
done
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17; do
for tld in tld1 tld2; do
BQ="select ${tld}.mkdomain('${tld}');"
Q="begin;"
for j in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25; do
for q in 1 2 3 4; do
Q="${Q} ${BQ} "
done
done
Q="${Q} end;"
psql -d ${tld} -c "${Q}" &
sleep 1
done
psql -d billing -c "select billing.work_on_queue(167);" &
done
for i in 1 2 3 4 5 6 7 8 9 10 11 12; do
psql -d billing -c "select billing.work_on_queue(277);"
psql -d billing -c "vacuum billing.work_queue;"
done
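A brief note on running this test: the header comment says SLONYTOOLS needs to be
set, but the script then assigns it unconditionally near the top, so point that
assignment (or your own copy of the script) at a real checkout first. A hypothetical
invocation from the top of a slony1-engine checkout:

# Hypothetical invocation; requires createdb/dropdb/createlang, psql,
# slonik and slon on the PATH, and it drops and recreates the
# tld1, tld2, billing and dw databases.
bash tests/one-offs/multi-clusters/build-inheriting-clusters.sh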
From cbbrowne at lists.slony.info Thu Nov 29 07:57:58 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 29 07:58:00 2007
Subject: [Slony1-commit] slony1-engine/config acx_libpq.m4
Message-ID: <20071129155758.AC3A8290CA6@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine/config
In directory main.slony.info:/tmp/cvs-serv31408
Modified Files:
acx_libpq.m4
Log Message:
Change to libpq detection code...
- This version of Slony-I no longer needs to support versions earlier than
8.3, so we alter the rules relating to earlier versions and either:
- Drop irrelevant tests
- Report that they are irrelevant
Index: acx_libpq.m4
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/config/acx_libpq.m4,v
retrieving revision 1.27
retrieving revision 1.28
diff -C2 -d -r1.27 -r1.28
*** acx_libpq.m4 29 Nov 2006 18:44:36 -0000 1.27
--- acx_libpq.m4 29 Nov 2007 15:57:56 -0000 1.28
***************
*** 143,168 ****
PG_VERSION_MINOR=`echo $PG_VERSION | cut -d. -f2`
if test "$PG_VERSION_MAJOR" = "7"; then
! if test $PG_VERSION_MINOR -gt 3; then
AC_MSG_RESULT($PG_VERSION)
! AC_DEFINE(PG_VERSION_OK,1,[PostgreSQL 7.4 or later])
else
AC_MSG_RESULT("error")
AC_MSG_ERROR(Your version of PostgreSQL ($PG_VERSION) is lower
! than the required 7.4. Slony-I needs functions included in
a newer version.)
fi
- fi
- if test "$PG_VERSION_MAJOR" = "8"; then
AC_MSG_RESULT($PG_VERSION)
! AC_DEFINE(PG_VERSION_OK,1,[PostgreSQL 7.4 or later])
fi
! if test "$PG_VERSION_MAJOR" = "8"; then
! if test $PG_VERSION_MINOR -gt 0; then
! if test "$PG_SHAREDIR" = ""; then
! PG_SHAREDIR=`$PG_CONFIG_LOCATION --sharedir`/ 2>/dev/null
! echo "pg_config says pg_sharedir is $PG_SHAREDIR"
! fi
! fi
fi
--- 143,169 ----
PG_VERSION_MINOR=`echo $PG_VERSION | cut -d. -f2`
if test "$PG_VERSION_MAJOR" = "7"; then
! AC_MSG_RESULT("error")
! AC_MSG_ERROR(Your version of PostgreSQL ($PG_VERSION) is lower
! than the required 8.3. Slony-I needs functionality included in
! a newer version.)
! fi
! if test "$PG_VERSION_MAJOR" = "8"; then
! if test $PG_VERSION_MINOR -gt 2; then
AC_MSG_RESULT($PG_VERSION)
! AC_DEFINE(PG_VERSION_OK,1,[PostgreSQL 8.3 or later])
else
AC_MSG_RESULT("error")
AC_MSG_ERROR(Your version of PostgreSQL ($PG_VERSION) is lower
! than the required 8.3. Slony-I needs functionality included in
a newer version.)
fi
AC_MSG_RESULT($PG_VERSION)
! AC_DEFINE(PG_VERSION_OK,1,[PostgreSQL 8.3 or later])
fi
!
! if test "$PG_SHAREDIR" = ""; then
! PG_SHAREDIR=`$PG_CONFIG_LOCATION --sharedir`/ 2>/dev/null
! echo "pg_config says pg_sharedir is $PG_SHAREDIR"
fi
***************
*** 231,236 ****
else
AC_MSG_ERROR(Your version of libpq doesn't have PQunescapeBytea
! this means that your version of PostgreSQL is lower than 7.3
! and thus not supported by Slony-I.)
fi
--- 232,238 ----
else
AC_MSG_ERROR(Your version of libpq doesn't have PQunescapeBytea
! which means that your version of PostgreSQL is lower than 7.3
! and thus not even remotely supported by Slony-I version 2,
! which requires 8.3+)
fi
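Since this branch now refuses anything older than 8.3 at configure time, it may save
a failed configure run to check the installed PostgreSQL first; a rough sketch using
pg_config (path assumptions aside):

# Rough pre-check: Slony-I 2.0 requires PostgreSQL 8.3 or later.
PG_VERSION=`pg_config --version | awk '{print $2}'`
case "$PG_VERSION" in
  7.*|8.0*|8.1*|8.2*)
    echo "PostgreSQL $PG_VERSION is too old for this branch (8.3+ required)" ;;
  *)
    echo "PostgreSQL $PG_VERSION should satisfy the configure check" ;;
esac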
From cbbrowne at lists.slony.info Thu Nov 29 13:29:05 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 29 13:29:07 2007
Subject: [Slony1-commit] slony1-engine RELEASE-2.0
Message-ID: <20071129212905.95101290CCB@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine
In directory main.slony.info:/tmp/cvs-serv6919
Modified Files:
RELEASE-2.0
Log Message:
Fix to bug #15
http://www.slony.info/bugzilla/show_bug.cgi?id=15
Fix to bug #15 - where long cluster name (>40 chars) leads to
things breaking when an index name is created that contains
the cluster name.
-> Warn upon creating a long cluster name.
-> Give a useful exception that explains the cause rather
than merely watching index creation fail.
Index: RELEASE-2.0
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/RELEASE-2.0,v
retrieving revision 1.7
retrieving revision 1.8
diff -C2 -d -r1.7 -r1.8
*** RELEASE-2.0 22 Nov 2007 17:50:07 -0000 1.7
--- RELEASE-2.0 29 Nov 2007 21:29:03 -0000 1.8
***************
*** 85,86 ****
--- 85,97 ----
the further merit of making tools/configure-replication.sh a
regularly-regression-tested tool.
+
+ - Fix to bug #15 - where long cluster name (>40 chars) leads to
+ things breaking when an index name is created that contains
+ the cluster name.
+
+ -> Warn upon creating a long cluster name.
+ -> Give a useful exception that explains the cause rather
+ than merely watching index creation fail.
+
+ http://www.slony.info/bugzilla/show_bug.cgi?id=15
+
From cbbrowne at lists.slony.info Thu Nov 29 13:29:05 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 29 13:29:07 2007
Subject: [Slony1-commit] slony1-engine/src/backend slony1_funcs.sql
Message-ID: <20071129212905.A9E06290D36@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine/src/backend
In directory main.slony.info:/tmp/cvs-serv6919/src/backend
Modified Files:
slony1_funcs.sql
Log Message:
Fix to bug #15
http://www.slony.info/bugzilla/show_bug.cgi?id=15
Fix to bug #15 - where long cluster name (>40 chars) leads to
things breaking when an index name is created that contains
the cluster name.
-> Warn upon creating a long cluster name.
-> Give a useful exception that explains the cause rather
than merely watching index creation fail.
Index: slony1_funcs.sql
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/src/backend/slony1_funcs.sql,v
retrieving revision 1.123
retrieving revision 1.124
diff -C2 -d -r1.123 -r1.124
*** slony1_funcs.sql 23 Oct 2007 17:00:27 -0000 1.123
--- slony1_funcs.sql 29 Nov 2007 21:29:03 -0000 1.124
***************
*** 697,700 ****
--- 697,706 ----
perform setval(''@NAMESPACE@.sl_local_node_id'', p_local_node_id);
perform @NAMESPACE@.storeNode_int (p_local_node_id, p_comment, false);
+
+ if (pg_catalog.current_setting(''max_identifier_length'')::integer - pg_catalog.length(''@NAMESPACE@'')) < 5 then
+ raise notice ''Slony-I: Cluster name length [%] versus system max_identifier_length [%] '', pg_catalog.length(''@NAMESPACE@''), pg_catalog.current_setting(''max_identifier_length'');
+ raise notice ''leaves narrow/no room for some Slony-I-generated objects (such as indexes).'';
+ raise notice ''You may run into problems later!'';
+ end if;
return p_local_node_id;
***************
*** 5172,5175 ****
--- 5178,5183 ----
v_count int4;
v_iname text;
+ v_ilen int4;
+ v_maxlen int4;
BEGIN
v_count := 0;
***************
*** 5195,5199 ****
if not found then
-- raise notice ''index was not found - add it!'';
! idef := ''create index "PartInd_@CLUSTERNAME@_sl_log_'' || v_log || ''-node-'' || v_dummy.set_origin ||
''" on @NAMESPACE@.sl_log_'' || v_log || '' USING btree(log_xid @NAMESPACE@.xxid_ops) where (log_origin = '' || v_dummy.set_origin || '');'';
execute idef;
--- 5203,5214 ----
if not found then
-- raise notice ''index was not found - add it!'';
! v_iname := ''PartInd_@CLUSTERNAME@_sl_log_'' || v_log || ''-node-'' || v_dummy.set_origin;
! v_ilen := pg_catalog.length(v_iname);
! v_maxlen := pg_catalog.current_setting(''max_identifier_length''::text)::int4;
! if v_ilen > v_maxlen then
! raise exception ''Length of proposed index name [%] > max_identifier_length [%] - cluster name probably too long'', v_ilen, v_maxlen;
! end if;
!
! idef := ''create index "'' || v_iname ||
''" on @NAMESPACE@.sl_log_'' || v_log || '' USING btree(log_xid @NAMESPACE@.xxid_ops) where (log_origin = '' || v_dummy.set_origin || '');'';
execute idef;
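For administrators who want to check an existing or proposed cluster name by hand,
here is a rough shell sketch of the same length test the patch performs; the cluster
name and node id are placeholders, and psql is assumed to reach any database in the
installation:

# Rough sketch of the length check added above, done from the shell.
CLUSTER=my_rather_long_cluster_name   # placeholder cluster name
NODE=1                                # placeholder origin node id
MAXLEN=`psql -At -c "show max_identifier_length;"`
INAME="PartInd_${CLUSTER}_sl_log_1-node-${NODE}"
if [ ${#INAME} -gt ${MAXLEN} ]; then
  echo "Proposed index name (${#INAME} chars) exceeds max_identifier_length (${MAXLEN})"
  echo "Choose a shorter cluster name."
fi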
From cbbrowne at lists.slony.info Thu Nov 29 13:37:48 2007
From: cbbrowne at lists.slony.info (Chris Browne)
Date: Thu Nov 29 13:37:49 2007
Subject: [Slony1-commit] slony1-engine/doc/adminguide slonik_ref.sgml
Message-ID: <20071129213748.932EF290D56@main.slony.info>
Update of /home/cvsd/slony1/slony1-engine/doc/adminguide
In directory main.slony.info:/tmp/cvs-serv7165
Modified Files:
slonik_ref.sgml
Log Message:
Add notes to INIT CLUSTER about the maximum cluster name length
Index: slonik_ref.sgml
===================================================================
RCS file: /home/cvsd/slony1/slony1-engine/doc/adminguide/slonik_ref.sgml,v
retrieving revision 1.74
retrieving revision 1.75
diff -C2 -d -r1.74 -r1.75
*** slonik_ref.sgml 22 Oct 2007 20:49:26 -0000 1.74
--- slonik_ref.sgml 29 Nov 2007 21:37:46 -0000 1.75
***************
*** 463,468 ****
CLUSTER does not need to draw configuration from other
existing nodes.
-
Locking Behaviour
--- 463,475 ----
CLUSTER does not need to draw configuration from other
existing nodes.
+
+ Be aware that some objects are created that contain
+ the cluster name as part of their name. (Notably, partial indexes
+ on sl_log_1 and sl_log_2.) As a
+ result, really long cluster names are a bad
+ idea, as they can make object names blow up past the
+ typical maximum name length of 63 characters.
+
Locking Behaviour