Fri Mar 23 08:05:26 PDT 2007
- Previous message: [Slony1-general] Vacuum full required?
- Next message: [Slony1-general] Unexpected problems with Slony config
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Hit two things recently that were pretty non-POLA from my perspective. I haven't seen these documented anywhere ...

First, it appears that when installing Slony, you _must_ have a node 1. We were planning to use "serial #" style node IDs that conveyed some meaning about what the nodes do (actually, node names would be even better, but that's a wish). When trying to create a new cluster without defining a node 1, Slony gives an error that it couldn't find conninfo for node 1. It would appear that a dependency on node 1 is hardcoded somewhere.

The second thing that surprised me is an odd limit on the size of the node ID. As I already mentioned, we were trying to use node IDs that contain some intelligence (serial # style), so our node IDs were around 20000000. I didn't expect that to be a problem, since it's certainly small enough to fit in an int, but slonik gave a rather cryptic error:

  <stdin>:974: PGRES_FATAL_ERROR select "_clustername".initializeLocalNode(20000000, 'Local Slave node');
  select "_clustername".enableNode_int(20000000);
   - ERROR:  bigint out of range
  CONTEXT:  SQL statement "SELECT setval('"_clustername".sl_rowid_seq', $1::int8 * '1000000000000000'::int8)"
  PL/pgSQL function "initializelocalnode" line 26 at perform

This makes it appear as if the node ID is being multiplied by one quadrillion before being stuffed into a BIGINT. When I set the node ID down to single digits, the error stopped.

-- 
Bill Moran
Collaborative Fusion Inc.
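The arithmetic behind that error can be sketched out quickly. Based on the '1000000000000000'::int8 literal in the error message, the sequence is seeded with node_id * 10^15, and PostgreSQL's bigint is a signed 64-bit integer. A rough check (the helper function here is hypothetical, just to illustrate the limit, not part of Slony):

```python
# Reproduce the overflow arithmetic implied by the error message.
# PostgreSQL bigint (int8) is a signed 64-bit integer.
INT8_MAX = 2**63 - 1                 # 9223372036854775807
MULTIPLIER = 1_000_000_000_000_000   # 10^15, from the '1000000000000000' literal

def node_id_fits(node_id: int) -> bool:
    """True if node_id * 10^15 still fits in a signed 64-bit integer."""
    return node_id * MULTIPLIER <= INT8_MAX

print(node_id_fits(1))               # small node IDs are fine
print(node_id_fits(20000000))        # 2e22 overflows int8
print(INT8_MAX // MULTIPLIER)        # largest node ID that would fit
```

By this reckoning the largest node ID that survives the multiplication is 9223, which would explain why single-digit IDs work while 20000000 does not.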
More information about the Slony1-general mailing list