Christopher Browne cbbrowne
Tue Mar 7 09:49:10 PST 2006
Brian Wong wrote:

>List,
>I am trying to get Slony working with the example given in the
>documentation. I am using the pgbench tool as suggested. The problem
>is, when I subscribe the slave node to the master, the slave slon
>process can't find the column "_Slony-I_test_cluster_rowID" on the
>master. I checked the table and verified it is there. Here are the
>relevant logs from the slave slon process:
>
>NOTICE:  truncate of "public"."accounts" succeeded
>NOTICE:  truncate of "public"."branches" succeeded
>NOTICE:  truncate of "public"."tellers" succeeded
>NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index
>"history__Slony-I_testcluster_rowID_key" for table "history"
>CONTEXT:  SQL statement "alter table only "public"."history" add
>unique ("_Slony-I_testcluster_rowID");"
>PL/pgSQL function "determineattkindserial" line 54 at execute statement
>NOTICE:  truncate of "public"."history" succeeded
>2006-03-07 12:31:55 EST ERROR  remoteWorkerThread_1: "select
>"_testcluster".prepareTableForCopy(4); copy "public"."history"
>("tid","bid","aid","delta","mtime","filler","_Slony-I_test_cluster_rowID","_Slony-I_testcluster_rowID")
>from stdin; " ERROR:  column "_Slony-I_test_cluster_rowID" of relation
>"history" does not exist
> ERROR:  column "_Slony-I_test_cluster_rowID" of relation "history"
>does not exist
>
>Here is the description of the 'history' table on the master:
>
>testbench=# \d history
>                                                    Table "public.history"
>           Column            |            Type             |                             Modifiers
>-----------------------------+-----------------------------+------------------------------------------------------------------
> tid                         | integer                     |
> bid                         | integer                     |
> aid                         | integer                     |
> delta                       | integer                     |
> mtime                       | timestamp without time zone |
> filler                      | character(22)               |
> _Slony-I_test_cluster_rowID | bigint                      | not null default nextval('_test_cluster.sl_rowid_seq'::regclass)
> _Slony-I_testcluster_rowID  | bigint                      | not null default nextval('_testcluster.sl_rowid_seq'::regclass)
>Indexes:
>    "history__Slony-I_test_cluster_rowID_key" UNIQUE, btree ("_Slony-I_test_cluster_rowID")
>    "history__Slony-I_testcluster_rowID_key" UNIQUE, btree ("_Slony-I_testcluster_rowID")
>Triggers:
>    _test_cluster_logtrigger_4 AFTER INSERT OR DELETE OR UPDATE ON history FOR EACH ROW EXECUTE PROCEDURE _test_cluster.logtrigger('_test_cluster', '4', 'vvvvvvk')
>    _testcluster_logtrigger_4 AFTER INSERT OR DELETE OR UPDATE ON history FOR EACH ROW EXECUTE PROCEDURE _testcluster.logtrigger('_testcluster', '4', 'vvvvvvvk')
>
>
>I am sure I am missing something very simple. Thanks in advance.
Well, it looks like you have re-initialized replication a couple of
times with different cluster names: multiple "table add key" columns
have been added, and there are multiple log triggers.  (Wow, that's
pretty freaky!)
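A quick way to confirm what each cluster left behind is to query the
system catalogs.  A rough sketch -- the LIKE patterns just match the
column and trigger names that show up in your \d output:

```sql
-- Row-ID columns that Slony-I's "table add key" added to user tables
SELECT c.relname AS table_name, a.attname AS column_name
  FROM pg_class c
  JOIN pg_attribute a ON a.attrelid = c.oid
 WHERE a.attname LIKE '%Slony-I%rowID'
   AND NOT a.attisdropped;

-- Log triggers installed by each cluster
SELECT tgrelid::regclass AS table_name, tgname AS trigger_name
  FROM pg_trigger
 WHERE tgname LIKE '%logtrigger%';
```

If either query returns rows for more than one cluster name, you know
the database has remnants of multiple initializations.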

The database as it stands is pretty thoroughly "mussed up."

I'd be inclined to drop the database and reinitialize from scratch with
whichever cluster name you're planning to use for your tests.
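If dropping the database isn't convenient, slonik's UNINSTALL NODE is
the supported way to clean a node out.  Done by hand, the cleanup
amounts to something like the following sketch, assuming the two
cluster names from your logs and that both slon daemons are stopped
first:

```sql
-- Dropping each cluster's schema removes its log triggers
-- (and its sl_rowid_seq sequence) along with it
DROP SCHEMA "_test_cluster" CASCADE;
DROP SCHEMA "_testcluster" CASCADE;

-- The row-ID columns added to the replicated tables remain and
-- have to be dropped table by table, e.g. for history:
ALTER TABLE public.history DROP COLUMN "_Slony-I_test_cluster_rowID";
ALTER TABLE public.history DROP COLUMN "_Slony-I_testcluster_rowID";
```

Either way, make sure only one cluster name is in play before you
subscribe again.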



More information about the Slony1-general mailing list