Fri Feb 5 12:20:29 PST 2010
- Previous message: [Slony1-general] Any reason not to have 2 replication slaves, replicating to the same query slave
- Next message: [Slony1-general] Any reason not to have 2 replication slaves, replicating to the same query slave
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Fri, Feb 5, 2010 at 5:37 AM, Brad Nicholson <bnichols at ca.afilias.info> wrote:
> On Thu, 2010-02-04 at 22:39 -0800, Tory M Blue wrote:
>> On Thu, Feb 4, 2010 at 6:02 PM, Andrew Sullivan <ajs at crankycanuck.ca> wrote:
>> > On Thu, Feb 04, 2010 at 03:39:03PM -0800, Tory M Blue wrote:
>> >> Slon 1.2.20 (for now, as I migrate to 8.4)
>> >>
>> >> Just working out a new Slon setup, with a cascade configuration:
>> >> 1 master talking to 2 slaves (if not more), which talk to 2-3 Qslaves.
>> >>
>> >> Is it wrong to have both slaves talking, replicating to both Qslaves?
>> >
>> > When you say "replicating to", do you mean they're both actively
>> > sending data to the other replicas? If so, then it won't work. Only
>> > one can be the set origin for a given target at a time. But they can
>> > both be forwarders, such that one could take over for the other.

Not to beat a dead horse, but I had to come back to this. Andrew, you said this would not work; I said I had it working. However, looking at my paths today, I believe you are right.
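For reference, the arrangement Andrew describes (both slaves subscribed from the origin as forwarders, so either could later take over as provider, with the Qslave subscribed via exactly one of them) might look like this in slonik. This is just a sketch using my node numbers; the cluster name and admin conninfo preamble is omitted:

```
subscribe set ( id = 1, provider = 1, receiver = 2, forward = yes );
subscribe set ( id = 1, provider = 1, receiver = 3, forward = yes );
subscribe set ( id = 1, provider = 3, receiver = 4, forward = no );
```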
Here are the current paths and subscriptions on my Qslave (node 4):

  idb03 = 1 (Master Insert)
  idb04 = 2 (Pri Insert Slave)
  idb05 = 3 (Sec Insert Slave)
  qdb03 = 4 (Pri Qslave)

clsdb=# select * from _cls.sl_path;
 pa_server | pa_client | pa_conninfo                                               | pa_connretry
-----------+-----------+-----------------------------------------------------------+--------------
         2 |         1 | dbname=clsdb host=devidb04 user=postgres password=SECURED |           10
         3 |         1 | dbname=clsdb host=devidb05 user=postgres password=SECURED |           10
         1 |         3 | dbname=clsdb host=devidb03 user=postgres password=SECURED |           10
         1 |         2 | dbname=clsdb host=devidb03 user=postgres password=SECURED |           10
         2 |         4 | dbname=clsdb host=devidb04 user=postgres password=SECURED |           10
         3 |         4 | dbname=clsdb host=devidb05 user=postgres password=SECURED |           10
         4 |         2 | dbname=clsdb host=devqdb03 user=postgres password=SECURED |           10
         4 |         3 | dbname=clsdb host=devqdb03 user=postgres password=SECURED |           10
(8 rows)

clsdb=# select * from _cls.sl_subscribe;
 sub_set | sub_provider | sub_receiver | sub_forward | sub_active
---------+--------------+--------------+-------------+------------
       1 |            1 |            2 | t           | t
       1 |            1 |            3 | t           | t
       2 |            1 |            2 | t           | t
       2 |            1 |            3 | t           | t
       3 |            1 |            2 | t           | t
       3 |            1 |            3 | t           | t
       1 |            3 |            4 | f           | t
(7 rows)

Here are the subscribe set commands in my script when I'm adding node 4 (Qslave):

subscribe set ( id = 1, provider = $SLAVE1_NODE, receiver = $ADDNODE_ID, forward = no);
subscribe set ( id = 1, provider = $SLAVE2_NODE, receiver = $ADDNODE_ID, forward = no);

Interestingly (to me, probably not to you folks), while this produced no errors or "don't do that, arsehat" messages, it seems only my second subscribe set command took effect: based on the sl_subscribe output above, it overwrote the first one. Also, when I drop node 3, I am no longer syncing to node 4.
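So the addnode script should presumably issue only one subscription per receiver. A sketch of the corrected step, using the same $SLAVE1_NODE and $ADDNODE_ID variables from my script:

```
subscribe set ( id = 1, provider = $SLAVE1_NODE, receiver = $ADDNODE_ID, forward = no);
```

With a single provider there is nothing for a later subscribe to silently overwrite, and sl_subscribe should show exactly one row for the Qslave.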
admissionclsdb=# select * from _admissioncls.sl_subscribe;
 sub_set | sub_provider | sub_receiver | sub_forward | sub_active
---------+--------------+--------------+-------------+------------
       1 |            1 |            2 | t           | t
       2 |            1 |            2 | t           | t
       2 |            1 |            3 | t           | t
       3 |            1 |            2 | t           | t
       3 |            1 |            3 | t           | t
(5 rows)

Which tells me I need to, I think:

- Remove the second subscribe line from my addnode script when adding the Qslaves (non-forwarders, set 1 subscribers only).
- Add it to the dropnode section for node 3, so that when I drop node 3, I re-subscribe node 4 to set 1 via node 2.

This gets me to the 1-to-1 configuration that Brad also mentioned. Although we know we can do 1-to-many, it doesn't look like many-to-1 is appropriate for subscribe sets.

Hopefully I'm still talking in a dialect you understand, as the guts of Slony are still kind of new to me, although I've had to manage it for the last 3 years. Just trying to make this as fault tolerant and efficient as I can. Obviously, being able to lose a provider without manual intervention would be ideal, but I don't see that quite yet :)

Thanks
Tory
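The dropnode re-subscription step might look roughly like this. It is only a sketch, assuming node 3 is the one being dropped and node 2 takes over as node 4's provider; the cluster name and conninfo lines mirror the sl_path entries above (passwords elided):

```
cluster name = cls;
node 1 admin conninfo = 'dbname=clsdb host=devidb03 user=postgres';
node 2 admin conninfo = 'dbname=clsdb host=devidb04 user=postgres';
node 4 admin conninfo = 'dbname=clsdb host=devqdb03 user=postgres';

subscribe set ( id = 1, provider = 2, receiver = 4, forward = no );
drop node ( id = 3, event node = 1 );
```

Re-subscribing node 4 before dropping node 3 keeps the Qslave from being left without a provider.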