<div dir="ltr"><br><div class="gmail_extra">So I&#39;m backing up in a big way. I know what started it, &quot;adding a new insert slave which took 13 hours to complete (indexes etc)&quot;.. But now it doesn&#39;t appear I am able to catch up. I see the slave doing what it&#39;s suppose to, get a bunch of data, truncate the sl_log files move on. But the master is having a hard time.</div><div class="gmail_extra"><br></div><div class="gmail_extra">Postgres 9.4.5 and Slony 2.2.3</div><div class="gmail_extra"><br></div><div class="gmail_extra">All other nodes don&#39;t have any errors or issues.</div><div class="gmail_extra"><br></div><div class="gmail_extra">this is Node 1 (the master)</div><div class="gmail_extra">node 2 is a slave</div><div class="gmail_extra">node 3-5 are query slaves with only 1 of 3 sets being replicated too.</div><div class="gmail_extra"><br></div><div class="gmail_extra">I have interval at 5 minutes and sync_group_maxsize=50</div><div class="gmail_extra"><br></div><div class="gmail_extra">Any suggestions on where to thump it. At some point this will cause issues on my master and when I see that starting, I&#39;ll have to drop node 2 again, and when i add it, it will take 13+ hours and I&#39;ll be back in the same position :)</div><div class="gmail_extra"><br></div><div class="gmail_extra">Thanks</div><div class="gmail_extra">Tory</div>







<div class="gmail_extra"><br></div><div class="gmail_extra"><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Node:  Old Transactions Kept Open</div><div class="gmail_extra">================================================</div><div class="gmail_extra">Old Transaction still running with age 01:48:00 &gt; 01:30:00</div><div class="gmail_extra"><br></div><div class="gmail_extra">Query: autovacuum: VACUUM</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Node: 0 threads seem stuck</div><div class="gmail_extra">================================================</div><div class="gmail_extra">Slony-I components have not reported into sl_components in interval 00:05:00</div><div class="gmail_extra"><br></div><div class="gmail_extra">Perhaps slon is not running properly?</div><div class="gmail_extra"><br></div><div class="gmail_extra">Query:</div><div class="gmail_extra">     select co_actor, co_pid, co_node, co_connection_pid, co_activity, co_starttime, now() - co_starttime, co_event, co_eventtype</div><div class="gmail_extra">     from &quot;_admissioncls&quot;.sl_components</div><div class="gmail_extra">     where  (now() - co_starttime) &gt; &#39;00:05:00&#39;::interval</div><div class="gmail_extra">     order by co_starttime;</div><div class="gmail_extra">  </div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Node: 1 sl_log_1 tuples = 219700 &gt; 200000</div><div class="gmail_extra">================================================</div><div class="gmail_extra">Number of tuples in Slony-I table sl_log_1 is 219700 which</div><div class="gmail_extra">exceeds 200000.</div><div class="gmail_extra"><br></div><div class="gmail_extra">You may wish to investigate whether or not a node is down, or perhaps</div><div class="gmail_extra">if sl_confirm entries have not been propagating properly.</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Node: 1 sl_log_2 tuples = 1.74558e+07 &gt; 200000</div><div class="gmail_extra">================================================</div><div class="gmail_extra">Number of tuples in Slony-I table sl_log_2 is 1.74558e+07 which</div><div class="gmail_extra">exceeds 200000.</div><div class="gmail_extra"><br></div><div class="gmail_extra">You may wish to investigate whether or not a node is down, or perhaps</div><div class="gmail_extra">if sl_confirm entries have not been propagating properly.</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Node: 2 sl_log_2 tuples = 440152 &gt; 200000</div><div class="gmail_extra">================================================</div><div class="gmail_extra">Number of tuples in Slony-I table sl_log_2 is 440152 which</div><div class="gmail_extra">exceeds 200000.</div><div class="gmail_extra"><br></div><div class="gmail_extra">You may wish to investigate whether or not a node is down, or perhaps</div><div class="gmail_extra">if sl_confirm entries have not been propagating properly.</div><div class="gmail_extra"><br></div></div></div>