Christopher Browne cbbrowne
Wed May 31 07:40:43 PDT 2006
Rod Taylor <pg at rbt.ca> writes:
>> > Exactly what "replication work" do you mean?  One table? All tables being copied?
>> > In my situation I have 6500*5 + 100 tables to copy.  No way is that going to be
>> > completed in 5 minutes, even though the tables are small.  (And no,
>> > I did not design the schema :)
>> 
>> That isn't the initial COPY, for sure, because none of ours will
>> finish the COPY in 5 mins, either.  I suspect Rod means that there is
>> a time limit on how long a snapshot-application process runs.  It
>> can't be 5 mins, though, or we'd run into this after the initial
>> COPY.  Chris, can you shed more light on this one?
>
> Not only that, but please note from my follow-up message that it often
> happens in the opposite direction of the data flow.
>
> Node 4 subscribes to datasets on Node 1.
>
> Node 1 will (once every two months) cause hundreds of connections to be
> opened to Node 4.
>
> I've seen this between DB machines connected to the same switch, so I
> don't think it is network related.
>
> But again, I have no idea what triggers it.

Rod has mentioned this before; I haven't observed anything like it,
nor can I see a reason for it to happen.

>> > Are you saying slony won't handle databases of >100GB?  Or
>> > tables? If the database is larger than that, exactly what patches
>> > should be added for exactly what result?  For the most part I am
>> > doing production work and never apply patches not in the main
>> > release for obvious reasons.  If there are crucial ones, though,
>> > I need more details.
>> 
>> Rod sent some observations about group size (and a patch) to the list
>> oh, about a month ago?
>
> I think so, but it was really just to bring things back to the way
> they were before that.  The maximum group size has shifted between
> 100 and 10000 a couple of times.

Yeah.  

The latest logic also accelerates the growth faster; the group size
used to increase according to the formula:

  next_size := last_size * 1.1 + 1;

On a system that was well behind, it would take quite a while for that
to grow from 1 to 20.

Now, the formula is:

  next_size := last_size * 2 + 1;

Growing from group size 1 to 20 now takes only 5 iterations of the SYNC
loop.
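
For what it's worth, here's a little standalone sketch (in C; not pulled
from the Slony-I source, so the helper name, the hardcoded target of 20,
and counting the starting size as the first iteration are all my own
assumptions) that counts how many passes through the loop each formula
needs to push the group size from 1 up past 20:

  #include <stdio.h>

  /* Toy model of the two group-size growth formulas; not Slony-I code. */
  static int iterations_to_reach(double factor, int target)
  {
      double size = 1.0;
      int iterations = 1;             /* count the starting group size of 1 */

      while ((int) size < target) {
          size = size * factor + 1;   /* next_size := last_size * factor + 1 */
          iterations++;
      }
      return iterations;
  }

  int main(void)
  {
      printf("old formula (* 1.1 + 1): %d iterations\n",
             iterations_to_reach(1.1, 20));
      printf("new formula (* 2   + 1): %d iterations\n",
             iterations_to_reach(2.0, 20));
      return 0;
  }

Under those assumptions the old formula needs about a dozen passes to
reach 20, while the new one gets there in 5, which matches the numbers
above.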
-- 
output = ("cbbrowne" "@" "ca.afilias.info")
<http://dba2.int.libertyrms.com/>
Christopher Browne
(416) 673-4124 (land)


