Christopher Browne cbbrowne
Wed Nov 10 16:15:58 PST 2004
Rod Taylor wrote:

>The declare'd statement which finds the 2000 or so statements within a
>"snapshot" seems to take a little over a minute to FETCH the first 100
>entries. Statement attached.
>
>The query itself isn't that bad, it's just that the log_xid range is
>about 200000 transactions wide, which means it has to sift through a
>fair amount of data to get what it's looking for.
>
>Is it possible to get slony to deal with 10 or 100 snapshots at a time
>rather than just 1 when it's trying to catch up?
[postgres@tcwsds501]:pg74$ slon -h
usage: slon [options] clustername conninfo

Options:
    -d <debuglevel>       verbosity of logging (1..4)
    -s <milliseconds>     SYNC check interval (default 10000)
    -t <milliseconds>     SYNC interval timeout (default 60000)
    -g <num>              maximum SYNC group size (default 6)

You might pass a larger value for the "-g" option.  The maximum group
size is 100 SYNC events at once, although that might vary a bit with
1.0.1.
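For example, to have slon group up to 80 SYNC events per round, pass -g
on the command line (the cluster name and conninfo below are
placeholders, not values from your setup):

```shell
slon -g 80 mycluster "dbname=mydb host=replica user=slony"
```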

In 1.0.5, the relevant bit in slon.c is this (note that the limit is a
count of SYNC events, not milliseconds):

            case 'g':
                sync_group_maxsize = strtol(optarg, NULL, 10);
                if (sync_group_maxsize < 1 || sync_group_maxsize > 100)
                {
                    fprintf(stderr, "sync group size must be between 1 and 100\n");
                    errors++;
                }
                group_size_set = 1;
                break;
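For illustration, that range check can be pulled out into a standalone
helper.  This is just a sketch: parse_sync_group_size is a hypothetical
name, not a function in slon.

```c
#include <stdio.h>
#include <stdlib.h>

/* Parse a -g argument the way slon's option loop does: convert with
   strtol, then reject anything outside the allowed 1..100 range.
   Returns the parsed size on success, or -1 on an out-of-range value. */
long parse_sync_group_size(const char *arg)
{
    long size = strtol(arg, NULL, 10);
    if (size < 1 || size > 100)
    {
        fprintf(stderr, "sync group size must be between 1 and 100\n");
        return -1;
    }
    return size;
}
```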
-- 
http://cbbrowne.dyndns.info:8741/cgi-bin/twiki/view/Sandbox/SlonyIAdministration


More information about the Slony1-general mailing list