Tory M Blue tmblue at gmail.com
Thu Aug 4 10:29:55 PDT 2016
On Thu, Aug 4, 2016 at 8:39 AM, Jan Wieck <jan at wi3ck.info> wrote:
> Another problem we recently ran into is that after bulk operations, the
> queries selecting the log data on the data provider can degrade. A
> workaround
> in one case was to ALTER the slony user and set work_mem to 512MB. The
> work_mem of 100MB, used by the application, was causing the scans on
> the sl_log table to become sequential scans.
>
> The problem is amplified if the tables are spread across many sets. Merging
> the sets into few larger ones causes fewer scans.
>
>
> Regards, Jan
>
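
(For anyone following along, the per-role override Jan describes would look
roughly like this; the role and database names are just placeholders:)

    ALTER ROLE slony SET work_mem = '512MB';
    -- or, scoped to only the replicated database:
    ALTER ROLE slony IN DATABASE mydb SET work_mem = '512MB';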

Unfortunately, I've never been able to migrate to a dedicated slony user,
so slony runs as postgres and my settings are

work_mem = 2GB

maintenance_work_mem = 2GB

So I don't think I can test this theory.

Also of note, we have 3 sets
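
(If consolidating them the way Jan suggests turns out to help, my
understanding is the slonik command would be roughly the following; the
set and origin IDs are placeholders, and both sets have to share the same
origin and the same subscribers before a merge will go through:)

    # Set 1 survives the merge; set 2 is folded into it.
    merge set (id = 1, add id = 2, origin = 1);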

Same thing today: I just don't see any long-running queries that would
leave slony no time to truncate the table, so I'm not clear what is
preventing the sl_log table from being truncated. (It really sounds like
something Jan has run into before: just a huge table where, for some
reason, the truncate can't complete before another job or request comes
in?) Even now the sl_log table is over 12 million rows, and I expect it
to truncate/clear in the next 30 minutes or so, but why? The big bulk
operation finishes at 5am, so I'm not sure why it takes almost another
6 hours to truncate that table.
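
For what it's worth, something like the following (assuming PostgreSQL
9.2 or later for the pid/state/query column names in pg_stat_activity)
is the sort of check I'd use to look for open transactions that might be
holding the cleanup back:

    -- Sessions with a transaction open longer than 30 minutes.
    SELECT pid, usename, state, xact_start, query
      FROM pg_stat_activity
     WHERE xact_start < now() - interval '30 minutes'
     ORDER BY xact_start;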

Thanks again!
Tory

