Tue Feb 19 07:02:54 PST 2008
- Previous message: [Slony1-general] proper procedure for re-starting slony after replication slave reboots
- Next message: [Slony1-general] proper procedure for re-starting slony after replication slave reboots
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Tue, Feb 19, 2008 at 08:46:49AM -0500, Geoffrey wrote:
> It's now obvious that there's a problem with slony for this one
> database. For example, on one table, the primary node has grown by
> about 6000 records, yet the slave is still sitting at the same value
> from yesterday.
>
> Should I restart the daemons? Do I need to start over? I don't see
> anything in the logs that tells me there's a problem.

I would restart the daemons and see if that fixes it, yes (although I'd expect an error). But I wonder whether your slave is properly configured. Are you sure you have fsync enabled and write caching turned off? Was this a controlled reboot, or did it reboot itself? If you have lost cached data, then your replica could indeed be missing data: Slony is only going to be as reliable as the underlying PostgreSQL installation.

A
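As a quick diagnostic before restarting anything, you can ask Slony itself how far behind the subscriber is by querying the sl_status view on the origin node, and confirm the fsync setting the reply asks about. This is a sketch only; `_mycluster` is a placeholder for your actual Slony cluster schema name:

```sql
-- Run on the origin (master) node.
-- Replace _mycluster with your real cluster name (assumed here).
SELECT st_origin,
       st_received,
       st_lag_num_events,  -- events not yet confirmed by the subscriber
       st_lag_time         -- wall-clock replication lag
  FROM _mycluster.sl_status;

-- On the slave, verify that fsync has not been disabled:
SHOW fsync;
```

If st_lag_num_events keeps growing while the slons are running, the logs on the subscriber's slon process are the next place to look.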