4.3. Slony-I Trigger Handling

In PostgreSQL version 8.3, new functionality was added allowing the behaviour of triggers and rules to be altered via ALTER TABLE, using the following forms:

  • DISABLE TRIGGER trigger_name

  • ENABLE TRIGGER trigger_name

  • ENABLE REPLICA TRIGGER trigger_name

  • ENABLE ALWAYS TRIGGER trigger_name

  • DISABLE RULE rewrite_rule_name

  • ENABLE RULE rewrite_rule_name

  • ENABLE REPLICA RULE rewrite_rule_name

  • ENABLE ALWAYS RULE rewrite_rule_name
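
For example (a minimal sketch, with a hypothetical table and trigger purely to illustrate the syntax):

alter table my_table disable trigger my_trigger;         -- never fires
alter table my_table enable trigger my_trigger;          -- the default: fires in origin and local sessions
alter table my_table enable replica trigger my_trigger;  -- fires only in replica sessions
alter table my_table enable always trigger my_trigger;   -- fires regardless of session_replication_role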

A new GUC variable, session_replication_role, controls whether the session is in origin, replica, or local mode; this, in combination with the above enabling/disabling options, determines whether or not the trigger function actually runs.
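
For instance, a session may inspect and change its role as follows (a minimal sketch; note that changing session_replication_role ordinarily requires superuser privileges):

show session_replication_role;             -- 'origin' by default
set session_replication_role to replica;   -- ENABLE triggers stop firing; ENABLE REPLICA triggers fire
set session_replication_role to origin;    -- back to normal behaviour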

We may characterize when triggers fire, under Slony-I replication, based on the following table; the same behaviour applies to PostgreSQL rules.

Table 4-1. Trigger Behaviour

Trigger Form           | When Established | Log Trigger            | denyaccess Trigger     | Action - origin | Action - replica | Action - local
-----------------------+------------------+------------------------+------------------------+-----------------+------------------+---------------
DISABLE TRIGGER        | User request     | disabled on subscriber | enabled on subscriber  | does not fire   | does not fire    | does not fire
ENABLE TRIGGER         | Default          | enabled on subscriber  | disabled on subscriber | fires           | does not fire    | fires
ENABLE REPLICA TRIGGER | User request     | inappropriate          | inappropriate          | does not fire   | fires            | does not fire
ENABLE ALWAYS TRIGGER  | User request     | inappropriate          | inappropriate          | fires           | fires            | fires
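
The firing mode of each trigger is visible in the pg_trigger system catalog; a minimal sketch, assuming a hypothetical replicated table public.my_table:

-- tgenabled encodes the firing mode:
--   'O' = ENABLE, 'D' = DISABLE, 'R' = ENABLE REPLICA, 'A' = ENABLE ALWAYS
select tgname, tgenabled
  from pg_trigger
 where tgrelid = 'public.my_table'::regclass;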

Correspondingly, there are now several ways in which Slony-I interacts with this. Let us outline the cases that are interesting:

  • Before replication is set up, every database starts out in "origin" status, and, by default, all triggers are of the ENABLE TRIGGER form, so they all run, as is normal in a system uninvolved in replication.

  • When a Slony-I subscription is set up, both the logtrigger and denyaccess triggers are added on the origin node, the former being enabled and running, the latter being disabled so that it does not run.

    From a locking perspective, each SLONIK SET ADD TABLE request will need to briefly take out an exclusive lock on each table as it attaches these triggers, which is much the same as has always been the case with Slony-I.

  • On the subscriber, the subscription process will add the same triggers, but with the polarities "reversed", to protect data from accidental corruption on subscribers.

    From a locking perspective, again, there is not much difference from earlier Slony-I behaviour, as the subscription process, due to running TRUNCATE, copying data, and altering table schemas, requires extensive exclusive table locks, and the changes in trigger behaviour do not change those requirements.

  • If you restore a backup of a Slony-I node (taken by pg_dump or any other method), and drop the Slony-I namespace, this now cleanly removes all Slony-I components, leaving the database, including its schema, in a "pristine", consistent state, ready for whatever use may be desired.

  • Section 3.3 is now performed in quite a different way: rather than altering each replicated table to "take it out of replicated mode", Slony-I instead simply shifts into the local status for the duration of this event.

    On the origin, this deactivates the logtrigger trigger.

    On each subscriber, this deactivates the denyaccess trigger.

  • At the time of invoking SLONIK MOVE SET against the former origin, Slony-I must transform that node into a subscriber, which requires dropping the lockset triggers, disabling the logtrigger triggers, and enabling the denyaccess triggers.

    At about the same time, when processing SLONIK MOVE SET against the new origin, Slony-I must transform that node into an origin, which requires disabling the formerly active denyaccess triggers, and enabling the logtrigger triggers.

    From a locking perspective, Slony-I will need to take out exclusive locks to disable and enable the respective triggers.

  • Similarly to SLONIK MOVE SET, SLONIK FAILOVER transforms a subscriber node into an origin, which requires disabling the formerly active denyaccess triggers, and enabling the logtrigger triggers. The locking implications are, again, much the same, requiring an exclusive lock on each such table. (A sketch of these trigger polarity changes follows this list.)
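
To make a couple of the above items concrete, here is a rough sketch, using the same @CLUSTERNAME@/@NAMESPACE@ placeholder convention as the example in the next section, a hypothetical table my_table, and the assumption that the log and deny-access triggers follow the same naming pattern as the truncate trigger shown below; the statements Slony-I actually issues internally may differ in detail:

-- Cleanly removing all Slony-I components from a restored backup of a node:
drop schema @NAMESPACE@ cascade;

-- Turning a subscriber into an origin (as MOVE SET / FAILOVER must do)
-- amounts to flipping the trigger polarities on each replicated table:
alter table my_table disable trigger "_@CLUSTERNAME@_denyaccess";
alter table my_table enable trigger "_@CLUSTERNAME@_logtrigger";

-- ... while the node becoming a subscriber receives the reverse treatment:
alter table my_table disable trigger "_@CLUSTERNAME@_logtrigger";
alter table my_table enable trigger "_@CLUSTERNAME@_denyaccess";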

4.3.1. TRUNCATE in PostgreSQL 8.4+

In PostgreSQL 8.4, triggers were augmented to support the TRUNCATE event. Thus, one may create a trigger which runs when one requests TRUNCATE on a table, as follows:

create trigger "_@CLUSTERNAME@_truncatetrigger" 
   before truncate on my_table 
   for each statement 
     execute procedure @NAMESPACE@.log_truncate(22);

Slony-I supports this on nodes running PostgreSQL 8.4 and above, as follows:

  • Tables have two additional triggers attached to them:

    • log_truncate(tab_id)

      Running on the origin, this captures TRUNCATE requests, and stores them in sl_log_1 and sl_log_2 so that they are applied at the appropriate point on subscriber nodes.

    • truncate_deny()

      Running on subscriber nodes, this forbids running TRUNCATE directly against replicated tables on these nodes, in much the same way denyAccess() forbids running INSERT/UPDATE/DELETE directly against replicated tables.

  • For each table, the command TRUNCATE TABLE ONLY my_schema.my_table CASCADE; is submitted.

    Various options were considered (see Bugzilla Bug #134), and CASCADE was concluded to be the appropriate answer.

    Warning

    If you have a subscriber node where additional tables have gotten attached via foreign keys to a replicated table, then running TRUNCATE against that parent table will also TRUNCATE all the children.

    Of course, it should be observed that attaching such tables was already a terribly dangerous thing to have done, because deleting data from the parent table would already either:

    • Lead to deleting data from the child tables, meaning that the addition of TRUNCATE support is really no change at all; or

    • Lead to foreign keys being broken on the subscriber, causing replication to keel over.

    (In effect, we're not really worsening things.)

  • Note that if a request truncates several tables (e.g. where a table has a hierarchy of children), then a request will be logged in sl_log_1/sl_log_2 for each table, and the TRUNCATE CASCADE will effectively mean that the child tables will be truncated, first indirectly, then directly. If there is a hierarchy of 3 tables, t1, t2, and t3, then t3 will get truncated three times. It is empty after the first TRUNCATE, so the additional iterations will be cheap. (A sketch of such a hierarchy follows this list.)

  • If mixing PostgreSQL 8.3 and higher versions within a cluster:

    • PostgreSQL 8.3 nodes will not capture TRUNCATE requests at all, neither logging the need to propagate the TRUNCATE nor preventing it, on either origin or replica.

    • PostgreSQL 8.4 nodes do capture TRUNCATE requests for both purposes.

    • If a PostgreSQL 8.4+ node captures a TRUNCATE request, it will apply fine against a subscriber running PostgreSQL 8.3.
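
The cascading behaviour described above can be illustrated with a small, purely hypothetical hierarchy of tables:

create table t1 (id integer primary key);
create table t2 (id integer primary key,
                 t1_id integer references t1 (id));
create table t3 (id integer primary key,
                 t2_id integer references t2 (id));

-- Truncating the parent must cascade, since t2 (and, through it, t3)
-- reference it; all three tables end up empty:
truncate table only t1 cascade;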