I want to use a search engine (Typesense), but I need to keep its documents up to date when rows change in a few tables in my Aurora Serverless v1 Postgres cluster.
I wanted to do it with a trigger that invokes a Lambda function, but I'm unable to attach a role to a Serverless v1 cluster, which appears to be a requirement for granting permission to invoke the function.
Serverless v2 might work, but it doesn't scale to 0, so it's a non-starter (or non-stopper?).
My current idea is to write to the Postgres logs from inside the trigger, something like:
CREATE OR REPLACE FUNCTION public.reindex_trigger()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS
$$
DECLARE
  json_value json;
BEGIN
  IF TG_OP = 'DELETE' OR (TG_OP = 'UPDATE' AND OLD <> NEW) OR TG_OP = 'INSERT' THEN
    -- log the change as JSON (NEW isn't populated on DELETE, so fall back to OLD)
    json_value := json_build_object(
      'table', TG_TABLE_NAME,
      'id', CASE WHEN TG_OP = 'DELETE' THEN OLD.id ELSE NEW.id END,
      'op', TG_OP
    );
    RAISE WARNING '${PG_LOG_TRIGGER_PREFIX}%', json_value #>> '{}'; -- 'NOTICE' doesn't appear in logs
  END IF;
  RETURN COALESCE(NEW, OLD); -- OLD for DELETE so the operation isn't cancelled
END;
$$;
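For completeness, each table still needs a trigger attached that calls this function. The configureDatabaseTriggers handler wired up in the stack code below could install them, e.g. through the Data API (which Serverless v1 supports). A rough sketch, with placeholder table names and env var names:

import { RDSDataClient, ExecuteStatementCommand } from "@aws-sdk/client-rds-data";

const client = new RDSDataClient({});

// cluster/secret ARNs assumed to be passed in by the stack
const connection = {
  resourceArn: process.env.CLUSTER_ARN!,
  secretArn: process.env.DB_SECRET_ARN!,
  database: process.env.DB_NAME,
};

// tables whose changes should be mirrored into Typesense (placeholder names)
const TABLES = ["products", "categories"];

const runSql = (sql: string) =>
  client.send(new ExecuteStatementCommand({ ...connection, sql }));

export const handler = async () => {
  for (const table of TABLES) {
    // the Data API runs one statement per call, so drop and create separately
    await runSql(`DROP TRIGGER IF EXISTS reindex_trigger ON ${table}`);
    // EXECUTE PROCEDURE for PG 10 compatibility; on 11+ this can be EXECUTE FUNCTION
    await runSql(
      `CREATE TRIGGER reindex_trigger
         AFTER INSERT OR UPDATE OR DELETE ON ${table}
         FOR EACH ROW EXECUTE PROCEDURE public.reindex_trigger()`
    );
  }
};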
And then create some sort of logs handler to read the logs and parse the events. Is this insane? Maybe there is a better way?
// run DB trigger configuration script
new Script(stack, 'ConfigureReindexTriggers', {
  onCreate: 'domain/search/reindexTrigger/configureDatabaseTriggers.handler',
});

// handler to read postgres logs and trigger reindex
const logsSubscriptionFn = new Function(stack, 'ReindexTrigger', {
  handler: 'domain/search/reindexTrigger/logSubscription.handler',
  config: [reindexFnConfig],
});
reindexFunction.grantInvoke(logsSubscriptionFn); // log subscription can invoke reindex function

// subscribe to logs of our database
const logGroup = LogGroup.fromLogGroupName(
  stack,
  'PgLogGroup',
  // auto-generated log group for the DB:
  `/aws/rds/cluster/${db.cluster.clusterIdentifier}/postgresql`
);

// call our handler when a log is written
new SubscriptionFilter(stack, 'Subscription', {
  logGroup,
  destination: new LambdaDestination(logsSubscriptionFn),
  filterPattern: FilterPattern.allTerms(PG_LOG_TRIGGER_PREFIX), // CDC logs
});
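The logSubscription.handler then has to unpack the CloudWatch Logs payload (it arrives base64-encoded and gzipped under event.awslogs.data), pull the JSON out of each matching line, and hand it to the reindex function. Roughly like this (the prefix value and env var names are placeholders):

import { gunzipSync } from "zlib";
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";
import type { CloudWatchLogsEvent } from "aws-lambda";

const lambda = new LambdaClient({});
// must match the prefix baked into the trigger function; value here is a placeholder
const PG_LOG_TRIGGER_PREFIX = "REINDEX_EVENT::";

export const handler = async (event: CloudWatchLogsEvent) => {
  // CloudWatch Logs delivers subscription data gzipped and base64-encoded
  const payload = JSON.parse(
    gunzipSync(Buffer.from(event.awslogs.data, "base64")).toString("utf8")
  );

  for (const logEvent of payload.logEvents) {
    // a matching line looks roughly like:
    //   ...WARNING:  REINDEX_EVENT::{"table" : "products", "id" : 42, "op" : "UPDATE"}
    const message: string = logEvent.message;
    const idx = message.indexOf(PG_LOG_TRIGGER_PREFIX);
    if (idx === -1) continue;

    const change = JSON.parse(message.slice(idx + PG_LOG_TRIGGER_PREFIX.length));

    // hand the change off to the reindex function asynchronously
    await lambda.send(
      new InvokeCommand({
        FunctionName: process.env.REINDEX_FUNCTION_NAME, // assumed to be provided via config
        InvocationType: "Event",
        Payload: Buffer.from(JSON.stringify(change)),
      })
    );
  }
};

Since ordering across log batches isn't guaranteed, I'd have the reindex function treat each event as "refetch this row and upsert/delete the document" rather than trusting the logged payload as the source of truth.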