Triggers

This document describes the database triggers used in the Handelsregister application.

create_basic_discovery_job

Automatically creates a basic discovery processing job when a new search is inserted.

Trigger Details

  • Table: public.searches
  • Event: AFTER INSERT
  • Function: processing.create_basic_discovery_job()

What It Does

When a new search is created, this trigger automatically:

  • Creates a processing job of type basic_discovery in the processing.processing_jobs table
  • Sets status to queued
  • Links the job to the search via search_id
  • Includes the search query and user_id in the job data

Job Data Structure

```json
{
  "user_id": "uuid_of_search_creator"
}
```

Notes

  • Runs with SECURITY DEFINER privileges
  • Part of the automated processing pipeline
  • Ensures every search gets a basic discovery job without manual intervention
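The behavior above can be sketched as a PL/pgSQL trigger function plus its `AFTER INSERT` trigger. This is an illustrative reconstruction, not the actual source: the column names `id`, `query`, and `user_id` on `public.searches`, and the `data` column on `processing.processing_jobs`, are assumptions.

```sql
-- Sketch only; column names on both tables are assumed, not confirmed here.
CREATE OR REPLACE FUNCTION processing.create_basic_discovery_job()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
  INSERT INTO processing.processing_jobs (type, status, search_id, data)
  VALUES (
    'basic_discovery',
    'queued',
    NEW.id,
    -- search query and user_id travel with the job, per the notes above
    jsonb_build_object('query', NEW.query, 'user_id', NEW.user_id)
  );
  RETURN NEW;
END;
$$;

CREATE TRIGGER create_basic_discovery_job
AFTER INSERT ON public.searches
FOR EACH ROW
EXECUTE FUNCTION processing.create_basic_discovery_job();
```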

create_entity_discovery_job

Automatically creates a basic discovery processing job when an entity's processing status changes to indicate user-initiated discovery.

Trigger Details

  • Table: content.entities
  • Event: AFTER UPDATE OF processing_status
  • Function: processing.create_entity_discovery_job()
  • Condition: NEW.processing_status = 'basic_discovery_running' AND OLD.processing_status = 'discovered'

What It Does

When an entity's processing status changes from discovered to basic_discovery_running, this trigger automatically:

  • Creates a processing job of type basic_discovery in the processing.processing_jobs table
  • Sets status to queued
  • Links the job to the entity via entity_id
  • Uses the entity name as the search term
  • Includes metadata about the trigger source and search parameters

Job Data Structure

```json
{
  "trigger_source": "user_discovery",
  "search_mode": "exact_match",
  "max_results": 1
}
```

Notes

  • Runs with SECURITY DEFINER privileges
  • Uses restricted search_path for security
  • Triggered by user actions that require fresh entity data
  • Part of the user-initiated discovery workflow
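A rough sketch of this trigger, including the status-transition condition and the restricted `search_path` mentioned above. The `id` and `name` columns on `content.entities`, the `data` column on `processing.processing_jobs`, and the `search_term` key are assumptions for illustration.

```sql
-- Sketch only; entity column names and the search_term key are assumed.
CREATE OR REPLACE FUNCTION processing.create_entity_discovery_job()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = ''  -- restricted search_path, per the security notes
AS $$
BEGIN
  INSERT INTO processing.processing_jobs (type, status, entity_id, data)
  VALUES (
    'basic_discovery',
    'queued',
    NEW.id,
    jsonb_build_object(
      'search_term',    NEW.name,  -- entity name used as the search term
      'trigger_source', 'user_discovery',
      'search_mode',    'exact_match',
      'max_results',    1
    )
  );
  RETURN NEW;
END;
$$;

CREATE TRIGGER create_entity_discovery_job
AFTER UPDATE OF processing_status ON content.entities
FOR EACH ROW
WHEN (NEW.processing_status = 'basic_discovery_running'
      AND OLD.processing_status = 'discovered')
EXECUTE FUNCTION processing.create_entity_discovery_job();
```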

create_xml_parsing_job

Automatically creates an XML parsing job when an entity's data is ready to be processed.

Trigger Details

  • Table: content.entities
  • Event: AFTER INSERT OR UPDATE
  • Function: processing.create_xml_parsing_job()
  • Condition: NEW.processing_status = 'xml_ready'

What It Does

When an entity is created with or updated to a processing_status of xml_ready, this trigger automatically:

  • Creates a new processing job of type xml_parsing in the processing.processing_jobs table.
  • Sets the job status to queued.
  • Links the job directly to the entity via entity_id.

Notes

  • This trigger handles two key scenarios:
    1. Auto-Download: Fires on INSERT when a new entity is created with pre-downloaded XML data.
    2. User-Initiated: Fires on UPDATE when a user action leads to the successful download of an entity's XML.
  • The conditional logic to differentiate between INSERT and UPDATE is handled inside the trigger function to prevent loops and support both event types.
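The INSERT/UPDATE handling described above might look roughly like this; the duplicate-suppression check inside the function body is an illustrative guess at how loops are prevented, and `content.entities.id` is an assumed column name.

```sql
-- Sketch only; the in-function INSERT/UPDATE check is illustrative.
CREATE OR REPLACE FUNCTION processing.create_xml_parsing_job()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
  -- On UPDATE, only act on a genuine transition into 'xml_ready',
  -- so unrelated updates do not re-queue jobs or cause loops.
  IF TG_OP = 'UPDATE'
     AND OLD.processing_status IS NOT DISTINCT FROM NEW.processing_status THEN
    RETURN NEW;
  END IF;

  INSERT INTO processing.processing_jobs (type, status, entity_id)
  VALUES ('xml_parsing', 'queued', NEW.id);
  RETURN NEW;
END;
$$;

CREATE TRIGGER create_xml_parsing_job
AFTER INSERT OR UPDATE ON content.entities
FOR EACH ROW
WHEN (NEW.processing_status = 'xml_ready')
EXECUTE FUNCTION processing.create_xml_parsing_job();
```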

on_job_failure

Orchestrates the retry and final failure logic for all processing jobs. This trigger centralizes error handling, making the worker logic simpler and more robust.

Trigger Details

  • Table: processing.processing_jobs
  • Event: AFTER UPDATE
  • Function: processing.handle_job_failure_and_retries()
  • Condition: NEW.status = 'failed' AND OLD.status <> 'failed'

What It Does

When a worker sets a job's status to failed, this trigger intercepts the update and performs one of two actions:

  1. Retry Logic: If the job's retry_count is less than its max_retries, the trigger automatically re-queues the job by:

    • Incrementing the retry_count.
    • Resetting the job status to queued.
    • Clearing the worker_id and started_at fields to prepare it for a new attempt.
  2. Final Failure Logic: If the job has exhausted all its retries, the trigger handles the permanent failure by setting a descriptive terminal status on the related record. The behavior varies by job type:

    • For basic_discovery jobs:
      • If linked to a search_id, it updates the public.searches record's status to failed.
      • If linked to an entity_id (for user-initiated XML downloads), it updates the content.entities record's processing_status to xml_download_failed.
    • For xml_parsing jobs:
      • It updates the content.entities record's processing_status to xml_parsing_failed.

Notes

  • This centralized design keeps worker logic simple. Workers can focus on their task and just report 'completed' or 'failed'.
  • The WHEN (OLD.status <> 'failed') condition is critical to prevent the trigger from entering an infinite loop.
  • Runs with SECURITY DEFINER privileges, allowing it to securely modify tables across different schemas (processing, public, content).
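The retry and terminal-failure branches can be sketched as follows. This is a reconstruction under assumptions: the `id` primary keys on all three tables are assumed names, and the actual function may structure its branching differently.

```sql
-- Sketch only; primary-key column names and exact branching are assumed.
CREATE OR REPLACE FUNCTION processing.handle_job_failure_and_retries()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
  IF NEW.retry_count < NEW.max_retries THEN
    -- Re-queue the job. This UPDATE does not re-fire the trigger:
    -- its WHEN clause requires NEW.status = 'failed', and status
    -- becomes 'queued' here.
    UPDATE processing.processing_jobs
       SET retry_count = retry_count + 1,
           status      = 'queued',
           worker_id   = NULL,
           started_at  = NULL
     WHERE id = NEW.id;
  ELSE
    -- Retries exhausted: mark the related record with a terminal status.
    IF NEW.type = 'basic_discovery' THEN
      IF NEW.search_id IS NOT NULL THEN
        UPDATE public.searches
           SET status = 'failed'
         WHERE id = NEW.search_id;
      ELSIF NEW.entity_id IS NOT NULL THEN
        UPDATE content.entities
           SET processing_status = 'xml_download_failed'
         WHERE id = NEW.entity_id;
      END IF;
    ELSIF NEW.type = 'xml_parsing' THEN
      UPDATE content.entities
         SET processing_status = 'xml_parsing_failed'
       WHERE id = NEW.entity_id;
    END IF;
  END IF;
  RETURN NEW;
END;
$$;

CREATE TRIGGER on_job_failure
AFTER UPDATE ON processing.processing_jobs
FOR EACH ROW
WHEN (NEW.status = 'failed' AND OLD.status <> 'failed')
EXECUTE FUNCTION processing.handle_job_failure_and_retries();
```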

log_job_step_history

Captures a complete audit trail for every processing job by logging all status changes and creation events to the job_step_history table.

Trigger Details

  • Table: processing.processing_jobs
  • Function: processing.log_job_step_history()
  • Triggers:
    1. trigger_log_job_step_history: Fires AFTER UPDATE for any change to a job.
    2. trigger_log_basic_discovery_job_creation: Fires AFTER INSERT for new basic_discovery jobs.

What It Does

This function is the core of the system's audit trail. It's triggered on both the creation and any subsequent update of a processing job, creating a rich history for debugging and monitoring.

  • On Job Creation (AFTER INSERT): When a new basic_discovery job is created, the trigger logs its initial queued status.
  • On Job Updates (AFTER UPDATE): Any time a job's status, worker assignment, or other fields change, a new record is inserted into job_step_history. This captures transitions like:
    • queued -> running
    • running -> completed
    • running -> failed

Logged Data

For each event, the trigger logs:

  • job_id, step_type, status, and worker_id.
  • attempt_number, computed as retry_count + 1.
  • If the new status is failed, the structured error is extracted from the result JSONB field and stored across the error_message, error_type, error_source, and error_code columns.
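The shared function and its two triggers might be wired up as below. The layout of the error fields inside the `result` JSONB (flat `error_message` etc. keys) is an assumption; the real column set of `job_step_history` beyond those named above is unknown.

```sql
-- Sketch only; the JSONB error-key layout inside result is assumed.
CREATE OR REPLACE FUNCTION processing.log_job_step_history()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
  INSERT INTO processing.job_step_history
    (job_id, step_type, status, worker_id, attempt_number,
     error_message, error_type, error_source, error_code)
  VALUES
    (NEW.id, NEW.type, NEW.status, NEW.worker_id, NEW.retry_count + 1,
     CASE WHEN NEW.status = 'failed' THEN NEW.result ->> 'error_message' END,
     CASE WHEN NEW.status = 'failed' THEN NEW.result ->> 'error_type'    END,
     CASE WHEN NEW.status = 'failed' THEN NEW.result ->> 'error_source'  END,
     CASE WHEN NEW.status = 'failed' THEN NEW.result ->> 'error_code'    END);
  RETURN NEW;
END;
$$;

-- Logs every update to a job.
CREATE TRIGGER trigger_log_job_step_history
AFTER UPDATE ON processing.processing_jobs
FOR EACH ROW
EXECUTE FUNCTION processing.log_job_step_history();

-- Logs the initial 'queued' state of new basic_discovery jobs.
CREATE TRIGGER trigger_log_basic_discovery_job_creation
AFTER INSERT ON processing.processing_jobs
FOR EACH ROW
WHEN (NEW.type = 'basic_discovery')
EXECUTE FUNCTION processing.log_job_step_history();
```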

sync_shareholder_download_status

Automatically updates the entity's shareholder research status when a worker picks up a shareholder download job.

Trigger Details

  • Table: processing.processing_jobs
  • Event: AFTER UPDATE
  • Function: processing.sync_shareholder_download_status()
  • Condition: OLD.status = 'queued' AND NEW.status = 'running' AND NEW.type = 'shareholder_download'

What It Does

When a worker claims a shareholder download job by changing its status from queued to running, this trigger:

  • Updates the linked entity's shareholder_research_status to downloading
  • Keeps the entity status in sync with the actual processing state

Notes

  • Runs with SECURITY DEFINER privileges
  • Ensures the UI can accurately display that a download is in progress
  • Part of the two-stage shareholder research workflow (download → parse)
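A sketch of this status-sync trigger follows; its sibling for the parsing stage (described next) is identical apart from the job type and the `parsing` status. The `entity_id` link column and `content.entities.id` are assumed names.

```sql
-- Sketch only; assumes processing_jobs.entity_id links the job to its entity.
CREATE OR REPLACE FUNCTION processing.sync_shareholder_download_status()
RETURNS trigger
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
  UPDATE content.entities
     SET shareholder_research_status = 'downloading'
   WHERE id = NEW.entity_id;
  RETURN NEW;
END;
$$;

CREATE TRIGGER sync_shareholder_download_status
AFTER UPDATE ON processing.processing_jobs
FOR EACH ROW
WHEN (OLD.status = 'queued' AND NEW.status = 'running'
      AND NEW.type = 'shareholder_download')
EXECUTE FUNCTION processing.sync_shareholder_download_status();
```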

sync_shareholder_parsing_status

Automatically updates the entity's shareholder research status when a worker picks up a shareholder parsing job.

Trigger Details

  • Table: processing.processing_jobs
  • Event: AFTER UPDATE
  • Function: processing.sync_shareholder_parsing_status()
  • Condition: OLD.status = 'queued' AND NEW.status = 'running' AND NEW.type = 'shareholder_parsing'

What It Does

When a worker claims a shareholder parsing job by changing its status from queued to running, this trigger:

  • Updates the linked entity's shareholder_research_status to parsing
  • Keeps the entity status in sync with the LLM extraction phase

Notes

  • Runs with SECURITY DEFINER privileges
  • Ensures the UI can accurately display that parsing/LLM extraction is in progress
  • Second stage of the shareholder research workflow