
Transactions and Data Persistence

Every Flow in P4 operates in a transactional environment.
That means all data operations - creations, updates, and deletions - are handled as a single, coherent unit of work. Either everything succeeds, or nothing changes.

This ensures that processes remain consistent, recoverable, and predictable, even when they involve multiple steps, entities, or integrations.


The Transactional Model

When a Flow begins, all data operations are executed in memory using the DataStore. Nodes modify objects and variables, but no permanent write occurs until the Flow successfully reaches an End Node.

At that point, the platform transitions from the logical layer to the persistence layer:

  1. Evaluate: All objects in the DataStore are inspected for changes - newly created records, modified entities, and items marked for deletion.

  2. Begin Transaction: If transactional mode is enabled (default), the system opens a database transaction scope.

  3. Apply Changes: Each pending operation is executed in sequence.

  4. Commit or Rollback:

    • If all operations succeed, the transaction commits and the database reflects the new state.

    • If an error occurs, all operations are rolled back, and the system reverts to the previous stable state.

This guarantees data integrity even in complex multi-step or multi-user scenarios.
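The four-step lifecycle above can be sketched in miniature. This is an illustrative model only - the `DataStore` class and `commit_to` method are hypothetical names, not the platform's actual API - using SQLite to stand in for the database: operations accumulate in memory, and nothing is written until the Flow reaches its End Node.

```python
import sqlite3

class DataStore:
    """Collects pending operations in memory during Flow execution (sketch)."""

    def __init__(self):
        self.pending = []  # list of (sql, params) tuples

    def create(self, sql, params):
        self.pending.append((sql, params))

    def commit_to(self, conn):
        """Evaluate pending changes and apply them in one transaction."""
        with conn:  # commits on success, rolls back every write on error
            for sql, params in self.pending:
                conn.execute(sql, params)


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work_orders (id INTEGER PRIMARY KEY, title TEXT)")

store = DataStore()
store.create("INSERT INTO work_orders (title) VALUES (?)", ("Inspect pump",))
store.create("INSERT INTO work_orders (title) VALUES (?)", ("Replace seal",))

# No permanent write has occurred yet - the Flow has not "ended":
assert conn.execute("SELECT COUNT(*) FROM work_orders").fetchone()[0] == 0

store.commit_to(conn)  # evaluate -> begin -> apply -> commit
assert conn.execute("SELECT COUNT(*) FROM work_orders").fetchone()[0] == 2
```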


Why Transaction Control Matters

Without transactional control, partial updates could leave the system in an inconsistent state - for example, a work order could be created without its related task entries, or an inventory reservation might be confirmed without updating stock levels.

By isolating all changes in memory and applying them atomically, the Flow Builder prevents such inconsistencies entirely.
It also allows safe re-execution: if a process fails, it can be corrected and retried without any manual cleanup.
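The work-order example above can be made concrete. In this hedged sketch (plain SQLite, not the platform's API), the task insert fails partway through the unit of work, and the atomic transaction ensures the work order is rolled back with it rather than left orphaned:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE work_orders (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
    CREATE TABLE tasks (id INTEGER PRIMARY KEY, order_id INTEGER NOT NULL,
                        name TEXT NOT NULL);
""")

try:
    with conn:  # one atomic unit of work
        conn.execute("INSERT INTO work_orders (id, title) VALUES (1, 'Service')")
        # Second write fails (NULL task name violates NOT NULL):
        conn.execute("INSERT INTO tasks (order_id, name) VALUES (1, NULL)")
except sqlite3.IntegrityError:
    pass  # in the Flow engine this surfaces as a failed transaction

# The work order never reached the database - no orphaned record remains:
assert conn.execute("SELECT COUNT(*) FROM work_orders").fetchone()[0] == 0
```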


Persistence Scope and Configuration

Each Flow can define how and when its data are persisted. The main persistence scopes include:

  • Transactional (Simple process):
    All database operations are performed within a single atomic transaction. Ideal for standard processes where consistency is critical.

  • Non-Transactional:
    Writes occur immediately as nodes execute. This mode is rarely used but may be necessary for long-running or asynchronous background operations where holding a transaction open is impractical.

  • Subflow Transactions:
    Subflows can either inherit the parent transaction or execute independently. This flexibility allows fine-tuned control over data commits in nested process hierarchies.

Developers and consultants should choose the simplest and safest process architecture - usually the transactional model - unless there is a clear performance or architectural reason not to.


Concurrency and Isolation

In environments where many users or background workers operate simultaneously, P4’s process engine ensures isolation between transactions.
Each running Flow maintains its own DataStore and transaction context, preventing race conditions or conflicting writes.

Standard database isolation levels apply, so reads and writes from one Flow never leak into another until the transaction commits.
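This visibility rule can be observed directly with two database connections standing in for two concurrently running Flows (a generic SQLite sketch, not P4 internals): the reader continues to see the previous stable state until the writer's transaction commits.

```python
import os
import sqlite3
import tempfile

# Two connections simulate two concurrent Flows sharing one database.
path = os.path.join(tempfile.mkdtemp(), "flows.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE stock (item TEXT, qty INTEGER)")
writer.commit()

writer.execute("INSERT INTO stock VALUES ('valve', 10)")  # not yet committed

# The reading Flow still sees the previous stable state:
assert reader.execute("SELECT COUNT(*) FROM stock").fetchall()[0][0] == 0

writer.commit()  # the transaction commits; the write becomes visible
assert reader.execute("SELECT COUNT(*) FROM stock").fetchall()[0][0] == 1
```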


Error Handling and Recovery

If a Flow encounters an error before the End Node:

  • The system halts execution immediately.

  • All changes in the DataStore are discarded.

  • No database writes occur.

If an error happens during the transaction phase, the system rolls back automatically and logs detailed diagnostic information.
This mechanism ensures that failed operations never leave partial or corrupted data behind.
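The rollback-and-log behavior during the transaction phase might look like the following sketch. The `persist_flow` helper is a hypothetical name used for illustration; a duplicate primary key plays the role of the failing operation.

```python
import logging
import sqlite3

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("flow.persistence")

def persist_flow(conn, pending):
    """Apply pending operations atomically; roll back and log on failure."""
    try:
        with conn:  # rolls back all writes if any operation fails
            for sql, params in pending:
                conn.execute(sql, params)
        return True
    except sqlite3.Error as exc:
        # The rollback has already happened; record diagnostics.
        log.error("transaction rolled back: %s", exc)
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (sku TEXT PRIMARY KEY)")

ok = persist_flow(conn, [("INSERT INTO items VALUES (?)", ("A-1",)),
                         ("INSERT INTO items VALUES (?)", ("A-1",))])  # duplicate key

assert ok is False
# No partial data was left behind - even the first, valid insert is gone:
assert conn.execute("SELECT COUNT(*) FROM items").fetchone()[0] == 0
```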


Integration with Flow Debugging

The Flow Execution Console visualizes the entire process lifecycle - including the transition from logic to persistence.
When debugging, users can see:

  • Which nodes triggered database operations.

  • Which objects were created, updated, or deleted.

  • The exact order of execution and commit.

This traceability makes troubleshooting both straightforward and safe.
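Conceptually, the console's trace is an ordered record of node, action, and object. The classes below are a hypothetical model of that record, not the console's real data structures:

```python
from dataclasses import dataclass, field

@dataclass
class TraceEntry:
    node: str    # which node triggered the database operation
    action: str  # created / updated / deleted
    obj: str     # which object was affected

@dataclass
class ExecutionTrace:
    entries: list = field(default_factory=list)

    def record(self, node, action, obj):
        self.entries.append(TraceEntry(node, action, obj))

    def commit_order(self):
        """The exact order in which operations are applied and committed."""
        return [(i + 1, e.node, e.action, e.obj)
                for i, e in enumerate(self.entries)]

trace = ExecutionTrace()
trace.record("CreateOrder", "created", "WorkOrder#17")
trace.record("AddTask", "created", "Task#3")
trace.record("CloseDraft", "deleted", "Draft#9")

assert trace.commit_order()[0] == (1, "CreateOrder", "created", "WorkOrder#17")
```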


Best Practices

  • Use transactional processes for most Flows - they are safer, faster, and easier to maintain.

  • Keep transactions short and efficient; avoid holding open database locks longer than necessary.

  • Separate read-only and write-heavy logic into distinct Flows when possible.

  • Handle potential conflicts or dependencies (e.g., duplicate keys, missing references) explicitly with Condition nodes.

  • Always include error-handling logic to gracefully manage failed commits or rollback events.
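The "handle conflicts explicitly" practice amounts to checking a precondition before writing, rather than relying on the rollback to clean up. A hedged sketch, where `create_customer` stands in for a Flow branch guarded by a Condition node:

```python
import sqlite3

def create_customer(conn, email, name):
    """Insert a customer only if the key is free (the Condition-node check)."""
    exists = conn.execute(
        "SELECT 1 FROM customers WHERE email = ?", (email,)).fetchone()
    if exists:
        return False  # take the 'duplicate' branch instead of failing
    conn.execute("INSERT INTO customers (email, name) VALUES (?, ?)",
                 (email, name))
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT PRIMARY KEY, name TEXT)")

assert create_customer(conn, "a@example.com", "Ada") is True
assert create_customer(conn, "a@example.com", "Ada again") is False  # caught
```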


Summary

Transactions and data persistence form the bridge between process logic and the permanent database state.
By isolating all data operations in memory, executing them only upon success, and ensuring atomic commits, the Flow Builder guarantees data integrity across every process - from the smallest update to the most complex workflow.
