
Database Concurrency

If a database grants read and write privileges to multiple users, more than one user may access the database simultaneously. Controlling access and updates in a multi-user environment is essential to protect database integrity and to ensure that users and transactions see accurate, consistent data. Without appropriate control, data could be updated incorrectly or in the wrong order.

Like most commercial database systems, CUBRID guarantees serializability, a property that is essential to maintaining data consistency under concurrent access. Serializability is required whenever multiple transactions are executed at the same time: the simultaneous execution of multiple transactions must produce the same results as executing each transaction sequentially. This principle rests on the assumption that database consistency is preserved if each transaction is executed atomically (i.e., the entire transaction either commits or rolls back). Serializability in CUBRID is enforced by the well-known two-phase locking technique, which is covered in the Lock Protocol section.
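The growing/shrinking discipline of two-phase locking can be sketched in a few lines of Python. This is an illustrative model of the general technique only, not CUBRID's implementation; the class name and structure are invented for this sketch:

```python
import threading

class TwoPhaseLockingTransaction:
    """Sketch of strict two-phase locking: a transaction acquires locks
    only during its growing phase; once it begins releasing locks (here,
    at commit), it may not acquire any new ones."""

    def __init__(self):
        self._held = []          # locks acquired so far
        self._shrinking = False  # True once the release phase has begun

    def acquire(self, lock: threading.Lock):
        if self._shrinking:
            raise RuntimeError("2PL violation: acquire after release phase began")
        lock.acquire()
        self._held.append(lock)

    def commit(self):
        # Entering the shrinking phase. Strict 2PL releases all locks
        # only at commit/abort, which also prevents dirty reads.
        self._shrinking = True
        while self._held:
            self._held.pop().release()
```

Because no lock is released before all needed locks are acquired, any interleaving of such transactions is equivalent to some serial order, which is why two-phase locking yields serializability.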

A committing transaction must not compromise database consistency, and each transaction must produce correct results. While a transaction is running, its changes must be hidden from other concurrently executing transactions; this property is called isolation. The transaction isolation level is the degree to which a transaction is separated from all other concurrent transactions. The higher the isolation level, the less a transaction is affected by others; the lower the isolation level, the higher the concurrency. Because the isolation level controls this trade-off between consistency and concurrency, you should choose it based on the requirements of the system.

For isolation levels supported by CUBRID, see Concurrency/Lock Parameters in Database Server Configuration.

CUBRID controls concurrent accesses by using a sophisticated locking technique. For any type of data, the lock required for access is determined automatically, without the need for any hint from the application; that is, the user’s intervention is not required. CUBRID acquires read and write locks to execute retrieval and update operations, respectively. You can lock a specific table, instance or index, as well as the entire database.
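As an illustration of how read (shared) and write (exclusive) lock requests are checked against locks already held, here is the textbook multigranularity lock compatibility matrix in Python. This shows the general technique of locking at table and instance granularity; it is a sketch, not CUBRID's exact internal lock tables:

```python
# Textbook lock compatibility: S = shared (read), X = exclusive (write),
# IS/IX = intention locks taken on a table whose instances are being
# locked, SIX = shared table lock plus intention to write instances.
# COMPATIBLE[held] is the set of request modes that may coexist with it.
COMPATIBLE = {
    "IS":  {"IS", "IX", "S", "SIX"},
    "IX":  {"IS", "IX"},
    "S":   {"IS", "S"},
    "SIX": {"IS"},
    "X":   set(),  # exclusive locks conflict with everything
}

def can_grant(held_modes, requested):
    """Grant `requested` only if it is compatible with every lock mode
    currently held by other transactions on the same object."""
    return all(requested in COMPATIBLE[h] for h in held_modes)
```

For example, two readers can share an object (`can_grant({"S"}, "S")` is true), but a writer excludes everyone else (`can_grant({"S"}, "X")` is false).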

The isolation levels can be explained by the following three concepts.

  • Dirty read: This occurs when a transaction reads or updates an object that has been updated by another transaction that has not yet committed and may never commit (i.e., the transaction may be rolled back). In this case, the second transaction may have read data that never actually existed in the database. The following describes several dirty read scenarios:
    • Transaction T1 updates an object. Transaction T2 reads the object before transaction T1 commits or aborts. If T1 aborts, T2 will see the contents of an object that has not been committed.
    • Transaction T1 deletes instance O1 of table C and inserts O2. Then transaction T2 executes a query on table C. If T1 is aborted, T2 will have seen instance O2, which was never committed, or will have missed O1, whose deletion was never committed.
    • Two transactions concurrently update the same object. Each transaction may compute a new value from the value it read, which may be the other transaction's uncommitted value. Whether one transaction aborts or both commit, it is uncertain which value ends up in the database and which is lost. This is often called a lost update.
  • Non-repeatable read: If a transaction reads an object that it has previously read, it may see a different value of the object. The following describes several non-repeatable read scenarios:
    • Transaction T1 reads an object. Then transaction T2 updates (deletes) the object and commits. If T1 attempts to read the object again, it will see new contents of the object. (It may find that the object no longer exists.)
    • Transaction T1 executes a query on table C. Then transaction T2 deletes and inserts some instances of C, and commits. If T1 executes the same query again, it may get different results due to the inserted/deleted objects.
    • Transaction T1 executes a query on table C. Then transaction T2 drops a column of table C and commits. If T1 executes the same query again, it will not see the dropped column.
  • Phantom read: If a transaction retrieves objects multiple times with the same search condition, it may see newly inserted objects satisfying that condition; these are called phantoms. The following describes several phantom read scenarios:
    • Transaction T1 retrieves an object with a single search condition. Then transaction T2 inserts a new instance satisfying the search condition and commits. If T1 attempts to read the object again using the same search condition, it may see the new object inserted by T2.
    • Transaction T1 retrieves objects with a single search condition. Then transaction T2 updates an object so that it now satisfies the search condition, and commits. If T1 reads again with the same search condition, it may see the object updated by T2.
    • Transaction T1 retrieves objects with a search condition after transaction T2 has deleted one of the objects satisfying that condition. Then T2 rolls back. If T1 retrieves again with the same search condition, it may see the object T2 attempted to delete.
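The lost-update scenario described above can be reproduced deterministically by hand-scheduling the two transactions' read and write steps. This Python sketch interleaves the steps explicitly (real transactions would interleave nondeterministically, which is exactly why the lock protocol must prevent this ordering):

```python
# Lost update, simulated as an explicit interleaving of two transactions
# (T1, T2) doing read-modify-write on the same object without locking.
balance = {"value": 100}

t1_read = balance["value"]        # T1 reads 100
t2_read = balance["value"]        # T2 reads 100 (T1 has not written yet)
balance["value"] = t1_read + 10   # T1 writes 110
balance["value"] = t2_read + 20   # T2 writes 120 -- T1's update is lost

# Serial execution (T1 then T2, or T2 then T1) would yield 130.
assert balance["value"] == 120
```

Under two-phase locking, T1's write lock on the object would block T2's read until T1 commits, forcing the serial result of 130.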