posted 2 years ago in Dev Platform category by Donghyun Lee
The internals of an RDBMS are extremely deep and sensitive. When updating a record in a table, not only the size of the columns being updated but also the size of the record itself affects performance. Updating a variable-length column is less favorable, performance-wise, than updating a fixed-length column, and when variable-length columns are updated frequently you may need to change the unfill_factor value to adjust the percentage of disk space reserved in each page.
In this article, I will explain why these details affect UPDATE performance, and what happens inside CUBRID when an UPDATE query is executed. My story is based on CUBRID 8.4.1, the latest version as of this writing.
UPDATE in RDBMS
Let's make an analogy between the process that implements an UPDATE query in an RDBMS and the process of selling products in a shopping mall.
At a shopping mall the employees keep frequently bought products on the shelves and less frequently purchased products in the main storage. When customers come, they can buy the products available on the shelves quickly without waiting. For products not on the shelves, they have to wait while the employees retrieve them from the storage. It is the same in an RDBMS: to have fast access to the data, you need to keep frequently requested data in a memory buffer. How and which products are displayed on the shelves has a profound impact on the shopping mall's sales. Likewise, which data is in the memory buffer has a decisive effect on RDBMS performance.
In order to improve performance, an RDBMS handles data input/output by page (the default page size in CUBRID is 16KB). Operations on the memory buffer used as a cache are also processed by page.
There could be a shortage of space if too many products are displayed on a shelf. Suppose you want to remove slow-selling items from the shelves and display other, more frequently sold products; however, the latter are too big for the remaining space. What should you do? You will have to move products around on the shelf to allocate more space for the big ones.
Which method should an RDBMS use when the new data to be UPDATEd is bigger than the old record and the page would have to be rearranged?
Maybe the employees could divide the shelf into fixed areas and give each product its own spot. That would work in a mall, but it is impossible in an RDBMS: reads and writes can occur on the same record at the same time, and there can be a race for the limited buffer space. Therefore, a record must be accessed exclusively, by using record locking.
How do we actually make this work?
UPDATE athlete SET name = 'Cheolsu Kim' WHERE code = '10980';
The above SQL implementation process can be summarized in the flowchart below:
Figure 1: Implementation process flowchart for UPDATE query.
To be precise, we should also describe SQL parsing, SQL optimization, transaction processing, data locking, data modification, data storage, operation logging, and recovery. However, this article will focus only on what happens in the memory buffer and on disk during an update.
The first thing to do is find the record we need to update. If the conditions used in the query can use an index, we can navigate the index.
If the details of our record are already in the memory buffer, they are used as they are. If the record is not found in the memory buffer, it must be read from the disk and transferred into the memory buffer. This process is called FIX.
In CUBRID, an index is implemented as a B+ tree (for details refer to CUBRID Query Tuning Techniques and What is Covering Index in CUBRID 8.4.0?). Generally speaking, a non-leaf node of a B+ tree stores the locations (VPID) of its child key nodes, while a leaf node stores the disk location of the actual data corresponding to the key. Therefore the data search process is affected by how the B+ tree is configured.
Figure 2: Storage structure of a B+ tree index.
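The descent from root to leaf described above can be sketched in a few lines. This is a minimal in-memory stand-in, not CUBRID's actual implementation: the page dictionary plays the role of pages addressed by VPID, and the `oid*` strings are invented placeholders for record locations.

```python
import bisect

# Hypothetical pages addressed by VPID (here, plain integers).
pages = {
    0: {"leaf": False, "keys": [40], "children": [1, 2]},      # root (non-leaf)
    1: {"leaf": True, "keys": [10, 20], "oids": ["oid10", "oid20"], "next": 2},
    2: {"leaf": True, "keys": [40, 80], "oids": ["oid40", "oid80"], "next": None},
}

def btree_search(root_vpid, key):
    """Descend from the root to the leaf that may contain `key`,
    then return the OID (record location) stored in the leaf."""
    node = pages[root_vpid]
    while not node["leaf"]:
        # Pick the child whose key range covers `key` (keys <= separator go left).
        i = bisect.bisect_right(node["keys"], key)
        node = pages[node["children"][i]]
    i = bisect.bisect_left(node["keys"], key)
    if i < len(node["keys"]) and node["keys"][i] == key:
        return node["oids"][i]
    return None          # key not present in the index
```

For example, `btree_search(0, 40)` follows the root to the second leaf and returns `"oid40"`, while a missing key such as 15 returns `None`.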
Sometimes the size of a column (a text column, for example) can be bigger than a page (16KB). In that case you need separate storage outside the B+ tree node, so searching for the data may also require accessing that separate storage space in addition to the B+ tree.
Below I will explain which index configurations are likely to cause the overflow key file or overflow OID issues when UPDATE-ing a record.
First, let's look at a very large key that makes up an index node.
CREATE INDEX idx_history ON athlete(history);
UPDATE athlete SET gold = 3 WHERE history = $history_string;
If the history field data used as the key is roughly 100KB, it is larger than a page (16KB), the unit that stores regular data. This is called an overflow key, and the key value cannot be saved in a page. Instead, the node stores an ID (OFID, Overflow File ID) that refers to a file storing the key value.
As CUBRID manages data by page, it is burdensome when data is divided into several pages, as many memory copies (memcpy) are needed to assemble a single piece of data. Hence, it is recommended to size a key so that it can be stored in one page if possible.
Now let's look at a situation in which key values overlap heavily.
CREATE INDEX idx_gender ON athlete(gender);
UPDATE athlete SET gender_type = 'Male' WHERE gender = 'M';
Assume the gender column is indexed. Low selectivity of the gender field (only two values, "M" and "F"; that is, DISTINCT(column) is low) may result in many OIDs being stored in the same B+ tree node for the same index key. If the space necessary for storing the OIDs is bigger than 1/4 of a page, they are stored in a separate Overflow OID (OOID) page. As the number of separate overflow pages increases, performance suffers because of the burden of navigating these additional links.
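The two overflow conditions above can be expressed as simple threshold checks. This is a sketch of the decision only; the 8-byte OID size is an assumption for illustration, not CUBRID's exact on-disk size.

```python
PAGE_SIZE = 16 * 1024  # CUBRID's default page size in bytes

def key_storage(key_size_bytes):
    """A key larger than a page cannot live in a B+ tree node;
    the node stores an Overflow File ID (OFID) instead."""
    if key_size_bytes > PAGE_SIZE:
        return "overflow key file (OFID)"
    return "in-node"

def oid_storage(oid_count, oid_size=8):  # oid_size: illustrative assumption
    """When the OID list for one key needs more than 1/4 of a page,
    it moves to a separate Overflow OID (OOID) page."""
    if oid_count * oid_size > PAGE_SIZE / 4:
        return "overflow OID page (OOID)"
    return "in-node"
```

So a 100KB history key lands in an overflow key file, and a low-selectivity key such as gender = 'M' with hundreds of thousands of matching OIDs spills into overflow OID pages.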
The following is the structure of a B+ tree when overflow key files and overflow OID pages exist.
Figure 3: A storage structure of OFID and OOID.
Modifying the Column Value
This process finds the record to be updated and modifies the necessary column data of the record. The following flowchart summarizes this process:
Figure 4: Data update flowchart.
Since an RDBMS performs read and write operations by page, you will need to read or write one or more pages. Data stored in memory can be read and written much faster than data stored on disk, so you should minimize disk I/O operations by making the most of memory.
To do this, a memory buffer is used as an intermediate storage space. As shown in Figure 4 above, first check whether the necessary data is in the memory buffer. If not, load it into the memory buffer from the disk (DB volume). Then update the data in memory. When the data is updated, the change is recorded in logs, and, finally, it is written to the database volume (disk) at a checkpoint.
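The fetch-update-checkpoint cycle above can be sketched as follows. This is a toy model, not CUBRID code: the capacity, the dictionary-as-disk, and the tuple log records are all invented for illustration, and eviction follows the LRU/STEAL behavior discussed in the next section.

```python
from collections import OrderedDict

PAGE_SIZE = 16 * 1024   # CUBRID's default page size
BUFFER_CAPACITY = 4     # pages; unrealistically small, for illustration

disk = {pid: bytes(PAGE_SIZE) for pid in range(10)}  # stand-in for the DB volume
log = []                                             # stand-in for the WAL
buffer_pool = OrderedDict()                          # page_id -> {"data", "dirty"}

def fetch(page_id):
    """FIX: return a page from the buffer, loading it from disk on a miss.
    When the pool is full, the least recently used page is evicted and,
    if dirty, flushed to disk first."""
    if page_id in buffer_pool:
        buffer_pool.move_to_end(page_id)  # mark as most recently used
    else:
        if len(buffer_pool) >= BUFFER_CAPACITY:
            victim_id, victim = buffer_pool.popitem(last=False)  # LRU victim
            if victim["dirty"]:
                disk[victim_id] = bytes(victim["data"])          # flush before reuse
        buffer_pool[page_id] = {"data": bytearray(disk[page_id]), "dirty": False}
    return buffer_pool[page_id]

def update(page_id, offset, new_bytes):
    """Log the change first (write-ahead logging), then modify the buffer copy."""
    page = fetch(page_id)
    before = bytes(page["data"][offset:offset + len(new_bytes)])
    log.append((page_id, offset, before, bytes(new_bytes)))  # UNDO/REDO images
    page["data"][offset:offset + len(new_bytes)] = new_bytes
    page["dirty"] = True

def checkpoint():
    """Write every dirty page in the buffer back to the DB volume."""
    for page_id, page in buffer_pool.items():
        if page["dirty"]:
            disk[page_id] = bytes(page["data"])
            page["dirty"] = False
```

Note that after `update()` the change exists only in the buffer and the log; the disk copy stays stale until a checkpoint (or an eviction) writes it out.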
Memory and Disk Management
The following figure summarizes the policy and main tasks used for memory and disk management.
Figure 5: Work that occurs when storing updated data.
Page Management Policy
LRU (Least-Recently Used)
In order to use the limited buffer effectively while keeping recently used data, the data which has not been used for a long time is removed to free up space. This is the replacement method used with the STEAL policy.
Tasks of the Buffer Manager
FIX: When reading a page from disk into the memory buffer, this task acquires a latch on the buffer in competition with other transactions. Other transactions cannot use that buffer while your transaction holds the latch. If the page is already in the buffer, it is not loaded again from disk (the database volume).
UNFIX: This is the opposite concept of FIX; it marks a page as no longer used by the transaction. An UNFIXed page is not necessarily flushed to disk.
FLUSH: This task writes a DIRTY page in the buffer to disk. As the log file (WAL, Write-Ahead Log) must be written to disk first, a race can occur just before this task when two or more threads write to the same log file.
Logging Policy to Restore Failure or Rollback
WAL (Write-Ahead Logging)
Write-Ahead Logging is a policy that enables recovery upon system failure: always write UNDO/REDO logs to disk before writing the data page. UNDO contains the data before the UPDATE and REDO contains the new data after the UPDATE.
The STEAL/NO STEAL and FORCE/NO FORCE policies specify when pages in the memory buffer are reflected to disk.
When a transaction needs memory buffer space, the STEAL policy flushes the least recently used (LRU) dirty pages and frees up buffer space. To use the STEAL policy, you need UNDO logging in order to recover the old data when a transaction rolls back.
In contrast, the NO STEAL policy keeps a DIRTY page in a buffer until a transaction is completed. Therefore, there should be enough buffer space to keep all pages that have been changed by all transactions in progress.
The FORCE policy reflects all pages updated by a transaction to the database volume when commit is called. Using the FORCE policy affects performance, as a disk write has to be carried out every time a transaction commits. If a page is modified 20 times by several transactions in a brief period, the disk also has to be written 20 times.
The NO FORCE policy does not necessarily reflect all pages renewed by a transaction at commit. The cost of rewriting a page can be reduced when other transactions update the same page before it is reflected to the database volume. However, to ensure that the changes of a successfully committed transaction survive a system failure, REDO logging is necessary.
NO STEAL and FORCE would be the easiest policies to implement, but their performance is the worst. STEAL and NO FORCE perform best, so most databases, including CUBRID, implement these policies. For this reason both UNDO logging and REDO logging are necessary, and the entire record is logged, not just the column to UPDATE. For UNDO logging, the record before the change is stored as-is; for REDO logging, the record after the change is compressed and stored.
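The shape of such a log entry can be sketched as below. This is an illustration of the idea only (the field names and the use of zlib are assumptions, not CUBRID's actual log format): the UNDO image is the whole before-record stored as-is, and the REDO image is the whole after-record, compressed.

```python
import zlib

def make_update_log(before_record: bytes, after_record: bytes):
    """Build a simplified UPDATE log entry. Note that the WHOLE record
    is logged, not just the changed column: UNDO keeps the before-image
    uncompressed, REDO keeps the after-image compressed."""
    return {
        "undo": before_record,                # stored as-is, for rollback (STEAL)
        "redo": zlib.compress(after_record),  # stored compressed, for crash recovery
    }

entry = make_update_log(b"old record image", b"new record image")
restored = zlib.decompress(entry["redo"])  # what REDO recovery would replay
```

This also makes the cost argument concrete: the log volume grows with the record size, even when the UPDATEd column itself is tiny.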
You must note that a big record size affects logging even if the column to UPDATE is small. If a table has many very big records but only a few small, frequently UPDATEd columns, it can be preferable to gather those UPDATE-prone columns into a separate table. This recommendation considers only the size of the logging records.
Page Storage Structure
Now let's learn how a page is stored depending on the size of the data to UPDATE. The following figure shows the basic structure of how records are stored in a page:
Figure 6: Record storage structure inside a page.
When a new value is entered, records are allocated from the front of a page while slots are allocated from the end of the page in the opposite direction. One record consists of a header, fixed-length columns (Fix Col), and variable-length columns (Var Col). The following explains each component.
- page header: Stores the following information about the slots within a page:
- the number of slots and records
- the size of total free space and of contiguous free space
- the starting offset of the contiguous space
- transaction ID
- heap page header: Stores initial record location and schema information location.
- record: Stores a record value.
- slot: Stores information about record location (offset, length).
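The slotted layout above can be sketched as follows. This is a simplified model under stated assumptions (a 40-byte page header and 8 bytes per slot entry are invented numbers): records grow from the front, the slot directory grows from the back, and a slot's id stays stable even if its record moves.

```python
PAGE_SIZE = 16 * 1024
HEADER_SIZE = 40        # illustrative page-header size

class SlottedPage:
    """Records grow forward from the header; the slot directory grows
    backward from the page end. Each slot stores (offset, length)."""
    def __init__(self):
        self.data = bytearray(PAGE_SIZE)
        self.free_start = HEADER_SIZE          # where the next record goes
        self.slots = []                        # slot directory

    def free_space(self):
        slot_bytes = 8 * (len(self.slots) + 1)  # assume 8 bytes per slot entry
        return PAGE_SIZE - self.free_start - slot_bytes

    def insert(self, record: bytes):
        if len(record) > self.free_space():
            raise ValueError("page full")
        offset = self.free_start
        self.data[offset:offset + len(record)] = record
        self.free_start += len(record)
        self.slots.append((offset, len(record)))
        return len(self.slots) - 1             # slot id handed out to callers

    def read(self, slot_id):
        offset, length = self.slots[slot_id]
        return bytes(self.data[offset:offset + length])

    def update(self, slot_id, record: bytes):
        offset, length = self.slots[slot_id]
        if len(record) <= length:
            # Smaller or equal record: update in place.
            self.data[offset:offset + len(record)] = record
            self.slots[slot_id] = (offset, len(record))
        else:
            # Bigger record: write it elsewhere in the page; the slot
            # keeps its id and simply points to the new location.
            new_offset = self.free_start
            self.data[new_offset:new_offset + len(record)] = record
            self.free_start += len(record)
            self.slots[slot_id] = (new_offset, len(record))
```

Because external references address the slot, not the byte offset, a record can move within the page without invalidating the OIDs stored in indexes.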
If there is a variable-length column in a record, the size of the record may change on UPDATE. If the updated record is smaller than the existing one, the data can be UPDATEd in the same location. If it is bigger, the updated record is placed elsewhere in the same page, and the existing slot points to the new location. If there is no space in the same page for the updated record, a separate page is allocated as shown in the following figure, and the existing record points to the new record.
Figure 7: The structure of record storage when space to UPDATE in the existing page is insufficient.
If the size of the record to UPDATE is bigger than a page, the data is divided and stored across pages as shown in the following figure, and the page that stored the previous data points to the page that stores the renewed data.
Figure 8: The structure of a record storage when the record to UPDATE is bigger than a page.
When writing data in this case, the data must be split across several pages, and to read it, the split pages must be loaded into memory and reassembled in sequence, which may negatively affect performance.
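A minimal sketch of such an overflow chain, under the assumption that each page simply stores a chunk and the index of the next page (real overflow pages carry headers and VPIDs):

```python
PAGE_SIZE = 16 * 1024

def split_record(record: bytes, page_size=PAGE_SIZE):
    """Split an oversized record into page-sized chunks; each page holds
    (data, index of next page or None), forming a chain."""
    chunks = [record[i:i + page_size] for i in range(0, len(record), page_size)]
    return [(chunk, i + 1 if i + 1 < len(chunks) else None)
            for i, chunk in enumerate(chunks)]

def read_record(pages):
    """Follow the chain and copy the chunks back together; this
    reassembly (many memcpy calls) is the read cost noted above."""
    out, i = bytearray(), 0
    while i is not None:
        data, i = pages[i]
        out += data
    return bytes(out)
```

A 40KB record, for instance, spreads over three pages, and every read must walk all three.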
Exclusive Record Lock
To this point we have explored how CUBRID uses memory and disk when an UPDATE query is requested. Now I will explain how the records within the target range are managed while executing an UPDATE query.
Generally, a shared lock is used when searching for a specific record, to allow other transactions to read (SELECT) the same record you are currently navigating. For INSERT, UPDATE, or DELETE, an exclusive lock is used to block access from other transactions.
When searching with a WHERE condition, however, you must use an update lock instead of a shared lock when acquiring the lock for records in the target range of the UPDATE. You also need to acquire a lock on the key of the record being processed and on the next key, to shield your operating range from other transactions.
We will describe why an additional update lock is necessary besides the exclusive lock during UPDATE, and why key locking is needed.
The following explains lock acquisition/release for UPDATE execution.
- The client requests an UPDATE query with a WHERE condition to the database server.
- The server acquires update locks while searching for the records matching the range condition.
- At the same time, it acquires key locks for the next keys following the matched keys in the WHERE condition range.
- While executing the actual data UPDATE, it converts each update lock to an exclusive lock.
- The transaction commits and releases all acquired locks.
The action of an update lock can be summarized as follows: until the UPDATE transaction completes, it is the only transaction authorized to UPDATE, so other transactions cannot UPDATE.
A transaction can acquire an update lock on a record that another transaction holds a shared lock on. However, once a transaction holds an update lock, other transactions cannot obtain a shared lock on that record.
Acquiring an exclusive lock immediately on the record to UPDATE would decrease concurrency, so using a shared lock while searching for targets by condition may seem the better option. Why, then, do we have to use an update lock instead of a shared lock? The reason is to minimize deadlock. For better understanding, let's look at the following scenario, in which there is no update lock.
|Transaction 1||shared lock1 (Record A)|
|Transaction 2||shared lock2 (Record A)|
|Transaction 1||exclusive lock1 (Record A); waiting|
|Transaction 2||exclusive lock2 (Record A); waiting|
In the situation above, Transactions 1 and 2 both acquired a shared lock on the same Record A, and each is about to upgrade its lock to an exclusive lock. However, each is waiting for the other to release its shared lock; that is, they are in deadlock. What would happen if there were an update lock?
|Transaction 1||update lock1 (Record A)|
|Transaction 2||update lock2 (Record A); waiting|
|Transaction 1||exclusive lock1 (Record A)|
|Transaction 2||update lock2 (Record A); exclusive lock2 (Record A)|
Transaction 1 acquires the update lock while Transaction 2 waits to obtain its own update lock. Transaction 1 converts its update lock to an exclusive lock, UPDATEs Record A, and releases the lock. Transaction 2 then acquires update lock2, which was waiting, and can execute its UPDATE. With the update lock introduced, you can avoid some of the deadlocks caused by UPDATE.
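The behavior above follows from the lock compatibility matrix, sketched here for the three record-lock modes discussed (S = shared, U = update, X = exclusive). The asymmetry of the U row/column is what breaks the deadlock: a U request is granted over an existing S lock, but a second U or a new S must wait.

```python
# COMPATIBLE[(held, requested)] -> can the request be granted immediately?
COMPATIBLE = {
    ("S", "S"): True,  ("S", "U"): True,  ("S", "X"): False,
    ("U", "S"): False, ("U", "U"): False, ("U", "X"): False,
    ("X", "S"): False, ("X", "U"): False, ("X", "X"): False,
}

def can_grant(held, requested):
    """Return True if `requested` can be granted on a record where
    another transaction already holds the `held` lock."""
    return COMPATIBLE[(held, requested)]
```

Because `("U", "U")` is False, only one transaction at a time can sit in the "intends to UPDATE" state, so the mutual shared-to-exclusive upgrade shown in the first table can never arise.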
The following is a specific example of the situation above. Autocommit mode is disabled and the isolation level is 4 (REPEATABLE READ SCHEMA, READ COMMITTED INSTANCES).
create table t1(id int, num int);
create unique index idx_t1_id on t1(id);
insert into t1 values (10,10), (20,20), (40,40), (80, 80);
commit;
|Transaction 1||Execution (update ... where id = 40;)|
|Transaction 2||Standby (update ... where id = 40;)|
|Transaction 2||Execution (update ... where id = 40;)|
While a transaction holds an update lock, other transactions cannot UPDATE or DELETE the corresponding record. Therefore, if update locks are used when accessing the same record as above, the deadlock that would result from the UPDATE can be avoided. An update lock is used not only for UPDATE but also for DELETE tasks whose WHERE condition uses an index.
An update lock can be obtained even when the record to UPDATE is under a shared lock of other transactions; however, two or more transactions cannot hold the update lock at the same time.
For data consistency, the update lock alone is insufficient. Let's consider the following example.
|Transaction 1||delete from t1 where id = 1|
|Transaction 2||insert into t1 (id) values (1)|
There are many models that can handle the request above, and the approach varies depending on the RDBMS.
CUBRID 2008 R4.1 places the INSERT task of Transaction 2 in standby mode. However, Transaction 1 cannot put Transaction 2's task on standby by using an update lock, because the data for the corresponding key is deleted from the B+ tree when the DELETE executes. This problem is solved by using a key lock.
When performing a DML task, a key lock acquires the lock of the record indicated by the next key after the record being processed. Locking the whole table instead would deteriorate concurrency; thus a lock over just the necessary range should be acquired to ensure maximum concurrency.
A typical example of using key locks is a unique index. When performing a DML task, you should obtain key locks for the target record and for the record indicated by the next key, to ensure there are no DML tasks by other transactions within that range.
Let's learn about the operation of key locks with the following example. Autocommit mode is disabled.
create table t1 (id int, num int);
create unique index idx_t1_id on t1(id);
insert into t1 values (10,10), (20,20), (40,40), (80, 80);
commit;
|Transaction 1||update t1 set num=400 where id=40;|
|Transaction 2||insert into t1 values (30, 30); Standby… rollback;|
|Transaction 2||delete from t1 where id=20; Standby… rollback;|
|Transaction 2||insert into t1 values (70, 70); Standby… rollback;|
|Transaction 2||delete from t1 where id=80; Standby… rollback;|
|Transaction 2||update t1 set num=444 where id=80; Standby… rollback;|
- When Transaction 1 UPDATEs the id=40 record, it acquires key locks for the id=40 key and for id=80, the next key. Other transactions then cannot INSERT, UPDATE, or DELETE records within that range.
- For Transaction 2 to INSERT the id=30 record, it must acquire the key lock for id=40; as Transaction 1 already holds the key lock for id=40, the work goes into standby mode.
- As Transaction 1 already holds the key locks for id=40 and id=80, all other tasks that require these keys are in standby mode.
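The "which keys must I lock" computation above can be sketched with a sorted key list, assuming the simplified rule described in the text (a minimal model, not CUBRID's lock manager):

```python
import bisect

index_keys = [10, 20, 40, 80]  # keys currently in the unique index

def keys_to_lock_for_update(key):
    """An UPDATE or DELETE of `key` locks the key itself and the next key."""
    locks = [key]
    i = bisect.bisect_right(index_keys, key)
    if i < len(index_keys):
        locks.append(index_keys[i])
    return locks

def next_key_for_insert(new_key):
    """An INSERT of `new_key` must acquire a key lock on the next key
    (None when the new key would become the largest)."""
    i = bisect.bisect_right(index_keys, new_key)
    return index_keys[i] if i < len(index_keys) else None
```

This reproduces the scenario: Transaction 1's UPDATE of id=40 locks keys 40 and 80, and Transaction 2's INSERT of id=30 needs the lock on 40, so it waits.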
Other transactions are thus liable to end up in standby mode because of key locks. The more indexes there are on the columns you plan to UPDATE, the more key locking occurs, with a greater negative effect on UPDATE performance. Thus, we recommend removing unnecessary indexes on those columns, considering both SELECT and UPDATE performance.
Figure 9: Key lock related scenario about records being updated.
Because deadlock can occur between indexes due to key locks, we recommend retrying the corresponding task when it does. In the following scenario we will examine a deadlock resulting from key locks. Deadlock may occur under the assumption that Transactions 1 and 2 are executed at the same time.
create table t1(id int, num int);
create unique index idx_t1_id on t1(id);
insert into t1 values (10,10), (20,20), (40,40), (80, 80);
commit;
|Transaction 1||update t1 set num = 40 where id = 1 using index pk_t1_id; (pk)|
|Transaction 2||update t1 set id = 30 where num = 7 using index i_t1_num; (idx)|
Figure 10: Deadlock scenario caused by key lock.
Transaction 1 attempts to UPDATE the num column of the id=1 record; it obtains the key locks for id=1 and id=5 as well as the exclusive lock for record (1, 9). Meanwhile, Transaction 2 attempts to UPDATE the id column of the num=7 record; it acquires the key locks for num=7 and num=9 as well as the exclusive lock for record (5, 7).
Transaction 1 needs the key lock for idx key 9 to UPDATE the num column of the id=1 record, but Transaction 2 has already acquired it. Meanwhile, Transaction 2 needs the key lock for pk key 5 to UPDATE the id column of the num=7 record, but Transaction 1 has already acquired it. That is, a deadlock has occurred.
Key locks come in two modes: key shared lock and key exclusive lock. The lock an INSERT task acquires on the next key is a key shared lock, while the lock an UPDATE or DELETE task acquires on the next key is a key exclusive lock. When UPDATE or DELETE works on records in a specific range, other transactions must not change records within that range; the shared mode, however, allows other transactions to INSERT into the key lock range of the next location during an INSERT operation. Let's examine the following figure:
Figure 11: Key lock related scenario for record being INSERTed.
When Transaction 1 INSERTs the id=90 record, it acquires the key shared lock for id=100, the next key. When Transaction 2 INSERTs the id=95 record, it also acquires the key shared lock for id=100, its next key. Different transactions can share a lock on the same key id=100 because the key lock is acquired in key shared lock mode.
In this article we have observed what happens in memory and in the disk storage structures when data is UPDATEd in CUBRID, and how locks are obtained to shield the records within the UPDATE range from other transactions.
It is a mistake to think that the processing cost will be small just because one record or one small column is UPDATEd. Yes, when evaluating the WHERE condition to find the UPDATE target, the use of an index affects the search time. But remember that the size of the whole record, not just the size of the UPDATE target column, affects UNDO/REDO logging. In general, the smaller the record to be UPDATEd, and the more of it that is fixed-length, the more advantageous for UPDATE performance.
If UPDATEs of variable-length columns occur frequently, you can consider adjusting the system parameter unfill_factor (default value 0.1), which controls the disk space pre-allocated in a page for data UPDATEs. For example, if the value of unfill_factor is 0.1, about 10% of each page is kept as extra space while data is INSERTed continuously; that remaining 10% can then be used for INSERTs into the same page or for UPDATEs.
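The arithmetic behind this is simple; the sketch below is an illustrative model of the parameter's effect, not CUBRID's exact space accounting:

```python
PAGE_SIZE = 16 * 1024  # bytes

def insert_fill_limit(unfill_factor=0.1):
    """Sequential INSERTs stop filling a page once this many bytes are
    used, so that roughly unfill_factor of the page stays free as
    headroom for later in-page UPDATE growth."""
    reserved = int(PAGE_SIZE * unfill_factor)
    return PAGE_SIZE - reserved
```

With the default 0.1, INSERTs fill about 14,746 of the 16,384 bytes, leaving roughly 1,638 bytes so that a variable-length record can grow without being relocated to another page.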
UPDATE requires update locks and key locks. You should understand that these can lead to standby or deadlock among competing transactions. If an unexpected delay or deadlock occurs, check whether the index configuration on the UPDATEd columns is appropriate, and whether the number of indexes on those columns can be minimized.
I hope this article and its recommendations for reducing the burden of UPDATE execution give you an opportunity to understand how the CUBRID database uses locking and its storage structures when executing UPDATE operations.
By Lee Donghyun, CUBRID Manual and Release Notes writer, CUBRID DBMS Development Lab, NHN Corporation.