How CUBRID HA Works
In short, CUBRID HA:
- Is a feature that provides load-balanced, fault-tolerant, and continuously available service.
- Provides an automatic failover feature that allows a failing master node to delegate control to a slave node without human intervention.
More detailed information can be found in the CUBRID HA Overview documentation.
The following image illustrates the architecture of the CUBRID HA Process.
The process can be divided into two parts: Client and Server.
Client side processes
- Users make requests through a client application (for example, a script on a web server).
- Each application connects to a particular broker, the CUBRID middleware, and sends the request.
- Then the broker relays the message to an active (master) database server.
- The master database server executes the request and returns the data.
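The client-side flow above can be sketched as a toy simulation. Note that `Broker` and `DatabaseServer` here are illustrative Python classes, not CUBRID APIs; the point is only to show how a broker routes every request to the active (master) server:

```python
# Toy model of the client-side request path: application -> broker -> master DB.
# Class and method names are illustrative; they are not CUBRID APIs.

class DatabaseServer:
    def __init__(self, name, role="master"):
        self.name = name
        self.role = role  # "master" (active) or "slave" (standby)

    def execute(self, request):
        # Only the active (master) server executes client requests.
        assert self.role == "master", "standby servers do not serve clients"
        return f"{self.name} executed: {request}"

class Broker:
    """Stands in for CUBRID's middleware: relays requests to the active server."""
    def __init__(self, servers):
        self.servers = servers

    def relay(self, request):
        master = next(s for s in self.servers if s.role == "master")
        return master.execute(request)

nodes = [DatabaseServer("nodeA", "master"), DatabaseServer("nodeB", "slave")]
broker = Broker(nodes)
print(broker.relay("SELECT * FROM t"))  # routed to nodeA, the active server
```

The application never talks to a database server directly; it only ever sees the broker, which is what makes transparent failover possible.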
Server side processes
On the server side, each CUBRID HA node consists of the following processes:
- One cub_master master process.
- One or more cub_server database server processes.
- One or more copylogdb replication log copy processes.
- One or more applylogdb replication log reflection processes.
When a database is configured, all of these processes start. Since each process runs independently, a delay in reflecting replications does not affect the transactions being executed.
Additionally, each node uses two log files to deliver High-Availability:
- Transaction Log
Each incoming write transaction is automatically logged in this file.
- Replica Log
This log file is used by a slave database server to apply the changes made to a master database server.
When the active database server fails, failover occurs: the first configured slave database server becomes the master node, while the failed node enters a dead state. After the failed database server is restored, it uses the Replica Log file to replicate all the changes made at the master database server in the meantime. Database administrators then have the option to make the restored node the master node again, or to leave it as a standby. You can learn more about status changes in CUBRID HA Servers.
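The failover sequence just described can be modeled as a small state machine. This is a hypothetical sketch (the `failover` and `restore` functions are not part of CUBRID); the state names loosely follow the description above:

```python
# Hypothetical sketch of the failover state transitions described above.
# States loosely follow CUBRID HA terminology: "active", "standby", "dead".

def failover(nodes):
    """When the active node fails, the first configured standby takes over."""
    nodes[nodes.index("active")] = "dead"     # failed master enters dead state
    nodes[nodes.index("standby")] = "active"  # first standby becomes master
    return nodes

def restore(nodes, i):
    """A restored node replays missed changes, then rejoins as a standby."""
    assert nodes[i] == "dead", "only a dead node can be restored"
    nodes[i] = "standby"
    return nodes

cluster = ["active", "standby", "standby"]
cluster = failover(cluster)
print(cluster)  # ['dead', 'active', 'standby']

cluster = restore(cluster, 0)
print(cluster)  # ['standby', 'active', 'standby']
```

Whether the restored node is promoted back to master or stays a standby is, as noted above, the administrator's choice.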
Thus, when a request reaches the master database server, the following process flow can be observed.
- If the client request results in a write operation, the master node logs it to its Transaction Log file.
- At all times, the master node exchanges CUBRID Heartbeat messages with the slave nodes. This allows the cluster to detect a failure of the master node and fail over to one of the slave nodes, based on the configuration.
- While a transaction is being logged on the active server, the copylogdb utility on each slave server requests the transaction log from the master node in real time.
- The copylogdb utility then writes the received transaction log records to the slave's Replica Log file.
- Meanwhile, on each database server the applylogdb utility runs independently, constantly checking whether the Replica Log file has been updated. All new transactions are automatically replicated by applylogdb to its database server, as shown in the figure above. Information about the reflected replications is stored in an internal system table called db_ha_apply_info, which DBAs can access using the cubrid_applyinfo utility.
- As part of normal operation, each standby database server also logs the transactions to its own Transaction Log file.
- If the master database server fails, upon restoration it can replicate the missing data from the Transaction Log file of one of the slave nodes.
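The replication pipeline in the steps above can be sketched as a toy simulation. Here `copy_logs()` stands in for copylogdb and `apply_logs()` for applylogdb; the function names and data structures are illustrative assumptions, not CUBRID internals:

```python
# Toy simulation of the replication pipeline described above:
# master Transaction Log -> (copylogdb) -> Replica Log -> (applylogdb) -> slave DB.

master_txn_log = []  # the master's Transaction Log
replica_log = []     # the slave's Replica Log, filled by copy_logs()
slave_db = {}        # the slave's database state, updated by apply_logs()
applied_count = 0    # how many Replica Log records have been reflected

def write_transaction(key, value):
    """A write on the master is first recorded in its Transaction Log."""
    master_txn_log.append((key, value))

def copy_logs():
    """Like copylogdb: pull new Transaction Log records into the Replica Log."""
    replica_log.extend(master_txn_log[len(replica_log):])

def apply_logs():
    """Like applylogdb: reflect unapplied Replica Log records into the slave DB."""
    global applied_count
    for key, value in replica_log[applied_count:]:
        slave_db[key] = value
    applied_count = len(replica_log)

write_transaction("x", 1)
write_transaction("y", 2)
copy_logs()
apply_logs()
print(slave_db)  # {'x': 1, 'y': 2}
```

Because the copy and apply steps run separately from the master's own transaction processing, a slow slave only delays its own catch-up, never the transactions on the active server.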
This covers the entire CUBRID HA process.