Configure CUBRID HA with Vagrant and Chef Cookbook under 4 minutes


This is a follow-up tutorial for Create a CUBRID Database VM with Vagrant and Chef Cookbook under 5 minutes. I will assume you have read the previous tutorial, as this one builds directly on it.

In this tutorial I will show how to automatically create multiple VMs with CUBRID installed on each of them and have them configured in one HA group. With Vagrant and a Chef cookbook, it is as easy as running vagrant up and waiting 4-5 minutes until everything gets set up.

Requirements

Requirements are the same as in the first tutorial. Refer to it for details. In short, you need VirtualBox and Vagrant installed.

Vagrantfile

If you have not already downloaded cubrid-vagrant-1.5.1.tar.gz (211KB), do it now (or check for the latest version in the SF.net repo). You will find everything ready for you in that archive. Refer to the previous tutorial for details on what is included.

Now, to configure CUBRID HA on multiple VMs in this tutorial, we will add a few more configuration options to the Vagrantfile which comes by default in that archive.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant::Config.run do |config|
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"
  config.vm.box = "precise64" # Ubuntu 12.04 x64

  config.vm.define :node1 do |node1_config|
    node1_config.vm.host_name = "node1"
    node1_config.vm.network :hostonly, "10.11.12.13"
  end

  config.vm.define :node2 do |node2_config|
    node2_config.vm.host_name = "node2"
    node2_config.vm.network :hostonly, "10.11.12.14"
  end

  config.vm.customize "modifyvm", :id, "--memory", 800

  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "cookbooks"

    chef.json = {
        "cubrid" => {
            "version" => "8.4.3",
            "ha_dbs" => "ha_test_db",
            "ha_hosts" => {"node1" => "10.11.12.13", "node2" => "10.11.12.14"}
        }
    }

    chef.add_recipe "cubrid"
    chef.add_recipe "cubrid::ha"
  end
end

I have removed the comments this time so that you can see the real code more prominently.

Number of VMs

The first thing to take care of in this scenario is to define how many VMs (hosts) we would like to run in this HA environment. No matter how many, each host will run an independent CUBRID Server node. Later, all these hosts will join one HA group and provide automatic failover between themselves.

In the above Vagrantfile you can notice the following lines:

config.vm.define :node1 do |node1_config|
  node1_config.vm.host_name = "node1"
  node1_config.vm.network :hostonly, "10.11.12.13"
end

config.vm.define :node2 do |node2_config|
  node2_config.vm.host_name = "node2"
  node2_config.vm.network :hostonly, "10.11.12.14"
end

This tells Vagrant to build two VMs for us. The first VM is identified by the hostname node1 and assigned the IP address 10.11.12.13. The second VM has the hostname node2 with the IP 10.11.12.14. In this example both the hostnames and the IPs are arbitrary, i.e. you can set your own, but remember to keep the IPs within the same subnet (refer to Multiple Networks in Vagrant for details).

This is how you define the number of VMs to start up. The first VM which gets configured and started will become the "master" in CUBRID HA, while the other VM hosts will become "slaves".
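
If you need more than two nodes, the same pattern extends naturally. As a minimal sketch (the hostname node3 and the IP 10.11.12.15 are my own illustrative choices, not part of the archive), you would add one more block like the following, and also list the new host in ha_hosts later on:

config.vm.define :node3 do |node3_config|
  node3_config.vm.host_name = "node3"
  node3_config.vm.network :hostonly, "10.11.12.15"
end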

VM memory size

In this example we set each VM to have 800MB of RAM. This is enough for this tutorial, though you can set a lower or higher value.

config.vm.customize "modifyvm", :id, "--memory", 800

If you need a different memory size on each VM, you can do so by specifying this same property inside each VM's definition block, as sketched below.
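
As a minimal sketch, reusing the customize call from the Vagrantfile above (the 1024MB value is only an illustration), a per-VM override could look like this:

config.vm.define :node1 do |node1_config|
  node1_config.vm.host_name = "node1"
  node1_config.vm.network :hostonly, "10.11.12.13"
  # Per-VM override: give node1 1024MB instead of the global 800MB.
  node1_config.vm.customize "modifyvm", :id, "--memory", 1024
end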

Configure CUBRID HA

chef.json = {
    "cubrid" => {
        "version" => "8.4.3",
        "ha_dbs" => "ha_test_db",
        "ha_hosts" => {"node1" => "10.11.12.13", "node2" => "10.11.12.14"}
    }
}

chef.add_recipe "cubrid"
chef.add_recipe "cubrid::ha"
  1. First of all, you need to add the "cubrid::ha" recipe to have CUBRID HA configured.
  2. Second, you need to provide the list of hosts and IPs which should join the same HA group. To do this, override the ha_hosts attribute. These hosts and IPs must be identical to those you defined for each VM above.
  3. Then, optionally, you can override the ha_dbs attribute by providing an array of database names to create and sync in HA. By default, one database called testdb will be created.
  4. Also, you can optionally override the ha_group attribute, which defaults to cubrid. This is the HA group all hosts will join. You can set any arbitrary value (see the sketch after this list).
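
As an example, a customized chef.json overriding both optional attributes could look like the sketch below. The database names shop_db and log_db and the group name my_ha_group are purely illustrative; I am passing ha_dbs as an array here since item 3 mentions one, while the Vagrantfile above passes a single name as a string.

chef.json = {
    "cubrid" => {
        "version" => "8.4.3",
        "ha_dbs" => ["shop_db", "log_db"],  # hypothetical database names
        "ha_group" => "my_ha_group",        # overrides the default "cubrid"
        "ha_hosts" => {"node1" => "10.11.12.13", "node2" => "10.11.12.14"}
    }
}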

That is all you need to configure CUBRID HA. Now let's go and bring up CUBRID HA.

Vagrant Box

I will assume that you have already added the precise64 (Ubuntu 12.04 LTS x64) Vagrant box. If you haven't, see the previous tutorial.

Start Up Vagrant

Run the following command to bring up the VMs:

$ vagrant up

Wait some 4-5 minutes and you will have your VMs up and running in an HA environment.
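
A side note, not specific to this cookbook: if you later edit the chef.json attributes in the Vagrantfile, standard Vagrant can re-run the Chef provisioner against the already-running VMs (whether the cookbook cleanly reconfigures an existing HA setup is a separate question):

$ vagrant provision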

Test CUBRID HA

Validate CUBRID HA

Let's first validate that CUBRID HA has been properly configured.

Open SSH connection

For this we will open an SSH connection and log in to the first VM, node1. Remember, the first VM which got started becomes the master while the rest become slaves.

$ vagrant ssh node1
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

100 packages can be updated.
50 updates are security updates.

Welcome to your Vagrant-built virtual machine.
Last login: Thu Jan  3 06:46:49 2013 from 10.0.2.2
vagrant@node1:~$

To connect to other nodes, simply change the node name:

$ vagrant ssh node2

Check CUBRID Service status

vagrant@node1:~$ cubrid service status
@ cubrid master status
++ cubrid master is running.
@ cubrid server status
 HA-Server ha_test_db (rel 8.4, pid 1423)
@ cubrid broker status
  NAME           PID  PORT  AS  JQ      REQ  TPS  QPS    LONG-T    LONG-Q  ERR-Q
================================================================================
* query_editor  1641 30000   5   0        0    0    0    0/60.0    0/60.0      0
* broker1       1651 33000   5   0        0    0    0    0/60.0    0/60.0      0
@ cubrid manager server status
++ cubrid manager server is running.
vagrant@node1:~$

We can see that the CUBRID Service is running and our ha_test_db database has been successfully started. You should see the same result if you execute this command on the other nodes.

Check CUBRID Heartbeat status

vagrant@node1:~$ cubrid heartbeat status
@ cubrid heartbeat list

 HA-Node Info (current node1, state master)
   Node node2 (priority 2, state slave)
   Node node1 (priority 1, state master)


 HA-Process Info (master 1224, state master)
   Applylogdb ha_test_db@localhost:/opt/cubrid/databases/ha_test_db_node2 (pid 1559, state registered)
   Copylogdb ha_test_db@node2:/opt/cubrid/databases/ha_test_db_node2 (pid 1557, state registered)
   Server ha_test_db (pid 1230, state registered_and_active)

vagrant@node1:~$ 
  1. Notice that node1 reports itself as the master while node2 is a slave.
  2. We can also see that the Applylogdb and Copylogdb HA processes have been successfully started.

These are the two main indicators that CUBRID Heartbeat is running.

If we run the same command on node2, we will see a slightly different picture:

vagrant@node2:~$ cubrid heartbeat status
@ cubrid heartbeat list

 HA-Node Info (current node2, state slave)
   Node node2 (priority 2, state slave)
   Node node1 (priority 1, state master)


 HA-Process Info (master 1257, state slave)
   Applylogdb ha_test_db@localhost:/opt/cubrid/databases/ha_test_db_node1 (pid 1592, state registered)
   Copylogdb ha_test_db@node1:/opt/cubrid/databases/ha_test_db_node1 (pid 1590, state registered)
   Server ha_test_db (pid 1263, state registered)

vagrant@node2:~$

Check HA mode

As a final check, let's verify the HA mode of each node.

vagrant@node1:~$ cubrid changemode ha_test_db@localhost
The server `ha_test_db@localhost''s current HA running mode is active.

We can see that the master node1 is in active mode. For other options, refer to Servers in CUBRID HA.

When we run the same command on the slave node, we will see that it is in standby mode:

vagrant@node2:~$ cubrid changemode ha_test_db@localhost
The server `ha_test_db@localhost''s current HA running mode is standby.

Insert sample data

To confirm that replication works in CUBRID HA, let's create a sample table and insert some data.

CREATE TABLE ha_table(
    id INTEGER AUTO_INCREMENT PRIMARY KEY,
    f_name VARCHAR(20) NOT NULL
);

INSERT INTO ha_table (f_name) VALUES ('Zorro'), ('Guppy'), ('Watchman');

Execute these queries in your favorite tool. You can use the CUBRID Manager or CUBRID Query Browser administration tools, or the CSQL command-line tool. In this example I will use CSQL on our master node1:

vagrant@node1:~$ csql -u dba ha_test_db@localhost

CUBRID SQL Interpreter


Type `;help' for help messages.

csql> CREATE TABLE ha_table(
csql>     id INTEGER AUTO_INCREMENT PRIMARY KEY,
csql>     f_name VARCHAR(20) NOT NULL
csql> );
SQL statement execution time:     0.002845 sec

Current transaction has been committed.

1 command(s) successfully processed.
csql> 
csql> INSERT INTO ha_table (f_name) VALUES ('Zorro'), ('Guppy'), ('Watchman');

3 rows affected.
SQL statement execution time:     0.001594 sec

Current transaction has been committed.

1 command(s) successfully processed.
csql> SHOW TABLES;

=== <Result of SELECT Command in Line 1> ===

  Tables_in_ha_test_db@localhost
======================
  'ha_table'          


1 rows selected.
SQL statement execution time:     0.018776 sec

Current transaction has been committed.

1 command(s) successfully processed.
csql> SELECT * FROM ha_table;

=== <Result of SELECT Command in Line 1> ===

           id  f_name              
===================================
            1  'Zorro'             
            2  'Guppy'             
            3  'Watchman'          


3 rows selected.
SQL statement execution time:     0.003838 sec

Current transaction has been committed.

1 command(s) successfully processed.
csql> ;ex
vagrant@node1:~$ 

We can see that all queries have been successfully executed on the master node1.

Now, let's see if these statements have been applied to the slave node2.

vagrant@node2:~$ csql -u dba ha_test_db@localhost

CUBRID SQL Interpreter


Type `;help' for help messages.

csql> SHOW TABLES;

=== <Result of SELECT Command in Line 1> ===

  Tables_in_ha_test_db@localhost
======================
  'ha_table'          


1 rows selected.
SQL statement execution time:     0.016050 sec

Current transaction has been committed.

1 command(s) successfully processed.
csql> SELECT * FROM ha_table;

=== <Result of SELECT Command in Line 1> ===

           id  f_name              
===================================
            1  'Zorro'             
            2  'Guppy'             
            3  'Watchman'          


3 rows selected.
SQL statement execution time:     0.004936 sec

Current transaction has been committed.

1 command(s) successfully processed.
csql> ;ex
vagrant@node2:~$

We can see that all statements have been successfully replicated to the slave node2.

Shutdown master node to initiate failover

Now, let's see if the master role will successfully be delegated from node1 to node2 as a result of a failover. To trigger this, we will manually power off our node1 VM.

$ vagrant halt node1
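
If you want to confirm that the VM is really powered off before checking the heartbeat, the standard Vagrant status command reports the state of each VM (this is generic Vagrant, not part of the tutorial archive):

$ vagrant status node1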

Then check the heartbeat status on node2 again.

vagrant@node2:~$ cubrid heartbeat status
@ cubrid heartbeat list

 HA-Node Info (current node2, state master)
   Node node2 (priority 2, state master)
   Node node1 (priority 1, state unknown)


 HA-Process Info (master 1190, state master)
   Applylogdb ha_test_db@localhost:/opt/cubrid/databases/ha_test_db_node1 (pid 1525, state registered)
   Copylogdb ha_test_db@node1:/opt/cubrid/databases/ha_test_db_node1 (pid 1523, state registered)
   Server ha_test_db (pid 1196, state registered_and_active)

vagrant@node2:~$

Bingo! Now node2 is the master, while the state of node1 is unknown.

Revive node1 and put it into standby (slave)

Now let's revive node1, which was previously powered off, then check its heartbeat status.

~$ vagrant up node1
....
~$ vagrant ssh node1
....
vagrant@node1:~$ cubrid heartbeat status
@ cubrid heartbeat list

 HA-Node Info (current node1, state slave)
   Node node2 (priority 2, state master)
   Node node1 (priority 1, state slave)


 HA-Process Info (master 1613, state slave)
   Applylogdb ha_test_db@localhost:/opt/cubrid/databases/ha_test_db_node2 (pid 1948, state registered)
   Copylogdb ha_test_db@node2:/opt/cubrid/databases/ha_test_db_node2 (pid 1946, state registered)
   Server ha_test_db (pid 1619, state registered)

vagrant@node1:~$

Correct! node1 became a slave in our HA group.

Conclusion

In this tutorial you have learned how to automatically create multiple VMs with CUBRID installed on each of them and have them configured in one HA group. As you can see, Vagrant, along with a Chef cookbook, allows us to work with CUBRID Database and other software very easily. It is very convenient and saves a lot of time. Creating a development environment is no longer painful.

Now that you know how to set up a CUBRID HA environment, you can proceed to configuring a CUBRID SHARD multi-VM environment.

If you have questions, feel free to ask at the CUBRID Q&A site, forum, our Facebook page, or Twitter. If you have issues or feature requests for cubrid-cookbook, create a new issue at its GitHub repo.
