
Author: Youngjin Joo


Recently, many companies have been using Docker and Kubernetes to run their services in containers. CUBRID also provides a container mechanism and can be run in Docker and Kubernetes environments.


I first came across Docker in the book 'Docker for the Really Impatient'. I highly recommend reading it; the book is publicly available online, and a slide summary of it is also available. However, all of this material is written in Korean.


Docker is an open source container project created by the company Docker using virtualization technology provided by the OS. A container is similar to a virtual machine in that it provides a guest environment on the host, but it does not require a separate OS installation and can achieve nearly the same performance as the host. There are many other advantages, but I think the biggest one is ease of deployment, and Kubernetes makes good use of this advantage.


Kubernetes is an open source container orchestration system created by Google to automate and manage container deployment. Deploying and managing containers across multiple hosts is not an easy task, because the hosts' resources and network connectivity between containers must be taken into account, but Kubernetes handles this for you.


There are many well-organized articles about Docker and Kubernetes covering their concepts, strengths, and weaknesses. Rather than repeating those concepts, this article focuses on the process of setting up a Docker and Kubernetes environment and packaging CUBRID into containers.



1. Node Configuration

2. Installing Docker

3. Firewall Settings

4. Installing Kubernetes

5. Installing pod network add-on

6. kubeadm join

7. Installing Kubernetes Dashboard

8. Creating a CUBRID container image

9. Upload CUBRID Container Image to Docker Hub

10. Deploy CUBRID Containers to Kubernetes Environments


1. Node Configuration


At least two nodes (Master, Worker) are required to deploy and service containers with Kubernetes. I tested with 3 nodes to check whether containers are created on different nodes. The OS for each node was installed from the CentOS 7.8.2003 ISO as a Minimal installation.


1-1. Hostname setting: 

# hostnamectl set-hostname master
# hostnamectl set-hostname worker1
# hostnamectl set-hostname worker2


1-2. /etc/hosts setting:

# cat >> /etc/hosts << EOF
<master-ip>    master
<worker1-ip>   worker1
<worker2-ip>   worker2
EOF
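The intended /etc/hosts layout can be checked on a scratch file first; the 10.0.0.x addresses below are placeholders for illustration, not values from the original article:

```shell
# Write a demo hosts file with placeholder addresses
# (substitute your nodes' real IPs when editing the real /etc/hosts)
cat > /tmp/hosts-demo << 'EOF'
10.0.0.1 master
10.0.0.2 worker1
10.0.0.3 worker2
EOF

# Both worker entries should be present
grep -c 'worker' /tmp/hosts-demo
```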


2. Installing Docker


Kubernetes only automates and manages the deployment of containers; Docker provides the actual container runtime. Therefore, Docker must be installed before installing Kubernetes. Docker Community Edition (CE) needs to be installed; refer to the Docker CE installation documentation.


2-1. Installing the YUM Package required for the Docker installation

You need to install the yum-utils package, which includes the yum-config-manager utility used to add a YUM repository, and the device-mapper-persistent-data and lvm2 packages required by the devicemapper storage driver.

# yum install -y yum-utils device-mapper-persistent-data lvm2


2-2. Add Docker CE Repository using yum-config-manager utility

# yum-config-manager --add-repo


2-3. Install docker-ce from Docker CE Repository

For Kubernetes compatibility, install a Docker version supported by Kubernetes; see the Kubernetes container runtime documentation.

# yum list docker-ce --showduplicates | sort -r
# yum update -y && yum install -y docker-ce-19.03.11


2-4. Docker autostart setting when OS boots

# systemctl enable docker


2-5. Start Docker

# systemctl start docker


2-6. docker command check

By default, non-root users cannot execute the docker command. You must log in with the root user or a user account with sudo privileges and run the docker command.

# docker


2-7. Add non-root user to Docker group

Alternatively, instead of logging in as root or a user account with sudo privileges, a non-root user can run docker commands after being added to the docker group.

# usermod -aG docker your-user


2-8. Delete Docker

# yum remove -y docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-selinux \
    docker-engine-selinux \
    docker-engine
# rm -rf /var/lib/docker


3. Firewall Settings

The ports used in the Kubernetes environment can be checked in the Kubernetes documentation. The ports used vary depending on the node type (Master, Worker).


3-1. Master node

 Protocol   Port Range    Purpose                     Used By

 TCP        6443          Kubernetes API server       All
 TCP        2379-2380     etcd server client API      kube-apiserver, etcd
 TCP        10250         Kubelet API                 Self, Control plane
 TCP        10251         kube-scheduler              Self
 TCP        10252         kube-controller-manager     Self


# firewall-cmd --permanent --zone=public --add-port=6443/tcp
# firewall-cmd --permanent --zone=public --add-port=2379-2380/tcp
# firewall-cmd --permanent --zone=public --add-port=10250/tcp
# firewall-cmd --permanent --zone=public --add-port=10251/tcp
# firewall-cmd --permanent --zone=public --add-port=10252/tcp
# firewall-cmd --reload


3-2. Worker node

 Protocol   Port Range    Purpose             Used By

 TCP        10250         Kubelet API         Self, Control plane
 TCP        30000-32767   NodePort Services   All

# firewall-cmd --permanent --zone=public --add-port=10250/tcp
# firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
# firewall-cmd --reload


3-3. pod network add-on

The ports used by Calico, which will be installed as the pod network add-on, must also be opened. They can be found in the Calico documentation.


 Network requirement        Connection type

 Calico networking (BGP)    TCP 179


# firewall-cmd --permanent --zone=public --add-port=179/tcp
# firewall-cmd --reload


4. Installing Kubernetes

For Kubernetes installation, refer to the official installation guide.


4-1. Add Kubernetes Repository

# cat << EOF > /etc/yum.repos.d/kubernetes.repo
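For reference, the kubernetes.repo definition published in the official kubeadm installation guide at the time (2020) was the following; note that Google's packages.cloud.google.com yum repository has since been superseded by pkgs.k8s.io, so treat this as a historical example rather than a current recommendation:

```ini
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
```

The exclude line is what makes the `--disableexcludes=kubernetes` flag necessary in the install step below.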



4-2. Set to SELinux Permissive Mode

# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# sestatus

SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31
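The sed substitution above can be tried safely on a scratch copy before touching the real /etc/selinux/config:

```shell
# Make a scratch copy with the stock setting, apply the same substitution, and inspect the result
printf 'SELINUX=enforcing\n' > /tmp/selinux-demo
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /tmp/selinux-demo
cat /tmp/selinux-demo
```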


4-3. Install kubelet, kubeadm, and kubectl

- kubeadm: the command to bootstrap the cluster.

- kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.

- kubectl: the command line util to talk to your cluster.

# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes


4-4. Setting kubelet auto start when OS boots

# systemctl enable kubelet


4-5. Start kubelet

# systemctl start kubelet


4-6. Setting net.bridge.bridge-nf-call-iptables

# cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# sysctl --system


4-7. swap off

Swap must be turned off on both Master and Worker nodes. If swap is not off, an error message such as "[ERROR Swap]: running with swap on is not supported. Please disable swap" is displayed at 'kubeadm init'.

# swapoff -a
# sed -i 's|^/dev/mapper/centos-swap|# /dev/mapper/centos-swap|' /etc/fstab
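What the sed invocation does (commenting out the swap entry so it survives reboots) can be seen on a sample fstab line in a scratch file:

```shell
# A sample fstab swap entry, not a real fstab
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab-demo

# The same substitution used above: prefix the swap line with a comment marker
sed -i 's|^/dev/mapper/centos-swap|# /dev/mapper/centos-swap|' /tmp/fstab-demo

# Exactly one commented line should result
grep -c '^#' /tmp/fstab-demo
```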


4-8. kubeadm init

For initialization, refer to the kubeadm documentation. Depending on the pod network add-on used, the value of --pod-network-cidr differs. Here, Calico is used as the pod network add-on, so --pod-network-cidr would normally be set to Calico's default pool; however, that range overlapped with the virtual machine network, so a different range was used.

# kubeadm init --pod-network-cidr=<pod CIDR> --apiserver-advertise-address=<master IP>

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config


5. Installing pod network add-on

In the Kubernetes environment, a pod network add-on is required for pods to communicate. CoreDNS will not start until the pod network add-on is installed; after 'kubeadm init', this can be checked with 'kubectl get pods -n kube-system'. If the pod network overlaps with the host network, problems may occur, which is why the pod network CIDR was changed in the 'kubeadm init' step to avoid overlapping the virtual machine network range.


kubeadm only supports Container Network Interface (CNI) based networks, not kubenet. The Container Network Interface (CNI), one of the Cloud Native Computing Foundation (CNCF) projects, is a standard interface for creating plug-ins that control networking between containers. There are several pod network add-ons built on CNI, and installation instructions are provided for each one. Here we are going to use Calico.


Instructions for installing Calico can be found in the Calico documentation. Calico is built on the third layer (Layer 3, the network layer) of the Open Systems Interconnection (OSI) model. Calico uses the Border Gateway Protocol (BGP) to build routing tables that facilitate communication among agent nodes. By using this protocol, Calico networks offer better performance and network isolation.


5-1. Calico Setting

If the pod network add-on is not installed, you can see that CoreDNS has not started yet.

# kubectl get pods -n kube-system

NAME                                       READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-s8d9q                   1/1     Running   0          6h16m
coredns-66bff467f8-t4mpw                   1/1     Running   0          6h16m
etcd-master                                1/1     Running   0          6h16m
kube-apiserver-master                      1/1     Running   0          6h16m
kube-controller-manager-master             1/1     Running   0          6h16m
kube-proxy-kgrxj                           1/1     Running   0          6h11m
kube-scheduler-master                      1/1     Running   0          6h16m



The pod network add-on installation is done only on the Master node. Since the value of --pod-network-cidr was changed from Calico's default in the 'kubeadm init' step, the corresponding CIDR value in the Calico YAML file must also be changed before installing it.

# kubectl apply -f <calico.yaml URL>

When changing the pod network CIDR, download the YAML, edit the CIDR value, and then apply it:

# wget <calico.yaml URL>
# sed -i 's|<default pod CIDR>|<chosen pod CIDR>|g' calico.yaml
# kubectl apply -f calico.yaml
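Since the CIDR values were dropped from the text above, here is the shape of that edit on a minimal snippet: 192.168.0.0/16 is Calico's documented default pool, and 10.244.0.0/16 stands in for whatever non-overlapping range you chose:

```shell
# A minimal stand-in for the CALICO_IPV4POOL_CIDR section of calico.yaml
cat > /tmp/calico-cidr-demo.yaml << 'EOF'
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
EOF

# Swap the default pool for the chosen pod network CIDR
sed -i 's|192\.168\.0\.0/16|10.244.0.0/16|' /tmp/calico-cidr-demo.yaml

# Confirm the replacement took effect
grep -o '10\.244\.0\.0/16' /tmp/calico-cidr-demo.yaml
```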


After installing the pod network add-on, you can check the status that CoreDNS started normally.

# kubectl get pods -n kube-system

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-578894d4cd-fnwtn   1/1     Running   0          6h13m
coredns-66bff467f8-s8d9q                   1/1     Running   0          6h16m
coredns-66bff467f8-t4mpw                   1/1     Running   0          6h16m
etcd-master                                1/1     Running   0          6h16m
kube-apiserver-master                      1/1     Running   0          6h16m
kube-controller-manager-master             1/1     Running   0          6h16m
kube-proxy-kgrxj                           1/1     Running   0          6h11m
kube-scheduler-master                      1/1     Running   0          6h16m


6. kubeadm join

To add a worker node, execute the 'kubeadm join' command on it, which registers the node with the master. The full join command, including a token, can be printed on the master node:

# kubeadm token create --print-join-command


6-1. Worker node registration

# kubeadm join <master IP>:6443 --token vyx68j.mu3vdmm3rscy60pd --discovery-token-ca-cert-hash sha256:258640f0d92a1d49da2012be529d3c240594fb007e8666015d0a99adc421ffd1


7. Installing Kubernetes Dashboard

How to install the Dashboard is described in the Kubernetes Dashboard documentation. There are two ways to install it: Recommended setup and Alternative setup. The Alternative setup, which accesses the Dashboard over HTTP instead of HTTPS, is not recommended.


7-1. Certificate generation

To access the Dashboard directly without 'kubectl proxy', you need to establish a secure HTTPS connection using a valid certificate.

# mkdir ~/certs
# cd ~/certs

# openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
# openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
# openssl req -new -key dashboard.key -out dashboard.csr
# openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
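The 'openssl req -new' step above prompts for certificate fields interactively. A non-interactive equivalent, using -subj with a hypothetical CN and a scratch directory, looks like this:

```shell
# Work in a throwaway directory instead of ~/certs
mkdir -p /tmp/certs-demo && cd /tmp/certs-demo

# Generate an unencrypted key directly (skipping the des3/passphrase round-trip)
openssl genrsa -out dashboard.key 2048 2> /dev/null

# Supply the subject on the command line instead of answering prompts
openssl req -new -key dashboard.key -out dashboard.csr -subj "/CN=kubernetes-dashboard-demo"

# Self-sign the CSR for one year, as in the article
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt 2> /dev/null

# Inspect the resulting certificate's subject
openssl x509 -in dashboard.crt -noout -subject
```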


7-2. Recommended setup

# kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kube-system
# kubectl apply -f <kubernetes-dashboard recommended.yaml URL>


7-3. Dashboard service changed to NodePort

If you change the Dashboard service to NodePort, you can connect directly through the NodePort without 'kubectl proxy'.

# kubectl edit service kubernetes-dashboard -n kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-07-31T04:14:49Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "2308"
  selfLink: /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard
  uid: 44d10147-c353-4f0c-8181-9a16ec6ebd73
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30899
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}


7-4. Dashboard access information confirm

# kubectl get service -n kubernetes-dashboard

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP    <none>        8000/TCP        6h15m
kubernetes-dashboard        NodePort   <none>        443:30899/TCP   6h15m


7-5. Dashboard account creation

# kubectl create serviceaccount cluster-admin-dashboard-sa
# kubectl create clusterrolebinding cluster-admin-dashboard-sa --clusterrole=cluster-admin --serviceaccount=default:cluster-admin-dashboard-sa


7-6. Check account token information required when accessing Dashboard

The token is a JWT, so its contents can be inspected by base64-decoding its payload.

# kubectl get secret $(kubectl get serviceaccount cluster-admin-dashboard-sa -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
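The command above does the decoding for you: secret data is stored base64-encoded, and `base64 --decode` recovers the raw token. The mechanism can be illustrated on a stand-in value:

```shell
# Encode a stand-in payload the way kubectl stores secret data, then decode it back
token=$(printf 'demo-token-payload' | base64)
printf '%s' "$token" | base64 --decode
```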



7-7. Dashboard connection check



8. Creating a CUBRID container image

8-1. Dockerfile

FROM centos:7


RUN yum install -y epel-release; \
    yum update -y; \
    yum install -y sudo \
                   java-1.8.0-openjdk-devel; \
    yum clean all;

RUN mkdir /docker-entrypoint-initdb.d

RUN useradd cubrid; \
    sed 102s/#\ %wheel/%wheel/g -i /etc/sudoers; \
    sed s/wheel:x:10:/wheel:x:10:cubrid/g -i /etc/group; \
    sed -e '61 i\cubrid\t\t soft\t nofile\t\t 65536 \
    cubrid\t\t hard\t nofile\t\t 65536 \
    cubrid\t\t soft\t core\t\t 0 \
    cubrid\t\t hard\t core\t\t 0\n' -i /etc/security/limits.conf; \
    echo -e "\ncubrid     soft    nproc     16384\ncubrid     hard    nproc     16384" >> /etc/security/limits.d/20-nproc.conf

RUN curl -o /home/cubrid/CUBRID-$CUBRID_BUILD_VERSION-Linux.x86_64.tar.gz <CUBRID download URL>/$CUBRID_VERSION/CUBRID-$CUBRID_BUILD_VERSION-Linux.x86_64.tar.gz > /dev/null 2>&1 \
    && tar -zxf /home/cubrid/CUBRID-$CUBRID_BUILD_VERSION-Linux.x86_64.tar.gz -C /home/cubrid \
    && mkdir -p /home/cubrid/CUBRID/databases /home/cubrid/CUBRID/tmp /home/cubrid/CUBRID/var/CUBRID_SOCK

COPY /home/cubrid/

RUN echo '' >> /home/cubrid/.bash_profile; \
    echo 'umask 077' >> /home/cubrid/.bash_profile; \
    echo '' >> /home/cubrid/.bash_profile; \
    echo '. /home/cubrid/' >> /home/cubrid/.bash_profile; \
    chown -R cubrid:cubrid /home/cubrid

COPY /usr/local/bin/
RUN chmod +x /usr/local/bin/; \
    ln -s /usr/local/bin/ /

VOLUME /home/cubrid/CUBRID/databases

EXPOSE 1523 8001 30000 33000




8-2. CUBRID environment configuration script

export CUBRID=/home/cubrid/CUBRID
export CUBRID_DATABASES=$CUBRID/databases

if [ ! -z "$LD_LIBRARY_PATH" ]; then
    export LD_LIBRARY_PATH=$CUBRID/lib:$LD_LIBRARY_PATH
else
    export LD_LIBRARY_PATH=$CUBRID/lib
fi

export PATH=$CUBRID/bin:$PATH
export TMPDIR=$CUBRID/tmp

export JAVA_HOME=/usr/lib/jvm/java
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64:$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH


8-3. Container entrypoint script

set -eo pipefail
shopt -s nullglob

# Set the database default character set.
# If the '$CUBRID_CHARSET' environment variable is not set, use 'ko_KR.utf8' as the default character set.
if [[ -z "$CUBRID_CHARSET" ]]; then
    CUBRID_CHARSET='ko_KR.utf8'
fi

# Set the database name to be created when the container starts.
# If the '$CUBRID_DATABASE' environment variable is not set, use 'demodb' as the default database.
if [[ -z "$CUBRID_DATABASE" ]]; then
    CUBRID_DATABASE='demodb'
fi

# Set the name of the user account to use in the database.
# If the '$CUBRID_USER' environment variable is not set, 'dba' is used as the default user.
if [[ -z "$CUBRID_USER" ]]; then
    CUBRID_USER='dba'
fi

# Set the password for the 'dba' account and the password for the user account.
# Check the '$CUBRID_DBA_PASSWORD' and '$CUBRID_PASSWORD' environment variables and set them individually.
# If only one of the two environment variables is set, set both to the same value.
# If neither environment variable is set, set the '$CUBRID_PASSWORD_EMPTY' environment variable value to 1.
if [[ ! -z "$CUBRID_DBA_PASSWORD" && -z "$CUBRID_PASSWORD" ]]; then
    CUBRID_PASSWORD="$CUBRID_DBA_PASSWORD"
elif [[ -z "$CUBRID_DBA_PASSWORD" && ! -z "$CUBRID_PASSWORD" ]]; then
    CUBRID_DBA_PASSWORD="$CUBRID_PASSWORD"
elif [[ -z "$CUBRID_DBA_PASSWORD" && -z "$CUBRID_PASSWORD" ]]; then
    CUBRID_PASSWORD_EMPTY=1
fi

# Check if you are currently connected with an account that can manage CUBRID.
if [[ "$(id -un)" = 'root' ]]; then
    su - cubrid -c "mkdir -p \$CUBRID_DATABASES/$CUBRID_DATABASE"
elif [[ "$(id -un)" = 'cubrid' ]]; then
    . /home/cubrid/
    mkdir -p "$CUBRID_DATABASES/$CUBRID_DATABASE"
else
    echo 'ERROR: Current account is not valid.' >&2
    echo "Account : $(id -un)" >&2
    exit 1
fi

# When the container starts, create a database with the name set in the '$CUBRID_DATABASE' environment variable.
su - cubrid -c "cd \$CUBRID_DATABASES/$CUBRID_DATABASE && cubrid createdb $CUBRID_DATABASE $CUBRID_CHARSET"
su - cubrid -c "sed s/#server=foo,bar/server=$CUBRID_DATABASE/g -i \$CUBRID/conf/cubrid.conf"

# If the account set in the '$CUBRID_USER' environment variable is not the default 'dba' account, create the user account.
if [[ ! "$CUBRID_USER" = 'dba' ]]; then
    su - cubrid -c "csql -u dba $CUBRID_DATABASE -S -c \"create user $CUBRID_USER\""
fi

# If the default database is 'demodb', DEMO data is loaded into the account set with the '$CUBRID_USER' environment variable.
if [[ "$CUBRID_DATABASE" = 'demodb' ]]; then
    su - cubrid -c "cubrid loaddb -u $CUBRID_USER -s \$CUBRID/demo/demodb_schema -d \$CUBRID/demo/demodb_objects -v $CUBRID_DATABASE"
fi

# If the '$CUBRID_PASSWORD_EMPTY' environment variable value is not 1, set the 'dba' password to the '$CUBRID_DBA_PASSWORD' environment variable value.
if [[ ! "$CUBRID_PASSWORD_EMPTY" = 1 ]]; then
    su - cubrid -c "csql -u dba $CUBRID_DATABASE -S -c \"alter user dba password '$CUBRID_DBA_PASSWORD'\""
fi

# If the '$CUBRID_PASSWORD_EMPTY' environment variable value is not 1, set the user account password to the '$CUBRID_PASSWORD' environment variable value.
if [[ ! "$CUBRID_PASSWORD_EMPTY" = 1 ]]; then
    su - cubrid -c "csql -u dba -p '$CUBRID_DBA_PASSWORD' $CUBRID_DATABASE -S -c \"alter user $CUBRID_USER password '$CUBRID_PASSWORD'\""
fi

# Execute '*.sql' files in the '/docker-entrypoint-initdb.d' directory with the csql utility.
for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sql)    echo "$0: running $f"; su - cubrid -c "csql -u $CUBRID_USER -p $CUBRID_PASSWORD $CUBRID_DATABASE -S -i \"$f\""; echo ;;
        *)        echo "$0: ignoring $f" ;;
    esac
done

# Remove the two environment variables, '$CUBRID_DBA_PASSWORD' and '$CUBRID_PASSWORD', which hold the 'dba' password and the user account password.
unset CUBRID_DBA_PASSWORD CUBRID_PASSWORD

echo 'CUBRID init process complete. Ready for start up.'

su - cubrid -c "cubrid service start"

tail -f /dev/null



8-4. Image build script

docker build --rm -t cubridkr/cubrid-test: .


8-5. Run the build script to create the Docker image

# sh


9. Upload CUBRID Container Image to Docker Hub

To upload a Docker image to Docker Hub, you must first create a Docker Hub account and repository. This part is simple, so I will skip it and only show the process of uploading a Docker image once the Docker Hub repository has been created.


9-1. Docker Hub Login

# docker login --username [username]


9-2. Uploading Docker images to Docker Hub

# docker push cubridkr/cubrid-test:


9-3. Docker Hub Confirm



10. Deploy CUBRID Containers to Kubernetes Environments

You can deploy a Docker container to the Kubernetes environment by writing a manifest in Kubernetes YAML format.

# cat << EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cubrid
spec:
  selector:
    matchLabels:
      app: cubrid
  replicas: 1
  template:
    metadata:
      labels:
        app: cubrid
    spec:
      containers:
      - name: cubrid
        image: cubridkr/cubrid-test:
        imagePullPolicy: Always
        ports:
        - name: manager
          containerPort: 8001
        - name: queryeditor
          containerPort: 30000
        - name: broker1
          containerPort: 33000
---
apiVersion: v1
kind: Service
metadata:
  name: cubrid
spec:
  type: NodePort
  ports:
  - name: manager
    port: 8001
  - name: queryeditor
    port: 30000
  - name: broker1
    port: 33000
  selector:
    app: cubrid
EOF


10-1. Distribution of CUBRID container by accessing Dashboard



10-2. Check connection information

# kubectl get service cubrid

NAME     TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                                          AGE
cubrid   NodePort                <none>        8001:31290/TCP,30000:30116/TCP,33000:32418/TCP   179m
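In the PORT(S) column each pair reads service-port:node-port; for example, the broker on service port 33000 is reachable at every node's IP on NodePort 32418. A small awk one-liner makes the mapping explicit:

```shell
# The PORT(S) value as printed by kubectl above
ports='8001:31290/TCP,30000:30116/TCP,33000:32418/TCP'

# Split the pairs and label which side is which
echo "$ports" | tr ',' '\n' | awk -F'[:/]' '{print "service port " $1 " -> NodePort " $2}'
```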


10-3. Connect to SQLGate and test.


