Contents

CUBRID 9.1 Release Notes

Release Notes Information

This document covers CUBRID 9.1 (Build No. 9.1.0.0212). CUBRID 9.1 includes all bug fixes and functional improvements made in the CUBRID 9.0 Beta version, as well as those applied to previous versions.

For details on the CUBRID 9.0 Beta, see CUBRID 9.0 Release Notes. For details on the CUBRID 2008 R4.3, see http://release.cubrid.org/en.

Overview

CUBRID 9.1 is an upgraded, stabilized version of CUBRID 9.0 Beta: issues found in the 9.0 Beta version have been fixed. With a variety of query-related functionalities, CUBRID 9.1 offers improved query processing performance and improved query optimization. In addition, its globalization-related functionality has been extended, and the operating tools have been improved.

Because the database volume of CUBRID 9.1 is not compatible with that of the CUBRID 9.0 Beta version, users of the 9.0 Beta or earlier versions must migrate their databases. For more information on migration, see the Upgrade section.

Major enhancements include:

New SQL Functions and Index Hints

  • New functions: NTILE, WIDTH_BUCKET, LEAD and LAG.
  • TRUNC and ROUND functions accept date types.
  • Support for new index hint clauses.
  • SQL hints for multi-table UPDATE and DELETE statements.
  • SQL hints for the MERGE statement.

Performance Improvements and Optimizations

  • Improve HA replication performance.
  • Improve multi-key range optimization.
  • Enhance optimization of ORDER BY and GROUP BY clauses.
  • Improve analytic function performance.
  • Improve performance of INSERT ON DUPLICATE KEY UPDATE and REPLACE statements.
  • Improve search and delete performance for non-unique indexes with many duplicate keys.
  • Improve delete performance when insert and delete operations are repeated.

Globalization

  • Support collation for tables.
  • SHOW COLLATION statement and new functions (CHARSET, COLLATION, and COERCIBILITY).
  • Support collation with expansion which sorts French with backward accent order.
  • Improve and fix restrictions and issues of 9.0 Beta version.

CUBRID SHARD

  • New "cubrid shard getid" command to print the shard ID for a given shard key.
  • CUBRID SHARD is now available on Windows.

Administration Utility

  • "cubrid applyinfo" utility now also shows information about the replication delay.
  • The killtran utility can now show the query execution information of each transaction.
  • The killtran utility can now remove transactions that are executing a designated SQL statement.
  • Query execution information is now logged in the server error log and the CAS log file when a query timeout occurs.

Behavioral Changes

  • CUBRID_LANG environment variable is no longer used.
  • Added the CUBRID_CHARSET environment variable, which sets the database charset (replacing CUBRID_LANG), and the CUBRID_MSG_LANG environment variable, which sets the charset for utility and error messages.
  • Array execution functions such as cci_execute_array and cci_execute_batch of CCI and the Statement.executeBatch and PreparedStatement.executeBatch methods of JDBC now commit after each individual query under auto-commit mode, while previous versions committed once for the entire execution.
  • Changed the behavior of the cci_execute_array, cci_execute_batch, and cci_execute_result functions when an error occurs while executing multiple statements. These functions now continue executing all given queries, while previous versions stopped execution and returned an error. Users can access the results and identify the errors with the CCI_QUERY_RESULT_* macros.
  • OFF is no longer supported for KEEP_CONNECTION broker parameter.
  • SELECT_AUTO_COMMIT broker parameter is no longer supported.
  • Change the allowed value range of a broker parameter APPL_SERVER_MAX_SIZE_HARD_LIMIT to 1 - 2,097,151.
  • Change the default value of a broker parameter SQL_LOG_MAX_SIZE from 100 MB to 10 MB.
  • Change the behavior of the call_stack_dump_activation_list parameter.

Driver Compatibility

  • The JDBC and CCI drivers of CUBRID 9.1 are compatible with the CUBRID 9.0 Beta and CUBRID 2008 R4.x versions. Some features fixed or improved in 9.1 are not supported when 9.1 drivers connect to previous versions.

Numerous Improvements and Bug Fixes

  • Fix many critical issues of the previous versions.
  • Improve or fix many issues of stability, SQL, partitioning, HA, sharding, utilities, and drivers.

For more details on the changes, see the sections below. Users of previous versions should check the Behavioral Changes and New Cautions sections.

New Features

SQL

WIDTH_BUCKET function(CUBRIDSUS-4209)

The WIDTH_BUCKET function evenly splits a range into buckets and assigns a sequential number to each bucket; that is, it creates an equi-width histogram. The range is given by its minimum and maximum values, is divided into intervals of equal width, and bucket numbers are assigned sequentially starting from 1.

The following example shows how to split eight customers into five buckets over the range from '1950-01-01' to '1999-12-31', based on their dates of birth. When a value is out of the range, 0 or the largest bucket number + 1 is returned.

SELECT name, birthdate, WIDTH_BUCKET (birthdate, date'1950-01-01', date'2000-1-1', 5) age_group
FROM t_customer
ORDER BY birthdate;
  name                  birthdate     age_group
===============================================
  'James'               12/28/1948            0
  'Amie'                03/18/1978            4
  'Tom'                 07/28/1980            4
  'Jane'                05/12/1983            5
  'David'               07/28/1986            5
  'Lora'                03/26/1987            5
  'Peter'               10/25/1988            5
  'Ralph'               03/17/1995            6
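The equi-width rule can be sketched in Python. This is a minimal illustration of the general WIDTH_BUCKET semantics over an integer range, not CUBRID's implementation:

```python
def width_bucket(value, low, high, buckets):
    """Equi-width bucketing over [low, high): returns 0 below the range
    and buckets + 1 at or above its upper bound."""
    if value < low:
        return 0
    if value >= high:
        return buckets + 1
    # Each bucket covers an equal share of the range; numbering starts at 1.
    return int((value - low) * buckets // (high - low)) + 1

# Five equal buckets over the range [0, 100): each bucket spans 20 units.
print([width_bucket(v, 0, 100, 5) for v in (-5, 0, 19, 20, 99, 100)])
# -> [0, 1, 1, 2, 5, 6]
```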

NTILE Analytic function(CUBRIDSUS-9688)

The NTILE analytic function evenly distributes rows among the specified number of buckets and assigns a sequential number to each bucket; that is, it creates an equi-height histogram.

The following example shows how to split the rows of eight customers into five buckets based on their dates of birth. Buckets #1, #2, and #3 have two rows each, and buckets #4 and #5 have one row each.

SELECT name, birthdate, NTILE (5) OVER (ORDER BY birthdate) age_group
FROM t_customer;
  name                  birthdate     age_group
===============================================
  'James'               12/28/1948            1
  'Amie'                03/18/1978            1
  'Tom'                 07/28/1980            2
  'Jane'                05/12/1983            2
  'David'               07/28/1986            3
  'Lora'                03/26/1987            3
  'Peter'               10/25/1988            4
  'Ralph'               03/17/1995            5
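The equi-height distribution above can be sketched in Python. This is a simplified model that assumes the rows are already sorted, not CUBRID's implementation:

```python
def ntile(n_rows, buckets):
    """Assign bucket numbers to n_rows ordered rows, equi-height style:
    the first (n_rows % buckets) buckets each receive one extra row."""
    base, extra = divmod(n_rows, buckets)
    result = []
    for b in range(1, buckets + 1):
        size = base + (1 if b <= extra else 0)
        result.extend([b] * size)
    return result

# Eight rows into five buckets: buckets 1-3 get two rows, 4-5 get one.
print(ntile(8, 5))  # -> [1, 1, 2, 2, 3, 3, 4, 5]
```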

LEAD and LAG analytic function(CUBRIDSUS-9414)

The LEAD analytic function returns a column value from the row following the current row, and the LAG analytic function returns a column value from the row preceding it.

The following example shows how to sort employee numbers and output the next employee number on the same row:

CREATE TABLE t_emp (name VARCHAR (10), empno INTEGER);
INSERT INTO t_emp VALUES
('Amie', 11011), ('Jane', 13077), ('Lora', 12045), ('James', 12006),
('Peter', 14006), ('Tom', 12786), ('Ralph', 23518), ('David', 55);

SELECT name, empno, LEAD (empno, 1) OVER (ORDER BY empno) next_empno
FROM t_emp
ORDER BY 2;
  name                        empno   next_empno
================================================
  'David'                        55        11011
  'Amie'                      11011        12006
  'James'                     12006        12045
  'Lora'                      12045        12786
  'Tom'                       12786        13077
  'Jane'                      13077        14006
  'Peter'                     14006        23518
  'Ralph'                     23518         NULL

The following example shows how to sort employee numbers and output the previous employee number on the same row:

SELECT name, empno, LAG (empno, 1) OVER (ORDER BY empno) prev_empno
FROM t_emp
ORDER BY 2;
  name                        empno   prev_empno
================================================
  'David'                        55         NULL
  'Amie'                      11011           55
  'James'                     12006        11011
  'Lora'                      12045        12006
  'Tom'                       12786        12045
  'Jane'                      13077        12786
  'Peter'                     14006        13077
  'Ralph'                     23518        14006
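The two examples above can be modeled in Python. This is an illustrative sketch of LEAD/LAG over an already-ordered list, not CUBRID's implementation:

```python
def lead(values, offset=1):
    """Value from the row `offset` positions after the current one (None past the end)."""
    return [values[i + offset] if i + offset < len(values) else None
            for i in range(len(values))]

def lag(values, offset=1):
    """Value from the row `offset` positions before the current one (None before the start)."""
    return [values[i - offset] if i - offset >= 0 else None
            for i in range(len(values))]

empno = [55, 11011, 12006, 12045, 12786, 13077, 14006, 23518]
print(lead(empno))  # [11011, 12006, 12045, 12786, 13077, 14006, 23518, None]
print(lag(empno))   # [None, 55, 11011, 12006, 12045, 12786, 13077, 14006]
```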

TRUNC function accepts DATE, DATETIME and TIMESTAMP types(CUBRIDSUS-9413)

The TRUNC function truncates a DATE, DATETIME, or TIMESTAMP value to the unit given by the specified format.

SELECT TRUNC (TO_DATE ('2012-10-26'), 'YYYY') d;
  d
============
 01/01/2012

ROUND function accepts DATE, DATETIME and TIMESTAMP types(CUBRIDSUS-9488)

The ROUND function rounds a DATE, DATETIME, or TIMESTAMP value to the unit given by the specified format.

SELECT ROUND (datetime'2012-10-21 10:20:30', 'yyyy') d;
  d
============
 01/01/2013
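The year-level behavior of the two examples above can be sketched in Python. This is an illustrative model, not CUBRID's implementation; the July 1st round-up cutoff is an assumption borrowed from common SQL date-rounding conventions:

```python
from datetime import datetime

def trunc_year(dt):
    """Truncate to the first day of the year, like TRUNC(..., 'yyyy')."""
    return dt.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)

def round_year(dt):
    """Round to the nearest January 1st, like ROUND(..., 'yyyy'):
    dates from July 1st onward round up to the next year (assumed cutoff)."""
    bumped = dt.replace(year=dt.year + 1) if dt.month >= 7 else dt
    return bumped.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)

print(trunc_year(datetime(2012, 10, 26)))              # 2012-01-01 00:00:00
print(round_year(datetime(2012, 10, 21, 10, 20, 30)))  # 2013-01-01 00:00:00
```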

Support new index hint clause(CUBRIDSUS-6675)

Support new index hint syntaxes, such as the USE INDEX, FORCE INDEX, and IGNORE INDEX clauses, in addition to the existing USING INDEX clause.

SELECT * FROM tbl USE INDEX (idx1), IGNORE INDEX (idx2) WHERE col1 < 4;

In addition, the "USING INDEX index_name(-)" syntax, which makes the optimizer ignore the designated indexes, is supported. It provides the same semantics as the IGNORE INDEX clause.

SELECT * FROM tbl  WHERE col1 < 4 USING INDEX idx1(-);

SQL hints for UPDATE JOIN and DELETE JOIN statement(CUBRIDSUS-9491)

SQL hints such as ORDERED, USE_DESC_IDX, NO_COVERING_INDEX, NO_DESC_IDX, USE_NL, USE_IDX, USE_MERGE, NO_MULTI_RANGE_OPT, and RECOMPILE are allowed for the UPDATE JOIN and DELETE JOIN statements.

Index hints for MERGE statement(CUBRIDSUS-10134)

Index hints such as USE_UPDATE_IDX and USE_INSERT_IDX are supported for the MERGE statement. The USE_UPDATE_IDX hint is applied to the ON condition and the WHERE clause of the UPDATE clause when the UPDATE clause is performed, and the USE_INSERT_IDX hint is applied to the ON condition when the INSERT clause is performed.

MERGE /*+ USE_UPDATE_IDX(i_s_ij) USE_INSERT_IDX(i_t_ij, i_t_i) */
INTO target t USING source s ON t.i=s.i
WHEN MATCHED THEN UPDATE SET t.j=s.j WHERE s.i <> 1
WHEN NOT MATCHED THEN INSERT VALUES(i,j);

Reference previously specified columns in the VALUES and SET clauses of the INSERT statement(CUBRIDSUS-3112)

It is now allowed to reference previously specified columns in the VALUES and SET clauses of an INSERT statement. For example, when an expression like "b = a + 1" is evaluated, the value previously assigned to column a can be referenced.

CREATE TABLE tbl (a INT, b INT);
INSERT INTO tbl SET a = 0, b = a + 1;

In the above example, previous versions could not evaluate b's value, but the fixed version evaluates b as 1 by referencing a's value.

Assignment expressions are evaluated from left to right. If a column's value is not specified, its default value is assigned; if no default value is defined, NULL is assigned.
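The left-to-right evaluation rule can be modeled in Python. This is a hypothetical sketch of the rule, not CUBRID internals:

```python
def evaluate_insert_set(assignments, defaults):
    """Evaluate SET-style assignments left to right; each expression may read
    columns assigned earlier. Unassigned columns get their default (None = NULL)."""
    row = {}
    for column, expr in assignments:
        row[column] = expr(row)          # earlier columns are visible here
    for column, default in defaults.items():
        row.setdefault(column, default)  # NULL (None) when no default exists
    return row

# Models: INSERT INTO tbl SET a = 0, b = a + 1;
row = evaluate_insert_set([("a", lambda r: 0), ("b", lambda r: r["a"] + 1)],
                          {"a": None, "b": None})
print(row)  # {'a': 0, 'b': 1}
```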

Globalization

SHOW COLLATION statement and CHARSET, COLLATION and COERCIBILITY function(CUBRIDSUS-9404)

The SHOW COLLATION statement shows the collation information. The CHARSET, COLLATION, and COERCIBILITY functions return the charset, collation, and coercibility of their argument, respectively.

Support collation for table(CUBRIDSUS-9403)

Version 9.1 allows a collation to be specified for tables, while 9.0 Beta only allowed collations for columns.

CREATE TABLE address_book (id INTEGER, name STRING, address1 STRING, address2 STRING) COLLATE utf8_en_cs;

synccolldb utility to synchronize the database collation with the system collation(CUBRIDSUS-9495)

The "cubrid synccolldb" utility changes the database collation managed in the system catalog tables according to the system collation configured in $CUBRID/conf/cubrid_locales.txt.

% cubrid synccolldb testdb

In addition, the script that creates the locale library (make_locale.sh for Linux, make_locale.bat for Windows) now prints a message asking users to execute the cubrid synccolldb utility, so that existing databases are synchronized with the collations of the newly created locale library:

To check compatibility and synchronize your existing databases, run:
    cubrid synccolldb <database-name>

The cubrid synccolldb utility changes only the collation defined in the system catalog tables; it does not change the collation of user tables or the charset of stored data.

Support collation with expansion that sorts by backward accents(CUBRIDSUS-9407)

Support a collation with expansion (utf8_fr_exp_ab), which sorts Canadian French in backward accent order. Sorting by backward accents means that accent weights are compared starting from the end of the string.

Normal Accent Ordering : cote < coté < côte < côté
Backward Accent Ordering : cote < côte < coté < côté
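The backward accent rule can be modeled in Python. This is an illustrative sketch with hypothetical accent weights for o/ô and e/é, not the real utf8_fr_exp_ab collation tables:

```python
ACCENT = {"ô": 1, "é": 1}       # hypothetical weights; unaccented letters weigh 0
BASE = {"ô": "o", "é": "e"}     # map accented letters to their base letter

def backward_key(word):
    """Primary key: base letters; secondary key: accent weights read
    from the END of the word (the backward accent rule)."""
    base = "".join(BASE.get(c, c) for c in word)
    accents = tuple(ACCENT.get(c, 0) for c in reversed(word))
    return (base, accents)

words = ["coté", "côté", "cote", "côte"]
print(sorted(words, key=backward_key))  # ['cote', 'côte', 'coté', 'côté']
```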

Sharding

"cubrid shard getid" command to print the shard ID for the shard key(CUBRIDSUS-9547)

"cubrid shard getid" command prints the shard ID for the shard key.

The following example prints the shard ID of shard key 1 through shard1, the shard broker:

% cubrid shard getid -b shard1 1

The -f option dumps all information of the shard ID.

% cubrid shard getid -b shard1 -f 1

Support Windows(CUBRIDSUS-9549)

CUBRID SHARD is now available on Windows.

Driver

[JDBC][CCI] Support load balancing(CUBRIDSUS-8675)

Provide a load-balancing feature that lets applications connect to the primary host and the hosts specified in altHosts in random order, when altHosts is added to the connection URL of CCI or JDBC. In the following example of a connection URL, this functionality is activated by setting loadBalance to true.

jdbc:cubrid:host1:port1:demodb:::?altHosts=host2:port2,host3:port3&loadBalance=true
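The host-selection behavior can be sketched in Python. This is an illustrative model of the connection order, not the driver source; the function name is hypothetical:

```python
import random

def pick_hosts(primary, alt_hosts, load_balance):
    """With loadBalance=true, the primary host and altHosts are tried in
    random order; otherwise the primary stays first and altHosts serve
    only as failover targets."""
    hosts = [primary] + alt_hosts
    if load_balance:
        random.shuffle(hosts)
    return hosts

print(pick_hosts("host1:port1", ["host2:port2", "host3:port3"], True))
print(pick_hosts("host1:port1", ["host2:port2", "host3:port3"], False))
```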

[CCI] cci_close_query_result function(CUBRIDSUS-9269)

Unlike the JDBC driver, which provides separate methods to close a result set and a statement, the CCI driver had only one function, cci_close_req_handle, to close both. The new cci_close_query_result function closes only the given result set. If it is not called, the memory for the result set stays allocated until the statement is closed, which may increase memory usage.

When the cci_close_req_handle function is called without first calling the cci_close_query_result function, both the result set and the statement are closed, as in previous versions.

Configuration

check_peer_alive system parameter(CUBRIDSUS-9308)

Introduce the check_peer_alive system parameter, which sets whether to check that the database server process (cub_server) and the client processes connected to it are running normally. The client processes include the broker application server (cub_cas) process, the replication log copy process (copylogdb), the replication log apply process (applylogdb), and the CSQL interpreter (csql).

When a server process and a client process receive no response for a long time (e.g., 5 seconds or longer) while waiting for data over the network, they check, according to this configuration, whether the peer is operating normally. If they determine that the peer is not operating normally, they forcibly close the connection.

When the ECHO service (port 7) is blocked by a firewall, the server or client process may mistakenly regard the peer process as terminated. Set the parameter to none to avoid this problem.

ENABLE_MONITOR_HANG broker parameter(CUBRIDSUS-7558)

Add functionality to block access to a broker when a certain rate of its CASs is determined to be hung. This functionality is enabled when the ENABLE_MONITOR_HANG parameter is set to ON.

When a CAS hang continues for more than one minute, the broker process determines that the CAS is hung. If the broker is determined to be abnormal based on the number of hung CASs, applications attempting to access it are blocked and redirected to the alternative hosts (altHosts) specified in the connection URL.

Administration Utility

Provide replication delay information(CUBRIDSUS-9525)

The "cubrid applyinfo" utility now also shows information about the replication delay: the delay in copying transaction logs and the delay in applying them. The following example shows the replication delay information:

% cubrid applyinfo -L /home/cubrid/DB/testdb_nodeA -r nodeA -a -i 3 testdb

...

*** Delay in Copying Active Log ***
Delayed log page count         : 4
Estimated Delay                : 0 second(s)

*** Delay in Applying Copied Log ***
Delayed log page count         : 1459
Estimated Delay                : 22 second(s)

tranlist utility(CUBRIDSUS-9601)

The "cubrid tranlist" utility shows the transaction information of the database. Only DBA and DBA-group users are allowed to use it.

% cubrid tranlist -s testdb

Tran index         User name      Host name      Process id          Program name
-------------------------------------------------------------------------------------
   1(ACTIVE)         PUBLIC        myhost            1822           broker1_cub_cas_1
   2(ACTIVE)            dba        myhost            1823           broker1_cub_cas_2
   3(COMMITTED)         dba        myhost            1824           broker1_cub_cas_3
-------------------------------------------------------------------------------------

killtran utility with -q option shows the query execution information(CUBRIDSUS-9588)

cubrid killtran with the -q (--query-exec-info) option shows the query execution information of each transaction.

% cubrid killtran -q testdb

Tran index  Process id  Program name   Query time   Tran time  Wait for lock holder        SQL ID  SQL Text
--------------------------------------------------------------------------------------------------------------------------------
  1(ACTIVE)      22982   b1_cub_cas_1        0.00       0.00                    -1                 *** empty ***
  2(ACTIVE)      22983   b1_cub_cas_2        1.80       1.80                     1  5377225ebc75a  update [ta] [ta] set [a]=5 wh
--------------------------------------------------------------------------------------------------------------------------------

The status of a query execution includes the following information:

  • Tran index: Transaction index
  • Process id: Client process ID
  • Program name: Client program name
  • Query time: Total execution time of a query being executed (unit: seconds)
  • Tran time: Total execution time of the current transaction (unit: seconds)
  • Wait for lock holder: The list of transactions which hold the lock when the current transaction is in lock waiting
  • SQL Text: The query statement being executed (up to 30 characters)

killtran utility can remove transactions with SQL ID(CUBRIDSUS-9653)

The killtran utility can now remove transactions that are executing a designated SQL statement, identified by its SQL ID.

% cubrid killtran --query-exec-info testdb

Tran index  Process id  Program name             Query time   Tran time  Wait for lock holder     SQL ID      SQL Text
--------------------------------------------------------------------------------------------------------------------------------------------
  1(ACTIVE)    26650    query_editor_cub_cas_1        0.00         0.00                    -1  *** empty ***
  2(ACTIVE)    26652    query_editor_cub_cas_3        0.00         0.00                    -1  *** empty ***
  3(ACTIVE)    26651    query_editor_cub_cas_2        0.00         0.00                    -1  *** empty ***
  4(ACTIVE)    26653    query_editor_cub_cas_4        1.80         1.80               2, 1, 3  cdcb58552e320  update [ta] [ta] set [ta].[a]=
--------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID: cdcb58552e320
Tran index : 4
update [ta] [ta] set [ta].[a]= ?:1  where ([ta].[a]> ?:0 )

% cubrid killtran --kill-sql-id=cdcb58552e320 -f testdb

-i option of killtran utility accepts multiple transactions(CUBRIDSUS-9601)

You can specify several transactions with -i option of the killtran utility to remove them at once.

% cubrid killtran -i 1,3,8 testdb

Log the query which exceeded the specified time and the query execution plan information(CUBRIDSUS-6987)

With the sql_trace_slow_msecs system parameter, query statements that exceed the specified time can now be logged together with their query execution plans. When the sql_trace_execution_plan system parameter is set to yes, the SQL statement, the query execution plan, and the cubrid statdump information are recorded in the server error log file and the broker application server (CAS) log file. When cubrid plandump is executed, the corresponding SQL statement and query execution plan are output.

However, the corresponding information is recorded in the server error log file only when the value of the error_log_level parameter is NOTIFICATION.

Show parameters of broker and shard(CUBRIDSUS-7771)

The "cubrid broker info" and "cubrid shard info" commands show the parameter information of the broker and the shard, respectively.

New options for broker_log_converter and broker_log_runner(CUBRIDSUS-8804)

The -i option of cubrid broker_log_converter, the broker log conversion utility, prints the query ID as a comment before each query statement. The cubrid broker_log_runner utility, which re-executes queries from the output file converted by cubrid broker_log_converter, has two new options: the -s option, which prints statdump information for each query, and the -a option, which executes queries in auto-commit mode.

Behavioral Changes

Globalization

Change collation coercibility level(CUBRIDSUS-10057)

Change the coercibility levels, which determine that an argument whose collation has higher coercibility is converted to the collation of the argument with lower coercibility, as shown in the following table. The coercibility of binary collations is changed to eliminate an inconsistency in LIKE query results between host variables and string constants.

CREATE TABLE tbl (s STRING COLLATE utf8_bin);
INSERT INTO tbl VALUES ('bbb');
SET NAMES utf8 COLLATE utf8_gen_ci;

-- The following query normally outputs one result:
SELECT * FROM tbl WHERE s LIKE '_B_';

-- The following query outputs 0 results in the 9.0 Beta version, but outputs one result in version 9.1.
PREPARE st from 'SELECT * FROM tbl WHERE s LIKE ?';
EXECUTE st USING '_B_';

Coercibility   Argument (Operand) of the Expression
-----------    --------------------------------------------------------------------------
0              Operand with the COLLATE modifier
1              Column with a non-binary collation
2              Column with a binary collation, except with the ISO-8859-1 charset
3              Column with a binary collation and the ISO-8859-1 charset (iso88591_bin)
4              SELECT value or expression with a non-binary collation
5              SELECT value or expression with a binary collation, except with the ISO-8859-1 charset
6              SELECT value or expression with a binary collation and the ISO-8859-1 charset (iso88591_bin)
7              Special functions (USER(), DATABASE(), SCHEMA(), VERSION())
8              Constant string with a non-binary collation
9              Constant string with a binary collation, except with the ISO-8859-1 charset
10             Constant string with a binary collation and the ISO-8859-1 charset (iso88591_bin)
11             Host variables, user-defined variables
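The higher-to-lower conversion rule can be sketched in Python. This is an illustrative model of the general coercibility mechanism, not CUBRID's resolver; the collation names below are only examples:

```python
def resolve_collation(left, right):
    """Pick the collation of the operand with LOWER coercibility.
    Each operand is a (collation, coercibility) pair; equal coercibility
    with different collations is treated here as incompatible (None)."""
    (l_coll, l_coer), (r_coll, r_coer) = left, right
    if l_coer == r_coer:
        return l_coll if l_coll == r_coll else None
    return l_coll if l_coer < r_coer else r_coll

# A column with a non-binary collation (level 1) wins over a host variable (level 11).
print(resolve_collation(("utf8_gen_ci", 1), ("utf8_bin", 11)))  # utf8_gen_ci
```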

Prevent execution of SQL functions whose arguments have collations that are not coercible to each other(CUBRIDSUS-9920)

When two or more arguments are given to the IF, CASE, DECODE, or FIELD function, the function can be executed only when the collation of one argument can be converted to that of the other.

If the collation of an input argument is binary, it is compatible. As shown in the following example, when a utf8_bin string and an iso88591_bin string are given, the utf8_bin string is converted to the iso88591_bin charset:

SELECT IF (1, _utf8'a', _iso88591'b') AS `if`, CHARSET (IF (1, _utf8'a', _iso88591'b')) AS `charset`;
   if                   charset
============================================
  'a'                   'iso88591'

If the collations of the input arguments are non-binary, neither can be converted to the other, causing an error.

SELECT * FROM t1
WHERE IF (id % 2, _utf8'a' COLLATE utf8_en_cs, _utf8'b' COLLATE utf8_en_ci) = CONCAT (a, '');
ERROR: before '  = CONCAT(a, ''); '
'if ' requires arguments with compatible collations.

Hash partitioning with a non-binary collation column is not allowed(CUBRIDSUS-10156)

A table can no longer be hash-partitioned on a string column with a non-binary collation.

SET NAMES utf8 COLLATE utf8_de_exp_ai_ci;

-- version 9.1 does not allow a table to be operated, as shown below:
CREATE TABLE t2 (code VARCHAR(10)) COLLATE utf8_de_exp_ai_ci PARTITION BY HASH (code) PARTITIONS 4;
INSERT INTO t2(code) VALUES ('AE');
INSERT INTO t2(code) VALUES ('ae');
INSERT INTO t2(code) VALUES ('Ä');
INSERT INTO t2(code) VALUES ('ä');

-- The 9.0 Beta version had a problem in which the following query output four rows instead of the two rows 'ä' and 'Ä':
SELECT * FROM t2 WHERE code='ä';

Driver

[JDBC][CCI] Change array execution functions to commit after each query under auto commit mode(CUBRIDSUS-6148)

The cci_execute_array and cci_execute_batch functions of CCI and the Statement.executeBatch and PreparedStatement.executeBatch methods of JDBC now commit after each individual query under auto-commit mode, while previous versions committed once for the entire execution.

[JDBC] PreparedStatement.setBoolean method(CUBRIDSUS-9205)

There was a problem in which only BIT type values could be bound through the PreparedStatement.setBoolean method. Now the BIT type is excluded, and all numeric types, such as SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE, and MONETARY, and all character types, such as CHAR and VARCHAR, can be bound.

[CCI] Change error handling of array/batch execution functions of CCI(CUBRIDSUS-9364)

When several queries were executed at once in a CCI application with the cci_execute_array or cci_execute_batch function, versions from 2008 R3.0 to 2008 R4.1 returned the error code of the failing query when an error occurred in at least one query. Since the 2008 R4.3 version, these functions return the number of executed queries, and the error of each query can be checked with the CCI_QUERY_RESULT_* macros.

The CCI_QUERY_RESULT_ERR_NO macro checks the error number of a specific failed query among the entire query results. An error indicator is also provided in the return value of the CCI_QUERY_RESULT_RESULT macro to identify whether an error is from the CAS (-1) or from the DBMS (-2).

The related CCI_QUERY_RESULT_* macros are as follows:

  • CCI_QUERY_RESULT_RESULT
  • CCI_QUERY_RESULT_ERR_NO
  • CCI_QUERY_RESULT_ERR_MSG
  • CCI_QUERY_RESULT_STMT_TYPE
  • CCI_QUERY_RESULT_OID

[CCI] Change usage of user and password from cci_property_set and cci_connect_with_url function(CUBRIDSUS-9393)

The usage of the DB user and password in the cci_property_set and cci_connect_with_url functions has been unified. The two functions now operate as follows:

  • When both the argument and the URL are specified, the argument value has a higher priority than the URL value.
  • When either of the two is NULL, the one that is not NULL is used.
  • When both of the two are NULL, NULL is used.
  • If the DB user argument is NULL, "public" is set; if the password argument is NULL, NULL is set.
  • If the password argument is NULL, the URL setting is used.

In previous versions, when the DB user and password of a DATASOURCE were set with the cci_property_set function, the DB user argument had to be set; if the password argument was not set, it was set to NULL, and when the password argument was NULL, the password of the URL argument was used. When an application set the DB user and password through the cci_connect_with_url function, a NULL DB user argument was interpreted as the "public" user, and when NULL was given as the password argument, the password of the URL was used.
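The argument-over-URL priority can be sketched in Python. This is an illustrative model of the first rules above, not the actual CCI source; the "public" fallback is applied here only after both sources are NULL, which is one reading of the rules:

```python
def resolve(arg, url_value):
    """Argument beats URL; a non-None value beats None; both None stays None."""
    return arg if arg is not None else url_value

def resolve_db_user(arg, url_value):
    """DB user resolution: after both sources are checked, a missing
    user falls back to "public" (assumed interpretation)."""
    value = resolve(arg, url_value)
    return value if value is not None else "public"

print(resolve_db_user("dba", "app_user"))  # 'dba' (the argument wins)
print(resolve_db_user(None, "app_user"))   # 'app_user'
print(resolve_db_user(None, None))         # 'public'
print(resolve("secret", None))             # 'secret' (password: non-None wins)
```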

Default Setting Changes

Remove CUBRID_LANG and add CUBRID_CHARSET and CUBRID_MSG_LANG environment variables(CUBRIDSUS-9719)(CUBRIDSUS-9468)

Remove the CUBRID_LANG environment variable, and add CUBRID_CHARSET, which sets the database charset, and CUBRID_MSG_LANG, which designates the charset of utility and error messages. The CUBRID_CHARSET environment variable is mandatory. When CUBRID_MSG_LANG is omitted, it follows the value of CUBRID_CHARSET.

Change the upper limit of the sort_buffer_size system parameter to 2G(CUBRIDSUS-9582)

Change the upper limit of the sort_buffer_size system parameter to 2 GB. In previous versions, the server terminated abnormally when sort_buffer_size was set to a value larger than 2 GB and more than 2 GB of sort buffer was actually used, for example, while loading a huge index.

Change the usage of call_stack_dump_activation_list system parameter(CUBRIDSUS-9836)

The way cubrid.conf handles the call_stack_dump_activation_list parameter has been changed.

  • When a value for call_stack_dump_activation_list was specified, previous versions still set some error numbers by default; 9.1 sets only the specified error codes.
  • Add the DEFAULT keyword for call_stack_dump_activation_list. The DEFAULT keyword is replaced with "-2, -7, -13, -14, -17, -19, -21, -22, -45, -46, -48, -50, -51, -52, -76, -78, -79, -81, -90, -96, -97, -313, -314, -407, -414, -415, -416, -417, -583, -603, -836, -859, -890, -891, -976, -1040, -1075".

If the value of call_stack_dump_activation_list is not specified, it is set to this same list by default, as before.
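The DEFAULT keyword expansion can be sketched in Python. This is an illustrative helper, not CUBRID's parameter parser; the error-code list is the one from the bullet above:

```python
DEFAULT_CODES = [-2, -7, -13, -14, -17, -19, -21, -22, -45, -46, -48, -50,
                 -51, -52, -76, -78, -79, -81, -90, -96, -97, -313, -314,
                 -407, -414, -415, -416, -417, -583, -603, -836, -859,
                 -890, -891, -976, -1040, -1075]

def expand_activation_list(setting):
    """Replace the DEFAULT keyword with the built-in error-code list;
    other tokens are parsed as individual error codes."""
    codes = []
    for token in setting.split(","):
        token = token.strip()
        if token.upper() == "DEFAULT":
            codes.extend(DEFAULT_CODES)
        else:
            codes.append(int(token))
    return codes

# "DEFAULT, -1000" keeps the built-in codes and adds error -1000 on top.
print(expand_activation_list("DEFAULT, -1000")[-1])  # -1000
print(len(expand_activation_list("DEFAULT")))        # 37
```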

Remove OFF option from KEEP_CONNECTION broker parameter(CUBRIDSUS-5316)

Remove the OFF option from the KEEP_CONNECTION broker parameter. In 2008 R4.x and earlier versions, when it was set to OFF, user-defined variables and the LAST_INSERT_ID, ROW_COUNT, and PREPARE statements did not work normally.

Remove SELECT_AUTO_COMMIT broker parameter(CUBRIDSUS-9326)

The SELECT_AUTO_COMMIT broker parameter was removed, as it is no longer needed.

Change APPL_SERVER_MAX_SIZE_HARD_LIMIT broker parameter(CUBRIDSUS-10260)

Change the permitted value range of the APPL_SERVER_MAX_SIZE_HARD_LIMIT broker parameter to 1 - 2,097,151. If the value is out of this range, the broker will not start. In addition, when APPL_SERVER_MAX_SIZE_HARD_LIMIT is changed to a value smaller than APPL_SERVER_MAX_SIZE by using broker_changer, a warning message is displayed.

Change the default value of SQL_LOG_MAX_SIZE broker parameter(CUBRIDSUS-9944)

Change the default value of SQL_LOG_MAX_SIZE in cubrid_broker.conf from 100 MB to 10 MB. Users of previous versions who want to keep SQL logs of the same volume as before after upgrading should set SQL_LOG_MAX_SIZE to 100,000 (unit: KB).

HA

Allow SELECT statements when the node is in to-be-active state(CUBRIDSUS-8896)

SELECT statements can now be executed when the node is in the to-be-active state.

Other

Changes on "cubrid broker status" command(CUBRIDSUS-9602)

The changes to the "cubrid broker status" command are as follows:

  • Only the status of running brokers is shown when the SERVICE parameter is ON.

    % cubrid broker status SERVICE=ON

  • With the -b option:
    • The first 20 characters of the broker name are shown. When the broker name exceeds 20 characters, '...' is printed right after the first 20 characters.
    • The REQ item is no longer provided.
    • QPS and TPS are shown as unsigned 64-bit integers. In previous versions, negative numbers were shown when an overflow happened.
    • A #CONNECT item is added to provide the total number of connections to the CAS processes belonging to the broker.
    • The SELECT, INSERT, UPDATE, DELETE, and OTHERS items show the number of executions of each statement type. However, when the -f option is given, these items are not printed.
    • A UNIQUE-ERR-Q item shows the number of unique key violation errors.
  • Without the -b option:
    • "CLIENT_WAIT" and "CLOSE_WAIT" are shown instead of "CLIENT WAIT" and "CLOSE WAIT".
    • The broker configuration is no longer provided.
    • When the -l option is given, the information of CASs whose status is "CLOSE_WAIT" is excluded.

Improvements and Fixes

Performance and Optimization

Improve HA replication performance(CUBRIDSUS-8114)

The HA replication performance is significantly improved: changes on the master node are now reflected on the slave nodes in near real time. The following table shows the replication delay under a heavy YCSB benchmark workload:

Version             Delay Time (sec)
CUBRID 9.0 Beta     2238.73
CUBRID 9.1          1.18

Improve DELETE performance when INSERT and DELETE are repeated(CUBRIDSUS-5222)

Previously, DELETE operations took progressively longer as INSERT and DELETE operations were repeated. This has been improved so that the initial DELETE performance is maintained even after many INSERT and DELETE operations.

Improve the performance of accessing a table after truncating it or rebuilding an index(CUBRIDSUS-6501)

Previously, repeatedly truncating a table or rebuilding its indexes degraded the performance of subsequent accesses to the table. This has been fixed.

Improve performance of INSERT ON DUPLICATE KEY UPDATE and REPLACE statement(CUBRIDSUS-8337)

The performance is improved by searching the index directly, rather than executing an internal SELECT statement to find the records that violate the unique constraint. Server-side execution of the INSERT ON DUPLICATE KEY UPDATE statement is also faster, as it no longer requires an internal UPDATE statement.

Improve analytic function performance(CUBRIDSUS-8487)

The analytic function performance has been improved.

  • Improve the performance when the analytic functions in a query share the same window.
  • Improve the performance of analytic functions for many groups.
  • Improve the performance of analytic functions for partly sorted data.
-- Q1: Improved 2 times
SELECT *
FROM (SELECT ROWNUM AS rn,
             AVG (c1) OVER (PARTITION BY p1) a1,
             AVG (c1) OVER (PARTITION BY p1) a2
      FROM t) x
WHERE x.rn > 999999;

-- Q2: Improved 2.45 times
SELECT *
FROM (SELECT ROWNUM AS rn,
             AVG (c1) OVER (PARTITION BY p1 ORDER BY o1) a1,
             AVG (c1) OVER (PARTITION BY p1 ORDER BY o1) a2
      FROM t) x
WHERE x.rn > 999999;

-- Q3: Improved 5.6 times
SELECT *
FROM (SELECT ROWNUM AS rn,
             AVG (c1) OVER (PARTITION BY p1 ORDER BY o1) a1,
             AVG (c1) OVER (PARTITION BY p1 ORDER BY o1) a2,
             AVG (c1) OVER (PARTITION BY p1 ORDER BY o1) a3
      FROM t) x
WHERE x.rn > 999999;

-- Q4: Improved by about 15%
SELECT *
FROM (SELECT ROWNUM AS rn,
             AVG (c1) OVER (PARTITION BY p1) a1
      FROM t) x
WHERE x.rn > 999999;

-- Q5: Improved by about 25%
SELECT *
FROM (SELECT ROWNUM AS rn,
             AVG (c1) OVER (PARTITION BY p1 ORDER BY o1) a1
      FROM t) x
WHERE x.rn > 999999;

Improve multi-key range optimization(CUBRIDSUS-10278)(CUBRIDSUS-6091)

The multi-key range optimization is significantly improved.

  • Multi-key range optimization can now be applied when there are two or more ORDER BY columns, or when a range is given with orderby_num().

    SELECT *
    FROM tbl
    WHERE a IN (1, 2, 3)
    USING INDEX idx
    ORDER BY col1 DESC, col2 DESC
    LIMIT 2;
    
    SELECT *
    FROM tbl
    WHERE a IN (1,3)
    ORDER BY b, c DESC
    FOR orderby_num() BETWEEN 5 AND 10;
    
  • Support multi-key range optimization for some JOIN queries.

    SELECT *
    FROM tbl1 t JOIN tbl2 s
    ON s.b = t.b
    WHERE t.a in (1,3) AND t.b = 1
    ORDER BY t.c DESC, d
    LIMIT 10;
    
  • Query execution plans now show whether multi-key range optimization is applied; this decision is made during the query compilation stage. Note, however, that at execution time the optimization is applied based on the final result size defined by the LIMIT clause or orderby_num() predicate. For example, when the final result size is larger than the limit, the multi-key range optimization plan is not applied.

Improve 'skip order by' optimization(CUBRIDSUS-7418)

The 'skip order by' optimization can now be applied even when a sorting column is coerced, as long as the order of the result remains the same.

CREATE TABLE t (a DATETIME);
CREATE INDEX i ON t (a);

SELECT * FROM t
WHERE a > '0000-00-00 00:00:00'
ORDER BY CAST (a AS DATE);

Improve LIMIT clause optimization for GROUP BY query(CUBRIDSUS-6400)

The scan now stops immediately upon reaching the result count of the LIMIT clause of a GROUP BY query when the 'GROUP BY skip' optimization is applied.

CREATE TABLE t (i INTEGER, j INTEGER);
CREATE INDEX idx ON t (i);

SELECT i, j
FROM t
WHERE i > 0
GROUP BY i
LIMIT 5;

Improve query optimization of the LIMIT clause (CUBRIDSUS-7661)

Query execution now stops as soon as the result count of the LIMIT clause is reached; it no longer continues to look for the (N+1)th record.

SELECT * FROM t1 WHERE a > 0 AND b = 1 LIMIT 3;

Reduce time to fork new CAS processes(CUBRIDSUS-9067)

As the number of application connections grows, new CAS processes must be forked, up to the value of the MAX_NUM_APPL_SERVER broker parameter. Version 9.1 reduces the time needed to fork new CAS processes. For example, when MIN_NUM_APPL_SERVER is 100 and MAX_NUM_APPL_SERVER is 400, forking 300 new CAS processes previously took over 30 seconds; in 9.1 it takes 3 seconds.
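
For reference, the CAS pool bounds from the example above are configured per broker in cubrid_broker.conf (the section name is illustrative):

```
[%BROKER1]
MIN_NUM_APPL_SERVER = 100
MAX_NUM_APPL_SERVER = 400
```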

Resource

--max-writesize-in-sec option of addvoldb utility(CUBRIDSUS-9521)

Add an option, --max-writesize-in-sec, to the 'cubrid addvoldb' utility to limit the rate of disk writes when adding a new volume.

% cubrid addvoldb -C --db-volume-size=2G --max-writesize-in-sec=1M testdb

Improve utilization of temporary volume to sort overflow records(CUBRIDSUS-9772)

A temporary volume is now expanded or added only if the existing temporary volumes are fully used while sorting overflow records.

Fix memory leak of 'cubrid shard status' command(CUBRIDSUS-10393)

Fix memory leak of "cubrid shard status -c -s 1" command which periodically prints the shard status information.

Reduce the possibility of external sorting(CUBRIDSUS-1339)

Reduce the possibility of external sorting by more precisely estimating the memory space required to sort query results.

Several volumes created at once when volume was automatically increased(CUBRIDSUS-10295)

Fixed so that several volumes are no longer added at once when multiple clients simultaneously run short of usable space. A volume is now extended only when no usable volume exists.

Stability

DB server process hang due to internal issues of lock manager(CUBRIDSUS-10329)(CUBRIDSUS-10009)

Fixed an issue in which the lock manager could cause the server process to hang while acquiring a lock, as well as another internal issue caused by an incorrect lock timeout.

Purge unneeded archive logs(CUBRIDSUS-9848)

Unneeded archive logs are now purged reliably. Previously, they could be left behind when a checkpoint was executed while many data flush operations were in progress.

Error that might occur when a checkpoint occurs while allocating a new page in standalone mode(CUBRIDSUS-10444)

Fixed the "Skip invalid page in checkpoint" error that could occur when a checkpoint took place while a new page was being allocated in standalone mode.

CAS hang in CLIENT WAIT state(CUBRIDSUS-10238)

Fixed an issue in which, when a CAS was terminated by a signal or terminated abnormally at a particular point in time, the CAS or broker process would wait indefinitely.

Abnormal CAS termination when network socket error occurred in CAS while receiving DB server error(CUBRIDSUS-10401)

Fixed an abnormal termination of the CAS process when a network socket error occurred while it was receiving a DB server error. In previous versions, in a SHARD environment, a SHARD CAS terminated abnormally when "cubrid shard start" was executed while the MAX_NUM_APPL_SERVER value of cubrid_shard.conf (which sets the number of shard CAS processes) was larger than the max_clients value of cubrid.conf (which sets the maximum number of connections for the server).

SQL Function and Operator

Error when an expression is given for the PARTITION BY clause of analytic function(CUBRIDSUS-9579)

Fix an error that would occur when an expression was given in the PARTITION BY clause of an analytic function.

SELECT v.a, ROW_NUMBER() over(PARTITION BY 1 + 0) r
FROM (VALUES (1), (2), (3)) v (a);

In 9.0 beta version, the following error would occur:

Semantic: System error (generate order_by) in ..\..\src\parser\xasl_generation.c (line: 5466)
select [v].[a], row_number() over (partition by 1+0) from (values (1),(2),(3)) [v] ([a]);

The following rules show how CUBRID handles expressions in the ORDER BY and PARTITION BY clauses of the OVER clause:

  • ORDER BY constant (e.g., 1): the constant specifies the position of a column in the SELECT list.
  • ORDER BY constant expression (e.g., 1+0): the constant expression is ignored and not used for ordering/partitioning.
  • ORDER BY non-constant expression (e.g., i, sin(i+1)): the expression is used for ordering/partitioning.

str_to_date function always returned the millisecond as 0(CUBRIDSUS-9553)

Fixed a problem in which the STR_TO_DATE function always returned 0 for the millisecond part.

SELECT STR_TO_DATE ('2012-10-31 23:49:29.123', '%Y-%m-%d %H:%i:%s.%f');

Error while executing BLOB_FROM_FILE and CLOB_FROM_FILE function(CUBRIDSUS-7596)

Fixed an error, "Semantic: Cannot coerce blob to type unknown data type.", that would occur when executing the BLOB_FROM_FILE or CLOB_FROM_FILE function.
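
A minimal statement of the kind that previously failed is sketched below; the table name and file path are illustrative:

```sql
CREATE TABLE doc (content BLOB);
-- In previous versions, this raised "Cannot coerce blob to type unknown data type."
INSERT INTO doc VALUES (BLOB_FROM_FILE ('/tmp/sample.bin'));
```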

Error when overflow occurs as adding the TIME type value to the BIGINT type(CUBRIDSUS-3735)

An incorrect result was returned when an overflow occurred while adding a TIME type value to a BIGINT type value. An error is now returned instead of the incorrect result.

SELECT CAST (9223372036854775807 as bigint) + TIME'11:59:59 pm';

Error when string convertible to double type was entered to the host variable argument to SUM function and AVG function(CUBRIDSUS-8789)

An "ERROR: Invalid data type referenced" error would occur when a string convertible to the double type was entered as the host variable argument of the SUM or AVG function. This error has been fixed.

CREATE TABLE tbl (a INTEGER);
INSERT INTO tbl VALUES (1),(2);

PREPARE STMT FROM 'SELECT AVG (?) FROM tbl';
EXECUTE STMT USING '1.1';

Error when using the analytic function for a query statement including the GROUP BY clause(CUBRIDSUS-7270)

Analytic functions can now be used in a query statement that includes a GROUP BY clause.

SELECT a, ROW_NUMBER() OVER (ORDER BY a) FROM tbl GROUP BY a;
-- In previous versions, the following error would occur:
ERROR:  before '  from tbl group by a; '
Nested or invalid use of aggregate function.

Error when a subquery has been specified as an input argument of the SQL function in the query statement without the FROM clause(CUBRIDSUS-9949)

When a subquery was specified as the input argument of an SQL function in a query statement without a FROM clause, the "ERROR: syntax is ambiguous" error would occur. This error has been fixed.

SELECT INET_NTOA ((SELECT 3232235530));

Fix to return NULL when NULL has been given to the SQL function or the operator(CUBRIDSUS-10324)

Fix to return NULL when NULL is given as an operand or argument, except for special operators (e.g., IS NULL) and SQL functions (e.g., NVL).

SELECT POW ('a', NULL);
-- In previous versions, the following error would occur when the above query was executed. Since version 9.1, NULL has been returned:

ERROR: before ' , null); '
Cannot coerce 'a' to type double.

Fix not to create a function index for the SPACE function(CUBRIDSUS-10419)

The SPACE function returns a series of spaces, which is meaningless for an index scan. This has been fixed so that no function-based index is created on the SPACE function.

CREATE INDEX i_tbl_col ON tbl (SPACE (col1));
-- After the fix, the following message is output when the above query is executed:
'space ' function cannot be used for function based index.

Fix not to allow function-based index when the argument of the TO_CHAR function is string type(CUBRIDSUS-8977)

When the first argument of the TO_CHAR function is a string type, the given argument is simply returned as the function result. In this case, no function-based index is created.
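
For example (table and column names are illustrative), an index such as the following is no longer created, since TO_CHAR applied to a string argument adds nothing over a plain index on the column:

```sql
CREATE TABLE tbl2 (col1 VARCHAR (10));
CREATE INDEX i_tbl2_col1 ON tbl2 (TO_CHAR (col1));
```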

Abnormal server process termination when ENUM type column and string were given to the IN operator as operands(CUBRIDSUS-10586)

When an ENUM type column and a string (not a set) were given as the operands of the IN operator, the server process terminated abnormally without returning an error. The right-hand operand of the IN operator must be a set or a subquery; a single string should be enclosed in parentheses to form a set.

CREATE TABLE t1 (fruit ENUM ('apple', 'orange', 'peach', 'banana', 'strawberry'));
INSERT INTO t1 VALUES ('orange');

-- In 9.0 Beta version, the server process is abnormally terminated when the following query is executed:
SELECT * FROM t1 WHERE fruit IN 'apple';

-- The normal query is as follows:
SELECT * FROM t1 WHERE fruit IN ('apple');

SQL

Incorrect query result when the left outer join query included the WHERE clause condition(CUBRIDSUS-8867)

Fix a problem that would cause an incorrect query result to be output when a WHERE clause was included in a query in which the left outer join was nested three or more times, as shown in the following example:

SELECT *
FROM tblA LEFT OUTER JOIN tblB ON tblA.pkey = tblB.pkey
          LEFT OUTER JOIN tblC ON tblB.p2key = tblC.p2key
          LEFT OUTER JOIN tblD ON tblC.p3key = tblD.p3key
WHERE tblD.p3key = 1;

Failure to execute the LOB type operation after changing the database name(CUBRIDSUS-8905)

Fix a problem in which LOB type operations, such as the CHAR_TO_BLOB function, failed to execute because the directory information for the BLOB/CLOB type was not updated when the database name was changed.

% cubrid createdb --db-volume-size=20m testdb
% cubrid renamedb testdb testdb2
% cubrid server start testdb2

% csql -u dba testdb2

csql> CREATE TABLE tbl(b BLOB);
csql> INSERT INTO tbl VALUES(CHAR_TO_BLOB('1'));

ERROR: before ' )); '
External storage is not initialized because the path is not specified in "databases.txt".

Session operations, such as LAST_INSERT_ID function, could continuously fail under network failure or HA failover(CUBRIDSUS-7549)(CUBRIDSUS-7669)

Fixed a problem in which, when one of the CASs that came to share a session ID due to a network failure or HA failover (CASs are not supposed to share session IDs in normal operation) was terminated earlier than the others, session operations continuously failed in the applications connected to the remaining CASs. Session operations include the LAST_INSERT_ID function, PREPARE statements, user session variables defined through SET, and the ROW_COUNT() function.

Query result would be wrong when there is a DESC column in the multi-column index with OR conditions(CUBRIDSUS-9314)

Previously, a query result could be wrong when there was a DESC column in a multi-column index and an OR condition was given for part of the keys. This problem has been fixed.

CREATE TABLE foo(col1 INTEGER, col2 INTEGER, col3 INTEGER);
CREATE INDEX idx_foo ON foo(col1, col2 DESC, col3);
INSERT INTO foo VALUES(1,10,100);
INSERT INTO foo VALUES (1,11,100);
PREPARE s FROM 'SELECT col1,col2 FROM foo WHERE col1=? AND ((col2=? AND col3<?) OR col2>?);';
EXECUTE s USING 1, 10, 100, 10;

Auto commit was not executed when multiple queries have been executed at once in the auto commit mode(CUBRIDSUS-7606)

Fix a problem in which auto commit was not performed when multiple queries were executed at once in auto-commit mode, for example, "CREATE TABLE a(col int);INSERT INTO a VALUES (1);".

Server was abnormally terminated when executing SELECT for the view that included the ORDER BY clause(CUBRIDSUS-9331)

Fix a phenomenon in which the server was abnormally terminated when executing SELECT on a view that included the ORDER BY clause, except when an asterisk (*) was used for the SELECT list.

CREATE VIEW au AS
SELECT
    tbla.a_id AS a_id,
    tbla.u_id AS u_id,
    tbla.a_date AS a_date,
    tblu.u_name AS u_name
FROM
    tbla LEFT JOIN tblu ON tbla.u_id = tblu.u_id
ORDER BY tbla.a_date ASC;

SELECT u_name FROM au;

Error that created a view with the same name as the table(CUBRIDSUS-3091)

Fix an error that allowed the creation of a view with the same name as the table.

CREATE TABLE t1 (a INTEGER, b INTEGER);
CREATE VIEW t1 AS SELECT * FROM t1;
ERROR: Class t1 already exists.

View could not be created when the ORDER BY clause referred to a column absent from the SELECT list(CUBRIDSUS-9345)

Fix a problem that would cause a view to not be created when the ORDER BY clause referred to a column that was not present in the SELECT list.

CREATE TABLE foo (i INTEGER, j INTEGER);
CREATE VIEW v AS SELECT i FROM foo ORDER BY j;

Incorrect result or an abnormal server termination when executing the DISTINCT query for a view including the ORDER BY clause(CUBRIDSUS-9880)

Fix a problem that would cause the output of an incorrect result or an abnormal server termination when executing the DISTINCT query for a view including the ORDER BY clause.

CREATE TABLE t (s CHAR(10), i INTEGER);
INSERT INTO t VALUES ('xxxx', 1);
INSERT INTO t VALUES ('yyyy', 2);

CREATE VIEW v AS SELECT s s_v, i i_v FROM t ORDER BY s;

SELECT DISTINCT t1.i_v FROM v t1, v t2;

Problem using the unchanged table to execute a query for a view after changing the name of the table used for creating the view(CUBRIDSUS-8536)

Previously, after the table used to create a view was renamed, queries on the view were still executed against the original (renamed) table. This problem has been fixed.

CREATE TABLE foo (a INTEGER PRIMARY KEY, b VARCHAR (20));
INSERT INTO foo VALUES (1, 'foo');

CREATE TABLE bar (a INTEGER PRIMARY KEY, b VARCHAR (20));
INSERT INTO bar VALUES (1, 'bar');
CREATE VIEW v1 (a INTEGER, b VARCHAR (20)) AS SELECT * FROM foo;

-- Change the name from foo to foo_old and from bar to foo.
RENAME foo AS foo_old;
RENAME bar AS foo;

-- In the previous versions, 'bar' was output as a result of Q1 and 'foo' as a result of Q2. In version 9.1, 'bar' is output for both.
SELECT b FROM foo; -- Q1
SELECT b FROM v1;  -- Q2

View which referred to the view itself(CUBRIDSUS-3090)

Fix to prevent the creation of a view that refers to itself. In previous versions, such a view could be created, and an error was returned when a query on the view was executed.

CREATE VIEW v2 AS SELECT * FROM t1;
-- In the current version, execution of the following query is not allowed:
CREATE OR REPLACE VIEW v2 AS SELECT * FROM v2;

Abnormal application termination when executing INSERT for a view that included the LIMIT clause(CUBRIDSUS-9940)

Fix an abnormal application termination that occurred when executing INSERT for a view that included the LIMIT clause, as in the following example:

CREATE TABLE t (s VARCHAR);
CREATE VIEW tv AS SELECT s FROM t ORDER BY s LIMIT 2;

INSERT INTO tv VALUES ('a');

A violation of conditions not to be detected when executing the MERGE statement for a view including the WITH CHECK OPTION clause(CUBRIDSUS-10219)

In the 9.0 Beta version, a MERGE statement could be executed without detecting a violation of the WITH CHECK OPTION condition, as shown below. This has been fixed:

CREATE TABLE t1 (a INTEGER, b INTEGER);
INSERT INTO t1 VALUES (1, 500);

CREATE TABLE t2 (a INTEGER, b INTEGER);
INSERT INTO t2 VALUES (1, 400);
INSERT INTO t2 VALUES (2, 200);

CREATE VIEW v AS SELECT * FROM t1 WHERE b < 300 WITH CHECK OPTION;

MERGE INTO v USING t2 ON (t2.a = v.a)
WHEN NOT MATCHED THEN INSERT VALUES (t2.a, t2.b);

SELECT query for a view created using the VALUES clause caused an error(CUBRIDSUS-9982)

CREATE VIEW vw as VALUES (1 AS col1, 'first' AS col2);
SELECT * FROM vw;
-- In 9.0 Beta version, the following error would occur:
ERROR: There are more attributes in class vw than columns in the query specification.

Failure to execute the MERGE statement for a view including the WITH CHECK OPTION clause(CUBRIDSUS-9174)

Fix an error that would occur when executing the MERGE statement for the view that included the WITH CHECK OPTION clause.

CREATE TABLE t1 (a INTEGER, b INTEGER);
INSERT INTO t1 VALUES (1, 100);
INSERT INTO t1 VALUES (2, 200);

CREATE TABLE t2 (a INTEGER, b INTEGER);
INSERT INTO t2 VALUES (1, 99);
INSERT INTO t2 VALUES (2, 999);

CREATE VIEW v as SELECT * FROM t1 WHERE b < 150 WITH CHECK OPTION;

MERGE INTO v USING t2 ON (t2.a = v.a)
WHEN MATCHED THEN UPDATE SET v.b = t2.b;
-- The above query should be executed successfully, but the following error message was output in the previous versions:
ERROR: Check option exception on view v.

Incorrect MERGE statement that might cause frequent modifications of record in the target table of the MERGE statement(CUBRIDSUS-7489)

When a record in the target table of a MERGE statement matched more than one source record, the record was updated several times without an error being returned. This problem has been fixed: a MERGE statement must update each target record at most once during execution of the whole statement, and an error is now returned otherwise.

CREATE TABLE t1 (a INTEGER, b INTEGER);
INSERT INTO t1 VALUES (1, 100);

CREATE TABLE t2 (a INTEGER, b INTEGER);
INSERT INTO t2 VALUES (1, 200);
INSERT INTO t2 VALUES (1, 300);

-- After the fix, an error is returned when the following query is executed:
MERGE INTO t1 USING t2 ON (t1.a = t2.a)
WHEN MATCHED THEN UPDATE SET t1.b = t2.b;

Malfunction of the MERGE statement(CUBRIDSUS-9158)

Fix a malfunction that occurred when a subquery referring to the target table of a MERGE statement was used as the source table.

MERGE INTO t1 USING (SELECT * FROM t1 WHERE b < 3) t2 ON (t1.a = t2.a)
WHEN MATCHED THEN UPDATE SET t1.b = 1000 DELETE WHERE t1.a > 1;

In addition, fix a phenomenon in which data in a partitioned table was not deleted by the DELETE clause of a MERGE statement after the table had been updated.

MERGE INTO t2 USING t1 ON (t1.id1 = t2.id2)
WHEN MATCHED THEN UPDATE SET t2.col1 = 'updated', t2.col2 = t1.col1
DELETE WHERE t2.col1 = 'updated';

MERGE statement not allowed when the source table is the same as the target table(CUBRIDSUS-10207)

The MERGE statement is no longer allowed when the source table is the same as the target table; the "Cannot affect the source table in a MERGE statement." error is returned for a statement such as the following:

MERGE INTO tbl t USING tbl s ON (t.a = s.a)
WHEN MATCHED THEN UPDATE SET t.b = 'updated';

Number of rows affected by UPDATE JOIN would be incorrectly reported(CUBRIDSUS-7185)

Fix a phenomenon in which one row could be updated several times by an UPDATE JOIN query, causing an incorrect number of affected rows to be reported.

CREATE TABLE t1 (a INTEGER);
INSERT INTO t1 VALUES (1), (1), (1), (1);

CREATE TABLE t2 (b INTEGER);
INSERT INTO t2 VALUES (1), (1), (1), (1);

UPDATE t1 m1, t2 m2 SET m1.a = 100, m2.b = 100 WHERE m1.a = m2.b;
-- After the modification, 8 rows are normally output. In 9.0 Beta version, 32 rows were output.
8 rows affected.

Fix not to allow the ROWNUM and INST_NUM conditions in the ON clause of an OUTER join or an explicit INNER join(CUBRIDSUS-10366)

The ROWNUM and INST_NUM() conditions can no longer be defined in the ON clause of an OUTER join or an explicit INNER join. They are still allowed in the WHERE clause of the query.

DELETE t1, t2 FROM t1 LEFT OUTER JOIN t2 ON t1.b = t2.b AND ROWNUM < 100;
-- In 9.0 Beta version, the following error would occur:
ERROR: System error (generate inst_num or orderby_num) in ../../src/parser/xasl_generation.c (line: 6889)

-- In 9.1 version onward, the following error is returned:
ERROR: before ' ; '
INST_NUM()/ROWNUM expression not allowed in join condition.

"ON DELETE CASCADE" did not work for a foreign key referring to the primary key a child table inherited(CUBRIDSUS-3493)

A foreign key referring to the primary key that a child table inherited from its supertable had the "ON DELETE CASCADE" attribute; however, records of the referring table were not deleted even when DELETE was executed on the child table. This error has been fixed.

CREATE TABLE pk_super10 (id INTEGER PRIMARY KEY);
CREATE TABLE pk20 UNDER pk_super10 (A INTEGER);
CREATE TABLE fk20 (id INTEGER);
ALTER TABLE fk20 ADD CONSTRAINT FOREIGN KEY (id) REFERENCES pk20 (id) ON DELETE CASCADE;

INSERT INTO pk20 VALUES (1,1), (2,2), (3,3);
INSERT INTO fk20 VALUES (1), (1), (2);

DELETE FROM pk20 WHERE a = 1;
SELECT COUNT(*) FROM fk20;  -- In previous versions, 3 was returned because DELETE CASCADE did not work. In the fixed version, DELETE CASCADE works successfully and 1 is returned.

Incorrect result when column of ORDER BY clause did not exist in the SELECT list of subquery(CUBRIDSUS-8931)

Fix a problem that would cause an incorrect result, or an error in the 9.0 Beta version only, when the column of the ORDER BY clause did not exist in the SELECT list of a subquery.

SELECT a FROM foo WHERE a IN (SELECT a FROM foo WHERE b = 'AAA' ORDER BY b, c);

Failure to execute INSERT by using the Host variable for the NUMERIC type column(CUBRIDSUS-9500)

When INSERT was executed using a host variable for a NUMERIC type column, the input value was coerced to the default NUMERIC precision (15) and scale (0) instead of the column's declared precision and scale. This problem has been fixed.

CREATE TABLE tb2 (a NUMERIC (4,4));

PREPARE STMT FROM 'INSERT INTO tb2 VALUES (?)';
EXECUTE STMT USING 0.1;

SELECT a FROM tb2;
    a
==========
    0.0
EXECUTE STMT USING 0.5;
ERROR: A domain conflict exists on attribute "a".

Incorrect query result when the maximum value condition and NOCYCLE of the LEVEL Pseudo column were specified to the CONNECT BY clause of the hierarchy query statement(CUBRIDSUS-9581)

Fix a problem that would cause an incorrect query result when a maximum value condition on the LEVEL pseudo-column and NOCYCLE were specified in the CONNECT BY clause of a hierarchical query.

SELECT LEVEL FROM db_root CONNECT BY NOCYCLE LEVEL <= 5;
    level
=============
    1
    2
    3
    4
    5

Problem of entering 0 when the current Date/Time was entered in the trigger action syntax(CUBRIDSUS-9596)

Fix a problem in which 0 (a zero datetime) was inserted when the current date/time was specified using SYSDATE, SYSTIME, SYSTIMESTAMP, or SYSDATETIME in a trigger action.

CREATE TABLE testtbl (field1 INTEGER);
CREATE TABLE resulttbl (ts TIMESTAMP);

CREATE TRIGGER batchtestresult AFTER INSERT ON testtbl
EXECUTE AFTER INSERT INTO resulttbl VALUES (SYSTIMESTAMP);

INSERT INTO testtbl VALUES(1);

SELECT * FROM resulttbl;
  ts
===============================
  12:00:00 AM 00/00/0000

Failure to RENAME the table that included the AUTO_INCREMENT column(CUBRIDSUS-9691)

Fix an error that occurred when changing the name of a table that included an AUTO_INCREMENT column.
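
A statement sequence of the kind that previously failed is sketched below (names are illustrative):

```sql
CREATE TABLE t (id INTEGER AUTO_INCREMENT PRIMARY KEY, v VARCHAR (20));
RENAME t AS t_renamed;  -- failed in previous versions because of the AUTO_INCREMENT column
```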

Error when the INSERT ON DUPLICATE KEY UPDATE statement referred to the column of SELECT statement(CUBRIDSUS-8337)

Fixed an error that occurred when the INSERT ON DUPLICATE KEY UPDATE statement referred to a column of the SELECT statement, as shown in the following example:

INSERT INTO t1 (field_1, field_2, field_3)
SELECT t2.field_a, t2.field_b, t2.field_c FROM t2 ON DUPLICATE KEY UPDATE t1.field_3 = t2.field_c;
ERROR: t2.field_c is not defined.

Syntax Error when a Query Statement followed the ORDER BY Clause(CUBRIDSUS-6920)

Fix a problem that would cause a syntax error when the ORDER BY clause was followed by another statement, e.g., "SELECT ~ ORDER BY ~ UNION SELECT ~", because the ORDER BY clause was treated as if it were at the end of the query statement.

SELECT * FROM tbl1 ORDER BY a UNION SELECT * FROM tbl2 ORDER BY b;

In previous versions, each SELECT statement had to be enclosed in parentheses, as shown below:

(SELECT * FROM tbl1 ORDER BY a) UNION (SELECT * FROM tbl2 ORDER BY b);

Sorting does not work when index full scan for ENUM type column with NOT NULL constraint is performed(CUBRIDSUS-10841)

Fix a problem in which sorting did not work when an index full scan was performed on an ENUM type column with a NOT NULL constraint.

CREATE TABLE t1 (e ENUM('a', 'b', 'c', 'd', 'e') NOT NULL);
CREATE INDEX idx_t1 ON t1 (e);
INSERT INTO t1 VALUES (3), (1), (2), (5), (4);

SELECT * FROM t1 ORDER BY 1;

Incorrect Query Result after Deadlock State Caused by the click counter Query(CUBRIDSUS-5009)

When two or more applications simultaneously executed INCR/DECR functions, a deadlock would occur and then an incorrect query result would be returned. This problem has been fixed.

-- Both sessions T1 and T2 run with autocommit off.
T1: CREATE TABLE t1(a INT PRIMARY KEY);
T1: INSERT INTO t1 VALUES (1),(4),(7);
T1: COMMIT;
T1: INSERT INTO t1 VALUES (3);
T2: DELETE FROM t1 WHERE a=4;
T1: SELECT INCR(a) FROM t1 WHERE a=3; -- BLOCKED
T2: SELECT INCR(a) FROM t1 WHERE a=1; -- BLOCKED; a deadlock is detected
T1: 0 rows selected.
T1: SELECT INCR(a) FROM t1 WHERE a=3;
T1: 0 rows selected. -- the same as above

-96 Error when an INSERT statement was re-executed after a table was created under Auto Commit OFF and then a unique key violation occurs(CUBRIDSUS-10239)

The -96 error occurred when an INSERT statement was re-executed after a table had been created with auto commit off and a unique key violation had occurred during a previous execution of the INSERT statement. This error has been fixed.

% csql testdb --no-auto-commit
CREATE TABLE tbl(col1 INTEGER UNIQUE);
INSERT INTO tbl SELECT 500 + ROWNUM FROM db_class a, db_class b;
INSERT INTO tbl SELECT ROWNUM FROM db_class a, db_class b;
ERROR: Operation would have caused one or more unique constraint violations. INDEX u_t_i(B+tree: 0|139|540) ON CLASS t(CLASS_OID: 0|486|2). key: 501(OID: 0|551|358).
INSERT INTO tbl SELECT 500 + ROWNUM FROM db_class a, db_class b;
ERROR: Media recovery may be needed on volume "/home1/cubrid1/CUBRID/databases/testdb/testdb".

Error of unique key violation caused by a key lock error while executing the range query of DELETE and UPDATE(CUBRIDSUS-9382)

When several applications concurrently executed ranged DELETE and UPDATE queries, a unique key violation error could occur due to a key lock error. This error has been fixed.

Failure to execute the INSERT statement due to the error of splitting the index node, as a result of many duplicate keys(CUBRIDSUS-9829)

When the list of duplicate record addresses was large due to many duplicate keys, the INSERT statement failed because an incorrect node was selected when splitting an index leaf node. This problem has been fixed.

Failure to create an index when the key size of the single column index was larger than 1/4 of the database page size(CUBRIDSUS-10570)

Fix a failure to create an index when the key size of a single-column index was larger than 1/4 of the database page size.

CREATE TABLE tbl (col1 VARCHAR (10), col2 CHAR (4096));
INSERT INTO tbl VALUES ('1007', '100001');
INSERT INTO tbl VALUES ('1009', '100001');

-- In the previous versions, it was impossible to create the index as shown below:
CREATE INDEX tbl_idx2 ON tbl (col2);
ERROR: Schema manager internal corruption detected.

Allow logical expressions to be used without parentheses(CUBRIDSUS-7392)

Logical expressions can now be used without parentheses.

-- In previous versions, the logical expressions in the following queries had to be parenthesized; the parentheses are no longer required:
SELECT * FROM t1 ORDER BY (code > 10);
SELECT SUM((code>10)) FROM t1;

Problem that would cause an incorrect query result to be returned when false conditions have been given as an operand of the OR operator(CUBRIDSUS-10475)

CREATE TABLE tab0 (col1 INTEGER);
INSERT INTO tab0 VALUES (514);
INSERT INTO tab0 VALUES (698);

-- The following query should return no rows; however, two rows were returned in the previous versions:
SELECT * FROM tab0 WHERE (col1 BETWEEN 9 AND 2) OR (col1 BETWEEN 5 AND 4);

Problem causing NULL to be returned when retrieving a table created using a CREATE statement that included a statement set operator(CUBRIDSUS-10105)

Fix a problem that would cause NULL to be returned when retrieving a table created using a CREATE statement that included a statement set operator (UNION, DIFFERENCE, INTERSECT).

CREATE TABLE t1 AS SELECT '1' a UNION SELECT '2' a;

-- In previous versions, the following incorrect result was output:
SELECT * FROM t1;
  a
======================
  NULL
  NULL

Failure to convert a string indicating a hexadecimal number starting with "0x" to a floating point value on Windows(CUBRIDSUS-10384)

SELECT CAST ('0x1111' AS float);
VALUES (1), ('123'), ('0x75');
-- In the previous versions, the following error message would be output when the above queries were executed:
ERROR: Cannot coerce value of domain "character" to domain "float".

Error occurring when LOB type data was read in the 32-bit version(CUBRIDSUS-10437)

UPDATE tbl SET b = CHAR_TO_CLOB ('2test') WHERE a = 1;

Problem executing query statements other than the SELECT statement even when only the query plan would be read(CUBRIDSUS-9771)(CUBRIDSUS-10352)

In the 9.0 Beta version, statements other than SELECT, such as INSERT, UPDATE, DELETE, REPLACE, TRIGGER, and SERIAL statements, were executed even when the optimization level was set to read only the query plan; in some cases these SQL statements were abnormally terminated. This problem has been fixed. Note that the query plan is not output for query statements other than SELECT.

SET OPTIMIZATION LEVEL 514;
REPLACE INTO tbl (col1, col2, col3) VALUES (1, 2, 3);

Globalization

Fix to apply the COLLATE modifier to the expression(CUBRIDSUS-9401)

The COLLATE modifier can now be applied to an expression. The following two queries return the same result.

SELECT * FROM t WHERE (col > 'a') COLLATE utf8_en_ci;
SELECT * FROM t WHERE col COLLATE utf8_en_ci > 'a' COLLATE utf8_en_ci;

Support for the COLLATE Modifier for ENUM Type(CUBRIDSUS-8700)(CUBRIDSUS-9943)

The COLLATE modifier can be used for the ENUM type.

SET NAMES utf8 COLLATE utf8_en_ci;

-- After the update, the following sentences can be executed normally:
CREATE TABLE tbl (a ENUM ('A','B') COLLATE utf8_en_ci);
INSERT INTO tbl VALUES ('a');

An ENUM column stores the index value corresponding to the ENUM domain and is basically recognized as a number type. If the COLLATE modifier is used for an ENUM type column, the column is recognized as a VARCHAR type. Note that an ENUM type column cannot be converted to another charset using the ALTER TABLE MODIFY statement.
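
The type behavior described above can be sketched as follows (hypothetical tables; e1.a is compared by its ENUM index, i.e. definition order, while e2.a is compared as a string by its collation):

```sql
CREATE TABLE e1 (a ENUM ('B','A'));                      -- compared by index (number)
CREATE TABLE e2 (a ENUM ('B','A') COLLATE utf8_en_ci);   -- compared as VARCHAR

-- e1.a sorts by the ENUM index (definition order), e2.a by string collation:
SELECT a FROM e1 ORDER BY a;
SELECT a FROM e2 ORDER BY a;
```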

FIND_IN_SET, POSITION, REPLACE, INSTR, LOCATE, SUBSTRING_INDEX functions to regard the collation(CUBRIDSUS-6319)(CUBRIDSUS-8393)

Fix the FIND_IN_SET, POSITION, REPLACE, INSTR, LOCATE, and SUBSTRING_INDEX functions to consider the collation.

SELECT FIND_IN_SET ('b', 'a,B,c' COLLATE 'iso88591_en_ci') r;
            r
=============
            2
SELECT POSITION ('a' IN 'A') r0, POSITION ('a' IN 'A' COLLATE 'utf8_en_ci') r1;
       r0           r1
==========================
        0            1

LIKE Search not to be Case-sensitive for non-case-sensitive Collation(CUBRIDSUS-8391)

Fix the problem that caused the LIKE search to be case-sensitive for non-case-sensitive collation.

CREATE TABLE t (v STRING COLLATE utf8_en_ci);
INSERT INTO t VALUES ('I'), ('i');

-- In the previous versions, 'I' was excluded from the query result, but both of 'I' and 'i' are output in version 9.1.
SELECT * FROM t WHERE v LIKE '%i%';

Change index creation for the column with collation(CUBRIDSUS-7737)

Fix index creation to consider the collation of columns that have one. In the 9.0 Beta version, indexes created on columns with a collation other than iso88591_bin, utf8_bin, euckr_bin, iso88591_en_cs, utf8_en_cs, and utf8_ko_cs must be recreated.

CREATE TABLE tbl (id INTEGER, s STRING COLLATE utf8_en_ci);
CREATE INDEX ix1 ON tbl (s DESC);

In addition, index creation for a column with a collation with expansions has been improved to store only the minimum prefix by using a key separator; in the 9.0 Beta version, the entire string was stored. For your information, a prefix index cannot be created for a column with a collation with expansions.

CREATE TABLE t1 (s1 VARCHAR(200) COLLATE utf8_ja_exp);
CREATE INDEX i ON t1 (s1(5));
ERROR: before ' ; '
Prefix index is not allowed on attribute 's1' (has collation with expansions).

Problem that caused Unicode normalization not to be executed(CUBRIDSUS-8685)

Fix the problem that would cause Unicode normalization to not be executed even when normalization was set. The normalization-related tag of the LDML file, used in the 9.0 Beta version, is no longer used; the following two system parameters are used instead:

  • unicode_input_normalization: Specifies whether to execute Unicode composition when saving data.
  • unicode_output_normalization: Specifies whether to execute Unicode decomposition when reading data.
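
The two parameters are set in cubrid.conf; a minimal sketch (the values shown are illustrative, not defaults):

```
# $CUBRID/conf/cubrid.conf
unicode_input_normalization=yes
unicode_output_normalization=no
```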

Fix not to set the foreign key constraints between the combinations of columns with non-compatible collation(CUBRIDSUS-8742)

Fix to not set foreign key constraints between combinations of columns with non-compatible collation.

CREATE TABLE dim (s STRING COLLATE utf8_en_ci PRIMARY KEY);
CREATE TABLE fact (s STRING COLLATE utf8_en_cs PRIMARY KEY);

-- After the update, an error is output when the following query is executed:
ALTER TABLE fact ADD CONSTRAINT FOREIGN KEY(s) REFERENCES dim(s);

Abnormal termination when the string is converted to a collation of a charset that did not exist(CUBRIDSUS-10158)

When an attempt was made to convert a string to the collation of an unusable charset, the csql process or the cub_cas process would terminate abnormally. This problem has been fixed. In the following example, the test database allows only the French charset:

% echo fr_FR > $CUBRID/conf/cubrid_locales.txt
% make_locale.sh -t64
% cubrid createdb test
% csql -u dba test -S

An attempt was then made to convert a string to a German collation, which was not allowed in this configuration; in this case, the 9.0 Beta version terminated abnormally.

SET NAMES utf8;

CREATE TABLE t (i INTEGER, s STRING COLLATE utf8_fr_exp_ab);
INSERT INTO t VALUES (1,'cote '), (2,'coté'), (3,'côte '), (4,'côté');

SELECT * FROM t ORDER BY CAST (s AS STRING COLLATE utf8_de_exp);

Problem of CHR function to use the default system charset only(CUBRIDSUS-8934)

Expand the CHR function to use the charset specified by the client as well as the default system charset.

SET NAMES utf8;
SELECT CHR (14909886) c;
   c
======================
  'ま'
SELECT CHR (15041963 USING utf8) c;
   c
======================
  '八'

Expand the CHR function to use the COLLATE modifier(CUBRIDSUS-9939)

Expand the CHR function to use the COLLATE modifier.

SELECT * FROM tbl WHERE CHR (65) COLLATE utf8_bin > 'a';

Expand the argument of CLOB_TO_CHAR function to specify the charset of the string to be converted(CUBRIDSUS-9467)

Expand the CLOB_TO_CHAR function to specify the string charset to be converted.

SELECT CLOB_TO_CHAR (contents USING utf8) FROM documents;

Modify to verify the error of setting the attribute value of LDML file when creating the locale library(CUBRIDSUS-10000)

The make_locale tool (.sh on Linux, .bat on Windows) now verifies the attribute values of the LDML (Locale Data Markup Language) file given as input for creating the locale library, and outputs an appropriate message for each detected error. Examples of incorrect attribute value settings are:

  • When the same collation type is set in duplicate
  • When a collation ID is out of range (0-255)

In addition, attribute values are no longer case-sensitive.

COLLATE modifier not being applied to the SQL function result normally(CUBRIDSUS-10043)

Fix a problem that would cause an incorrect result of the SQL function to be output because the COLLATE modifier was not applied normally.

CREATE TABLE tbl (col1 CHAR(10) COLLATE utf8_gen_ci, col2 CHAR(10) COLLATE utf8_gen_ai_ci);
INSERT INTO tbl VALUES ('bbb', '%Bb_%');

-- In the previous versions, one row was output when executing the following query. After the update, the COLLATE modifier is applied normally and no rows are output.
SELECT col1, TRIM (REPLACE (col1,'b','B')), TRIM (col2)
FROM tbl
WHERE TRIM (REPLACE (col1,'b','B')) COLLATE utf8_gen LIKE TRIM (col2) COLLATE utf8_gen;

Incorrect LIKE query result for the fixed-length string in the EUC-KR charset DB(CUBRIDSUS-9199)

In the EUC-KR charset DB, when a LIKE query is processed for a fixed-length string, both the space character (A1 A1) of the EUC-KR charset and the ASCII space are now regarded as trailing padding.

CREATE TABLE t (col CHAR(10));
INSERT INTO t VALUES ('ab');

-- In the previous versions, LIKE search was unavailable.
SELECT * FROM t WHERE col LIKE '_b';

Incorrect sorting of some characters in the euckr_bin collation(CUBRIDSUS-10493)

Fix a phenomenon that would cause incorrect sorting of some characters in the euckr_bin collation. The space character (A1 A1) of the EUC-KR charset is now considered the smallest value, like the ASCII space.

No type conversion because of the space characters in the CHAR type of EUC charset(CUBRIDSUS-10555)

Modify to process the EUC space characters (A1 A1) included in the fixed-length string in the CAST, TO_DATE, TO_TIME, TO_TIMESTAMP, TO_DATETIME, STR_TO_DATE, and TO_NUMBER functions.

Error when executing queries for view, function index, filtered index, and trigger if the charset of client was different from the System Charset(CUBRIDSUS-10561)

Fix a problem that would cause an error to occur when executing queries for a view, function index, filtered index, or trigger if the charset of the client was different from the system charset.

Generally, the current client charset is used to parse a query statement. However, the stored query statements or conditional expressions of a view, function index, filtered index, or trigger, including any constant strings they contain, are now parsed normally by using the system charset (when no charset is specified).

-- When the system charset is specified to ko_KR.utf8

SET NAMES euckr;

-- In 9.0 Beta version, an error would occur when executing the following query:
SELECT * FROM db_class;

Queries like the above caused a syntax analysis error in the 9.0 Beta version; this has been fixed in version 9.1.

-- When the system charset has been specified to ko_KR.euckr

SET NAMES utf8;

-- In 9.0 Beta version, an error would occur when executing the following query:
SHOW TABLES;

Support for REVERSE Function, BETWEEN Condition, Collation of hierarchy operator(CUBRIDSUS-10423)

Modify the REVERSE function and the BETWEEN condition to consider the collation.

SET NAMES utf8;
SELECT LOCATE ('0', REVERSE ('22'));

Expand to apply the COLLATE modifier to the PRIOR, CONNECT_BY_ROOT hierarchy operator.

CREATE TABLE tbl (
  id VARCHAR COLLATE utf8_en_cs,
  parentid VARCHAR COLLATE utf8_en_ci,
  msg VARCHAR (32)
);

INSERT INTO tbl VALUES ('a', NULL, 'A');
INSERT INTO tbl VALUES ('b', NULL, 'B');
INSERT INTO tbl VALUES ('c', 'A', 'AA');
INSERT INTO tbl VALUES ('d', 'a', 'AB');
INSERT INTO tbl VALUES ('e', 'b', 'BA');
INSERT INTO tbl VALUES ('f', 'b', 'BB');
INSERT INTO tbl VALUES ('g', 'f', 'BBA');
INSERT INTO tbl VALUES ('h', 'g', 'CBA');

SELECT *
FROM tbl
CONNECT BY (PRIOR id) COLLATE utf8_en_cs = parentid
ORDER BY id;

SELECT id, parentid, msg, CONNECT_BY_ROOT id
FROM tbl
WHERE (CONNECT_BY_ROOT id) COLLATE utf8_en_ci < 'D'
START WITH parentid IS NULL
CONNECT BY PRIOR id = parentid
ORDER BY id;

make_locale.sh script not to be executed in Ubuntu Linux(CUBRIDSUS-10647)

Fix a problem that would cause make_locale.sh not to be executed in the dash shell.

Character case of the ISO-8859-1 Charset not to be converted for the extended ASCII code(CUBRIDSUS-10624)

Fix a problem that would cause the character case of the ISO-8859-1 charset to not be successfully converted for the extended ASCII codes (C0 - FE range). The following problems would occur when using a user identifier that included characters from the extended ASCII range:

  • Case of user name and group name was not converted to uppercase
  • General schema jobs, such as CREATE and ALTER, were not allowed
  • String processing, such as string conversion, was not processed normally

Error in collation determination method for expressions with two arguments(CUBRIDSUS-10560)

Fix an error in the collation determination method for expressions with two arguments.

SET NAMES utf8;

CREATE TABLE t (s1 STRING COLLATE utf8_bin, s2 STRING COLLATE iso88591_bin);
INSERT INTO t VALUES ('a', 'b');

-- In 9.0 Beta version, an error would occur when executing the following query:
SELECT s1 || s2 FROM t;

Incorrect error message when executing a PREPARE statement exceeding the allowable limit(CUBRIDSUS-8998)

When more PREPARE statements were executed than the maximum number (20) allowed for one application, the message "ERROR: Too many prepared statements." should have been output, but "ERROR: No error message available." was output instead. This problem has been fixed.
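
A minimal sketch of the scenario, assuming the SQL-level PREPARE syntax (statement text is hypothetical):

```sql
PREPARE st1 FROM 'SELECT * FROM tbl WHERE col1 = ?';
PREPARE st2 FROM 'SELECT * FROM tbl WHERE col2 = ?';
-- ... after exceeding the per-application limit of 20 prepared statements,
-- the corrected message is now returned:
-- ERROR: Too many prepared statements.
```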

Incorrect error message when a file with the wrong path was given to CLOB_FROM_FILE function and BLOB_FROM_FILE function(CUBRIDSUS-10320)

Fix to output the error message "Invalid Path" when a file with an invalid path is passed to the CLOB_FROM_FILE function or the BLOB_FROM_FILE function.

SELECT CLOB_FROM_FILE ('file:/home/cubrid/databases/lob/ces_722/image_t.00001357286783349177_590');
-- In 2008 R4.3 or lower versions, the following error was output:
ERROR: Cannot coerce clob to type unknown data type.

-- In 9.0 Beta version, the following error was output:
Attempted to create string with illegal length - 1545942208

Phenomenon causing an incorrect error to be output when the argument of the FORMAT function was not numeric(CUBRIDSUS-10426)

Fix a phenomenon that would cause an incorrect error to be output when the argument of the FORMAT function was not numeric.

SELECT FORMAT ('nan', 2);

-- In previous versions, the following error message would be output when the above query was executed:
ERROR: No error message available.

SELECT FORMAT (CAST('nan' AS double), 2);

-- In previous versions, the following error would be output when the above query was executed:
ERROR: Execute: Query execution failure #1326.

Partitioning

Support for PARTITION clause to define a specific partition(CUBRIDSUS-9492)(CUBRIDSUS-9935)

Support the PARTITION clause, which specifies a partition by name after the partitioned table name instead of referring to the internal partition table directly. It can be used in all SQL statements that allow partitions, as well as in the SELECT statement.

-- Support type in the previous versions
SELECT * FROM athlete2__p__event2;

-- The following query is the same as the existing sentence in the above:
SELECT * FROM athlete2 PARTITION (event2);

When the PARTITION clause is defined in the INSERT statement and the partition specified is different from the definition, an error is returned.

CREATE TABLE t (i INTEGER)
PARTITION BY RANGE(i) (
  PARTITION p0 VALUES LESS THAN (10),
  PARTITION p1 VALUES LESS THAN (100)
);

-- success
INSERT INTO t PARTITION (p0) VALUES (2);

-- error -1108
INSERT INTO t PARTITION (p0) VALUES (20);

If a specific partition is referred to directly in a query that has a WHERE clause, partition pruning is not executed and performance may be improved (slightly). In addition, query processing methods not allowed for a partitioned table, such as INDEX JOIN, ORDER BY/GROUP BY omit optimization, multiple key range optimization, and INDEX SKIP SCAN, can be used.

Support the click counter (INCR function and DECR function) for partitioned table(CUBRIDSUS-9157)

Fix to use the click counter (INCR function and DECR function) for a partitioned table. In the previous versions, the INCR and DECR functions did not work normally on partitioned tables, so the value did not increase or decrease.
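
A minimal sketch of the click counter on a partitioned table (table and column names are hypothetical):

```sql
CREATE TABLE clicks (id INTEGER, cnt INTEGER)
PARTITION BY RANGE (id) (
  PARTITION p0 VALUES LESS THAN (100)
);
INSERT INTO clicks VALUES (1, 0);

-- After the fix, INCR increases cnt for the matched row of the partitioned table:
SELECT INCR (cnt) FROM clicks WHERE id = 1;
```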

Support the INSERT ON DUPLICATE KEY UPDATE statement and the REPLACE statement for partitioned table(CUBRIDSUS-8337)

Fix to use the INSERT ON DUPLICATE KEY UPDATE statement and the REPLACE statement for a partitioned table.
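
A minimal sketch (hypothetical table):

```sql
CREATE TABLE t (i INTEGER PRIMARY KEY, v INTEGER)
PARTITION BY RANGE (i) (
  PARTITION p0 VALUES LESS THAN (10),
  PARTITION p1 VALUES LESS THAN (20)
);
INSERT INTO t VALUES (1, 1);

-- Both statements can now be used on the partitioned table:
INSERT INTO t VALUES (1, 2) ON DUPLICATE KEY UPDATE v = 2;
REPLACE INTO t VALUES (1, 3);
```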

Partition pruning being unavailable when the query condition of the range partitioned table was "Column > Maximum Value"(CUBRIDSUS-7792)

Partition pruning was not executed and the entire partitioned table was scanned when the query condition of a range-partitioned table was "Column > Maximum Value". This problem has been fixed, and an empty result is now returned immediately.

CREATE TABLE t (i INTEGER) PARTITION BY RANGE (i) (PARTITION p0 VALUES LESS THAN (21));

SELECT * FROM t WHERE i >= 21;

-- After the update, the following condition is considered as the conditions above for partition pruning:
SELECT * FROM t WHERE i > 20;

Output 0 as the COUNT(*) value even if there were data when partition was removed from the partition table(CUBRIDSUS-9338)

When partitioning was removed after executing several INSERT, UPDATE, or DELETE statements on the partitioned table, 0 was output as the COUNT(*) value even though there was data. This problem has been fixed.

CREATE TABLE TBL (i INTEGER PRIMARY KEY)
PARTITION BY RANGE (i) (
  PARTITION p0 VALUES LESS THAN (20),
  PARTITION p1 VALUES LESS THAN (40),
  PARTITION p2 VALUES LESS THAN MAXVALUE
);
INSERT INTO tbl SELECT ROWNUM FROM db_class;

ALTER TABLE tbl REMOVE PARTITIONING;

SELECT COUNT (*) FROM tbl;

In the previous versions, executing the cubrid checkdb command on a database with such a partitioned table reported a data mismatch, and even the cubrid checkdb -r command did not recover the data.

% cubrid checkdb testdb

Some inconsistencies were detected in your database.
Please consult error_log_file /home/CUBRID/log/testdb_checkdb.err for additional information.

View for a specific partition not to be updatable(CUBRIDSUS-10264)

Fix a problem that caused the view for a specific partition to not be updatable.

CREATE TABLE tbl (col1 INTEGER, col2 INTEGER)
PARTITION BY RANGE (col1) (
  PARTITION p0 VALUES LESS THAN (5),
  PARTITION p1 VALUES LESS THAN (10)
);

CREATE VIEW v1 AS SELECT * FROM tbl PARTITION (p0);
INSERT INTO v1 VALUES (1, 1);
ERROR: Not allowed access to partition: 'tbl__p__p0'

Not to be normally partitioned when the partition key value in the partitioned table had a different charset from the column(CUBRIDSUS-9904)

As shown in the following example, data was not partitioned normally when a partition key value had a different charset from the column. This has been fixed so that a partition key whose charset differs from that of the column is no longer allowed.

CREATE TABLE t (c CHAR(50) COLLATE utf8_bin)
PARTITION BY LIST(c) (
  PARTITION p0 VALUES IN (_utf8'x'),
  PARTITION p1 VALUES IN (_iso88591'y')
);

Incorrect number of affected rows to be output when an error occurred while updating the partitioned table through the view(CUBRIDSUS-9011)

Fix a problem that would cause an incorrect number of affected rows to be output when an error occurred while updating the partitioned table through the view.

CREATE TABLE t (i INTEGER) PARTITION BY RANGE (i) (PARTITION p0 VALUES LESS THAN(3));
INSERT INTO t VALUES (1), (2);
CREATE VIEW v as SELECT * FROM t;
UPDATE v SET i = i + 1;
-- incorrect affected rows
2 rows affected.
ERROR: Appropriate partition does not exist.

HA

Add log information for changing the node status in HA environment(CUBRIDSUS-9748)

Add the following log information for changing the node status in HA environment. This information is written in an error log file when the error_log_level system parameter value is the error value or a lower value.

  • The log information of cub_master process is saved in the $CUBRID/log/<hostname>_master.err file as follows:

    HA generic: Send changemode request to the server. (state:1[active], args:[cub_server demodb ], pid:25728).
    HA generic: Receive changemode response from the server. (state:1[active], args:[cub_server demodb ], pid:25728).
  • The log information of cub_server process is saved in the $CUBRID/log/server/<db_name>_<date>_<time>.err file as follows:

    Server HA mode is changed from 'to-be-active' to 'active'.

Repeating restart of the replication log reflection process of a master node when a slave node was changed to the maintenance mode(CUBRIDSUS-9380)

Fix a problem of repeating restart of the replication log reflection process of a master node when a slave node was changed to the maintenance mode in the HA environment.

Not to be replicated when the primary key configured with multiple columns included the DESC direction(CUBRIDSUS-9774)

CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER, PRIMARY KEY (a, b DESC, c));
INSERT INTO t VALUES (1, 1, 1);

Improve replication rebuilding script(CUBRIDSUS-10327)

Fix the ha_make_slavedb.sh script to create the configured $repl_log_home directory when it does not exist. Also fix a problem that caused the master node's replication log, previously copied to the slave node, to not be deleted while the script was being executed if $CUBRID_DATABASES was different from $repl_log_home.

Not recognizing the last host name when there were several host names set in databases.txt(CUBRIDSUS-10344)

In the HA environment, if several host names were set in the databases.txt file and the last host name was a string that included the first host name, the last host name was not recognized correctly. This problem has been fixed. For example, in the previous versions, if the host name list was set to "node:node1", node1 was not recognized correctly; so when "node" was the slave node and "node1" was the master node, the RW broker's Common Application Server (CAS) could not recognize "node1" and connect to it.

ALTER ... CHANGE COLUMN statement has not been reflected to the slave node(CUBRIDSUS-9649)

Fix a phenomenon in which the ALTER ... CHANGE COLUMN statement has not been reflected to the slave node.

ALTER TABLE t1 CHANGE i2 i0 INTEGER FIRST;

Phenomenon causing schema to be replicated to the slave node with the user who connected to the DB first from the application(CUBRIDSUS-6511)

Schema was replicated to the slave node with the user who had connected to the DB first from the application. For example, when user1 account had connected to and disconnected from the DB and then user2 account connected to create a table, the table owned by user1 was created in the slave node. This problem would persist until the Broker Common Application Server (CAS) was restarted. This problem has been fixed.

Fix to not replicate the TRUNCATE statement for a table without a primary key to the slave node(CUBRIDSUS-6502)

Because a table without a primary key is not replicated to the slave node, the TRUNCATE statement for such a table is no longer replicated either; this removes the replication delay that applying the TRUNCATE statement to the slave node caused.
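
A minimal sketch (hypothetical table):

```sql
-- A table without a primary key is not replicated to the slave node:
CREATE TABLE t (i INTEGER);
INSERT INTO t VALUES (1);

-- After the fix, this TRUNCATE is no longer sent to the slave node either:
TRUNCATE TABLE t;
```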

Replication mismatch to the serial when INSERT was executed for the table with the AUTO_INCREMENT column if automatic commit mode was set to OFF(CUBRIDSUS-9045)

Fix a replication mismatch in the serial that manages AUTO_INCREMENT values, which occurred when INSERT was executed on a table with an AUTO_INCREMENT column while automatic commit mode was OFF. AUTO_INCREMENT is managed as a serial value in the db_serial system catalog table.
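
A sketch of the scenario in CSQL (table and column names are hypothetical; ";autocommit off" is a CSQL session command):

```sql
;autocommit off
CREATE TABLE t (i INTEGER AUTO_INCREMENT PRIMARY KEY, v INTEGER);
INSERT INTO t (v) VALUES (1);
COMMIT;

-- The AUTO_INCREMENT counter is visible as a serial in the system catalog:
SELECT name, current_val FROM db_serial;
```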

Sharding

Fix to handle as an error when two or more shard hints indicated different shard(CUBRIDSUS-9395)

When two or more shard hints in a query statement indicated different shards, the previous versions used the shard of the first hint. This has been fixed so that the query is handled as an error if the hints do not all indicate the same shard.

SELECT *
FROM student
WHERE (id = /*+ shard_key */ 250) OR (id = /*+ shard_key */ 22);

Failure to SELECT for multiple SHARDs with an independent transaction(CUBRIDSUS-9529)

In the SHARD environment, when SELECT was executed for one SHARD with an independent transaction and then SELECT was executed for the other SHARD, it would fail. This problem has been fixed.

Abnormal termination of the command which output the metadata of SHARD when executing the command(CUBRIDSUS-10375)

While executing multiple shard proxies by setting the MAX_NUM_PROXY parameter of shard.conf to 2 or a larger value, the command that outputs CUBRID SHARD metadata, "cubrid shard status -m", would terminate abnormally. This problem has been fixed.

Correct the PROXY_LOG_FILE included in the default setting of shard.conf to PROXY_LOG_DIR(CUBRIDSUS-10381)

Correct the incorrect PROXY_LOG_FILE parameter setting which has been included in the default shard.conf file to PROXY_LOG_DIR.

Fix to handle as an error when the binding array shard key value was in different shards(CUBRIDSUS-9826)

In the SHARD environment, functions that bind several values to process queries at once, such as JDBC's PreparedStatement.executeBatch() and CCI's cci_execute_array(), are now handled as an error when the bound shard key values belong to different shards.

No Response when receiving many prepare requests (CUBRIDSUS-10041)

When there were many prepare requests for a query in the SHARD environment and one of the requests was not processed within the query timeout or PROXY_TIMEOUT, the remaining prepare requests were not responded to. This problem has been fixed.

Shard proxy not returning to the original connection after rcTime even when rcTime was specified in the connection URL(CUBRIDSUS-10823)

Fix a problem in which, after a shard proxy failover, the connection did not return to the original host once the rcTime specified in the connection URL had elapsed.

Now output an appropriate error message when the CUBRID SHARD proxy failed to authorize(CUBRIDSUS-8839)

No error message was output when CUBRID SHARD proxy authorization failed. This has been fixed to output an appropriate error message, as shown in the following example:

Authorization error.(Address is rejected)

Driver

[JDBC][CCI] Improved query timeout calculation (CUBRIDSUS-9585)

Query execution has been modified to stop when the query timeout requested by the application expires, by sending the remaining time to the server.

[JDBC][CCI] Phenomenon that would cause query timeout and failure to occur in the middle of fetch when query timeout has been set(CUBRIDSUS-9043)

When a query timeout was set using the broker parameter MAX_QUERY_TIMEOUT, JDBC's setQueryTimeout method, or the queryTimeout attribute of the connection URL, the timeout could occur in the middle of a fetch and cause a failure. This has been fixed: the query timeout no longer occurs in the middle of a fetch, but only while the query is being executed.

In addition, query timeout does not occur for the cci_execute_batch function and the cci_execute_array function.

[JDBC][CCI] Now the remaining queries are not executed when deadlock occurs while an array/batch execution function executes queries(CUBRIDSUS-9692)

In the previous versions, when a deadlock occurred while an array/batch execution function, such as JDBC's PreparedStatement.executeBatch method or CCI's cci_execute_array and cci_execute_batch, was executing each query statement, the remaining queries would still be executed. This has been fixed, and the remaining queries are no longer executed when a deadlock occurs. For other general errors, the remaining queries are still executed.

[JDBC][CCI] Error occurring when there was no semicolon (;) at the end of the SHOW EXEC STATISTICS ALL sentence(CUBRIDSUS-7430)

An error would occur when there was no semicolon (;) at the end of the sentence while the application was executing the SHOW EXEC STATISTICS ALL statement. This problem has been fixed. In previous versions, this error did not occur in the CUBRID Manager, Query Browser, and CSQL, but did occur in the JDBC or CCI applications.

[CCI] Add CCI_U_TYPE_ENUM type(CUBRIDSUS-10236)

Add CCI_U_TYPE_ENUM, the CCI database type (u_type) for the ENUM type column.

CCI_U_TYPE_ENUM can be retrieved or sent through CCI_A_TYPE_STR.

cci_bind_param (req, 1, CCI_A_TYPE_STR, (char *) buffer, CCI_U_TYPE_ENUM, CCI_BIND_PTR);
cci_get_data (req, 1, CCI_A_TYPE_STR, &buffer, &ind);

[JDBC][CCI] Fix to return "ENUM", instead of "VARCHAR", as the ENUM column type(CUBRIDSUS-10236)

Fix to return "ENUM", instead of "VARCHAR", as the ENUM column type.

  • Retrieves the type name through JDBC DatabaseMetaData
DatabaseMetaData dbmd = conn.getMetaData();
rs = dbmd.getColumns(null, null, "tbl", null);
while (rs.next()) {
 String type = rs.getString("type_name");
 System.out.println("Column type is:" + type);
}
  • Retrieves the type format through CCI's cci_schema_info function and cci_get_result_info function

    res_col_info = cci_get_result_info (req, &stmt_type, &col_count);
    
    for (i = 1; i <= col_count; i++)
      {
        printf ("resultset metadata - column type: %d\n",
                 CCI_GET_RESULT_INFO_TYPE (res_col_info, i));
      }
    

[JDBC] Fix to output a variety of information in the message when an exception occurs(CUBRIDSUS-9611)

Fix to output the Broker Common Application Server (CAS) ID, CAS PID, SESSION ID, and connection URL in the message when an exception occurs in JDBC.

[JDBC] executeBatch method has failed to be executed(CUBRIDSUS-9496)

Fix a phenomenon in which the executeBatch method of JDBC has failed to be executed and the "Cannot communicate with the broker or received invalid packet" error message was returned.

[JDBC] Repeated DML execution when queries were retried because of a query plan failure while executing multiple queries with one execution function(CUBRIDSUS-8472)

When the query plan failed while executing several queries with one execution function, the queries were retried from the first query, so some DML statements that had already been executed were not rolled back and were executed again. This problem has been fixed.

-- The following MULTI_STMT is considered as one task:
String MULTI_STMT = "INSERT INTO T1 VALUES (1, 1); INSERT INTO T1 VALUES (2, 2); UPDATE T2 SET A = 0";
PreparedStatement p = c.prepareStatement(MULTI_STMT);
boolean retval = p.execute();

[JDBC] Updatable resultSet with a cursor type of TYPE_FORWARD_ONLY in automatic commit mode was processed as an exception when updateRow() was executed for the set(CUBRIDSUS-9313)

When updateRow() was executed for the updatable resultSet with a cursor type of TYPE_FORWARD_ONLY in automatic commit mode, it was processed as an exception, "Internal server error". This problem has been fixed. In previous versions, the resultSet could be updated only when the cursor type was TYPE_SCROLL_SENSITIVE. After the update, the resultSet can be updated for all cursor types (TYPE_FORWARD_ONLY, TYPE_SCROLL_INSENSITIVE, TYPE_SCROLL_SENSITIVE).

conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE);
...
rs.updateNull("col2");
rs.updateRow();

[JDBC] Problem causing reconnection to altHosts to not be tried even when connectTimeout has been set in the connection URL(CUBRIDSUS-10240)

Fix a problem in JDBC in which, when connectTimeout was set in the connection URL, reconnection to altHosts was not attempted and an error occurred instead.

[JDBC] Exception when calling the PreparedStatement.clearParameters() method even when the application has not closed the statement(CUBRIDSUS-10274)

An exception 'Connection or Statement might be closed' would occur when the PreparedStatement.clearParameter() method was called even though the JDBC application had neither closed the statement nor disconnected from the DB. This problem has been fixed.

[CCI] Statement has Not been Removed from the Statement Pool when an SQL was Prepared Twice(CUBRIDSUS-9397)

When the same SQL statement was prepared twice on one DB connection and the second statement was closed, it was not removed from the statement pool in use. This problem has been fixed.

[CCI] CCI application has been abnormally terminated when NULL has been entered as the pointer value in the function which used the T_CCI_ERROR structure pointer in the CCI application(CUBRIDSUS-9435)

Fix a problem in which a CCI application terminated abnormally when NULL was passed as the pointer value to a function that takes a T_CCI_ERROR structure pointer, such as cci_datasource_release.

[CCI] CAS_NO_MORE_DATA Error when completing fetch after executing the SELECT query in asynchronous mode(CUBRIDSUS-7170)

The CAS_NO_MORE_DATA error would occur when completing fetch after executing the SELECT query in asynchronous mode, instead of the CCI_NO_MORE_DATA error. This problem has been fixed.

res = cci_execute (req, CCI_EXEC_ASYNC, 0, &error);

[CCI] CCI application for windows infinitely waited when no db server to connect(CUBRIDSUS-9361)

Fix a problem in which a CCI application for Windows waited indefinitely when it could not access the database, for example, when there was no DB server to connect to.

[CCI] Abnormal termination of application when null was entered to the argument of the multiple query processing functions which would receive the query result(CUBRIDSUS-9600)

When NULL was passed as the argument of cci_execute_batch, cci_execute_array, or cci_execute_result that receives the query result, the application terminated abnormally. This has been fixed to return the "invalid argument" error instead.

[CCI] Add cci_is_holdable function (CUBRIDSUS-8803)

Add the cci_is_holdable function, which returns whether or not the request handle has a holdable cursor.

holdable = cci_is_holdable (req_handle);

[CCI] Abnormal operation of application when the closed connection was reused in DATASOURCE(CUBRIDSUS-9586)

The application operated abnormally when the CCI application reused the closed DATASOURCE connection. This has been fixed to output an error.

con_1 = cci_datasource_borrow (ds, &error);
cci_prepare_and_execute (con_1, query, 1, &exec_retval, &error);
ret = cci_datasource_release (ds, con_1, &error);

ret = cci_prepare_and_execute (con_1, query, 1, &exec_retval, &error); /* invalid access */

[CCI] Abnormal termination of application when calling cci_execute() after calling cci_datasource_release() with setting pool_prepared_statement property=OFF(CUBRIDSUS-10847)

Fix a problem that an application was abnormally terminated when calling cci_execute() after calling cci_datasource_release() with setting pool_prepared_statement property=OFF.

[CCI] -20004 error when prepared statement is performed(CUBRIDSUS-10840)

Fix a problem in which the -20004 error (CCI_ER_COMMUNICATION) occurred when a driver retried the "prepare" operation at the same time CAS was restarted; the "prepare" retry is performed if the CAS_ER_STMT_POOLING error occurs while cci_execute() runs with autocommit mode ON.

[CCI] Error when calling the cci_execute function twice as executing multiple query statements by using the function(CUBRIDSUS-10424)

If the first query type was different from the last query type when executing multiple query statements using the cci_execute function, e.g., the first one is the UPDATE statement and the last one is the SELECT statement, the "Invalid cursor position" error would occur when the function was called twice. This problem has been fixed.

static char *MULTI_STMT = "UPDATE T1 SET B = A; SELECT A FROM T1 WHERE A = ?;";

result = cci_execute (request, CCI_EXEC_QUERY_ALL, 0, error); /* 1st */

result = cci_execute (request, CCI_EXEC_QUERY_ALL, 0, error); /* 2nd */

[CCI] Fix not to operate the cci_set_allocator function in Windows(CUBRIDSUS-10669)

In Windows, using the cci_set_allocator function changed the functions used to allocate and deallocate memory. This has been fixed so that cci_set_allocator no longer changes those functions in Windows.

[PHP] Query not being stopped at the server when webpage loading was stopped(CUBRIDSUS-5855)

Even when webpage loading was stopped on a PHP application website, the query being executed on the server was not stopped. This problem has been fixed. In previous versions, if a user repeatedly stopped loading a webpage that required a long query execution time, unnecessary connection resources accumulated in the web server.

Utility

Add CSQL command to limit the length of a column when printing the result(CUBRIDSUS-9643)

Add CSQL session commands that limit the length of a column when the CSQL interpreter prints the result.

  • ;STring-width limits the length of printing the column of character string or bit string types. In addition, command line option, --string-width is provided.
  • ;COLumn-width limits the length of printing the column without distinguishing types.
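As a sketch of how the two session commands are used (the column name and width values below are illustrative):

```
csql> ;string-width 10
csql> ;column-width host_year=5
```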

Add ";line-output", the CSQL session command(CUBRIDSUS-9537)

Add CSQL session command, ";line-output" which works like --line-output option of CSQL Interpreter.

csql> ;line-output on
csql> ;line-output off

Change to print the SQL running time as default in CSQL interpreter(CUBRIDSUS-9537)

The CSQL interpreter now prints the SQL running time by default when SQL is run. You can toggle whether the running time is printed by using the ";time" CSQL session command.
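The toggle follows the same on/off pattern as the other session commands (the arguments below are assumed to mirror ;line-output):

```
csql> ;time off
csql> ;time on
```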

Fix to allow an additional connection by using --sysadm in CSQL even when the maximum number of connections to the server has been exceeded(CUBRIDSUS-9478)

Fix to allow one additional connection in system administrator mode (--sysadm) in CSQL even when the maximum number of connections to the server, set by the system parameter max_clients, has been exceeded.

% csql -u dba --sysadm testdb

Fix to output an error saying that Windows does not support the ;historylist and ;historyread commands of CSQL when those commands are executed(CUBRIDSUS-10190)

The Windows version does not support the ;historylist and ;historyread session commands. If the session command is entered, an error message is output.

Change to output the host name when ;database is executed in the CSQL connected to the local host DB(CUBRIDSUS-5243)

Change to output the host name, not the "localhost", when executing ;database in the CSQL connected to the local host DB.

csql> ;database
testdb@cubridhost

Comment including a single quote was not processed normally in CSQL(CUBRIDSUS-8999)

Fix a problem that caused the session command not to be executed because the comment was not correctly processed in the following case:

csql> -- it's a line comment.
csql> ;exit

Schema object with unicode decomposed characters as its identifier could not be retrieved in CSQL(CUBRIDSUS-9923)

When unicode decomposed characters were included in the table name or the trigger name, the schema object could not be retrieved using the ;SCHEMA session command and the ;TRIGGER session command of CSQL. This problem has been fixed.

Failure of executing ";read", the CSQL session command, when the file name was followed by a space(CUBRIDSUS-10357)

Fix an error that would occur when executing the following command if the file name (test.txt) was followed by a space.

csql> ;read test.txt
ERROR: Too many file names have been entered.

Abnormal termination occurring when a long query statement has been executed with the Windows-version CSQL(CUBRIDSUS-9315)

To avoid stack overflow, the stack memory size for Windows was changed from 1 MB to 8 MB.

View including a Java stored procedure was not loaded from an unloaded schema file(CUBRIDSUS-1520)

A view that included a Java stored procedure could not be loaded from a schema file that had been unloaded by the unloaddb utility. This problem has been fixed.

Data violating the constraints was input when loaddb was set to ignore a specific error(CUBRIDSUS-7650)

If the loaddb utility was set to ignore a specific error by setting the --error-control-file=FILE option, data violating the constraints was entered. For example, when error -205 was set to be ignored to suppress the NULL error, NULL data was entered in the primary key column. This has been fixed so that data is not entered when an error occurs.

Incorrect row number where an error has occurred as executing loaddb(CUBRIDSUS-7894)

Fix a problem that would lead to the output of an incorrect row number when an error occurred as the loaddb utility was loading data.

loaddb to fail or load an incorrect value when the schema file included a string ending with a backslash ('\')(CUBRIDSUS-9109)

Fix a problem that would cause the loaddb utility to fail or load an incorrect value when the schema file included a string ending with a backslash ('\').

ALTER TABLE [t] ADD ATTRIBUTE
[dir] character varying(200) DEFAULT 'C:\Databases\',
...;

Unloading a wrong reverse order prefix index(CUBRIDSUS-9966)

The unloaddb utility would unload a reverse-order prefix index in the wrong form, as shown in the following example, which caused an error when the unloaded file was loaded. This problem has been fixed.

CREATE TABLE t (a VARCHAR (200));
CREATE INDEX i ON t (a (2) DESC);
% cubrid unloaddb -S testdb
% cat testdb_indexes
CREATE INDEX [i] ON [t] ([a] DESC (2));

Unloading data of the db_collation system catalog view(CUBRIDSUS-10350)

Fix a problem that caused data for the db_collation system catalog view to be unloaded when unloading a database of the 9.0 Beta version. When migrating data from the 9.0 Beta version to the 9.1 version, using the unloaddb/loaddb utility may cause an error while loading data for the system catalog view. It is recommended to remove db_collation from the unloaded file or to use the migrate_90beta_to_91 utility instead of the unloaddb/loaddb utility.

backupdb utility of the Windows version to output an internal error for an incorrect input(CUBRIDSUS-10634)

When a directory that did not exist was entered in the -D option of the cubrid backupdb utility for the Windows version, the "pthread_cond_destroy" error would occur. This problem has been fixed.

Problem preventing the checkdb utility execution from being interrupted(CUBRIDSUS-7452)

Fix the problem that prevented the index check task carried out by the cubrid checkdb utility from being interrupted and stopped.

Query statement terminated by the killtran utility to not return an error(CUBRIDSUS-10133)

When a transaction executing a query was terminated by the killtran utility, no error was returned. This has been fixed to return the "Has been interrupted" error.

Fix to include the original statement of the query when outputting the plandump utility or the slow query log(CUBRIDSUS-9593)

In previous versions, when a query plan was output by using the cubrid plandump utility, or when a slow query was output in the server error log or the broker log, the rewritten query statement was output. This has been fixed to also output the original query statement entered by the user.

Fix to report an error when the locale setting for the dumplocale utility is wrong(CUBRIDSUS-10306)

When a locale that does not exist in the locale library is specified in the cubrid_locales.txt file while executing the cubrid dumplocale utility, an error is now reported.

Broker was abnormally terminated when executing broker_log_top if ')' was the first character of a string binding value(CUBRIDSUS-9245)

Fix a problem in which the broker was abnormally terminated when executing broker_log_top if ')' was the first character of a string binding value.

Abnormal termination of the broker_log_top utility when the SQL log file type was wrong(CUBRIDSUS-10072)

Fix the problem that would cause the broker_log_top utility and the shard_broker_log_top utility to be abnormally terminated when the SQL log file type was incorrect.

Problem preventing the broker_log_top utility for windows from analyzing the UNIX File type SQL log file(CUBRIDSUS-10139)

Fix a problem preventing the broker_log_top utility for Windows from analyzing the UNIX file type SQL log file.

backup could not be stopped when the checkpoint operation was being executed(CUBRIDSUS-9337)

Fix a problem in which backup could not be stopped by using INTERRUPT (Ctrl+C) when the checkpoint operation was being executed while executing the backup.

Modify to cancel the checkpoint when it took 1 minute or more to execute termination of the database because of the checkpoint(CUBRIDSUS-6867)

Previously, the database was forcibly terminated when it took 1 minute or more to shut down with the cubrid server stop command. When the checkpoint operation took a long time during shutdown, the database server was forcibly terminated, which caused a long recovery time when the database was restarted. This has been fixed to cancel the checkpoint process and terminate the database when a timeout occurs during shutdown.

The number of client wait CASs is larger than the total of CASs when outputting the broker status information(CUBRIDSUS-10235)

When outputting the broker status information using the cubrid broker status -f command, if the number of CASs increased to be higher than the minimum number and then decreased, the number of CLIENT WAIT CASs was larger than the total number of CASs. This problem has been fixed.

Abnormal termination when interrupting the execution of createdb utility for Windows(CUBRIDSUS-9332)

Fix a problem that would cause abnormal termination when the cubrid createdb utility for Windows was interrupted.

Parameter value is set incorrectly when changing the parameter value using the shard_broker_changer utility(CUBRIDSUS-9956)

Fix a problem causing parameter values such as SQL_LOG, SLOW_LOG, and PROXY_LOG to be set to wrong values when changed using the shard_broker_changer utility.

Output an incorrect ERR-Q value when the -s option of cubrid broker status command has been set(CUBRIDSUS-10391)

When the -s option of the cubrid broker status command was set, the ERR-Q item output the number of error queries per second. This has been fixed to output the number of error queries during the N seconds given by the option.

Incorrect error message output when the file argument was omitted from the ;append session command of CSQL for Windows(CUBRIDSUS-10191)

In CSQL for Windows, when the file name argument is omitted from the ;append session command, the error message "ERROR: A file name is required." is now output. In previous versions, "ERROR: Permission denied" or "ERROR: Too many file names have been entered." was output.

Fix to output the disk space size required to create the database using createdb utility(CUBRIDSUS-6623)

Fix to output the disk space size required to create the database by using the createdb utility.

Creating database with 512.0M size. The total amount of disk space needed is 1.5G.

If the disk space is not sufficient, the following error message is also output:

Couldn't create database.
Unable to create volumes for database "testdb". Please refer "/home/CUBRID/log/testdb_createdb.err" for additional information.

Improve the error message output when entering the volume size out of the range as creating the DB(CUBRIDSUS-9761)

Fix to output an error message indicating that the volume size was interpreted in bytes (B) when a volume size outside the allowable DB size range is entered while creating the DB.

% cubrid createdb testdb --db-volume-size=1121
Couldn't create database.
The given 'db_volume_size', 1121B is out of range. Choose a size between 20.0M and 20.0G.

Configuration, Build, and Installation

Abnormal termination of client and server when defining the undefined system parameter to cubrid.conf(CUBRIDSUS-9765)

In 9.0 Beta version, when a system parameter that did not exist in cubrid.conf was set, the client and server would be abnormally terminated. This problem has been fixed.

Fix to change some system parameters while the system is operating(CUBRIDSUS-7625)

The following system parameters can be changed while the system is operating:

  • index_scan_oid_buffer_size
  • sort_buffer_size
  • temp_file_memory_size_in_pages
  • unfill_factor
  • volume_extension_path
  • lock_escalation
  • checkpoint_every_npages
  • async_commit
  • access_ip_control
  • access_ip_control_file
  • index_unfill_factor
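As a sketch of changing one of the parameters above at run time in CSQL (the parameter value is an example):

```
csql> SET SYSTEM PARAMETERS 'sort_buffer_size=4M';
csql> ;get sort_buffer_size
```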

In addition, the following problems that occurred when retrieving a system parameter value using the ;get command in CSQL have been fixed:

  • CSQL abnormally terminated when ;get access_ip_control_file was executed
  • An incorrect result was output when ;get unfill_factor was executed
  • Normal system parameters could not be retrieved by using the ;get session command after failing to execute the ;get command for an undefined system parameter

Add an option to build only the CCI driver from the CUBRID source, and distribute the CCI source package(CUBRIDSUS-8247)

Add the --with-cci-only option to the configure script of the CUBRID source so that only the CCI driver is built. In addition, a source package that builds only the CCI driver is now distributed.

Normally executed query had been included in the SQL log, even when the broker parameter SQL_LOG has been set to NOTICE(CUBRIDSUS-10286)

Even when the SQL_LOG value of cubrid_broker.conf was set to NOTICE, the query that has been normally executed was often output. The possibility of this occurring has been reduced.

Fix to allow '/' as the path separator of the broker parameter in Windows(CUBRIDSUS-9318)

Fix to allow '/' as the path separator of the broker parameter in Windows.

D:/dir/dir/file

"Drive name:" to be incorrectly recognized in the ACCESS_CONTROL_FILE file in Windows(CUBRIDSUS-9318)

When the ACCESS_CONTROL_FILE file was written in "db_name:db_user_id:IP_file" format and the IP_file path was written as "Drive Name:Path File Name", the ":" after the drive name was mistaken for the value separator, causing the file to be misrecognized. This problem has been fixed.

Wrong Message that the Broker Has Been Normally Operated Even when the Broker Parameter Value was Incorrect in Windows(CUBRIDSUS-9992)

Fix to output an error message saying that the broker cannot be operated when the broker parameter value is incorrect in Windows.

Alarm message based on Java installation while installing the CUBRID engine in Windows(CUBRIDSUS-9917)

If Java 1.6 or higher was not installed in Windows while installing the CUBRID engine, a message saying that CUBRID Manager could not be used was output. However, this message is not related to the CUBRID engine, so it has been changed to a message saying that the Java stored procedure cannot be used.

Other

Wrong warning message when DML is executed in the environment configured to output a warning(CUBRIDSUS-9458)

Fix a phenomenon of outputting a wrong warning message when normal INSERT/UPDATE/DELETE was executed in the environment configured to output a warning.

Incorrect 64-bit int value exceeding the 32-bit int range in some error messages(CUBRIDSUS-8379)

Fix the problem causing the output of an incorrect 64-bit int value exceeding the 32-bit int range in some error messages.

Fix to write the notification message for the request for database access only once(CUBRIDSUS-9885)

Fix to output only one notification message for the access request to the database server. In the previous versions, two notification messages were output to indicate the start and end of the processing of the request.

Fix to remove the application client access information from the error log and write in the access log(CUBRIDSUS-5962)

When too much application client access information was written in the error log, it became difficult for the user to view the details of specific errors. To solve this problem, the application client access information was removed from the error log and is now written in the access log.

Send a wrong error instead of the error which has occurred in the db server when an INTERRUPT has occurred(CUBRIDSUS-9450)

When an INTERRUPT occurred, the error that had occurred in the DB server was sent as a different error. This problem has been fixed.

-495 error when many clients executed UPDATE for the Same table(CUBRIDSUS-10651)

When many clients simultaneously executed UPDATE for the same table and a query timeout frequently occurred, the -495 error ("Execute: Query execution failure #8560") would occur. This problem has been fixed.

However, when a specific server thread has failed to access a page for update several times due to load being concentrated, a -495 error would occur in the previous version. After the update, the -2 error ("Internal system failure: no more specific information is available.") occurs. This situation may occur when there is a large number of UPDATE requests for the same page within a short period. In this case, the user should retry the failed query.

All cursors are closed when there is an internal error in autocommit mode(CUBRIDSUS-10840)

Fix a problem in which all holdable cursors were closed by auto-rollback when an internal -20004 error (CAS_ER_STMT_POOLING) occurred in autocommit mode.

Wrong memory usage in the CAS log when the broker common application server (CAS) was restarted due to exceeded memory usage(CUBRIDSUS-10181)

Fix to output the correct memory usage of the process in the CAS log when a broker common application server (CAS) is restarted because its memory usage exceeded APPL_SERVER_MAX_SIZE of cubrid_broker.conf, the maximum allowed memory usage.

01/10 18:43:42.217 (0) CAS MEMORY USAGE (101M) HAS EXCEEDED MAX SIZE (101M)

Error log of the broker and the database server was not recreated after it was deleted(CUBRIDSUS-8351)

There was a problem in which the error log of the broker and the database server was not recreated after it was deleted. This has been fixed so that the deleted error log is recreated.

Broker was abnormally terminated when an application connected to the broker with a long broker name(CUBRIDSUS-9392)

The broker was abnormally terminated when an application connected to it with a long broker name (32 characters or longer). This was fixed by allowing broker names up to 63 characters and by preventing the broker from starting when its name is longer than 63 characters.

A message of normal service termination even when the service has not been normally terminated in Windows(CUBRIDSUS-10143)

When the "cubrid service stop" command was executed in Windows, the "++ cubrid service stop: success" message was output even when not all of the processes registered in services of cubrid.conf were not normally terminated. This problem has been fixed.

Incorrect message when the server was started while the server has been running in Windows(CUBRIDSUS-10216)

When it was attempted to start the server while the server was running in Windows, the "cubrid server start: success" message was output. This has been fixed and the "cubrid server is already running" message is output.

Wrong error if cub_master is not started when accessing the server through the broker of Windows version(CUBRIDSUS-10739)

When an application accessed the server through the broker of the Windows version while the cub_master process was not started, a wrong error occurred and CAS ran in an infinite loop. This problem has been fixed.

In previous versions of Windows, the error numbers below were returned.

  • On the first access attempt, -21112 (An IOException was caught during reading the inputstream) was returned.
  • On the second access attempt, -21013 (Cannot connect to a broker) was returned.

In previous and fixed versions of Linux and in the fixed version of Windows, the error number below is returned.

  • On the first and second access attempts, -353 (Failed to connect to database server, 'basic', on the following host(s): localhost) is returned.

Description to CUBRIDService, the Windows Service process(CUBRIDSUS-8917)

Add the following description to CUBRIDService, the Windows service process of CUBRID. This description can be checked in "Control Panel > System and Security > Management Tools > Services > CUBRIDService" of Windows.

Service to execute master,broker,database server and manager server processes for CUBRID.
Service start/stop menu is equal to the command of "cubrid service start/stop".
If you setup "startup type" of this service to "Disabled", you can't use "cubrid service" command.

Phenomenon preventing DB users from being edited in CUBRID Manager(CUBRIDSUS-9971)

In 9.0 Beta version, it was impossible to edit the password of the DB users or change the group on the CUBRID Manager. This has been fixed.

Cautions

New Cautions

DB volume of 9.1 version and 9.0 Beta version were not compatible(CUBRIDSUS-10147)

As the DB volume of 9.1 version and 9.0 Beta version are not compatible, a user upgrading CUBRID 9.0 Beta to version 9.1 should convert the existing DB volume to the DB volume of version 9.1 after installing CUBRID 9.1. For volume migration, the migrate_90beta_to_91 utility for version 9.1 is provided.

% migrate_90beta_to_91 <db_name>

For details, see Upgrade.

DB volume was not compatible between 9.1 version and the versions lower than 9.0 Beta version(CUBRIDSUS-5238)

As the DB volume of version 9.1 and versions lower than 9.0 Beta are not compatible, the user should migrate the data using cubrid unloaddb/loaddb. For more details, see Upgrade.
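A sketch of the unloaddb/loaddb path (database names are illustrative; see Upgrade for the exact procedure):

```
% cubrid unloaddb olddb
% cubrid createdb newdb
% cubrid loaddb -u dba -s olddb_schema -d olddb_objects -i olddb_indexes newdb
```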

Note

9.0 Beta version users should upgrade all drivers, brokers, and DB servers together when migrating the DB volume.

Use the CUBRID_CHARSET and CUBRID_MSG_LANG environment variables instead of CUBRID_LANG

Since version 9.1, the CUBRID_LANG environment variable is no longer used. To create or run a database, the CUBRID_CHARSET environment variable should be set because it determines the charset type. The CUBRID_MSG_LANG environment variable determines the language of utility messages and error messages; if it is not set, the setting of the CUBRID_CHARSET environment variable is used.
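A minimal sketch of the settings (the charset and language values are examples):

```shell
# Determines the charset used when creating or running the database
export CUBRID_CHARSET=en_US.utf8
# Language of utility and error messages; falls back to CUBRID_CHARSET when unset
export CUBRID_MSG_LANG=en_US
```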

Modify how to process an error for the array of the result of executing several queries at once in the CCI application(CUBRIDSUS-9364)

When executing several queries at once in a CCI application with the cci_execute_array function or the cci_execute_batch function, if an error occurred in at least one of the queries, versions 2008 R3.0 through 2008 R4.1 returned the error code of that query. From version 2008 R4.3 and version 9.1, the total number of queries is returned, and the error of each query is checked by using the CCI_QUERY_RESULT_* macros.

In versions before this modification, there was no way to know whether each query in the array succeeded or failed when an error occurred, so code like the following was required.

...
char *query = "INSERT INTO test_data (id, ndata, cdata, sdata, ldata) VALUES (?, ?, 'A', 'ABCD', 1234)";
...
req = cci_prepare (con, query, 0, &cci_error);
...
error = cci_bind_param_array_size (req, 3);
...
error = cci_bind_param_array (req, 1, CCI_A_TYPE_INT, co_ex, null_ind, CCI_U_TYPE_INT);
...
n_executed = cci_execute_array (req, &result, &cci_error);

if (n_executed < 0)
  {
    printf ("execute error: %d, %s\n", cci_error.err_code, cci_error.err_msg);

    for (i = 1; i <= 3; i++)
      {
        printf ("query %d\n", i);
        printf ("result count = %d\n", CCI_QUERY_RESULT_RESULT (result, i));
        printf ("error message = %s\n", CCI_QUERY_RESULT_ERR_MSG (result, i));
        printf ("statement type = %d\n", CCI_QUERY_RESULT_STMT_TYPE (result, i));
      }
  }
...

From the modified version, all of the queries are regarded as failed if an error occurs. If no error occurred, whether each query in the array succeeded is determined as shown below.

...
char *query = "INSERT INTO test_data (id, ndata, cdata, sdata, ldata) VALUES (?, ?, 'A', 'ABCD', 1234)";
...
req = cci_prepare (con, query, 0, &cci_error);
...
error = cci_bind_param_array_size (req, 3);
...
error = cci_bind_param_array (req, 1, CCI_A_TYPE_INT, co_ex, null_ind, CCI_U_TYPE_INT);
...
n_executed = cci_execute_array (req, &result, &cci_error);
if (n_executed < 0)
  {
    printf ("execute error: %d, %s\n", cci_error.err_code, cci_error.err_msg);
  }
else
  {
    for (i = 1; i <= 3; i++)
      {
        printf ("query %d\n", i);
        printf ("result count = %d\n", CCI_QUERY_RESULT_RESULT (result, i));
        printf ("error message = %s\n", CCI_QUERY_RESULT_ERR_MSG (result, i));
        printf ("statement type = %d\n", CCI_QUERY_RESULT_STMT_TYPE (result, i));
      }
  }
...

In java.sql.XAConnection interface, HOLD_CURSORS_OVER_COMMIT is not supported(CUBRIDSUS-10800)

Current CUBRID does not support ResultSet.HOLD_CURSORS_OVER_COMMIT in java.sql.XAConnection interface.

From 9.0, STRCMP behaves case-sensitively

In versions before 9.0, STRCMP did not distinguish uppercase from lowercase. From 9.0, it compares strings case-sensitively. To make STRCMP case-insensitive, use a case-insensitive collation (e.g.: utf8_en_ci).

-- In versions before 9.0, STRCMP works case-insensitively
SELECT STRCMP ('ABC','abc');
0

-- From 9.0, STRCMP distinguishes uppercase from lowercase when the collation is case-sensitive.
export CUBRID_CHARSET=en_US.iso88591

SELECT STRCMP ('ABC','abc');
-1

-- If the collation is case-insensitive, it does not distinguish uppercase from lowercase.
export CUBRID_CHARSET=en_US.iso88591

SELECT STRCMP ('ABC' COLLATE utf8_en_ci ,'abc' COLLATE utf8_en_ci);
0

Existing Cautions

Since the 2008 R4.1 version, the Default value of CCI_DEFAULT_AUTOCOMMIT has been ON(CUBRIDSUS-5879)

The default value for the CCI_DEFAULT_AUTOCOMMIT broker parameter, which affects the auto commit mode for applications developed with CCI interface, has been changed to ON since CUBRID 2008 R4.1. As a result of this change, CCI and CCI-based interface (PHP, ODBC, OLE DB etc.) users should check whether or not the application's auto commit mode is suitable for this.

From the 2008 R4.0 version, the options and parameters that use the unit of pages were changed to use the unit of volume size(CUBRIDSUS-5136)

The options (-p, -l, -s), which use page units to specify the database volume size and log volume size of the cubrid createdb utility, will be removed. Instead, the new options, added after 2008 R4.0 Beta (--db-volume-size, --log-volume-size, --db-page-size, --log-page-size), are used.

To specify the database volume size of the cubrid addvoldb utility, use the newly-added option (--db-volume-size) after 2008 R4.0 Beta instead of using the page unit. It is recommended to use the new system parameters in bytes because the page-unit system parameters will be removed. For details on the related system parameters, see the below.
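As a sketch, the size-unit options can be combined as follows (the sizes are examples):

```
% cubrid createdb --db-volume-size=512M --log-volume-size=256M --db-page-size=16K --log-page-size=16K testdb
```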

Be cautious when setting db volume size if you are a user of a version before 2008 R4.0 Beta(CUBRIDSUS-4222)

From the 2008 R4.0 Beta version, the default value of data page size and log page size in creating the database was changed from 4 KB to 16 KB. If you specify the database volume to the page count, the byte size of the volume may differ from your expectations. If you did not set any options, 100MB-database volume with 4KB-page size was created in the previous version. However, starting from the 2008 R4.0, 512MB-database volume with 16KB-page size is created.

In addition, the minimum size of the available database volume is limited to 20 MB. Therefore, a database volume less than this size cannot be created.

The change of the default value of some system parameters of the versions before 2008 R4.0(CUBRIDSUS-4095)

Starting from 2008 R4.0, the default values of some system parameters have been changed.

The default value of max_clients, which specifies the number of concurrent connections allowed by a DB server, and the default value of index_unfill_factor, which specifies the ratio of space reserved for future updates when creating an index page, have been changed. In addition, the byte-unit system parameters that replace the former page-unit parameters have larger defaults, so they use more memory than before.

Previous System Parameter    Added System Parameter      Previous Default Value  Changed Default Value (unit: byte)
max_clients                  -                           50                      100
index_unfill_factor          -                           0.2                     0.05
data_buffer_pages            data_buffer_size            100M (page size=4K)     512M
log_buffer_pages             log_buffer_size             200K (page size=4K)     4M
sort_buffer_pages            sort_buffer_size            64K (page size=4K)      2M
index_scan_oid_buffer_pages  index_scan_oid_buffer_size  16K (page size=4K)      64K

In addition, when a database is created using cubrid createdb, the minimum value of the data page size and the log page size has been changed from 1K to 4K.
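The byte-unit parameters above can be set in cubrid.conf as in this minimal sketch. The values shown are simply the new defaults listed in the table; tune them to your workload.

```
[common]
# Byte-unit parameters; use these instead of the removed *_pages parameters
data_buffer_size=512M
log_buffer_size=4M
sort_buffer_size=2M
index_scan_oid_buffer_size=64K

# Non-byte parameters whose defaults also changed
max_clients=100
index_unfill_factor=0.05
```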

Changed so that database services, utilities, and applications cannot be executed when the system parameter is incorrectly configured(CUBRIDSUS-5375)

Database services, utilities, and applications now refuse to run when a system parameter that is not defined in cubrid.conf or cubrid_ha.conf is configured, when the value of a system parameter exceeds its threshold, or when a page-unit system parameter and its byte-unit counterpart are configured at the same time.

Database fails to start if the data_buffer_size is configured with a value that exceeds 2G in CUBRID 32-bit version(CUBRIDSUS-5349)

In the CUBRID 32-bit version, if the value of data_buffer_size exceeds 2 GB, the database fails to start. Note that the configured value cannot exceed 2 GB in the 32-bit version because of the OS address-space limit.

Recommendations for controlling services with the CUBRID Utility in Windows Vista and higher(CUBRIDSUS-4186)

To control services with the cubrid utility on Windows Vista and higher, it is recommended to start the command prompt window with administrative privileges.

If you use the cubrid utility from a command prompt window that was not started with administrative privileges, you can still execute it with administrative privileges through the User Account Control (UAC) dialog box, but you will not be able to see the resulting messages.

The procedures for starting the command prompt window as an administrator in Windows Vista and higher are as follows:

  • Right-click [Start > All Programs > Accessories > Command Prompt].
  • Select [Run as administrator (A)]; a dialog box asking you to confirm the privilege elevation appears. Click [Yes] to start with administrative privileges.

The GLO class used in 2008 R3.0 or earlier is no longer supported(CUBRIDSUS-3826)

CUBRID 2008 R3.0 and earlier versions processed large objects with the Generalized Large Object glo class, but the glo class has been removed from CUBRID 2008 R3.1 and later versions. These versions support the BLOB and CLOB (hereafter LOB) data types instead. (See BLOB/CLOB Data Types for more information about the LOB data types.)

Users of the glo class are recommended to proceed as follows:

  • Save the GLO data to files, then modify applications and the DB schema so they no longer use GLO.
  • Migrate the database using the unloaddb and loaddb utilities.
  • Load the saved files into LOB data through the modified application.
  • Verify that the modified application operates normally.

For reference, if the cubrid loaddb utility loads a table that inherits the GLO class or has a GLO class type, it stops loading and displays the error message "Error occurred during schema loading."
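The migration step above can be sketched with the unload/load utilities. The database names and file names here are placeholders, and the full option list (users, index files, etc.) should be taken from the unloaddb/loaddb manual:

```
# Dump schema and data from the old database; unloaddb writes files
# such as olddb_schema and olddb_objects (names follow its output)
cubrid unloaddb olddb

# After removing GLO usage from the schema files, load them into a
# freshly created database
cubrid loaddb -u dba -s olddb_schema -d olddb_objects newdb
```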

With the discontinued support of GLO class, the deleted functions for each interface are as follows:

Interface  Deleted Functions
CCI        cci_glo_append_data, cci_glo_compress_data, cci_glo_data_size,
           cci_glo_delete_data, cci_glo_destroy_data, cci_glo_insert_data,
           cci_glo_load, cci_glo_new, cci_glo_read_data, cci_glo_save,
           cci_glo_truncate_data, cci_glo_write_data
JDBC       CUBRIDConnection.getNewGLO, CUBRIDOID.loadGLO, CUBRIDOID.saveGLO
PHP        cubrid_new_glo, cubrid_save_to_glo, cubrid_load_from_glo,
           cubrid_send_glo

Port configuration is required if the protocol between the master and server processes is changed, or if two versions are running at the same time(CUBRIDSUS-3564)

Because the communication protocol between a master process (cub_master) and a server process (cub_server) has been changed, the master process of CUBRID 2008 R3.0 or later cannot communicate with the server process of a lower version, and the master process of a lower version cannot communicate with a server process of 2008 R3.0 version or later. Therefore, if you run two versions of CUBRID at the same time by adding a new version in an environment where a lower version has already been installed, you should modify the cubrid_port_id system parameter of cubrid.conf so that different ports are used by the different versions.
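For example, to run both versions side by side, each installation's cubrid.conf could assign a distinct port. 1523 is the usual default for cubrid_port_id; the second value below is simply an arbitrary free port chosen for this sketch.

```
# cubrid.conf of the older installation
[common]
cubrid_port_id=1523

# cubrid.conf of the 2008 R3.0 or later installation
[common]
cubrid_port_id=1524
```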

Specifying a question mark when entering connection information as a URL string in JDBC(CUBRIDSUS-3217)

In earlier versions, when entering connection information as a URL string in JDBC, property information was applied even if you did not enter a question mark (?). From CUBRID 2008 R3.0, the URL syntax requires the question mark before the properties; if it is missing, an error is displayed. In addition, you must keep the colon (:) separators even when there is no username or password in the connection information.

URL=jdbc:CUBRID:127.0.0.1:31000:db1:::altHosts=127.0.0.2:31000,127.0.0.3:31000 – Error
URL=jdbc:CUBRID:127.0.0.1:31000:db1:::?altHosts=127.0.0.2:31000,127.0.0.3:31000 – Normal
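The rule can be illustrated with a small helper that assembles such a URL. CubridUrlExample and buildUrl are hypothetical names used only for this sketch; they are not part of the CUBRID JDBC driver.

```java
// Hypothetical sketch of assembling a CUBRID JDBC connection URL.
// The three ':' after the database name are the mandatory (empty)
// username and password separators; properties need a leading '?'.
public class CubridUrlExample {
    static String buildUrl(String host, int port, String db, String props) {
        String base = "jdbc:CUBRID:" + host + ":" + port + ":" + db + ":::";
        // Without the question mark, the driver reports an error.
        return (props == null || props.isEmpty()) ? base : base + "?" + props;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("127.0.0.1", 31000, "db1",
                "altHosts=127.0.0.2:31000,127.0.0.3:31000"));
    }
}
```

Running this prints the "Normal" form shown above, question mark included.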

Not allowed to include @ in a database name(CUBRIDSUS-2828)

If @ is included in a database name, it can be interpreted as specifying a host name. To prevent this, the cubrid createdb, cubrid renamedb, and cubrid copydb utilities no longer accept database names containing @.