Contents

CUBRID 9.0 Release Notes

Release Notes Information

This document contains information about CUBRID 9.0 Beta (Build Number 9.0.0.0478). For release notes of previous versions, see the CUBRID Release Notes website: http://release.cubrid.org/en.

For a detailed description of the version preceding CUBRID 9.0 Beta, see the CUBRID 2008 R4.1 release notes.

If you are using 9.0 Beta, we strongly recommend upgrading to 9.1. Because 9.0 Beta and 9.1 are not compatible, migrate the database volume with the "migrate_90beta_to_91 <db_name>" tool after upgrading. For 9.1 release notes, see CUBRID 9.1 Release Notes.

Overview

CUBRID 9.0 adds support for a variety of character sets from different languages. It also improves user convenience with many new SQL features, such as analytic functions, the MERGE statement, JOIN in DELETE/UPDATE statements, and the ENUM type. It supports function-based and filtered indexes and optimizes index skip scans. Partitioning, performance, and stability have been significantly improved in this version, and the CUBRID SHARD feature simplifies processing of large volumes of data. In SysBench benchmark tests, throughput and response time improved to more than three times those of previous versions, and performance in the default SELECT test improved by approximately 1.6 times. CUBRID 9.0 has also been stabilized by fixing many bugs and performance issues.

CUBRID 9.0 Release includes all the fixes in CUBRID 2008 R4.1 and its previous versions. The key features of CUBRID 9.0 are shown below:

Globalization Support

Added character sets, collations, calendars, and number notations for various languages, including Korean, English, Japanese, Chinese, Vietnamese, Cambodian, Turkish, German, Spanish, French, and Italian, to provide a database environment suitable for localization.

Middleware Functionality for Database Sharding

Provide the CUBRID SHARD middleware, which facilitates access to databases horizontally partitioned across multiple machines. CUBRID SHARD presents databases spread across multiple machines as a single database, so applications can use them transparently without connecting to each individual database.

Analytic Functions Using the OVER Clause

Add analytic functions using a new analytic clause called OVER to get a variety of statistics for a specific row set.

MERGE Statement Used to Merge INSERT, UPDATE, and DELETE Queries

Add the MERGE statement, which inserts, updates, or deletes data in a single target table based on one or more source tables.

JOIN for the UPDATE and DELETE Statements

This version supports JOIN in the UPDATE and DELETE statements.

ENUM Type

Add the ENUM type that is defined with enumeration type character string constants.

A Variety of Index Functionalities, Including Function-based and Filtered Indexes, and the Index Skip Scan

Provide a function-based index feature that includes function expressions in columns comprising an index. This version of CUBRID also adds a filtered index feature that includes search conditions in an index. The index skip scan optimization allows users to use a multi-column index from its second column even when the first column is not specified.

Stability and Performance of the Partitioned Table Feature and PROMOTE Statement

Fundamentally enhance the partitioning feature for better stability and performance. Performance has been improved by pruning partitions during query execution rather than during compilation. The primary key and unique indexes are now maintained per partition rather than for the entire partitioned table. There have been many other stability and performance improvements as well.

Also, the PROMOTE statement that is used to promote a specific partition from a partitioned table to a general table has been added.

HA Stability and Operating Convenience

Fix many stability issues, such as inconsistent HA replication of data and schema. This version provides separate control of the HA management process and makes dynamic addition and deletion of HA nodes easier.

Pseudo Column in Default

If you specify system functions, such as SYSDATE and USER, as default attributes, the result value is evaluated at the time the query is executed.

Cursor Holdability

Cursor holdability has become the default setting: the cursor is maintained after a commit, so DML statements can be committed while the result set is kept open.

VALUES Clause

Add the VALUES clause, which outputs the rows specified in its expressions, so that a table consisting mainly of constant values can be expressed.

Improved Error Messages

Enhance error messages to make it easier to find the location of an error. The causes of previously unspecific error messages are now easier to identify.

Fix or Improve Over 1,800 Bugs and Functionalities

Fix or improve bugs and functionalities, large and small, found in SQL functions, SQL statements, query plans, indexes, triggers, drivers, and utilities. Issues found in query plans, index scans, locks, repeated executions of insertion and deletion, memory leaks, and disk usage have been fixed or improved, as have issues related to the replication of HA schema and data.

For more information, see What's New in CUBRID 9.0.

New Features

SQL

MERGE statement (CUBRIDSUS-7080)

Add the MERGE statement that is used to select rows from one or more sources and update or insert them to a single table or view. You can specify the condition to determine whether to update or insert rows to the target table or view.

MERGE INTO target_table tt USING source_table st
ON (st.a=tt.a AND st.b=tt.b)
    WHEN MATCHED THEN UPDATE SET tt.c=st.c
    WHEN NOT MATCHED THEN INSERT VALUES (st.a, st.b, st.c);

Analytic functions using OVER clause (CUBRIDSUS-6112)

Add analytic functions that aggregate values over sets of result rows. The added analytic functions are as follows:

AVG, COUNT, MAX, MIN, RANK, ROW_NUMBER, STDDEV, STDDEV_POP, STDDEV_SAMP, SUM, VAR_POP, VAR_SAMP, VARIANCE, and DENSE_RANK.

To obtain a variety of statistics from a specific row set, a new analytic clause called OVER is used together with some existing aggregate functions.

The following example outputs, for countries in demodb whose nation_code starts with 'AU', the number of gold medals acquired each year and the average of accumulated gold medals up to each year.

SELECT host_year, nation_code, gold, AVG(gold) OVER (PARTITION BY nation_code ORDER BY host_year) avg_gold
FROM participant WHERE nation_code LIKE 'AU%';

UPDATE JOIN and DELETE JOIN statements (CUBRIDSUS-5646)

Add the UPDATE JOIN and DELETE JOIN statements, which update or delete rows in one or more tables using a join.

UPDATE a_tbl INNER JOIN b_tbl ON a_tbl.id=b_tbl.rate_id
SET a_tbl.charge = a_tbl.charge * (1 + b_tbl.rate)
WHERE a_tbl.charge > 900.0;

DELETE a, b FROM tbl1 a INNER JOIN tbl2 b
WHERE a.idx=b.p_idx AND b.p_idx=5 AND b.flag=1;

ENUM type

Add the ENUM type that is defined with enumeration type character string constants.

CREATE TABLE tbl (
    color ENUM('red', 'yellow', 'blue')
);

INSERT INTO tbl VALUES ('yellow'), ('red'), (2), ('blue');
SELECT color FROM tbl ORDER BY color ASC;

color
======================
red
yellow
yellow
blue

ORDER BY in the UPDATE statement (CUBRIDSUS-6605)

Make it possible to update rows in the order specified by an ORDER BY clause in the UPDATE statement. When the UPDATE statement below is executed, the value of column 'b' is updated in descending order of the value of column 'a'.

CREATE TABLE t1 (a int, b int);
INSERT INTO t1 VALUES (1,1), (2,2),(3,3),(4,4);
SET @tmp=100;
UPDATE t1 SET b=(@tmp:=@tmp+1) ORDER BY a DESC;

VALUES clause (CUBRIDSUS-7553)

Add the VALUES clause, which creates a temporary constant table instead of an actual table. Although you can get the same result by chaining SELECT statements with UNION ALL, the VALUES clause is much simpler. It is usually joined with another table in SELECT, UPDATE, and DELETE queries.

VALUES (1 AS col1, 'first' AS col2), (2, 'second'), (3, 'third'), (4, 'fourth');

SELECT 1 AS col1, 'first' AS col2
UNION ALL
SELECT 2, 'second'
UNION ALL
SELECT 3, 'third'
UNION ALL
SELECT 4, 'fourth';

SHOW CREATE TABLE statement (CUBRIDSUS-6996)

Add the SHOW CREATE TABLE statement used to output the SQL statement that creates a table.

SHOW CREATE TABLE tbl;
TABLE CREATE TABLE
============================================
'tbl' 'CREATE TABLE [tbl] ([id] INTEGER DEFAULT 0 NOT NULL, [phone] CHARACTER VARYING(10), CONSTRAINT [pk_tbl_id] PRIMARY KEY ([id]))'

Partition PROMOTE statement (CUBRIDSUS-7629)

Add the PROMOTE statement, which promotes partitions specified by the user from a partitioned table to general tables.

CREATE TABLE t(i int) PARTITION BY LIST(i) (
    partition p0 values in (1, 2, 3),
    partition p1 values in (4, 5, 6),
    partition p2 values in (7, 8, 9),
    partition p3 values in (10, 11, 12)
);

ALTER TABLE t PROMOTE PARTITION p1, p2;

ALTER statement to change the owner of tables, views, triggers and stored procedures (CUBRIDSUS-7922)

Add the ALTER statement to specify the owner of tables, views, triggers, and stored procedures.

ALTER TABLE test_tbl OWNER TO PUBLIC;
ALTER VIEW test_view OWNER TO PUBLIC;
ALTER TRIGGER test_trigger OWNER TO PUBLIC;
ALTER FUNCTION test_function OWNER TO PUBLIC;
ALTER PROCEDURE test_procedure OWNER TO PUBLIC;

LEVEL column for CONNECT BY clause (CUBRIDSUS-7526)

Make it possible to use a LEVEL column in a CONNECT BY clause.

SELECT LEVEL FROM db_root CONNECT BY LEVEL <= 10;

OFFSET keyword for a LIMIT clause (CUBRIDSUS-7439)

Make it possible to use the OFFSET keyword in a LIMIT clause. The following two queries return the same result.

SELECT * FROM tab LIMIT 2, 1;
SELECT * FROM tab LIMIT 1 OFFSET 2;

INET_ATON and INET_NTOA functions (CUBRIDSUS-8230)

Add the INET_ATON and INET_NTOA functions. The INET_ATON function returns the numeric value of an IPv4 address string, while the INET_NTOA function returns the IPv4 address string for a numeric value.

SELECT INET_ATON('192.168.0.10');
inet_aton('192.168.0.10')
============================
3232235530
SELECT INET_NTOA(3232235530);
inet_ntoa(3232235530)
======================
'192.168.0.10'

Globalization support

Character set and collation of multiple countries (CUBRIDSUS-7629)

Support the locale (character set and collation) of multiple countries to support multinational languages. The locales added to CUBRID are: English (en_US), German (de_DE), Spanish (es_ES), French (fr_FR), Italian (it_IT), Japanese (ja_JP), Cambodian (km_KH), Korean (ko_KR), Turkish (tr_TR), Vietnamese (vi_VN), and Chinese (zh_CN).

Globalization related system parameters have also been added.

Parameter Name Description
intl_check_input_string Sets whether to check that input strings are valid for the character set in use. Default value: no
string_max_size_bytes Defines the maximum size in bytes of a character string argument in a string function or operation. Default value: 1048576 bytes
unicode_input_normalization Sets whether to store input Unicode in composed form. Default value: yes
unicode_output_normalization Sets whether to output stored Unicode in decomposed form. Default value: no
intl_date_lang Sets the localized date/time format used by functions that convert a string into a date/time value. Default value: the CUBRID_LANG environment variable
intl_number_lang Sets the number format applied to input or output strings in functions that convert strings into numbers or numbers into strings. Default value: the CUBRID_LANG environment variable
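As a sketch, these parameters can be set in cubrid.conf like other system parameters (the locale values below are illustrative, not defaults):

```ini
# cubrid.conf fragment (illustrative values)
intl_date_lang=ko_KR
intl_number_lang=de_DE
unicode_input_normalization=yes
```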

The single_byte_compare and intl_mbs_support parameters are no longer used.

Sharding

Middleware for database sharding (CUBRIDSUS-4996)

Provide CUBRID SHARD, middleware for easy access to databases horizontally partitioned across multiple machines. The features of CUBRID SHARD are as follows:

  • Implemented as middleware to minimize changes to existing applications, CUBRID SHARD enables transparent access to sharded databases through the commonly used JDBC interface and CCI, the CUBRID C API.
  • A hint added to an existing query indicates the shard on which the query should be executed.
  • MySQL, as well as CUBRID, can be used as the backend shard database.
  • Guarantees the characteristics of transactions executed within a single shard.

Index

Filtered index (CUBRIDSUS-6112)

Support filtered indexes, which include a search condition; because only the part of the table that satisfies the condition is indexed, this is also called a partial index. Since a filtered index indexes only the rows that satisfy the condition, the cost of index updates is reduced and search performance improves thanks to a smaller search range.

CREATE UNIQUE INDEX bugs_per_dev ON bugs (Author) WHERE Closed = 0;

SELECT * FROM bugs
WHERE Author = 'madden' AND Subject LIKE '%fopen%' AND Closed = 0
USING INDEX bugs_per_dev;

max_filter_pred_cache_entries parameter has also been added to set the maximum number of filtered index expressions to be cached in the memory. The default value is 1000.

Function-based index (CUBRIDSUS-6112)

Support function-based indexes, which contain the result of applying a function to column values. They are used to sort or search data by a function expression.

CREATE INDEX idx_upper_post ON posts_table(UPPER(keyword));

Index skip scan (CUBRIDSUS-5646)

Add the index skip scan (ISS) optimization, which allows the later columns of a multi-column index to be used when a condition (usually =) is given not on the first column but on a following column of the index.

CREATE INDEX idx_t_gen_name ON t (gender, name);
SELECT * FROM t WHERE name = 'SMITH';

Driver

[JDBC] Add getDatabaseMajorVersion() and getDatabaseMinorVersion() (CUBRIDSUS-7530)

The getDatabaseMajorVersion() and getDatabaseMinorVersion() methods of the JDBC DatabaseMetaData interface return the major and minor version numbers of the database, respectively.

[CCI] Function to convert strings into escaped strings (CUBRIDSUS-8940)

Add cci_escape_string(), which converts a string into an escaped string for use in CUBRID queries.

[CCI] Connection functions that take an error message output buffer as an argument (CUBRIDSUS-5633)

Add cci_connect_ex() and cci_connect_with_url_ex(), connection functions that take an error message output buffer as an argument.

Previous connection functions returned only a single error code when an error occurred, so users could not determine the specific error. After this modification, users can check specific errors through the error message buffer.

T_CCI_ERROR error;
connection = cci_connect_ex ("localhost", 33000, "demodb", "dba", "pwd", &error);
connection = cci_connect_with_url_ex ("cci:cubrid:localhost:33000:demodb:::", "dba", "pwd", &error);

System Catalog

Add and delete columns in the HA catalog table (CUBRIDSUS-5456)

Add columns to the db_ha_apply_info table to prevent inconsistent replication due to re-reflection of replication logs and provide more detailed information on the replication log reflection status.

Added Columns Description
committed_lsa_pageid Page id of the last applied commit log lsa. Logs before last_committed_lsa are not re-applied when applylogdb is restarted.
committed_lsa_offset Offset of the last applied commit log lsa. Logs before last_committed_lsa are not re-applied when applylogdb is restarted.
committed_rep_pageid Page id of the last applied replication log lsa. Used to check whether applying replication has been delayed.
committed_rep_offset Offset of the last applied replication log lsa. Used to check whether applying replication has been delayed.
append_lsa_page_id Page id of the last lsa of the replication log at the time replication was last applied. Stores the append_lsa of the replication log header being processed by applylogdb. Used to check whether applying the replication log was delayed.
append_lsa_offset Offset of the last lsa of the replication log at the time replication was last applied. Stores the append_lsa of the replication log header being processed by applylogdb. Used to check whether applying the replication log was delayed.
eof_lsa_page_id Page id of the eof lsa of the replication log at the time replication was last applied. Stores the eof_lsa of the replication log header being processed by applylogdb. Used to check whether applying the replication log was delayed.
eof_lsa_offset Offset of the eof lsa of the replication log at the time replication was last applied. Stores the eof_lsa of the replication log header being processed by applylogdb. Used to check whether applying the replication log was delayed.
final_lsa_pageid Page id of the log lsa last processed by applylogdb. Used to check whether applying replication has been delayed.
final_lsa_offset Offset of the log lsa last processed by applylogdb. Used to check whether applying replication has been delayed.
required_page_id Page id of the smallest log page that must not be deleted by the log_max_archives parameter, and the page at which applying replication begins.
required_page_offset Offset of the log page at which applying replication begins.
log_commit_time Time at which the last commit log was applied.

The following columns have been deleted.

Deleted Columns Description
page_id Page id of the replication log committed in the slave database
offset Offset of the replication log committed in the slave database
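As a rough sketch, the new columns can be queried to gauge replication delay (this assumes the db_name column that identifies each replicated database in db_ha_apply_info):

```sql
-- Compare the last applied commit log lsa with the log lsa
-- last processed by applylogdb to estimate replication delay.
SELECT db_name, committed_lsa_pageid, final_lsa_pageid, log_commit_time
FROM db_ha_apply_info;
```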

Add columns to the index catalog tables and views

With the new features of filtered index and function-based index, columns have been added to the following catalog tables and views.

The following columns have been added to the _db_index catalog table.

Added Columns Description
filter_expression Conditions of a filtered index
have_function 1 if the index is function-based; otherwise 0

The following columns have been added to the DB_INDEX catalog view.

Added Columns Description
filter_expression Conditions of a filtered index
have_function 'YES' if the index is function-based; otherwise 'NO'

The following column has been added to the _db_index_key catalog table.

Added Column Description
func The function expression of the function-based index

The following column has been added to the DB_INDEX_KEY catalog view.

Added Column Description
func The function expression of the function-based index

Add collation tables and views

The following tables and views have been added for the collation function for globalization.

The _db_collation table has been added.

Added Columns Description
coll_id Collation ID
coll_name Collation name
charset_id Character set ID
built_in Whether the collation is included when installing the product (0: not included, 1: included)
expansions Whether to support expansion (0: not supported, 1: supported)
contractions Whether to support contraction (0: not supported, 1: supported)
checksum Checksum of the collation file
uca_strength Weight strength

DB_COLLATION view has been added.

Added Columns Description
coll_id Collation ID
coll_name Collation name
charset_name Character set name
is_builtin Whether the collation is included when installing the product
has_expansions Whether expansions are included
contractions Whether contractions are included
uca_strength Weight strength (NOT APPLICABLE, PRIMARY, SECONDARY, TERTIARY, QUATERNARY, IDENTITY, UNKNOWN)
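For example, the available collations can then be listed through the new view (a minimal sketch using only the columns described above):

```sql
SELECT coll_id, coll_name, charset_name, uca_strength
FROM db_collation;
```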

Configuration

Session parameter concept to some of client/server system parameters (CUBRIDSUS-8193)

Added the session parameter concept for some client/server system parameters. If the value of such a parameter is changed on either the client or the server, the same value is applied to both. The added session parameters are:

default_week_format, string_max_size_bytes, return_null_on_function_errors, alter_table_change_type_strict, plus_as_concat, compat_numeric_division_scale, intl_number_lang, and intl_date_lang
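A sketch of changing one of these parameters at runtime (the value is illustrative; SET SYSTEM PARAMETERS is the CUBRID statement for changing parameter values):

```sql
-- Change the parameter value; as a session parameter,
-- the same value is applied on both the client and server side.
SET SYSTEM PARAMETERS 'string_max_size_bytes=2097152';
```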

Monitoring

Multi-range optimization frequency to the execution statistics information (CUBRIDSUS-6163)

Added the number of multi-range optimizations to the execution statistics information, which can be checked through the cubrid statdump utility or the SHOW EXEC STATISTICS ALL statement.

Behavioral Changes

SQL

Must specify the name when adding, deleting or changing indexes (CUBRIDSUS-6112)

The index name must now always be specified when adding, deleting, or changing an index. An error occurs if the index name is omitted.
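For example (hypothetical table, column, and index names; the point is that the index name can no longer be omitted):

```sql
CREATE INDEX idx_tbl_col ON tbl (col);   -- the index name is now mandatory
DROP INDEX idx_tbl_col ON tbl;           -- the name must also be given here
```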

Get result values based on the time of query execution when functions, such as SYSDATE and USER, are specified as the default attribute of the column (CUBRIDSUS-4378)

In the previous version, when the SYSTIMESTAMP, SYSDATE, SYSDATETIME, and USER functions were specified as a DEFAULT constraint value, the DEFAULT value was fixed to the result value of the function based on when the table was created. In the new version, the result value will be calculated whenever a query is executed.

CREATE TABLE t (ID int, col TIMESTAMP DEFAULT SYSTIMESTAMP);
ALTER TABLE t add column (uid STRING DEFAULT USER);
INSERT INTO t(ID) VALUES(1); -- The col value will be a result value at the point of query execution.

Disallow date-type string arguments where the MINUTE, HOUR, SECOND, and TIME_TO_SEC functions expect a time-type string (CUBRIDSUS-7535)

Where a time-type character string is expected as an input argument, as in the MINUTE, HOUR, SECOND, and TIME_TO_SEC functions, a date-type character string such as "YYYY-MM-DD" is no longer allowed.

// The queries below are not allowed after modification.
SELECT TIME_TO_SEC('2010-01-01');
CREATE TABLE foo(col TIME DEFAULT '2000-01-01');

Error when time invalid as time type value is bound (CUBRIDSUS-7159)

If an invalid time, such as "00:00:-1", was bound as a TIME type value, the previous version accepted it as "00:00:00", ignoring the "-1". This version has been changed to output an error in this case.

Return NEXTVAL from "set value" when the starting value of the serial is changed (CUBRIDSUS-8157)

If the starting value of a serial is changed, NEXTVAL previously returned values starting from "set value + 1". The new version returns values starting from the set value.

ALTER SERIAL s1 START WITH 10;
SELECT s1.NEXTVAL;
10

Store the value up to the specified length of the character type when the length of altered column from numeric to character exceeds the length of numeric value (CUBRIDSUS-8009)

Previously, when ALTER TABLE ... CHANGE COLUMN ... changed a numeric column into a character column shorter than the stored numeric values, values exceeding the specified length were stored as empty strings. Now such values are stored truncated to the specified length of the character type.

CREATE TABLE t1 (i1 INT);
INSERT INTO t1 VALUES (1),(-2147483648),(2147483647),(-2147483648),(2147483647);
ALTER TABLE t1 CHANGE i1 s1 CHAR(4);

ER_INTERRUPTED error when interrupt occurs during index scan in descending order (CUBRIDSUS-7316)

In the previous version, if an interrupt occurs during index scan in descending order, ER_DESC_ISCAN_ABORTED error was returned. This has been changed to return the ER_INTERRUPTED error.

Change display of string types in "SHOW COLUMNS FROM" statement (CUBRIDSUS-8533)

In "SHOW COLUMNS FROM" statement, character strings that were shown as STRING(n) and VARBIT(n) have been changed to VARCHAR(n) and BIT VARYING(n) respectively.

In the "SHOW COLUMNS" statement, remove single quotation marks from the output of the default value of specific types (CUBRIDSUS-5921)

When SHOW COLUMNS is executed to output the table information, single quotation marks were included in the output of the DEFAULT value of CHAR or DATETIME. The new version has been changed to exclude the single quotation marks.

Fix being able to reconnect with an incorrect password after connecting to the database as DBA from an application (CUBRIDSUS-7192)

In the previous version, when you connected to a DB as a DBA from applications and then tried to reconnect to it as a DBA or another user, you were able to connect to it even with an incorrect password. This problem has been fixed.

Change the action of the login() method (CUBRIDSUS-6307)

In the previous version, if you ran the CSQL interpreter as DBA, you could execute login() repeatedly as other users without entering a password. In the new version, once you have executed login() as a user other than DBA, you can no longer execute login() as another user without entering that user's password.

% csql -u dba demodb
csql> CALL login ('test1', '') ON CLASS db_user; -- Allowed without a password because dba executes login() as test1.
csql> CALL login ('test2', '') ON CLASS db_user; -- Not allowed without a password because test1, not dba, executes login() as test2.

Driver

[JDBC, CCI] Change cursor holdability to be enabled by default (CUBRIDSUS-8609)

When a SELECT query is executed in JDBC or CCI, cursor holdability is now enabled by default. Even when a commit is executed during a cursor fetch, the cursor is maintained and the fetch can continue. You must close the cursor after using it.

[JDBC] Change to return a NUMERIC type name as NUMERIC instead of DECIMAL (CUBRIDSUS-8387)

In the previous version, the DatabaseMetaData.getColumns() method returned NUMERIC type names as DECIMAL. Now it returns them as NUMERIC.

// In earlier versions, an error such as "Caused by: org.hibernate.HibernateException: Wrong column type in mytbl_map for column col2. Found: decimal, expected: numeric(19,0)" occurred when a NUMERIC column was specified while mapping entities with Hibernate.

@ManyToMany
@JoinTable(name="mytbl",joinColumns={@JoinColumn(name="col1", columnDefinition="varchar(255)")},inverseJoinColumns={@JoinColumn(name="col2", columnDefinition="numeric(19,0)")})

private Set<MyGroup> accessMyGroups;

[JDBC] Change JAVA return type of the getColumnClassName() method for NUMERIC type columns (CUBRIDSUS-7532)

Change the ResultSetMetaData.getColumnClassName() method to return java.math.BigDecimal instead of the previous java.lang.Double for NUMERIC type columns.

[CCI] Change the unit of lock timeout value returned by the cci_get_db_parameter() function to millisecond (CUBRIDSUS-7538)

Change the unit of the lock timeout value returned by the cci_get_db_parameter() function from second to millisecond.

[CCI] Remove the autocommit attribute used for the URL character string of the cci_connect_with_url() function (CUBRIDSUS-7306)

Remove the autocommit attribute used for the URL character string of the cci_connect_with_url() function.

Change of Default Settings

Change replica nodes to always delete archive log (CUBRIDSUS-8556)

Previously, to delete archive logs on replica nodes you had to set the system parameter force_remove_log_archives to yes; otherwise unnecessary archive logs accumulated and could cause problems. CUBRID 9.0 Beta has been changed so that replica nodes always delete archive logs regardless of the force_remove_log_archives setting.

Set the initial value of the system parameter for the maximum number of archive log files to 0 at installation (CUBRIDSUS-6603)

In installing CUBRID, "log_max_archives=0" has been added to cubrid.conf.

If the value of log_max_archives is 0, archive log files are not kept and do not occupy disk space; however, when a media failure occurs, it may be impossible to restore the database to the desired point in time. To allow restoration after a media failure, set this parameter value appropriately for your backup cycle.
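For example, to keep a bounded number of archive logs instead of none (a cubrid.conf fragment; the value is illustrative and should match your backup cycle):

```ini
# cubrid.conf fragment (illustrative value)
log_max_archives=5
```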

Modify the memory usage of the replication log apply process not to exceed 500 MB (CUBRIDSUS-6068)

If the memory usage of the replication log apply process exceeded 500 MB in an HA environment, replication inconsistency could occur. Its memory usage is now limited to 500 MB. Note that if you set the ha_apply_max_mem_size value in cubrid_ha.conf to more than 500 in the previous version, you must change it to 500 or less after upgrading to 9.0 Beta or higher.

Other

Change the range of error codes (CUBRIDSUS-7666)

Changed the range of error codes for CUBRID application server (CAS), broker server, CCI driver and JDBC driver. CAS uses error codes from -10000 to -10999, broker servers from -11000 to -11999, CCI from -20000 to -20999, and JDBC from -21000 to -21999.

Change the output format of CAS and lockdb utility when the lock_timeout value is -1 or 0 (CUBRIDSUS-8915)

When the value of the system parameter lock_timeout is -1 (infinite wait) or 0 (no wait), the output format of CAS log and lockdb utility has been changed simply to output "Infinite wait" and "No wait" respectively.

Improvements and Fixes

Performance and Resource

Change the UPDATE statements specifying different constants in a SET clause not to be cached separately to query plan cache (CUBRIDSUS-8511)

In the previous version, if the constant value specified in the SET clause of UPDATE statements was different, the queries were regarded differently at the query plan cache despite having the same pattern. The new version allows a single pattern to be maintained by automatically replacing these constants with host variables.
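For example, the following two statements now share one cached plan, because the differing constants are replaced internally with host variables (the table and column names are hypothetical):

```sql
UPDATE tbl SET charge = 10 WHERE id = 1;
UPDATE tbl SET charge = 20 WHERE id = 1;
```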

Change the queries of the same pattern with ORDER BY and LIMIT clauses not to be cached separately as different plan to the query plan cache (CUBRIDSUS-8813)

Queries of the same pattern with ORDER BY and LIMIT clauses are now stored in the query plan cache as the same plan despite different constant values.

Improve the performance of the execution of the INSERT ON DUPLICATE KEY UPDATE statement (CUBRIDSUS-8287)

Fix a problem that slowed execution of the INSERT ON DUPLICATE KEY UPDATE statement when the target table contains two or more unique keys, including the primary key.

CREATE TABLE x (a INT PRIMARY KEY, b INT, c INT, d INT, UNIQUE(b), UNIQUE(c));
CREATE SERIAL s;

INSERT INTO x VALUES (s.NEXT_VALUE, 0, 0, 0) ON DUPLICATE KEY UPDATE d = d+1;

Improve the performance of lock escalation (CUBRIDSUS-5698)

Improve the performance of lock escalation, which converts record locks into a table lock when the number of record locks reaches a certain level. In a test inserting one million records into a table with 100 partitions, with the lock_escalation parameter set to 5000, the improved lock escalation was 3.5 times faster than before.

Fix inconsistent data on rollback related to key locking, and improve DML performance (CUBRIDSUS-7080)

Fix a data inconsistency that could occur when an INSERT of a row with an index was rolled back, by modifying the key locking method; the performance of INSERT, DELETE, and SELECT on rows with indexes has also been improved.

Improve the performance of the execution of the UNION ALL query statement (CUBRIDSUS-8130)

Improve performance by reusing, rather than copying, the intermediate result rows of the earlier query statements when generating the result of a UNION ALL query.

// In the example below, the more tables are combined with UNION ALL, the greater the improvement.
SELECT * FROM t1 UNION ALL SELECT * FROM t2;

It took a long time to finish CREATE when DROP and CREATE tables were invoked repeatedly (CUBRIDSUS-7288)

Previously, the time to finish CREATE would increase when tables were dropped and created repeatedly. This problem has been fixed. For reference, in the previous version this problem could be avoided by specifying the REUSE_OID option, as in the example below.

CREATE TABLE reuse_tbl (a INT PRIMARY KEY) REUSE_OID

The execution time was gradually increasing when INSERT and DELETE were executed repeatedly (CUBRIDSUS-7654)

Fixed a problem in which the execution time gradually increased when INSERT and DELETE were executed repeatedly.

The access performance, including INSERT, markedly decreased after frequent execution of TRUNCATE statements (CUBRIDSUS-6499)

Fix a problem in which the INSERT speed markedly decreased after frequent execution of TRUNCATE statements.

Memory leak when UNION found only one record exceeding the page size of the database (CUBRIDSUS-7158)

Fix a memory leak that occurred when a UNION query found only one overflow record exceeding the database page size. UNION ALL queries were not affected.

Volume usage of INSERT increased sharply as duplicate record values accumulated in a single index key (CUBRIDSUS-8375)

Fix a problem in which, when overflow OID records were generated because of many duplicate values in a single index key, volume usage increased rapidly because keys smaller than the relevant key were always inserted into a new page.

Some space was unable to be reused after defining two or more foreign keys on the same column and dropping the table (CUBRIDSUS-8256)

Fixed a problem in which some space could not be reused after defining two or more foreign keys differing only in name on a single column and then dropping the table.

CREATE TABLE foo (a INT, PRIMARY KEY (a));
CREATE TABLE bar (a INT,
    CONSTRAINT con1 FOREIGN KEY(a) REFERENCES foo (a),
    CONSTRAINT con2 FOREIGN KEY(a) REFERENCES foo (a));

-- INSERT records
...
DROP TABLE bar;
DROP TABLE foo;

Memory usage of the server process increased sharply when a large number of records were inserted through the INSERT ... SELECT statement (CUBRIDSUS-8736)

Fix a problem in which the memory usage of the server process increased rapidly when a large number of records (e.g., over 2 million) were inserted using the INSERT ... SELECT statement.

Error when a CONCAT_WS function using a query or a function as an argument was entered as the value of the INSERT statement (CUBRIDSUS-6206)

Fixed the "ERROR: Cannot evaluate ' concat_ws('a', cast( SYS_DATE as varchar))'" error that occurred when a CONCAT_WS function using a query or a function as an argument was entered as the value of the INSERT statement.

INSERT INTO tbl VALUES (1,concat_ws('a',SYS_DATE()));

Took a long time to execute a query with a nested subquery in the OR clause (CUBRIDSUS-6031)

Fixed a problem in which it took a long time to execute a query with a subquery in the OR clause.

SELECT col2 FROM tab0
WHERE (A AND B) OR (col3 IN (SELECT i FROM t WHERE X AND Y OR Z AND W) AND D);

DELETE FROM ALL statement was unable to delete the LOB file (CUBRIDSUS-5596)

Fixed an error in which, when deletion was executed with regard to the inheritance hierarchy through the DELETE FROM ALL statement, the LOB file of the relevant table and that of the tables that inherit it were not deleted.

DELETE FROM ALL parent_tbl;

Optimize the temporary volume usage in creating an index (CUBRIDSUS-5528)

Modify index creation not to use more temporary volume than necessary, by returning pages that are no longer used during the process.

Optimize the temporary temp volume size (CUBRIDSUS-5639)

Modify not to create a volume larger than necessary when adding a temporary temp volume.

Generic volume was created under certain situations despite free space on the index volume (CUBRIDSUS-5595)

Previously, when the free space of the index volume file with the largest free space was smaller than 25% of the entire space required to create an index, the index volume was not used but the generic volume was created. This problem has been fixed.

File I/O sync. error message was outputted erroneously even in normal situations in the background archive log file (CUBRIDSUS-8163)

Fix a problem in which the file I/O sync error message (error code: -599) was outputted for the background archive log file even in normal situations:

An I/O error occurred while synchronizing state of volume "/home/cubrid/database/testdb/testdb_lgar_t".... Bad file descriptor

UPDATE statement of other transactions waited long due to the execution of the UPDATE STATISTICS statement (CUBRIDSUS-6981)

Modify the UPDATE STATISTICS statement not to hold the latch on index pages during its execution, so that UPDATE statements of other transactions do not wait long.

Operators and SQL Functions

Support YYYYMMDDH format as an input argument in the Date/Time function (CUBRIDSUS-8622)

Modify to support the YYYYMMDDH format as an input argument in the Date/Time functions, such as TIME and TO_DATETIME.

SELECT TIME('1104209');
time('1104209')
======================
'09:00:00'
SELECT TO_DATETIME('1104209','YYMMDDH');
to_datetime('1104209', 'YYMMDDH', 'en_US')
=============================================
09:00:00.000 AM 04/20/2011

ADDTIME function returned incorrect results (CUBRIDSUS-8568)

Fix a problem in which the ADDTIME function returned incorrect results.

SELECT ADDTIME('2012-02-02','9:9:9');

Overflow error occurred when the % operation, BIT_AND function, or BIT_OR function was executed on a column storing the minimum value of the INT type (CUBRIDSUS-6203)

Fix a problem in which the "ERROR: Overflow occurred in ..." error occurred when the % operation, BIT_AND function, or BIT_OR function was executed on a column storing -2147483648, the minimum value of the INT type.

INSERT INTO tbl VALUES (-2147483648);
SELECT i%1009 FROM tbl;
SELECT BIT_AND(i) FROM tbl;

TRIM function outputted incorrect results (CUBRIDSUS-6591)

Fix an error in which the TRIM SQL function deleted one more character than the string specified to be deleted. As in the following example, when 'foo' is trimmed from 'foook', 'ok' should be outputted.

SELECT TRIM('foo' FROM 'foook');

Only lowercase table names were recognized as an input argument of the INDEX_CARDINALITY function (CUBRIDSUS-6264)

Fix an error in which only lowercase table names were recognized as an input argument of the INDEX_CARDINALITY function.
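
For example, both statements below now work (assuming a hypothetical table TBL with an index i_tbl_a, whose first key column is at position 0); previously only the lowercase table name was recognized:

SELECT INDEX_CARDINALITY('tbl', 'i_tbl_a', 0);
SELECT INDEX_CARDINALITY('TBL', 'i_tbl_a', 0);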

Error when executing queries with only the CHAR type input argument of the CONCAT and CONCAT_WS functions (CUBRIDSUS-6524)

Fix a problem in which query execution failed with the message "ERROR: No error message available." when the input arguments of the CONCAT and CONCAT_WS functions consisted of only CHAR type arguments.

CREATE TABLE t1 (a INT, b CHAR(20));
INSERT INTO t1 VALUES (-1, '');
--below statement will throw "ERROR: No error message available." message.

SELECT concat(b) FROM t1;

Error when executing the CLOB_TO_CHAR function (CUBRIDSUS-6520)

Fix a problem in which query execution failed with the message "ERROR: External file "xxxx" was not found." when CLOB_TO_CHAR() was executed after the execution and commit of a REPLACE statement that scanned a unique index on a table with a CLOB column.

SELECT id, CLOB_TO_CHAR(text) FROM tbl ORDER BY id;

Error when executing a query with the DATE type input argument of the TIMEDIFF function (CUBRIDSUS-8692)

Fix a problem in which query execution failed with the message "ERROR: Conversion error in time format." when the input argument of the TIMEDIFF function was DATE type.

SELECT TIMEDIFF(TO_DATE('2012-12-2'), TO_DATE('2012-11-2'));

A smaller join result when the CONCAT_WS function was a condition of LEFT OUTER JOIN (CUBRIDSUS-6590)

In the previous version, when the CONCAT_WS function was used as a condition of a LEFT OUTER JOIN, fewer join rows were outputted: due to an error in the process of optimizing the LEFT OUTER JOIN query, the result of the CONCAT_WS function was regarded as NULL whenever even a single input argument was NULL. This problem has been fixed.

SELECT * FROM t1 LEFT JOIN t2 ON t1.id = t2.id WHERE CONCAT_WS(' ', t1_name, t2_name) LIKE '%In%';

Server would be abnormally terminated when using the INTERVAL value of the DATE_ADD function as a host variable (CUBRIDSUS-7610)

Fix a problem in which the server was abnormally terminated while executing a query if the INTERVAL value of the DATE_ADD SQL function was given as a host variable when the query was prepared. In the previous version, only INTEGER was allowed as the type of the input value for the INTERVAL unit of the DATE_ADD function; now VARCHAR can also be used.

PREPARE s FROM 'SELECT DATE_ADD(?, INTERVAL ? YEAR_MONTH)';
EXECUTE s USING '2010-01-01', 1;
EXECUTE s USING '2010-01-01', '1-1';

Queries with DISTINCTROW in the argument of the aggregate function were not executed (CUBRIDSUS-7414)

Fix a problem in which the query failed to execute with the "Syntax error: unexpected 'DISTINCTROW'" message when the argument of an aggregate function contained DISTINCTROW.
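
For example, the following query (using a hypothetical table t with column a) now executes, with DISTINCTROW behaving the same as DISTINCT:

SELECT COUNT(DISTINCTROW a) FROM t;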

Query execution failed while binding value when all the arguments were a host variable in some SQL functions (CUBRIDSUS-7521)

Fix a problem in which, when all the arguments of some SQL functions, such as NULLIF, LEAST, and GREATEST, were host variables, query execution failed at value binding because the bound values were attempted to be typecast to DOUBLE.

preStmt = conn.prepareStatement("select nullif (?, ?)");
preStmt.setString(1, "A");
preStmt.setString(2, "a");
rs = preStmt.executeQuery();

Error when executing the DECODE function that omitted the default argument in the PREPARE Statement (CUBRIDSUS-9134)

Fix a problem in which query execution failed with the message "ERROR: Attribute "val" cannot be made NULL." when the third input argument to specify the DEFAULT value was omitted in the execution of the DECODE function in the PREPARE statement.

PREPARE stmt1 FROM 'UPDATE foo SET del_ts = 100, val=DECODE(name,?,val + ?) WHERE name IN (?)';
EXECUTE stmt1 USING 'seo', 1, 'seo';

Error when executing a query in which the SELECT list contained an INSERT or ELT function with value following it (CUBRIDSUS-6577)

Fix a problem in which query execution failed with the message "ERROR: System error (query result) in ../../src/parser/query_result.c" when the SELECT list contained an INSERT or ELT function and the value came after it.

SELECT INSERT('test',2,1,'hi'), 5;
SELECT ELT(2, 1), 5;

Abnormal termination when a USING INDEX clause was included in a query containing an aggregate function in its subquery (CUBRIDSUS-8057)

Fix a problem in which abnormal termination occurred when a USING INDEX statement was included in a query that had an aggregate function in a subquery of the SELECT list.

SQL Statement

Supports SELECT ALL/DISTINCT constant or DISTINCT (Column) (CUBRIDSUS-6080)

Modify to support SELECT ALL or SELECT DISTINCT constants.

SELECT ALL 1;
SELECT DISTINCT 1;

Aggregate functions can now also take a column enclosed in parentheses or DISTINCT applied to a constant.

SELECT SUM(DISTINCT(i)) FROM t;
SELECT SUM(DISTINCT 4) FROM t;

Made it possible to omit derived table names in the FROM clause (CUBRIDSUS-6546)

Improve the FROM clause so that the name of a derived table, which previously had to be specified, can now be omitted.

SELECT * FROM (SELECT sysdate FROM db_root);
// Message before the fix
FROM (subquery) [ AS ] derived_table_name [( column_name [ {, column_name } ... ] )]

// Message after the fix
FROM (subquery) [ [ AS ] derived_table_name [( column_name [ {, column_name } ... ] )] ]

Removal of unused reserved words (CUBRIDSUS-6250)

Remove unused reserved words, such as ALIAS, TYPE, VIRTUAL, TEST, and WAIT, and now it is possible to use them as an identifier, such as the name of a table or column.

The following reserved words have been removed in this version.

ALIAS ASYNC CLUSTER COMPLETION DICTIONARY
EXCLUDE LDB OID OPERATION OPERATORS
OTHERS PENDANT PREORDER PRIVATE PROTECTED
PROXY REGISTER STRUCTURE SYS_USER TEST
THERE TYPE VIRTUAL VISIBLE WAIT

Possible to enter SELECT queries to the DO statement (CUBRIDSUS-6528)

Modify to make it possible to enter SELECT queries to the DO statement.

DO (SELECT count(*) FROM athlete);

Possible to use UTF-8 character set as the identifier, such as the name of a table or column (CUBRIDSUS-7227)

Make it possible to use a UTF-8 character set as an identifier, such as the name of a table or column. For more information, see Casing and identifiers.
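
For example, assuming a database created with the UTF-8 character set, identifiers in a local language can be used:

CREATE TABLE 사원 (이름 VARCHAR(20));
SELECT 이름 FROM 사원;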

Possible to use TO as well as AS between the existing and new names of a column when changing the name of the column (CUBRIDSUS-7477)

Previously, you could use only AS when changing the name of a column. Now you can also use TO instead of AS.

CREATE TABLE t ( a int);
ALTER TABLE t RENAME COLUMN a TO b;

Incorrect query result when an ORDER BY clause was used in the scalar subquery of an IN or NOT IN condition, or several columns existed in SELECT LIST (CUBRIDSUS-7700)

Fix a problem in which an incorrect query result would be displayed when an ORDER BY clause was used in the scalar subquery of an IN or NOT IN condition, or several columns existed in the SELECT list.

// The number of result rows was always 0 when an ORDER BY clause was used in the scalar subquery.
SELECT * FROM tbl WHERE col IN (SELECT col FROM tbl2 ORDER BY b);

// The number of result rows was always 0 when several columns existed in the SELECT list of the subquery.
SELECT * FROM tbl WHERE col IN (SELECT a, b FROM tbl2);
SELECT * FROM tbl WHERE col NOT IN (select a,b from tbl2);

Execution failed when the host variable type in an IN clause was date/time (CUBRIDSUS-7916)

Fix a problem in which normal execution failed when the host variable type in an IN clause was date/time.

In the previous version, when the type of the value to be bound was date/time, Q1 worked normally as the data was deleted, but Q2 worked abnormally because the data was not deleted.

DELETE FROM TBL WHERE D = ?;        -- Q1
DELETE FROM TBL WHERE D IN (?);        -- Q2

Incorrect results would be outputted when a query contained a GROUP BY clause, a correlated subquery, and an ORDER BY clause specifying a column not found in the SELECT list (CUBRIDSUS-8640)

Fix a problem in which incorrect results would be outputted when a query contained a GROUP BY clause and a correlated subquery, and a column not found in the SELECT list was specified in the ORDER BY clause.

SELECT (SELECT f1.a FROM foo f1 WHERE f1.b=f2.b) as t
FROM foo f2
WHERE f2.b >= 1 and f2.b < 10
    GROUP BY f2.c
    ORDER BY f2.c;

Incorrect results when the WITH ROLLUP modifier of a GROUP BY clause was specified (CUBRIDSUS-6518)

Fix a problem in which incorrect results would be produced when the WITH ROLLUP modifier of the GROUP BY clause was specified.

// The ROLLUP summary rows were not outputted.
SELECT a FROM t1 GROUP BY A WITH ROLLUP;

// The ROLLUP row was outputted even though it should not have been, since the column value of the summary row is NULL and the condition is a>1.
SELECT a, COUNT(*) FROM t1 GROUP BY a WITH ROLLUP HAVING a>1;

Columns following the wildcard were not outputted when the SELECT list format of a subquery was "*, column" (CUBRIDSUS-6589)

Fix a problem in which, when the SELECT list format of a subquery was "*, Column", the following columns were not outputted.

SELECT b FROM (SELECT *, 'hello' AS b FROM t1) t;

Error when the escape character specified in the ESCAPE clause was the last character of the LIKE pattern (CUBRIDSUS-6849)

Fix a problem in which query execution failed with the message "System error" when the escape character specified in the ESCAPE clause was the last character of the LIKE pattern.

CREATE TABLE foo(a char(10));
SELECT * FROM foo WHERE a LIKE 'ab' ESCAPE 'b';
ERROR: System error (db_compress_like_pattern) in ../../src/optimizer/query_rewrite.c (line: 3291)

Error when a query with wildcard (*) and a query with column name, both specified in the SELECT list, were united through UNION (CUBRIDSUS-6588)

Fix a problem in which query execution failed with the message "ERROR: The number of columns, 0, in the left query do not match the number of columns, N, in the right query." when a query with * and a query with column name, both specified in the SELECT list, were united through a statement set operator, such as UNION, despite the same number of columns.

(SELECT * FROM t1) UNION (SELECT a, b FROM t2 );

Incorrect results would be outputted when a comparison condition clause of a query contained a host variable (CUBRIDSUS-5843)

Fix a problem in which an unexpected result was outputted from a query when the comparison condition clause of the query contained a host variable and the type of the values entered to the host variable was converted.

// The result was outputted correctly if a constant was used instead of a host variable.
SELECT * FROM foo WHERE a > 2.5;
a
=============
3
4
5


// When column a was of type INT and the input value was 2.5, an incorrect result occurred because the value was converted to 3 as INT.

PREPARE stmt FROM 'SELECT * FROM foo WHERE a > ?';
EXECUTE stmt USING 2.5;
a
=============
4
5

Incorrect results were outputted when a query with a subquery containing UNION and LIMIT clauses was executed (CUBRIDSUS-6596)

Fix a problem in which incorrect results were outputted when a query with a subquery that contained UNION and LIMIT clauses was executed.

CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);

-- The query below returned 0 rows.
SELECT * FROM ((SELECT a from t1) UNION (SELECT a from t1) LIMIT 1) s1;

Error when a query with its subquery containing a UNION clause used in a WHERE clause was executed (CUBRIDSUS-6530)

Fix a problem in which query execution failed with the message "ERROR: '(select t1.i from t1 t1)<>0' is not union compatible with '(select t2.i from t2 t2)'." when a subquery containing a UNION clause is in the condition of a WHERE clause.

SELECT * FROM t1 WHERE EXISTS ((SELECT i FROM t1) UNION ALL (SELECT i FROM t2));

Incorrect query results were outputted when the OUTER JOIN query was executed in the merge join method (CUBRIDSUS-5703)

Fix an error in which incorrect query results were outputted when an OUTER JOIN query was executed in the merge join method.

SELECT /*+ USE_MERGE */ * FROM tab t
LEFT OUTER JOIN idx i on t.t = i.t
LEFT OUTER JOIN col c on i.i = c.i AND c.c = t.c;

If the row limit of ORDERBY_NUM() was bigger than that of ROWNUM in a query where ROWNUM and ORDERBY_NUM() were used together, an incorrect number of rows was outputted (CUBRIDSUS-6676)

In a query that limits the number of rows through ROWNUM and then outputs a limited number of the sorted rows through ORDERBY_NUM(), an incorrect number of rows was outputted when the ORDERBY_NUM() limit was bigger than the ROWNUM limit, as in the example below. This problem has been fixed.

SELECT * FROM foo f, bar b WHERE f.a > 0 AND f.a = b.a AND ROWNUM <= 4 ORDER BY f.a FOR ORDERBY_NUM() <= 10;

Overflow error was not processed in the TIMESTAMP type (CUBRIDSUS-6004)

Fix a problem in which an overflow error was not processed in the query below with regard to the TIMESTAMP type. The new version returns an overflow error when the below query is executed.

SELECT timestamp'01/19/2038 12:14:07 pm' - CAST(-32768 as smallint);

Error when a subquery with an ORDER BY clause as the argument of the ANY and SOME determiners was executed (CUBRIDSUS-7799)

Fix a problem in which the "ERROR: Aggregate function must have 1 argument: min(t2.id, t2.a)." error occurred when a subquery with an ORDER BY clause was executed as the argument of the ANY and SOME determiners.

SELECT * FROM t1 WHERE id > ANY(SELECT id FROM t2 ORDER BY a);

When executing multiple queries after preparing them concurrently, only the first query was executed normally (CUBRIDSUS-7455)

Fix a problem in which, when executing multiple queries repeatedly after preparing them concurrently, only the first execution was made normally and an error occurred from the second execution.

String MULTI_SELECT = "SELECT A FROM T1 WHERE A = ?; UPDATE T1 SET A = 2 WHERE A = 2; SELECT A, B FROM T1 WHERE A = ?; SELECT A, B, A AS C FROM T1 WHERE A = ?;";

PreparedStatement p = c.prepareStatement(MULTI_SELECT);
...
while(...)
{
    ...

    p.execute();
    ...
}

Key limit optimization was applied incorrectly in a query with the EXISTS and FOR ORDERBY_NUM() BETWEEN conditions (CUBRIDSUS-9198)

Fix a problem in which inappropriate key limit optimization was applied in a query with the EXISTS and FOR ORDERBY_NUM() BETWEEN conditions and incorrect results were outputted.

SELECT cd, tcd, nm
FROM a
WHERE EXISTS (SELECT 1 FROM b
WHERE a.cd = b.cd
AND (b.no = 10000 OR b.uno =10000))
ORDER BY a.nm
FOR ORDERBY_NUM() BETWEEN 1 AND 50;

Error when the VARCHAR column of an earlier query was shorter than that of a later query in a UNION (CUBRIDSUS-9148)

Fix a problem in which query execution failed with the message "ERROR: Execute: Query execution failure #1336." when the VARCHAR column of the earlier query in a UNION was shorter than that of the later query.

CREATE TABLE u1 (a varchar(1));
CREATE TABLE u2 (a varchar(2));
INSERT INTO u1 values ('1');
INSERT INTO u2 values ('22');
SELECT a FROM (SELECT a FROM u1 UNION ALL SELECT a FROM u2) t(a);

Error when executing CASE or DECODE statements with a PREPARE statement (CUBRIDSUS-6847)

Fix a problem in which query execution failed with the message "ERROR: Semantic: System error (generate var) in ../../src/parser/xasl_generation.c" when a type was not able to be determined, for example, because all the arguments were given as a host variable, in the execution of the CASE or DECODE statements with a PREPARE statement.

PREPARE ST FROM 'SELECT CASE WHEN col=? THEN ? ELSE ? END FROM tbl;'

Malfunction when '%' was used as an escape character of the LIKE statement (CUBRIDSUS-7211)

Fix an error in which a malfunction occurred when '%' was used as an escape character of the LIKE statement.

// The following query should retrieve strings with '%' as the first character, any single character as the second, and the string 'cab' from the third character on.
SELECT * FROM foo WHERE a LIKE '%%_cab' escape '%';

Overflow error even when division operations were executed with operands within the scope of the NUMERIC type (CUBRIDSUS-6506)

Fixed a problem in which query execution failed with the message "ERROR: Data overflow on data type numeric" when division was executed with NUMERIC type operands although the input argument values were within the range of the NUMERIC type.

-- An error occurred in the following case.
SELECT 9/1.2345678901211111111;
SELECT -9/1.2345678901211111111;

Precision and scale values for numeric types other than NUMERIC type and date/time types (CUBRIDSUS-6967)

In the previous version, the precision and scale values of numeric types other than NUMERIC type and date/time types were all 0. Now each type returns its precision and scale values. For example, as INT is a 10 digit number at maximum, its precision is 10 and its scale is 0. As DATETIME is expressed as "hh:mi:ss.fff mm/dd/yyyy", its precision is 23 and its scale is 3.

Error when a query was executed with a constant specified in a GROUP BY clause (CUBRIDSUS-6268)

Fix a problem in which query execution failed with the message "ERROR: xxxxxxxx in sort spec is out of range." when a constant was specified in a GROUP BY clause, as in SELECT '' AS group_key FROM tbl GROUP BY group_key.

Error when a condition query using NOT and unary operators (+, -) was executed (CUBRIDSUS-6040)

Fix a problem in which query execution failed with the message "ERROR: 'unknown opcode' operator is not defined on types integer and integer." when the following condition query using NOT and unary operator (-) was executed.

SELECT * FROM tab WHERE NOT - col0 = - col0;

An error that occurred when unary operators + and - were used together in a query statement has also been fixed.

SELECT * FROM tab0 WHERE col0 IN (+ - col0);

Server process would be abnormally terminated when a SELECT query to retrieve a specific column was executed for a view containing an ORDER BY clause (CUBRIDSUS-7140)

Fix a problem in which the server process was abnormally terminated when a SELECT query retrieving specific columns was executed against a view containing an ORDER BY clause.

CREATE VIEW va AS SELECT code, name, gender, nation_code FROM athlete ORDER BY nation_code;
SELECT code, name FROM va;

Error when an INSERT INTO ... SELECT ? ... query was executed while binding INT type in a CHAR type column (CUBRIDSUS-6563)

Fixed a problem in which query execution failed with the message "ERROR: A domain conflict exists on attribute noname" when an INT value was bound to a CHAR type column in the execution of an INSERT INTO ... SELECT ? FROM db_root query.

CREATE TABLE t ( a CHAR(1));
PREPARE s FROM 'INSERT INTO t SELECT ? FROM db_root';
EXECUTE s USING 1;

Server process was abnormally terminated when one condition of an IN or EXISTS expression was a subquery with the SELECT list comprising conditional expressions (CUBRIDSUS-6482)

Fix a problem in which the server process was abnormally terminated when the condition of an IN or EXISTS expression was a subquery whose SELECT list comprised conditional expressions, such as simple comparison, ALL, BETWEEN, LIKE, and ISNULL.

SELECT * FROM t1 WHERE v IN (SELECT (1 = 1));
SELECT * FROM t1 WHERE v IN (SELECT ('aaa' LIKE 'bbb'));
SELECT * FROM t1 WHERE EXISTS (SELECT (1 < ALL{1,3,4}));

Error when a query containing a USING INDEX clause was executed after the partial rollback to before the schema was changed (CUBRIDSUS-6458)

Fix a problem in which query execution failed with the message "ERROR: Execute: Query execution failure #10842." when a query containing a USING INDEX clause was executed after a partial rollback to before the schema was changed.

;autocommit off
CREATE TABLE t (id INTEGER, textlabel VARCHAR(255), description VARCHAR(4096));
CREATE INDEX i_t_id_text ON t(id, textlabel);
COMMIT;
SAVEPOINT sp4;
TRUNCATE t;
SELECT * FROM t WHERE id > -1 USING INDEX i_t_id_text(+);
ROLLBACK TO SAVEPOINT sp4;
SELECT * FROM t WHERE id > -1 USING INDEX i_t_id_text(+);

CSQL and CAS were abnormally terminated when the CASE expression contained an undefined column, or when LAST_INSERT_ID() was called after the termination of the server process (CUBRIDSUS-5759)

Fix a problem in which CSQL and CAS were abnormally terminated when the CASE expression contained a column not defined.

// In the statement below, column a is not defined.

UPDATE tbl SET col1 = (CASE WHEN EXISTS (SELECT * FROM tbl2 WHERE LENGTH(a)>0 )
                            THEN (SELECT col2 FROM tbl2 WHERE colx='1' )
                            ELSE (SELECT col1 FROM tbl2 WHERE colx='1' ) END )

A problem in which CSQL and CAS would be abnormally terminated when LAST_INSERT_ID() was called after the termination of the server process has also been fixed.

Applications were abnormally terminated when a subquery of over 128 in depth was executed (CUBRIDSUS-7826)

Fix a problem in which applications were abnormally terminated when a subquery of over 128 in depth was executed.

SELECT * FROM (SELECT * FROM (... (SELECT 1)...)...);

Length of a string type was incorrectly outputted when a DESCRIBE statement was executed (CUBRIDSUS-6432)

Fix an error in which the length of a STRING column was incorrectly outputted as -1 when the DESCRIBE statement was executed to output table information. The length value is now not outputted in this case.

For reference, STRING(n) means the VARCHAR(n) type.

DESCRIBE test_tbl;
Field Type Null Key Default Extra
==========================================
's_name' 'CHAR(1)' 'YES' '' NULL ''
'f_name' 'STRING(30)' 'YES' '' NULL ''
'name' 'STRING' 'YES' '' NULL ''

Precision (p) and scale (s) exceeding the allowable range were allowed to be entered in a DECIMAL(p, s) type (CUBRIDSUS-6505)

Fix an error in which the values exceeding the precision (p) and scale (s) of the DECIMAL(p, s) type were allowed to be entered.

CREATE TABLE t1 (col1 decimal (5, 2));

// Previously, the query below succeeded and the value 1000.00 was stored.
INSERT INTO t1 VALUES (999.999);

Syntax checking for a DEFAULT expression was not executed in creating a table (CUBRIDSUS-6761)

Modify to execute syntax checking for a DEFAULT expression when creating a table. Previously, in the example below, a table was created while ignoring the input of -9999 after SYSTIMESTAMP. Now in this case, an error will be outputted as it is not correct in terms of syntax.

CREATE TABLE foo(a TIMESTAMP DEFAULT SYSTIMESTAMP - 9999);
Error: Invalid DEFAULT clause. 'sys_timestamp ' cannot be used in a nested expression.

A character string being bound was not automatically converted to a numeric type when multiplication/division operations between host variables were executed (CUBRIDSUS-5506)

Fix an error in which a character string being bound was not automatically converted to a numeric type when multiplication/division operations between host variables were executed.

CREATE TABLE t1 (i1 integer);
PREPARE st FROM 'INSERT INTO t1(i1) VALUES (? * ?)'
EXECUTE st USING '4', '2.2';

Incorrect results of the LAST_INSERT_ID function would be outputted after the INSERT statement failed due to a UNIQUE key violation (CUBRIDSUS-6450)

Fix a problem in which incorrect results of the LAST_INSERT_ID function would be outputted after failing in the execution of the INSERT statement due to the violation of the UNIQUE key.

CREATE TABLE t1 ( k int(11) PRIMARY KEY AUTO_INCREMENT, a INT(11) DEFAULT NULL UNIQUE);
INSERT INTO t1 ( a ) VALUES ( 1 );
INSERT INTO t1 ( a ) VALUES ( 2 );

-- The SQL below will fail because of the unique constraint on column a.
INSERT INTO t1 ( a ) VALUES ( 1 );

-- The query below should return the previous value (2) because the INSERT failed.
SELECT LAST_INSERT_ID();

Error when an invalid ALTER COLUMN statement was executed (CUBRIDSUS-6759)

Fix a problem in which an error was not outputted when an invalid ALTER COLUMN ... SET DEFAULT statement was executed. In the following example, as the column "a" is TIMESTAMP, you cannot use a character string such as "aaa" as a DEFAULT value.

CREATE TABLE foo(a TIMESTAMP);
ALTER TABLE foo ALTER COLUMN a SET DEFAULT 'aaa';

Incorrect value was stored when a value was set through the setBytes() method and inserted to a BIT type column (CUBRIDSUS-6628)

Fix a problem in which an incorrect value was stored when the value of a host variable was set through the setBytes() method and inserted to a BIT type column in JDBC applications.

NULL would be entered into the primary key when the INSERT ... ON DUPLICATE KEY UPDATE statement was executed (CUBRIDSUS-6448)

Fix a problem in which NULL was entered into the primary key when the INSERT ... ON DUPLICATE KEY UPDATE statement was executed, as in the following example.

CREATE TABLE t1( id int AUTO_INCREMENT NOT NULL, c CHAR(1) NOT NULL , counter int NOT NULL DEFAULT 1, PRIMARY KEY(id), UNIQUE KEY(c) ) ;
INSERT INTO t1 (id, c) VALUES (NULL, 'a'), (NULL, 'a') ON DUPLICATE KEY UPDATE id = null, counter = counter + 1;

Incorrect result of a COUNT(*) query when the value of a UNIQUE column was entered as NULL in the execution of the INSERT ... SELECT statement (CUBRIDSUS-8338)

Fix a problem in which an incorrect result of a COUNT(*) query was outputted when NULL was inserted into a UNIQUE column by an INSERT ... SELECT statement. Even in the previous version, the COUNT query executed normally when a column name was specified, as in COUNT(column_name).

CREATE TABLE t1(id INT AUTO_INCREMENT, mgrid INT UNIQUE, dummy INT);
INSERT INTO t1(dummy) VALUES (1);
INSERT INTO t1(dummy) SELECT dummy FROM t1;
INSERT INTO t1(dummy) SELECT dummy FROM t1;

SELECT count(*) FROM t1;

Error when performing INSERT into a list partitioned table with an AUTO_INCREMENT column (CUBRIDSUS-6522)

Fixed a problem in which query execution failed with the message "ERROR: Appropriate partition does not exist." when performing INSERT into a list partitioned table with an AUTO_INCREMENT column.

CREATE TABLE t1 (a int AUTO_INCREMENT PRIMARY KEY)
PARTITION BY LIST (a) (PARTITION p0 VALUES IN (1, 2));

INSERT INTO t1 VALUES (NULL),(NULL);

Server process was abnormally terminated when performing INSERT or UPDATE into a table that has 100 or more constraints for a single column (CUBRIDSUS-6437)

Fixed a problem in which the server process was abnormally terminated when performing INSERT or UPDATE to a table that has 100 or more constraints (e.g., constraints on foreign key) for a single column.

Error when inserting a character string to the BLOB and CLOB types on a Windows 64 bit environment (CUBRIDSUS-6111)

Fixed a problem in which query execution failed with the message "ERROR: Cannot coerce '123' to type clob." when inserting a character string to the BLOB and CLOB types on a Windows 64 bit environment.

CREATE TABLE alltypes(blob_col BLOB, clob_col CLOB);
INSERT INTO alltypes VALUES ('123', '123');

Deadlock when the REPLACE statement and the DELETE statement were executed as separate transactions (CUBRIDSUS-5016)

Fixed a problem in which deadlock would occur when the REPLACE statement and the DELETE statement were executed as separate transactions.

CREATE TABLE t1(a int);
CREATE UNIQUE INDEX ON t1 (a);
INSERT INTO t1 VALUES (1),(4),(7);

// set autocommit off
T1: REPLACE t1 (a) VALUE (6);
T2: REPLACE t1 (a) VALUE (7);
T1: DELETE FROM t1 WHERE a=6
T2: REPLACE t1 (a) VALUE (5);

A unique key violation error when using the REPLACE statement (CUBRIDSUS-5786)

Fixed a problem in which a unique key violation error would occur when using the REPLACE statement.

CREATE TABLE t1(id1 VARCHAR(10) UNIQUE);
INSERT INTO t1 VALUES ('a'),('b'),('f'),('k');
COMMIT;

T1: DELETE FROM t1 WHERE id1 = 'f';
INSERT INTO t1 VALUES ('g');
T2: INSERT INTO t1 VALUES ('f');
COMMIT;
T1: ROLLBACK;
T3: REPLACE INTO t1 SET id1 = 'f';

Possible to omit the FROM keyword in the DELETE statement (CUBRIDSUS-6547)

Modified to allow the FROM keyword to be omitted in the DELETE statement when there is only a single table in the FROM clause.

DELETE tbl;

Incorrect results were outputted when another thread performed SELECT while executing UPDATE in a single table (CUBRIDSUS-8460)

Fix a problem in which incorrect results would be outputted when another thread (transaction isolation level - uncommitted read) executed SELECT while executing UPDATE to a single table.
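
A minimal two-session sketch of the scenario (the table and column names are assumptions):

```sql
-- Session 1: a long-running UPDATE on the table.
UPDATE tbl SET a = a + 1;

-- Session 2 (isolation level: uncommitted read), executed concurrently:
-- previously this could output incorrect results.
SELECT * FROM tbl;
```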

Error -46 was written to the server log while executing UPDATE and SELECT concurrently (CUBRIDSUS-8347)

There was a problem in which the error -46 "Internal error: slot 17 on page 166272 of volume xxx is not allocated." would occur when executing SELECT and UPDATE (isolation level - uncommitted read) concurrently for the same table. However, the new version has been modified to output a warning rather than an error, because the query is executed normally even when the error occurs.

Transaction consistency was not ensured when table lock occurred due to another transaction while repeatedly executing the INSERT statement (CUBRIDSUS-8303)

When a transaction that repeatedly executed a prepared INSERT statement was executed and committed, and then executed the INSERT again after another transaction had set an X-lock on the table, transaction consistency was not ensured because locking was not performed correctly due to the missing IX-lock on the table. The new version has fixed this problem.

This problem can occur on CUBRID 2008 R4.1 or higher.

SELECT query for a table deleted by another transaction did not output an error but returned 0 (CUBRIDSUS-7389)

Fix a problem in which a SELECT query for a table deleted by another transaction did not output an error (cannot find the table) but returned 0.
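
The scenario can be sketched as follows (the table name is an assumption), using the transaction notation of the other examples in this document:

```sql
-- set autocommit off
T1: SELECT COUNT(*) FROM tbl;   -- transaction T1 starts
T2: DROP TABLE tbl;
T2: COMMIT;
T1: SELECT COUNT(*) FROM tbl;   -- previously returned 0; now outputs a "cannot find the table" error
```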

Server process would hang when a deadlock occurred from the click counter (CUBRIDSUS-7378)

Fix a problem in which the server process would hang when a deadlock occurred in the click counter function and the server process failed to detect it. In general, when a deadlock occurs in the click counter, only the update of the click counter is ignored and the other queries are executed normally.
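
CUBRID's click counter is used through the INCR function; a hedged sketch (the table and column names are assumptions):

```sql
-- The click counter update via INCR() runs outside the normal transaction path;
-- when it deadlocks, only the counter update is ignored and the SELECT itself
-- still returns normally.
SELECT INCR(read_count) FROM article WHERE article_id = 100;
```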

Latch timeout error that occurred when two or more transactions attempted to generate or expand temporary temp volume concurrently (CUBRIDSUS-6667)

Fix a problem in which query execution failed with the message "LATCH ON PAGE(12345|0) TIMEDOUT" when two or more transactions attempted to generate or expand the temporary temp volume concurrently.

REUSE_OID attribute was not copied when copying and creating a REUSE_OID table by using the CREATE TABLE ... LIKE statement (CUBRIDSUS-7371)

Fix an error in which the REUSE_OID attribute was omitted in a newly created table when copying and creating a REUSE_OID table by using the CREATE TABLE ... LIKE statement.

CREATE TABLE src_tbl (i INT) REUSE_OID;
CREATE TABLE dest_tbl LIKE src_tbl;

DEFAULT attributes were added through the ALTER statement in a column with the AUTO_INCREMENT attribute (CUBRIDSUS-6407)

Fixed a problem in which the DEFAULT attribute was added by using the ALTER statement although it is impossible to set the AUTO_INCREMENT and DEFAULT attributes together in the same column.

CREATE TABLE tbl (x INT SHARED 3);
ALTER TABLE tbl ALTER COLUMN x SET DEFAULT 100;
-- The new version outputs the following error.
SHARED, DEFAULT and AUTO_INCREMENT cannot be defined with each other.

Cache option configuration of serial was not altered through the ALTER SERIAL statement (CUBRIDSUS-7120)

Fixed a problem in which the cache option was not activated or deactivated by the ALTER SERIAL statement.

CREATE SERIAL s2 START WITH 5 INCREMENT BY 6 CACHE 5;
ALTER SERIAL s2 NOCACHE;

DEFAULT NULL constraint was set to a primary key column (CUBRIDSUS-7348)

Made it impossible to set the DEFAULT NULL constraint on a primary key column.

// Previously, the query below was executed without an error.
CREATE TABLE t(a INT PRIMARY KEY DEFAULT NULL);

Primary key was able to be declared to both the upper class and subclass (CUBRIDSUS-9207)

When a primary key exists in the upper class, the key should not be declared in its subclasses, because the subclasses inherit it from the upper class. The new version has fixed a problem in which a primary key could also be declared in a subclass.

CREATE TABLE student (std_id INT PRIMARY KEY);
CREATE TABLE time_table AS SUBCLASS OF student (std_id INT PRIMARY KEY);

Space would not be returned when dropping a table with the REUSE_OID option (CUBRIDSUS-8140)

Fixed a problem in which space was not returned when an internal error occurred in the server while dropping a table with the REUSE_OID option.

Error when the type of the second value to be bound to host variable was not identical to the first one (CUBRIDSUS-7377)

Fixed a problem in which an error would occur when the type of the second value bound to a host variable was not identical to that of the first one, because an attempt was made to bind it as the type of the first value. For example, an error occurred when attempting to bind 'A' as the INTEGER type after 1 had already been bound to 'SELECT ?'.
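
Based on the description, the failing sequence can be sketched as follows:

```sql
PREPARE stmt FROM 'SELECT ?';
EXECUTE stmt USING 1;     -- the first binding fixes the type to INTEGER
EXECUTE stmt USING 'A';   -- previously failed when coercing 'A' to INTEGER
```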

Server process would be abnormally terminated when executing a query for a view containing a host variable (CUBRIDSUS-6767)

Fixed a problem in which the server process would be abnormally terminated when executing a query for a view containing a host variable. DDL containing a host variable is not allowed.

CREATE TABLE tree(node INT,parentnode INT,name VARCHAR(30));
INSERT INTO TREE VALUES (1, NULL,'1'),(2, 1,'2'),(3, 1,'3'),(4,2,'4');
PREPARE stmt FROM 'CREATE VIEW v as SELECT parentnode,node,name FROM tree WHERE node < ? START WITH parentnode IS NULL CONNECT BY parentnode=PRIOR node ORDER BY node';
EXECUTE stmt USING 3;
SELECT * FROM v;

DEFAULT value was not copied when creating a view (CUBRIDSUS-6772)

Fixed a problem in which the DEFAULT values of the source table were not copied when the CREATE VIEW AS SELECT ... statement was executed.

CREATE TABLE foo(a INT DEFAULT 222);
CREATE VIEW voo AS SELECT * FROM foo;
INSERT INTO voo VALUES(DEFAULT);

Error when executing a SELECT query without a FROM clause in queries of a view (CUBRIDSUS-6592)

Fixed a problem in which query execution failed with the message "ERROR: There are more attributes in class v1 than columns in the query specification." when executing a SELECT query without a FROM clause in queries of a view.

CREATE VIEW v1 AS (SELECT 1 a) UNION (SELECT 1 a);
SELECT * FROM v1;

Query execution would fail when complex expressions appeared in the SELECT list when creating or changing a view (CUBRIDSUS-8970)

Fixed a problem in which query execution would fail when complex expressions appeared in the SELECT list when creating or changing a view. As in the example below, if a column alias is not specified for a complex expression in the SELECT list, "SQRT(CAST(100 as double))" becomes the column alias. Since such expressions are not allowed as identifiers, the identifier must be enclosed in quoting characters (double quotes, brackets, or backticks). The new version eliminates the problem by internally adding the enclosing characters around the identifier when executing the query.

CREATE VIEW vw AS SELECT SQRT(CAST(100 as double));
// If query above is executed, it will be internally performed as follows:
CREATE VIEW vw AS SELECT SQRT(CAST(100 as double)) AS [ SQRT(CAST(100 as double)) ];

Previously, in the HA environment, this problem would occur on the slave node even when the query was written to avoid it on the master node, because the query was rewritten on the slave node to CAST the argument values of the function.

// If the query below is executed, it will be rewritten on the slave node into the form of the first query above.
CREATE VIEW vw AS SELECT SQRT(100);

Modified not to allow users to arbitrarily change records of the db_user and db_trigger system catalog tables (CUBRIDSUS-8690)

Modified not to allow users to arbitrarily INSERT, UPDATE, or DELETE records of the db_user and db_trigger system catalog tables.
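
For example, statements such as the following are now rejected (a sketch; the exact error message and the db_user column name are assumptions):

```sql
-- Direct modification of system catalog records now fails with an error.
UPDATE db_user SET name = 'SOMEONE';
DELETE FROM db_trigger;
```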

Error when entering values to a table referenced by a foreign key (CUBRIDSUS-6216)

Fix a problem in which query execution failed with the message "ERROR: The constraint of the foreign key 'fk_pkfk_b' is invalid." when entering values into the table referenced by a foreign key.
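
A hypothetical reproduction consistent with the constraint name in the message (the table and column names are assumptions):

```sql
CREATE TABLE pk (b INT PRIMARY KEY);
CREATE TABLE fk (b INT, CONSTRAINT fk_pkfk_b FOREIGN KEY (b) REFERENCES pk (b));
-- Previously, inserting into the referenced table failed with the foreign key error.
INSERT INTO pk VALUES (1);
```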

Server process was abnormally terminated when creating a table with the DEFAULT value of a column bigger than a page (CUBRIDSUS-6510)

Fix a problem in which the server process was abnormally terminated when creating a table with the DEFAULT value of a column bigger than a page.

Login failed with a username longer than 32 bytes (CUBRIDSUS-6633)

There was a problem in which login failed when the username was longer than 32 bytes. The new version limits usernames to a maximum of 32 bytes when creating a DB user account.

CREATE USER a12345678901234567890123456789012345678901234567890;
$ csql -u a1234567890123456789012345678901234567890123456
ERROR: User "a12345678901234567890123456789012345" is invalid.

Allow a CONNECT BY clause to be placed after a START WITH clause (CUBRIDSUS-6548)

Modified to allow a CONNECT BY clause to be placed after a START WITH clause in a hierarchy query.

SELECT * FROM a CONNECT BY PRIOR id = pid START WITH id = 1;

Incorrect query results when a hierarchy query with an ORDER SIBLINGS BY clause was used as a subquery (CUBRIDSUS-7748)

Fix a problem in which incorrect query results were outputted when a hierarchy query containing an ORDER SIBLINGS BY clause was used as a subquery, because the query was rewritten incorrectly by the query optimizer.

SELECT *
FROM tbl
WHERE id IN (SELECT id FROM tbl
    WHERE yn = 'Y'
    START WITH id = '100002'
    CONNECT BY NOCYCLE PRIOR id = pid
    ORDER SIBLINGS BY sort_col);

Incorrect number of query results in a hierarchy query with the ROWNUM condition (CUBRIDSUS-6365)

Fixed a problem in which the number of query results was not limited by the ROWNUM condition when a hierarchy query with the ROWNUM condition was processed through the index scan.

SELECT oid
FROM foo
WHERE ROWNUM = 1
    START WITH oid = '1234567'
    CONNECT BY PRIOR poid = oid

Server process would be abnormally terminated when restarting the DB server during the execution of a query using the temporary temp volume (CUBRIDSUS-5684)

Fix a problem in which the server process would be abnormally terminated when restarting the database while executing a query using the temporary temp volume.

Query would fail to be processed even when the user provided a solution to a failed auto-increment (CUBRIDSUS-7216)

Previously, a query would fail to be processed even after the user resolved a failed auto-increment, for example by restarting the server, when the failure was caused by abnormal termination of the server process during an auto-increment operation. This problem has been solved.

Trigger execution failed (CUBRIDSUS-7187)

Fixed a problem in which trigger execution failed as an error occurred or the application was abnormally terminated, as in the examples below.

Case 1) In the previous version, an error occurred when the following query was executed.

CREATE TABLE t1(a INT);
INSERT INTO t1 VALUES(1), (2), (3), (4);
CREATE TRIGGER TRIGG1 BEFORE UPDATE ON t1 EXECUTE DELETE FROM t1;
UPDATE t1 SET a=6;
ERROR: Error evaluating action for "trigg1", Accessing deleted object 0|1100|16.

Case 2) In the previous version, the application was abnormally terminated when the following query was executed.

CREATE TABLE t1(a INT, b INT);
INSERT INTO t1 VALUES (2,2), (3,3);
CREATE VIEW v1 AS SELECT * FROM t1;
CREATE TRIGGER tri_t1_before_update1 BEFORE UPDATE ON t1 EXECUTE DELETE FROM v1;
CREATE TRIGGER tri_t2_before_update2 BEFORE UPDATE ON t1 EXECUTE DELETE FROM t1;

--Test: crash happens.
UPDATE t1 SET t1.a=10;

If a table created by another user had a trigger, query errors with regard to the table were ignored (CUBRIDSUS-7336)

Fixed a problem in which, if a table created by another user had a trigger, no error occurred even when query execution on the table failed; instead, the message "ERROR: Internal system failure: no more specific information is available." was outputted when the transaction was later terminated.

Triggered query would not roll back even when a triggered UPDATE/INSERT query failed (CUBRIDSUS-7239)

Previously, a triggered UPDATE/INSERT query that failed while auto-commit mode was off would not be rolled back. This problem has been fixed. In the following example, executing Q6 fires the trig1 trigger on the t2 table. At this point, an error occurs because an INT value cannot be inserted into the DATE column of the t1 table, so Q6 should be rolled back. This means that after executing Q7, the number of records in the t1 table should be 1 and the number of records in the t2 table should be 0. In the previous version, the Q6 query that fired the trig1 trigger was not rolled back when the error occurred.

;autocommit off

CREATE TABLE t1(col1 date);  -- Q1
CREATE TABLE t2(col2 int);  -- Q2
CREATE TRIGGER trig1 AFTER INSERT ON t2 EXECUTE INSERT INTO t1(col1) VALUES(obj.col2);   -- Q3
COMMIT;   -- Q4
INSERT INTO t1(col1) VALUES ('2012-04-30');  -- Q5
INSERT INTO t2(col2) VALUES (1);  -- Q6

ERROR: Error evaluating action for "trig1", Execute: Cannot coerce obj.col2 to type date.

COMMIT;  -- Q7

NULL was returned when CLOB_TO_CHAR() was called in the trigger (CUBRIDSUS-7246)

Fix a problem in which NULL was returned when CLOB_TO_CHAR() was called in the trigger.
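
A hedged sketch of the scenario (the table, column, and trigger names are assumptions):

```sql
CREATE TABLE src (c CLOB);
CREATE TABLE log_tbl (s VARCHAR(100));
CREATE TRIGGER trg_log AFTER INSERT ON src
  EXECUTE INSERT INTO log_tbl VALUES (CLOB_TO_CHAR(obj.c));
-- Previously, the trigger stored NULL instead of 'abc'.
INSERT INTO src VALUES (CHAR_TO_CLOB('abc'));
```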

Query Plan and Optimization

Storing more query plans than the maximum number of query plans configured in the query plan cache (CUBRIDSUS-8619)

Fixed a problem in which more query plans were stored in the query plan cache than the maximum number configured through the max_plan_cache_entries system parameter. In the previous version, this problem caused server memory and temporary volume usage to increase continuously.

Query result was 0 if the query contained ORDERBY_NUM() when an ORDER BY clause was removed in the query optimization process (CUBRIDSUS-6060)

Fixed an error in which the result was outputted as 0 because ORDERBY_NUM() was not removed when the query optimization process removed an unnecessary ORDER BY clause.

SELECT orderby_num(), history.*
FROM history
WHERE host_year = '2004'
ORDER BY host_year FOR orderby_num() BETWEEN 1 AND 10;

Application was abnormally terminated when executing the PREPARE statement with the query plan cache disabled (CUBRIDSUS-8094)

Fix a problem in which the application was abnormally terminated when executing the PREPARE statement with the query plan cache disabled by configuring the max_plan_cache_entries system parameter as -1.

CREATE TABLE t(a INT);
INSERT INTO t VALUES (1), (2), (3);
PREPARE STMT FROM 'select count(?) from t';
EXECUTE STMT USING 1;

Incorrect results would be outputted when executing a query after the query plan information was outputted (CUBRIDSUS-7818)

Fixed a problem in which incorrect results would be outputted when a query was executed after its plan had been outputted with the ;plan simple command, because the input value was converted to the type of the corresponding column. As shown in the following example, in the previous versions, the input value 2.3 was converted to 2 and passed to the server when the query plan was outputted, and as a result the row with value 2 was not included in the result.

CREATE TABLE foo(col INT);
INSERT INTO foo VALUES (1),(2);
;plan simple
SELECT * FROM foo WHERE col < 2.3;

When executing a query after changing the table name in the query not using query plan cache, the query was executed for the previous table (CUBRIDSUS-7637)

Fixed a problem in which, when a query was executed after the table name was changed, the query ran against the previous table instead of the new table with that name. In the previous version, this problem always occurred for the INSERT statement because it did not use the query plan cache; it also occurred for other queries when the query plan cache feature was disabled by setting the max_plan_cache_entries system parameter to -1.

// Example of insert
INSERT INTO tbl VALUES (...);
RENAME TABLE tbl AS tbl_old;
RENAME TABLE tbl2 AS tbl;

// When the query below was executed, the INSERT was performed on tbl_old.
INSERT INTO tbl VALUES (...);

Error when executing the covering index scan query for a specific OUTER JOIN query (CUBRIDSUS-7868)

Fixed a problem in which the error "Query execution failure #10946." would occur during execution when an incorrect covering index scan query plan was generated for a specific query with an OUTER JOIN condition.

SELECT foo.obj_id, foo.h_id
FROM foo
INNER JOIN table_j ON foo.host_id = table_j.host_id
LEFT OUTER JOIN table_d ON foo.s_id = table_d.svc_no
LEFT OUTER JOIN table_g ON foo.g_id = table_g.svr_grp_no
WHERE table_d.svc_no = foo.s_id
AND foo.s_id = '2152';

Query plan was not outputted when an IN clause had a host variable (CUBRIDSUS-6382)

Fix an error in which the query plan would not be outputted when an IN clause contains a host variable in the form of (?, ?, ...).

csql> ;plan detail
csql> SELECT * FROM tbl WHERE id IN (?, ?, ?);

Results omitting some conditions would be outputted when executing a query with multiple AND and OR conditions (CUBRIDSUS-9193)

Fix a problem in which results omitting some conditions would be outputted due to a query plan bug when executing a query with multiple AND and OR conditions.

SELECT *
FROM it, p
WHERE
(
  p_pkey = l_pkey
  and p_br = 'Br12'
  and p_ct in ('CS', 'BX', 'PK', 'PG')
  and l_qty >= 1 and l_qty <= 1 + 10
  and p_sz between 1 and 5
  and l_sm in ('A', 'RG')
  and l_st = 'DIP'
)
OR
(
  p_pkey = l_pkey
  and p_br = 'Br12'
  and p_ct in ('MG', 'MB', 'MPK', 'MPC')
  and l_qty >= 10 and l_qty <= 10 + 10
  and p_sz between 1 and 10
  and l_sm in ('A', 'RG')
  and l_st = 'DIP'
)
OR
(
  p_pkey = l_pkey
  and p_br = 'Br12'
  and p_ct in ('LG', 'LB', 'LPK', 'LPC')
  and l_qty >= 20 and l_qty <= 20 + 10
  and p_sz between 1 and 15
  and l_sm in ('A', 'RG')
  and l_st = 'DIP'
);

Query would not be deleted in the query plan cache when deleting a column or table including AUTO_INCREMENT (CUBRIDSUS-7872)

When a column or table containing AUTO_INCREMENT was deleted, the related query was not deleted from the query plan cache; consequently, a serial with the same name as the AUTO_INCREMENT serial could be created. As a result, the error "ERROR: Cannot fetch serial object." occurred when a query was executed with that serial. The new version has fixed this problem.

CREATE TABLE tbl ( a INT AUTO_INCREMENT);
SELECT tbl_ai_a.NEXT_VALUE;
DROP TABLE tbl;
CREATE SERIAL tbl_ai_a;
SELECT tbl_ai_a.NEXT_VALUE;

Application would be abnormally terminated when executing the REPLACE statement after configuring the optimization level to generate only a plan without executing a query (CUBRIDSUS-6614)

Fixed a problem in which the application would be abnormally terminated when executing a REPLACE statement after configuring the optimization level as 2, 258, or 514 so that only a plan is generated without executing the query.

SET OPTIMIZATION LEVEL 2;
DROP TABLE t;
CREATE TABLE t( col1 INTEGER UNIQUE, col2 VARCHAR(128));
INSERT INTO t(col1, col2) VALUES (17, 'operators list');
REPLACE t(col1, col2) VALUES (17, 'personnel list');

UPDATE of the view failed after re-creating the view (CUBRIDSUS-6942)

Fixed a problem in which, after a view was dropped and re-created, executing an UPDATE statement on the view failed because the previous query plan, which referenced the dropped view, was reused.

CREATE TABLE foo(a int);
INSERT INTO foo VALUES (3);

CREATE VIEW v AS SELECT * FROM foo WHERE a <2;
UPDATE v SET a=3;
DROP VIEW v;

CREATE VIEW v AS SELECT * FROM foo WHERE a <2;
UPDATE v SET a=3;

Modify to select the best among the query plans available for ORDER BY optimization or GROUP BY optimization (CUBRIDSUS-6957)

Fix a problem in which the full index scan would be selected even when a better plan existed among the query plans available for ORDER BY optimization (skip ORDER BY) or GROUP BY optimization (skip GROUP BY).
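
A hedged illustration of a query that can benefit (the table and index names are assumptions):

```sql
CREATE TABLE t (a INT, b INT);
CREATE INDEX idx_a ON t (a);
-- With idx_a, a plan that reads the index in order can skip the ORDER BY;
-- previously a full index scan could be chosen even when such a plan was better.
SELECT * FROM t WHERE a > 0 ORDER BY a;
```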

Modify to return results immediately when LIMIT 0 (CUBRIDSUS-7420)

Modified to return results immediately without executing the query when LIMIT 0 is specified. In the previous version, processing such a query could take a long time in some cases because the LIMIT clause was evaluated after executing the query.

SELECT CAST(dt_col AS DATE)
FROM article
WHERE id = '001' AND dt_col < TO_DATE('20120201', 'YYYYMMDD')
ORDER BY CAST(dt_col AS DATE) DESC
LIMIT 0;

Partition

Modify to be possible to use a local index in a partitioned table (CUBRIDSUS-7629)

Modified to allow the creation of a local index, which consists of an independent index for each partition of a partitioned table, according to conditions defined internally by the system. All foreign keys and non-unique indexes are local indexes. Unique indexes are local indexes only when the partition key belongs to the unique index. In the previous version, unique indexes, including primary keys, were all global indexes. From CUBRID 9.0 Beta, however, they are local indexes if they include the partition key. Performance is improved when a local index is used instead of a global index.

Deterioration of the INSERT performance on a partitioned table (CUBRIDSUS-6018)

Fixed a problem in which performance deteriorated when executing INSERT on a partitioned table.

Enable the execution of the TRUNCATE statement to a partitioned table (CUBRIDSUS-6871)

The new version allows the TRUNCATE statement to be executed on a partitioned table.
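
For example (a sketch; the table name is an assumption):

```sql
CREATE TABLE t (i INT) PARTITION BY HASH (i) PARTITIONS 4;
INSERT INTO t VALUES (1), (2), (3);
-- Previously failed on a partitioned table; now deletes the rows in every partition.
TRUNCATE TABLE t;
```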

Incorrect SELECT result of a range-partitioned table was outputted when the auto commit mode was OFF (CUBRIDSUS-7127)

Fixed a problem in which, when SELECT was executed while INSERT was being performed on a range-partitioned table with the auto commit mode OFF, and INSERT and SELECT were then executed again, the additionally inserted records were not outputted.

csql> ;autocommit off
CREATE TABLE foo (id1 BIGINT) PARTITION BY RANGE(id1) (
    PARTITION p1 VALUES LESS THAN (3),
    PARTITION p2 VALUES LESS THAN (5),
    PARTITION p3 VALUES LESS THAN (8)
);

INSERT INTO foo VALUES (1);

// When Q2 was executed after Q1, only the one record inserted before Q1 was outputted.
SELECT * FROM foo;    -- Q1
INSERT INTO foo VALUES (1);
INSERT INTO foo VALUES (1);
SELECT * FROM foo;    -- Q2

Incorrect results were outputted when a SELECT on a range-partitioned table contained an IS NULL OR IS NOT NULL condition (CUBRIDSUS-4575)

Fixed a problem in which incorrect results were outputted when a SELECT statement executed on a range-partitioned table contained an IS NULL OR IS NOT NULL condition.

SELECT * FROM tbl
WHERE a IS NULL OR a IS NOT NULL;

Incorrect query results were outputted when executing OUTER JOIN in a partitioned table (CUBRIDSUS-6888)

The new version has fixed a problem in which incorrect query results were outputted when executing OUTER JOIN in a partitioned table.

CREATE TABLE pt1(i int,j int) PARTITION BY HASH(i) PARTITIONS 4;
CREATE TABLE pt2(i int,j int) PARTITION BY HASH(i) PARTITIONS 4;

INSERT INTO pt1 VALUES (1,1),(2,2),(3,3);
INSERT INTO pt2 VALUES (1,1);
CREATE INDEX idx ON pt2(i);

// When the query below was executed, the first record was incorrectly outputted as "1, 1, NULL, NULL".
SELECT * FROM pt1 LEFT JOIN pt2 ON pt1.i=pt2.i USING INDEX idx(+);
i  j  i     j
====================
1  1  1     1
2  2  NULL  NULL
3  3  NULL  NULL

Constraint conditions were not checked when creating a foreign key in a partitioned table (CUBRIDSUS-6916)

The new version has fixed a problem in which constraint conditions were not checked when creating a foreign key in a partitioned table.

CREATE TABLE t1(i INT PRIMARY KEY);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2(i int,j int) PARTITION BY HASH(i) PARTITIONS 4;
INSERT INTO t2 VALUES (2,2);
ALTER TABLE t2 ADD CONSTRAINT FOREIGN KEY t2(i) REFERENCES t1(i);

Data was inputted into only one partition when entering data into a hash-partitioned table that used the AUTO_INCREMENT column as the partition key (CUBRIDSUS-5622)

Fixed a problem in which data would be inputted into only one partition when entering data into a hash-partitioned table that used the AUTO_INCREMENT column as the partition key.

CREATE TABLE t(i INT AUTO_INCREMENT) PARTITION BY HASH(i) PARTITIONS 3;
INSERT INTO t(i) VALUES (NULL), (NULL), (NULL), (NULL), (NULL), (NULL), (NULL), (NULL);

Partition pruning would not operate when entering a partition key value as a function in a list partitioned table (CUBRIDSUS-5777)

Fixed a problem in which partition pruning, which excludes from the search the partitions that cannot contain the query result, would not operate when the partition key value was supplied through a function in a list-partitioned table.
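
A hedged sketch (the table, column, and choice of function are assumptions):

```sql
CREATE TABLE t (d DATE) PARTITION BY LIST (d) (
  PARTITION p1 VALUES IN (DATE'2012-01-01'),
  PARTITION p2 VALUES IN (DATE'2012-02-01')
);
-- Previously, pruning did not exclude p2 when the key value came from a function.
SELECT * FROM t WHERE d = TO_DATE('2012-01-01', 'YYYY-MM-DD');
```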

Partitioned table would be created in a child table (CUBRIDSUS-8703)

In the previous version, a parent table could be deleted even when it contained the column used as the partition key of its child table. To prevent this error, the new version has made it impossible to create a partitioned table as a child table.

Records would not be deleted with the view created with regard to a partitioned table with a CLOB column (CUBRIDSUS-8216)

Fix a problem in which an error would occur when deleting records with the view created with regard to a partitioned table with a CLOB column.

CREATE TABLE t(i INT,c CLOB) PARTITION BY HASH(i) PARTITIONS 4;
INSERT INTO t SELECT ROWNUM,'1' FROM db_class LIMIT 40;
CREATE VIEW v AS SELECT * FROM t;

// When the query below was executed, "ERROR: Semantic: ..." error occurred.
DELETE FROM v;
CREATE TABLE t(i INT,c VARCHAR(10)) ;
CREATE VIEW v AS SELECT * FROM t;

Creating a partitioned table would not be reflected to the slave node in the HA environment (CUBRIDSUS-7724)

Fix a problem in which the partitioned table creation statement would not be reflected to the slave node when creating a partitioned table in the HA environment.

CREATE TABLE t1 (a int PRIMARY KEY)
PARTITION BY HASH (a) PARTITIONS 2;

Error when the inserted records in a partitioned table had to move to another partition through the UPDATE statement of the trigger (CUBRIDSUS-7520)

Fixed a problem in which the error "ERROR: Error evaluating action for "tr2", Not allowed access to partition." occurred when a record inserted into a partitioned table had to move to a partition other than the one where the INSERT was executed due to the UPDATE statement of the trigger.

CREATE TABLE emp (store_id INT NOT NULL)
PARTITION BY RANGE (store_id) (
  PARTITION P1 VALUES LESS THAN (10 ),
  PARTITION P2 VALUES LESS THAN (20 )
);

CREATE TRIGGER tr2 AFTER INSERT ON emp EXECUTE UPDATE emp SET store_id = store_id + 10 WHERE store_id = obj.store_id;
INSERT INTO emp VALUES (5);

Table recreation failed after the creation and rollback of a partitioned table with an index (CUBRIDSUS-7560)

Fix a problem in which table recreation failed with the message "ERROR: Partition failed." when recreating the same partitioned table after the creation and rollback of a partitioned table with an index.

CREATE TABLE tbl(col1 varchar(10), index (col1))
PARTITION BY LIST(col1) (
  PARTITION p1 values in ('cubrid', 'mysql', 'abc')
);

Error that occurred when executing loaddb with data files of a partitioned table (CUBRIDSUS-5815)

Fixed a problem in which loaddb execution failed with the message "Partition failed" when data rows for a partitioned table followed its %class line in the data file used by loaddb.

// Partitioned table
CREATE TABLE record (
  host_year INTEGER NOT NULL,
  event_code INTEGER NOT NULL,
  score VARCHAR(20)
)
PARTITION BY RANGE (host_year) (
  PARTITION before_1996 VALUES LESS THAN (1996),
  PARTITION after_1996 VALUES LESS THAN MAXVALUE
);
// Data file for loaddb. Data rows after the %class [record] line are now allowed.
%class [record] ([host_year] [event_code] [score])
1992 20288 '22.3'
2000 20101 '04:04.5'
2004 20341 '272.5'

Fix an error in which redefining range partitions would fail when the existing partition had the LESS THAN MAXVALUE condition (CUBRIDSUS-6379)

The new version has fixed an error in which redefining range partitions would fail when the existing partition had the LESS THAN MAXVALUE condition.

CREATE TABLE t1(id int)
PARTITION BY RANGE (id) (
  PARTITION l10 VALUES LESS THAN (10),
  PARTITION l100 VALUES LESS THAN (100),
  PARTITION lar VALUES LESS THAN MAXVALUE
);

ALTER TABLE t1 REORGANIZE PARTITION l100,lar
INTO (PARTITION lar VALUES LESS THAN MAXVALUE);

Partition pruning error with regard to the IN conditional clause in a list partitioned table (CUBRIDSUS-9189)

Fixed a problem in which partition pruning would be repeated infinitely when the IN conditional clause covered two or more partitions of a list-partitioned table.
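
A hedged sketch of a query that previously triggered the infinite pruning loop (the names are assumptions):

```sql
CREATE TABLE t (a INT) PARTITION BY LIST (a) (
  PARTITION p0 VALUES IN (1, 2),
  PARTITION p1 VALUES IN (3, 4)
);
-- The IN list spans two partitions (p0 and p1).
SELECT * FROM t WHERE a IN (1, 3);
```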

DELETE CASCADE would not operate in a partitioned table with a foreign key (CUBRIDSUS-8085)

Fix a problem in which, when there was a partitioned table with a foreign key specifying the action of DELETE CASCADE, the "ERROR: The instance having the foreign key 'f' cannot be dropped." would occur in the partitioned table with a foreign key when deleting the record of the table.

CREATE TABLE dp(i INT PRIMARY KEY);
INSERT INTO dp SELECT ROWNUM FROM db_class LIMIT 40;

CREATE TABLE f(i INTEGER , orderdatekey INTEGER NOT NULL)
PARTITION BY range(i) (
  PARTITION p0 VALUES LESS THAN (200),
  PARTITION p1 VALUES LESS THAN (400),
  PARTITION p2 VALUES LESS THAN (600),
  PARTITION p3 VALUES LESS THAN maxvalue
);

ALTER TABLE f ADD CONSTRAINT FOREIGN KEY f(i) REFERENCES dp(i) ON DELETE CASCADE;
INSERT INTO f SELECT ROWNUM , ROWNUM FROM db_class LIMIT 40;

-- The record with i=4 should be deleted from both dp and f.
DELETE FROM dp WHERE i=4;

Server process would be abnormally terminated when executing SELECT after the backup of a database with a range-partitioned table with AUTO_INCREMENT columns (CUBRIDSUS-6390)

Fix a problem in which the server process would be abnormally terminated when executing SELECT after the backup of a database with a range-partitioned table with AUTO_INCREMENT columns.

CREATE TABLE u1(id INT AUTO_INCREMENT,v VARCHAR(5))
PARTITION BY RANGE(id) (
  PARTITION l5 VALUES LESS THAN (5),
  PARTITION l10 VALUES LESS THAN (10),
  PARTITION l20 VALUES LESS THAN (20),
  PARTITION l100 VALUES LESS THAN (100)
);

INSERT INTO u1 VALUES (NULL,''),(6,''),(NULL,'');
$ cubrid backupdb testdb

Server process would be abnormally terminated when executing a query to a range partitioned table with AUTO_INCREMENT columns (CUBRIDSUS-6373)

Fix a problem in which the server process would be abnormally terminated when executing a query to a range-partitioned table with AUTO_INCREMENT columns.

CREATE TABLE u(id INT AUTO_INCREMENT, v VARCHAR(5))
PARTITION BY RANGE (id) (
  PARTITION l5 VALUES LESS THAN (5),
  PARTITION l10 VALUES LESS THAN (10),
  PARTITION l20 VALUES LESS THAN (20),
  PARTITION l100 VALUES LESS THAN (100)
);

INSERT INTO u VALUES(0,';jf;dfj iouer'),(5,' fdfd'),(10,'dfd '),(15,'fdf'),(20,'a');

Index scan would not operate because the statistical information on partitions of a table was not updated (CUBRIDSUS-7741)

Fix a problem in which an index scan would not operate when executing a query, caused by an error in which the statistical information of each partition was not updated when the statistics of the partitioned table were refreshed by the UPDATE STATISTICS statement.
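
The statement concerned can be sketched as follows (the table name is hypothetical):

```sql
-- Refresh the statistics of a partitioned table; previously the
-- statistics of its individual partitions were not updated.
UPDATE STATISTICS ON tbl;
```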

Deleted value in a partitioned table was still returned when the record was searched by OID with the auto commit mode OFF (CUBRIDSUS-2183)

The new version has fixed a problem in which, after a value was deleted from a partitioned table with the auto commit mode OFF, the deleted value was still returned when the record was searched by OID.

;autocommit off

CREATE TABLE tbl (id INT)
PARTITION BY RANGE (id + 1) (
  PARTITION p0 VALUES LESS THAN (2),
  PARTITION p1 VALUES LESS THAN MAXVALUE
);

INSERT INTO tbl VALUES (-1);
INSERT INTO tbl VALUES (3);
INSERT INTO tbl VALUES (99) INTO :xx;

SELECT :xx.id FROM db_root;
COMMIT;

SELECT :xx.id FROM db_root;
DELETE FROM tbl WHERE id = 99;
SELECT :xx.id FROM db_root;

// The query below still returned the pre-deletion value even though the record had been deleted above.
SELECT :xx.id FROM db_root;

Re-partitioning a table by ALTER TABLE ... PARTITION would not operate normally (CUBRIDSUS-1408)

Fix a problem in which, when repartitioning a table by ALTER TABLE ... PARTITION, the existing partition information would not be deleted.

CREATE TABLE participant2 ( host_year INT, nation CHAR(3), gold INT, silver INT, bronze INT)
PARTITION BY RANGE (host_year) (
  PARTITION before_2000 VALUES LESS THAN (2000),
  PARTITION before_2008 VALUES LESS THAN (2008)
);

ALTER TABLE participant2 REORGANIZE PARTITION before_2000 INTO (
  PARTITION before_1996 VALUES LESS THAN (1996),
  PARTITION before_2000 VALUES LESS THAN (2000)
);

Partitioning would not be executed when creating a table with many partitions through the loaddb utility (CUBRIDSUS-5833)

Fix a problem in which partitions would not be created normally when creating a table with many partitions through the loaddb utility.

ALTER CLASS [col_day]
PARTITION BY LIST ( [yyyymmdd] ) (
  PARTITION [p_20110701] VALUES IN ('20110701'),
  PARTITION [p_20110702] VALUES IN ('20110702'),
  PARTITION [p_20110703] VALUES IN ('20110703'),
  PARTITION [p_20110704] VALUES IN ('20110704'),
  PARTITION [p_20110705] VALUES IN ('20110705'),
  PARTITION [p_20110706] VALUES IN ('20110706'),
  PARTITION [p_20110707] VALUES IN ('20110707'),
  ....
  PARTITION [p_20111231] VALUES IN ('20111231')
);

Index

Error when executing a query using a MAX function or ORDER BY DESC in a table in which data was deleted after the creation of an index (CUBRIDSUS-7988)

Fixed a problem in which, after an index was created on a table containing data and all the data was then deleted, the "ERROR: An I/O error occurred while reading page 65536 of volume (null).... Bad file descriptor" would occur when executing a query that reads the last leaf page, such as one using a MAX function or ORDER BY DESC on the table.

CREATE TABLE tb2 ( col1 INT PRIMARY KEY, col2 VARCHAR(16));
INSERT INTO tb2 VALUES (1, '1');
CREATE INDEX i_tb2 ON tb2(col1, col2);
DELETE FROM tb2;
SELECT * FROM tb2 ORDER BY 1 DESC,2 DESC;

Malfunction when executing an aggregate function in a column with a PREFIX index (CUBRIDSUS-6579)

Fix a problem with aggregate functions on a column with a PREFIX index: when the type of the aggregated column was CHAR, the "ERROR: No error message available." would occur, and when the type was VARCHAR, incorrect results would be outputted.

CREATE TABLE t1 (id int PRIMARY KEY, b char(16), INDEX(b(4)));
INSERT INTO t1 VALUES (1, 'xxxxbbbb'), (2, 'xxxxaaaa');

-- When the query below was executed, an error occurred.
SELECT MAX(b) FROM t1;

CREATE TABLE t1 (id int PRIMARY KEY, b varchar(16), INDEX(b(4)));
INSERT INTO t1 VALUES (1, 'xxxxbbbb'), (2, 'xxxxaaaa');

-- When the query below was executed, 'xxxx' was outputted.
SELECT MAX(b) FROM t1;

Share the existing B-tree when creating an index with the same key as an already created index (CUBRIDSUS-6430)

In the previous version, creating an index failed with an error (cannot create an index) when an index with the same key already existed. The new version has been modified to share the existing index in this case rather than physically creating another one. In the following example, a separate B-tree for the foreign key is unnecessary because an index is already defined on column b, which the foreign key uses.

CREATE TABLE t1 (
  a INT PRIMARY KEY,
  b INT, KEY(b),
  CONSTRAINT fk FOREIGN KEY (b) REFERENCES t1 (a)
);

Server would be abnormally terminated when there was a class OID search condition and all other conditions satisfied covering index conditions (CUBRIDSUS-7585)

Previously, the server would be abnormally terminated when a query contained a class OID search condition and all other conditions satisfied the covering index conditions, that is, the columns of the SELECT list and of the WHERE clause were all included in an index. This problem has been fixed.

CREATE TABLE a (class_of object, class_name varchar(200));
CREATE TABLE c (class_name varchar(200), i int);
CREATE INDEX ON c (class_name);
INSERT INTO a VALUES (INSERT INTO c VALUES('aa', 1), 'aa');
SELECT c.class_name
FROM a, c
WHERE a.class_of=c AND c.class_name=a.class_name;

Application would be abnormally terminated when a USING INDEX clause was not valid (CUBRIDSUS-7672)

As shown in the following example, there was a problem in which the application would be abnormally terminated when a USING INDEX clause was invalid. This problem has been fixed.

CREATE TABLE t1 (a INT, b CHAR(3), c INT);
CREATE TABLE t2 (a INT, b CHAR(3), c INT);
CREATE INDEX ON t1 (a, c);
CREATE INDEX ON t2 (a, c);

SELECT * FROM (t1,t2) s WHERE c=9 USING INDEX i1;

Allow the CONSTRAINT ... REFERENCES ... statement when creating a foreign key (CUBRIDSUS-6556)

In the previous version, users were able to use only a FOREIGN KEY REFERENCES ... statement when creating a foreign key. Now you can use a CONSTRAINT constraint_name REFERENCES ... statement as well.

CREATE TABLE tbl (id INT CONSTRAINT pk_id PRIMARY KEY);
CREATE TABLE tbl2 (id INT FOREIGN KEY REFERENCES tbl(id));
CREATE TABLE tbl3 (id INT CONSTRAINT fk_id REFERENCES tbl(id));

NOT NULL constraint condition of the primary key column was removed when executing an ALTER ... CHANGE COLUMN statement (CUBRIDSUS-7966)

Fixed a problem in which the NOT NULL constraint condition of the primary key column was removed when executing an ALTER ... CHANGE COLUMN statement.

CREATE TABLE t1_add_default (a INT PRIMARY KEY, b INT);
ALTER TABLE t1_add_default CHANGE COLUMN a a INT DEFAULT 2;
INSERT INTO t1_add_default(a, b) VALUES(NULL, 1);

Rollback of one transaction would make other transactions fail while multiple transactions were inserting/deleting keys in an index with an overflow key (CUBRIDSUS-8838)

Fixed a problem in which a "Query execution failure" would occur in the other transactions when one transaction was rolled back while multiple transactions were concurrently inserting/deleting keys in an index with an overflow key.
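
For illustration only (names and sizes are made up), an overflow key arises when an index key is larger than a page, for example:

```sql
CREATE TABLE ov (k VARCHAR(20000));
CREATE INDEX i_ov_k ON ov (k);

-- A key longer than the page size is stored as an overflow key.
INSERT INTO ov VALUES (LPAD('x', 17000, 'x'));
```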

HA Features and HA Data Replication

Inconsistent replication after changing column type (CUBRIDSUS-7386)

Fixed a problem in which inconsistent replication would occur when the transaction replication log reflection process was restarted, re-reflecting the log before the schema was changed.

Very long transaction would not be reflected completely (CUBRIDSUS-7638)

Fixed a problem in which, when the transaction replication log reflection process reflected a very long transaction, it would not reflect the transaction completely while being restarted endlessly.

Add new commands (CUBRIDSUS-5468)

Add the cubrid heartbeat copylogdb/applylogdb <start|stop> db_name peer_host command used to start/stop the transaction log copying/replication log reflection process of a database.

To prevent the server process from being executed separately while multiplexing logs is unavailable (data synchronization between nodes unavailable), the new version has made it impossible to start/stop the database server process in the HA mode by using the cubrid server start/stop command. The act, deact, and deregister commands, which were used in the previous version, are not used any longer.

Improve the performance of DELETE replication (CUBRIDSUS-6436)

The new version has improved the performance of DELETE replication in HA, achieving approximately 1.7 times more replicated operations per second than the previous version.

Change the error level to Warning with regard to the errors that could occur in slave nodes in a normal situation (CUBRIDSUS-8937)

Change the error level of the "Internal error: fetching deallocated pageid 0 of volume /CUBRID/databases/testdb_lgat" that would occur in slave nodes from Error to Warning because this error could occur also in a normal situation.

Allow both "," and ":" as the delimiter of system parameter values (CUBRIDSUS-5647)

The new version allows users to use both a comma (',') and a colon (':') as the delimiter of HA-related system parameter values, including ha_node_list, ha_replica_list, ha_db_list, ha_copy_sync_mode, and ha_ping_hosts.
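
For illustration, a hypothetical cubrid_ha.conf fragment using both delimiters (host names are made up):

```
# cubrid_ha.conf (hypothetical hosts)
ha_node_list=cubrid@node1:node2
ha_copy_sync_mode=sync,sync
```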

No error would occur even when the TCP connection was lost while the 'to-be-active' master node was waiting to be connected to a slave node in the HA environment (CUBRIDSUS-7154)

Previously, no error would occur even when the TCP connection was lost while the 'to-be-active' master node was waiting to be connected to a slave node in the HA environment. This was caused by cub_master erroneously maintaining the waiting information of the lost connection. This problem has been fixed.

The same trigger would be executed again in the slave node even when the trigger result in the master node was already reflected to the slave node (CUBRIDSUS-8165)

Fix a problem in which incorrect query results would be outputted when the same trigger action was repeated in the slave node although the result of the trigger action in the master node was already replicated to the slave node.

Applications could not connect to the master server even after restoration, when both slave server and ping host are unable to establish connection in the HA environment where the ping host is configured (CUBRIDSUS-7183)

Previously, applications could not connect to the master server even after the connection was restored, when both the slave server and the ping host were unable to establish a connection in an HA environment where a ping host was configured. In an HA configuration with a ping host, if the master server cannot reach the ping host, it assumes it can no longer maintain its role; its role is changed to "slave" and applications can use it as read-only.

Abnormal termination when the HA feature was started, after stopping the CUBRID HA feature in the node that became the master after a failover (CUBRIDSUS-8906)

Fix a problem in which an abnormal termination would occur when the HA feature was started through cubrid heartbeat start after stopping the CUBRID HA feature through cubrid heartbeat stop in the node that became the master after a failover.

Replica node remained in the configuration information when executing the reload command after deleting the replica node configuration (CUBRIDSUS-8107)

Fix a problem in which a replica node remained in the configuration information even after executing the cubrid heartbeat reload command after deleting the ha_replica_list of cubrid_ha.conf.

The condition would be changed to positive numbers when the partition condition of a table included negative numbers (CUBRIDSUS-8461)

Fix a problem in which the condition would be changed to positive numbers if the partition condition of a table included negative numbers in the HA environment. In the previous version, when executing the following example, the partition condition was created incorrectly as (2, 1) instead of (-2, 1).

CREATE TABLE t1 (a INT AUTO_INCREMENT(-2,3) PRIMARY KEY) PARTITION BY LIST(a) (PARTITION p0 VALUES IN (-2, 1));

A failed request was repeated endlessly when requesting write to a slave node (CUBRIDSUS-6266)

In the previous version, when requesting write, such as INCR() and UPDATE, to a slave node, a failed request was retried endlessly rather than outputting an error message (failed to write) in applications, while the message "ERROR: Your transaction has been aborted by the system due to server failure or mode change." was outputted twice in the CSQL Interpreter, and the CSQL Interpreter was terminated. This problem has been fixed.

SELECT query to sort the columns exceeding page buffer size in a slave node would not be executed (CUBRIDSUS-9272)

Fix a problem in which query execution failed with the message "Attempted to update the database when updates are disabled." when executing a SELECT query to sort the columns that exceeded the page buffer size (default 16k) in a slave node in the HA environment.

Server process would be abnormally terminated when terminating a 'to-be-active' server after its clients attempted to connect to it several times (CUBRIDSUS-7756)

Previously, the server process would be abnormally terminated when terminating a 'to-be-active' server after its clients, such as CSQL, attempted to connect to it more times than the number specified in the max_clients of cubrid.conf in the HA environment. This problem has been fixed.

HA management process would be abnormally terminated when terminating the HA feature after the server process was restarted after a failure (CUBRIDSUS-6878)

Fix a problem in which the cub_master process to manage HA would be abnormally terminated when executing cubrid heartbeat stop after the process was restarted after the server process (cub_server) failure in the HA environment.

Log reflection process would be abnormally terminated when stopping the HA feature (CUBRIDSUS-6974)

Fix a problem in which the log reflection process would be abnormally terminated when stopping the HA feature by using cubrid heartbeat stop.

Replication log reflection process would be abnormally terminated after re-establishing replication of a slave node (CUBRIDSUS-9166)

Fix a problem in which the applylogdb process would be abnormally terminated due to an error in which the log page of the archive log to be reflected after the re-establishment of the replication of a slave node in the HA environment was not found.

Error log was recorded although the ETIMEOUT error is a normal action in the server process connected to the transaction replication log copying process operated in the ASYNC mode (CUBRIDSUS-6248)

The new version has modified error logs not to be recorded when the "pthread_cond_wait() failed." (ETIMEOUT) occurred in the server process connected to the transaction replication log copying process operated in the ASYNC mode.

An ETIMEOUT error occurs when the thread that records transaction logs in the server process exceeds its operation waiting time. Although this error is normal in the ASYNC mode, the previous version recorded it to the error log file, causing unnecessary disk I/O.

HA replication script failed when the shell prompt was long (CUBRIDSUS-7524)

Fix a problem in which ha_make_slavedb.sh, the slave node reconfiguration script, failed to receive the environment variable check result from other nodes when the shell prompt was long.

HA Schema Replication

WITH CHECK OPTION clause would not be reflected to a slave node (CUBRIDSUS-8367)

Fix a problem in which the WITH CHECK OPTION clause of the view would not be reflected to a slave node in the HA environment.

CREATE TABLE vc_tb(a INT PRIMARY KEY);
CREATE VIEW fbo_view AS SELECT * FROM vc_tb WHERE a >5 WITH CHECK OPTION;

IF EXISTS clause in the DROP TABLE statement would not be reflected to a slave node (CUBRIDSUS-8901)

Fix a problem in which the IF EXISTS clause of a DROP TABLE statement would not be reflected to a slave node in the HA environment.

DROP TABLE IF EXISTS tbl1;

Logical expression included in a column of the view would not be reflected to a slave node (CUBRIDSUS-7927)

Fix a problem in which the logical expression included in the SELECT list would not be reflected to a slave node in the HA environment.

CREATE VIEW v(a,b,c) AS SELECT a,b, (b='aaa') c FROM t;

View would look different in master and slave nodes (CUBRIDSUS-8366)

Fix a problem in which, when the view created in the master node was outputted to the system catalog table db_vclass in the slave node, parentheses were added to some of the WHERE condition.

CREATE TABLE vc_tb1(a INT PRIMARY KEY);
// The WHERE condition was enclosed in parentheses when the view above was outputted in the db_vclass of the slave node.
CREATE VIEW vtb1(a INT) AS SELECT * FROM vc_tb1 WHERE a>=1;

Replication would be made in the incorrect order of columns in the slave node when creating a table through the CREATE .... AS SELECT statement (CUBRIDSUS-8071)

The new version has fixed a problem in which replication would be made in the incorrect order of columns in the slave node when creating a table through the CREATE .... AS SELECT.

// Executing following in master node
CREATE TABLE t2 (a INT PRIMARY KEY, b INT, c INT)
AS SELECT t1.c AS a, t1.a AS b, t1.b AS c FROM t1;

// The column order was incorrectly replicated in slave node as follows:
CREATE TABLE [t2] ( [a] INTEGER, [b] INTEGER, [c] INTEGER, PRIMARY KEY ([a]) )
AS SELECT [t1].[c], [t1].[a], [t1].[b] FROM [t1];

Error -414 in the slave node when dropping a table after creating/dropping an index (CUBRIDSUS-8634)

Fix a problem in which the "ERROR CODE = -414 ... Unknown class identifier" would occur in the slave node even in a normal situation, when a commit was made after creating/dropping an index and dropping a table in a single transaction.

DDL statement accompanied by a SELECT statement would not be replicated to a slave node (CUBRIDSUS-8922)

Fix a problem in which a DDL statement accompanied by a SELECT statement, such as CREATE TABLE ... SELECT or CREATE VIEW ... SELECT, would not be reflected to a slave node in the HA environment.

CREATE TABLE t1 (a int primary key, b int)
AS SELECT * FROM (SELECT 1 as a, (SELECT 1) b from db_root ) tt;

Creating an index would not be reflected to the slave node (CUBRIDSUS-6818)

The new version has fixed a problem in which the index creation statement would not be reflected to the slave node when creating an index in the HA environment.

CREATE TABLE tab(a INT AUTO_INCREMENT(1,1) PRIMARY KEY, c VARCHAR(32));
CREATE INDEX i_tab_c ON tab(c);

Creating an index whose name is a reserved word would not be reflected to the slave node (CUBRIDSUS-8054)

Fix a problem in which creating an index whose name is a reserved word would not be reflected to a slave node with the message "Internal system failure: [t2,'create class [t2] ( [a] integer, [b] integer, index none ([a]) ) '] In line 1, column 44 before ' ([a]) ) '" in the HA environment.

CREATE TABLE t2(a INT, b INT, INDEX "none" (a));

Executing TRUNCATE for a table with an AUTO_INCREMENT field would not be reflected to a slave node (CUBRIDSUS-7810)

Fix a problem in which executing TRUNCATE for a table with an AUTO_INCREMENT field would not be reflected to a slave node in the HA environment, with the message "Accessing Delete Object."
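
A minimal sketch of the statement that failed to replicate (the table name is hypothetical):

```sql
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY);
TRUNCATE TABLE t;
```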

Serial derived from AUTO_INCREMENT would be left in the slave node even after dropping a table with an AUTO_INCREMENT column (CUBRIDSUS-8885)

Fix a problem in which the serial derived from an AUTO_INCREMENT column remained in the slave node after the table with that column was dropped in the master node in the HA environment; as a result, a table could not be created in the slave node when a table and column with the same names were re-created with AUTO_INCREMENT in the master node.

CREATE TABLE t1(id INT PRIMARY KEY AUTO_INCREMENT, a INT);
DROP TABLE t1;
CREATE TABLE t1(id INT PRIMARY KEY AUTO_INCREMENT, b INT);

Changing the attributes or name of the AUTO_INCREMENT column would not be reflected to the slave node (CUBRIDSUS-7830)

Fix a problem in which changing the attributes of an AUTO_INCREMENT column, or adding an AUTO_INCREMENT column and renaming the table, would not be reflected to the slave node in the HA environment.

// Changing property
CREATE TABLE foo ( a INTEGER PRIMARY KEY AUTO_INCREMENT , b CHAR(10), c DATETIME );
ALTER TABLE foo MODIFY ATTRIBUTE a BIGINT AUTO_INCREMENT;


// Changing name
CREATE TABLE boo (i int PRIMARY KEY);

ALTER TABLE boo ADD COLUMN ai INT AUTO_INCREMENT, RENAME TO u, AUTO_INCREMENT = 100;

ALTER SERIAL statement would not be reflected to a slave node (CUBRIDSUS-8727)

The new version has fixed a problem in which the ALTER SERIAL statement would not be reflected to a slave node in the HA environment.

ALTER SERIAL s1 START WITH 2;

DROP SERIAL statement would not be reflected to a slave node (CUBRIDSUS-8673)

The new version has fixed a problem in which the DROP SERIAL statement would not be reflected to a slave node in the HA environment.

Specifying the NOCACHE option in the creation or change of the serial would not be reflected to a slave node (CUBRIDSUS-9225)

Fix a problem in which specifying the NOCACHE option to the CREATE SERIAL or ALTER SERIAL statements would not be reflected to a slave node.
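
For illustration (the serial name is hypothetical), the statements concerned are of the following form:

```sql
CREATE SERIAL s2 START WITH 1 INCREMENT BY 1 NOCACHE;
ALTER SERIAL s2 NOCACHE;
```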

ALTER INDEX ... REBUILD statement would not be reflected to a slave node (CUBRIDSUS-8090)

Fix a problem in which executing the ALTER INDEX ... REBUILD statement would not be reflected to a slave node in the HA environment with the message "log applier: failed to apply schema replication log. class: "-", schema: 'alter index i_t_b on () rebuild', internal error: -492."

ALTER INDEX i_t_b REBUILD;

ALTER... CHANGE COLUMN statement would not be reflected to a slave node (CUBRIDSUS-8023)

The new version has fixed a problem in which the ALTER... CHANGE COLUMN statement would not be reflected to a slave node in the HA environment.

ALTER TABLE coo CHANGE col1 col1 INT AFTER col3;
// When the query above was executed, the following error occurred in slave node.
log applier: failed to apply schema replication log. class: "coo", schema: "'alter class [coo] change attribute [col1] [col1] integerafter [col3] '", internal error: -492.
ALTER TABLE t1 ALTER COLUMN a set DEFAULT 1;
// When the query above was executed, the following error occurred in slave node.
"alter class [t1] alter column [b] set default default 1", this SQL statement is illegal. So there is something wrong with the "parser_print_xxx" function.

Changing DEFAULT value of a column through the ALTER .. CHANGE statement would not be reflected to a slave node (CUBRIDSUS-8021)

Fix a problem in which changing DEFAULT values through the ALTER .. CHANGE statement would not be reflected to a slave node with the message "log applier: failed to apply schema replication log. class: "t1", schema: "'alter class [t1] alter column [a] set default default 1'", internal error: -492."

ALTER TABLE t1 ALTER COLUMN a SET DEFAULT 1;

ALTER .... AUTO_INCREMENT statement would not be reflected to a slave node (CUBRIDSUS-8203)

Fix a problem in which the ALTER .... AUTO_INCREMENT statement would not be reflected to a slave node in the HA environment with the message "log applier: failed to apply schema replication log. class: "t", schema: "'alter class [t] auto_increment = 5'", internal error: -1056."

ALTER TABLE t AUTO_INCREMENT = 5;

Partitioning a table through an ALTER statement would not be reflected to a slave node (CUBRIDSUS-5266)

Fix a problem in which partitioning a table through an ALTER statement would not be reflected to a slave node in the HA environment.

ALTER TABLE range_tbl
REORGANIZE PARTITION p2 INTO
( PARTITION p3 VALUES LESS THAN (25), PARTITION p4 VALUES LESS THAN (30));

Creating a DESC INDEX to a PREFIX column would not be reflected to a slave node (CUBRIDSUS-8980)

Fix a problem in which creating DESC INDEX to a PREFIX column would not be reflected to a slave node in the HA environment.

ALTER TABLE c2 ADD INDEX i_c (c(5) DESC);

The owner of triggers, serials, stored functions and procedures would be reflected to a slave node incorrectly (CUBRIDSUS-8723)

The new version has fixed a problem in which the owner of triggers, serials, stored functions and procedures would be reflected to a slave node incorrectly as "DBA."

DROP TABLE IF EXISTS statement when the auto commit mode is OFF would not be reflected to a slave node (CUBRIDSUS-8582)

Fix a problem in which executing a DROP TABLE IF EXISTS statement when the auto commit mode is OFF would not be reflected to a slave node with the message "SYNTAX ERROR ... ERROR CODE = -493."

DROP TABLE IF EXISTS m1;
CREATE TABLE m1(i INT,j INT PRIMARY KEY);
COMMIT;

Partitioned table would not be reflected to a slave node (CUBRIDSUS-8985)

Fix a problem in which data of partitions of a partitioned table would not be replicated to a slave node when their type was converted internally in the HA environment.

CREATE TABLE t1(i int PRIMARY KEY )
PARTITION BY RANGE (i) (
  PARTITION p1 VALUES LESS THAN (-2147483648),
  PARTITION p2 VALUES LESS THAN maxvalue
);

Driver

[JDBC] Improve to output a connection URL when outputting an exception error message (CUBRIDSUS-7781)

Improve to output the information on a connection URL together when outputting JDBC exception error messages.

cubrid.jdbc.driver.CUBRIDException: jdbc:cubrid:localhost:33300:testdb:dba:********:
Syntax error: unexpected END OF STATEMENT

[JDBC] The resultset of JAVA stored procedure would not be retrieved in iBatis (CUBRIDSUS-7383)

In the new version, the result of calling a Java stored procedure can be retrieved in iBatis as a resultmap keyed by column name through an ArrayList.

[JDBC] ResultSetMetaData.isAutoIncrement() method would not operate normally (CUBRIDSUS-7531)

Previously, when the ResultSetMetaData.isAutoIncrement() of JDBC was called, FALSE was returned regardless of whether AUTO_INCREMENT was set. Now it operates properly according to the AUTO_INCREMENT configuration.

[JDBC] Applications would output a connection URL to a broker log incorrectly (CUBRIDSUS-7956)

Fix an error in which unnecessary "?" was outputted when JDBC applications outputted a connection URL to a broker log.

The following is an example in which an incorrect URL was outputted to a broker log file in the previous version. Only one "?" should be outputted after dba::.

jdbc:cubrid:10.0.0.1:33000:demodb:dba::??queryTimeout=60000001&connectTimeout=10000

[JDBC] Millisecond would be returned without being reset to 0 when receiving date or time object value (CUBRIDSUS-7352)

Fix a problem in which the millisecond part of the java.sql.Date or java.sql.Time object value would be returned without being reset to 0.

ResultSet rs = connection.createStatement().executeQuery("select time '12:15:00'");
rs.next();
System.out.println(rs.getTime(1).getTime());

[JDBC] Error in which applications allowed the input of negative number years (CUBRIDSUS-8844)

Fix an error in which JDBC applications would allow years out of the proper range (1-9999) when binding Datetime data.

[JDBC] Error in which NullPointerException would occur in some methods of DatabaseMetaData called after the termination of the connection or the close() of DatabaseMetaData (CUBRIDSUS-7807)

Fixed an error in which NullPointerException occurred when calling the getTables(), getColumns(), getTablePrivileges(), getPrimaryKeys(), and getForeignKeys() methods of DatabaseMetaData after the termination of the Connection or the close() of DatabaseMetaData after getting DatabaseMetaData from the Connection of JDBC.

[JDBC] Possible exceptions would not be handled while executing multiple queries by using the same statement object repeatedly (CUBRIDSUS-9015)

Fixed a problem in which the error code -26 would occur repeatedly because the server handle would not be terminated when exceptions occurred while executing multiple queries by using the same statement objects repeatedly in JDBC applications.

public void run(ArrayList<String> sqlList, int max) throws SQLException {
    this.sqlList = sqlList;
    Connection conn = getConnection();
    Statement stmt = conn.createStatement();
    String sql;
    int i = 0;
    while (true) {
        sql = getNextLine();
        if (sql == null)
        break;
        try {
            stmt.execute(sql);
        } catch (Exception e) {
            log(e, i, sql);
        }
        i++;
    }
    stmt.close();
    conn.close();
}

[JDBC] "Statement Pooling" error would occur when executing the SELECT statement with /*+ RECOMPILE */ hint at multiple threads concurrently (CUBRIDSUS-7616)

Fix a problem in which the "cubrid.jdbc.driver.CUBRIDException: Statement Pooling" would occur when executing the SELECT statement with /*+ RECOMPILE */ hint at multiple threads concurrently in JDBC applications.

[JDBC] SELECT query for a system catalog view would fail when the ResultSet.TYPE_SCROLL_SENSITIVE option was used (CUBRIDSUS-7076)

Fix a problem in which the "Semantic: System error (generate attr) in ../../src/parser/xasl_generation.c" would occur when executing fetch after executing a SELECT query for a system catalog view, such as db_class, when the ResultSet.TYPE_SCROLL_SENSITIVE option of JDBC was used.

We recommend setting ResultSet.TYPE_SCROLL_INSENSITIVE because the ResultSet.TYPE_SCROLL_SENSITIVE option is not currently supported.

[JDBC] Failure to bind the value of scientific notations, such as 2.4E+2, to BigDecimal (CUBRIDSUS-7943)

Fix a problem of failing to bind the value expressed as a scientific notation, such as 2.4E+2, to BigDecimal in JDBC applications.

BigDecimal x = new BigDecimal("240.0");
x = x.stripTrailingZeros(); // Converts the value to scientific notation (2.4E+2)
p.setBigDecimal(1, x);

[JDBC] An exception occurred when binding and executing a valid value after binding and executing a value invalid to the prepared query (CUBRIDSUS-7648)

Fix a problem in which an exception occurred when binding and executing a valid value after binding and executing a value invalid to the prepared query in JDBC applications.

// An exception like the one below occurred.
No error message available.
cubrid.jdbc.driver.CUBRIDException: No error message available.
at cubrid.jdbc.driver.CUBRIDConnection.createCUBRIDException(CUBRIDConnection.java:829)
at cubrid.jdbc.driver.CUBRIDStatement.checkExecuteError(CUBRIDStatement.java:941)
at cubrid.jdbc.driver.CUBRIDStatement.executeCoreInternal(CUBRIDStatement.java:830)
at cubrid.jdbc.driver.CUBRIDStatement.executeCore(CUBRIDStatement.java:791)
at cubrid.jdbc.driver.CUBRIDPreparedStatement.executeQuery(CUBRIDPreparedStatement.java:107)
at cubrid_262_error.main(cubrid_262_error.java:26)

[JDBC] Error when executing the JDBC getColumns() method if backslash was configured to be used as an escape character (CUBRIDSUS-8080)

Fix a problem in which an "unterminated string" error occurred when getting the column information of a table through the JDBC getColumns() method if backslash was set to be used as an escape character by configuring the no_backslash_escapes system parameter to NO.

CREATE TABLE t2 (col1 INT, col2 VARCHAR(20));

[JDBC] Memory of a ResultSet would not be released by ResultSet.close() unless applications also executed Statement.close() (CUBRIDSUS-9206)

Fix a memory leak in which memory usage kept increasing because a ResultSet's memory was not released by ResultSet.close() unless the JDBC application also executed Statement.close().

[CCI] Allow checking the driver version with only the CCI library file in Linux (CUBRIDSUS-6954)

The driver version can now be determined from the CCI library file alone.

$ strings /home/usr1/CUBRID/lib/libcascci.so | grep VERSION
VERSION=9.0.0.0001

For your information, the driver version can also be checked by using the cci_get_version function (major, minor, patch).

[CCI] Add connection URL attributes used to enable writing debugging logs (CUBRIDSUS-6377)

Add a feature that is used to configure debugging logs for CCI connection URLs. logSlowQueries and slowQueryThresholdMillis write slow query log information, logTraceApi writes the beginning and the end of the called CCI functions, and logTraceNetwork writes the network data transmission information of a CCI function to a file.

url = "cci:cubrid:localhost:33000:demodb:::?logSlowQueries=true&slowQueryThresholdMillis=1000&logTraceApi=true&logTraceNetwork=true"

The new version has also changed the behavior of logBaseDir, the CCI connection URL attribute that specifies the path of debugging log files. In the previous version, the logBaseDir value was ignored when logBaseDir and logFile were specified together. The new version now builds the file name as "logBaseDir/logFile" so that the path is included.

[CCI] The trigger information of the entire database was outputted even when the table name was specified, when getting trigger information from cci_schema_info() (CUBRIDSUS-7675)

The new version has fixed an error in which the trigger information of the entire database was output even when a table name was specified when getting trigger information from cci_schema_info(). It has also added a feature to match the given table or column name through the pattern matching of a LIKE clause when getting trigger information.

req = cci_schema_info(conn, CCI_SCH_TRIGGER, "tbl%", NULL, CCI_CLASS_NAME_PATTERN_MATCH, &error);

[CCI] Incorrect return value of cci_get_attr_type_str() (CUBRIDSUS-7910)

Fix a problem in which the last character in the return value of cci_get_attr_type_str() would be deleted.

// Source example
char *attr_infos[][2] =
{
    {"aa", "character(1)"},
    {NULL, NULL}
};
...
res = cci_get_attr_type_str(conn, class_name, attr_infos[i][0], buf, buf_size, &error);
...
fprintf(LOG_FD, "%s attr: %s\n\n", attr_infos[i][0], buf);
// Output before the fix (the last character was cut off)
aa attr: character(1

[CCI] Error would still occur when executing cci_prepare() even after restarting the server (CUBRIDSUS-8907)

Fix a problem in which cci_prepare() would keep returning an error even after the database server had been restarted, following an error in cci_prepare() caused by the termination of the database server.

[CCI] CCI function would return incorrect error code when lock timeout occurred (CUBRIDSUS-7226)

In the previous version, when a lock timeout occurred in the cci_prepare() and cci_execute() functions, error code -75 was wrapped in error code -493 and returned. Now error code -75 is returned directly.

[JDBC, CCI] Only some queries in a transaction were committed after a certain point in time in applications (CUBRIDSUS-8563)

Fix a problem in which only some of the queries in a transaction were committed after a certain event, such as a server restart or a forced CAS restart, in JDBC or CCI applications.

[CCI] Error if CAS was restarted when executing cci_prepare_and_execute() (CUBRIDSUS-9278)

Fix a problem in which a "Cannot communicate with broker" error would occur when executing cci_prepare_and_execute() because there was no routine for reconnecting to the restarted CAS.

[CCI] Query timeout would not occur when the value of query timeout was configured as a very small value in cci_prepare_and_execute() (CUBRIDSUS-8102)

Fix a problem in which query timeout did not occur in the cci_prepare_and_execute() function when the query timeout value was set very small, such as 3 milliseconds. Also, the new version displays query timeout values in the CAS log in milliseconds, as shown below.

07/11 11:34:52.933 (2) set query timeout to 3000 milliseconds (from app)

[CCI] Applications would be abnormally terminated when a host name was given to a connection URL of cci_connect_with_url() in the simultaneous multi-thread environment (CUBRIDSUS-8318)

Fix a problem in which applications would terminate abnormally when a host name, instead of an IP address, was given in the connection URL of the cci_connect_with_url() function in an environment where multiple threads ran concurrently.

[CCI] Incorrect information about which column an index was created on was output when fetching the result of cci_schema_info() (CUBRIDSUS-7174)

Fix a problem in which incorrect values for the column on which an index was created were output when fetching the result of cci_schema_info().

[JDBC][CCI] No error occurred even when a connection URL was incorrect in applications (CUBRIDSUS-7967)

Modify to output an error when a connection URL is incorrect in JDBC/CCI applications. The previous version allowed omitting the two ":"s after the DB name; this is no longer allowed.

URL=jdbc:CUBRID:192.168.0.1:33000:demodb:::?altHosts=192.168.0.2:33000,192.168.0.3:33000

In the Linux version, you can also include "?" in the database name of a URL.
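As an illustration only (this is not the driver's actual parsing code), splitting the colon-delimited part of the URL shows the empty user and password fields that the two ":"s after the database name delimit:

```java
public class UrlFields {
    public static void main(String[] args) {
        // Colon-delimited part of a connection URL (properties after '?' dropped).
        String body = "cci:cubrid:localhost:33000:demodb:::";
        // limit -1 keeps the trailing empty fields.
        String[] f = body.split(":", -1);
        // scheme, driver, host, port, dbname, user, password, (empty tail)
        System.out.println(f.length);                         // 8
        System.out.println(f[5].isEmpty() && f[6].isEmpty()); // true: user/password omitted
        // Omitting the two ":"s after the DB name changes the field count,
        // which the driver now rejects as a malformed URL.
        System.out.println("cci:cubrid:localhost:33000:demodb".split(":", -1).length); // 5
    }
}
```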

[JDBC, CCI] Precision error when binding "" value to a VARCHAR column on JDBC or CCI applications in the CUBRID for Windows (CUBRIDSUS-9306)

Fix a precision error that occurred when binding an empty character string ("") to a VARCHAR type column on JDBC or CCI applications. This problem occurred only in the CUBRID version for Windows.

Utility

Query execution failed when there is any comment after a query and semicolon ; in CSQL (CUBRIDSUS-6381)

Fix a problem in which query execution failed when a comment followed the query and its semicolon (;) in the CSQL Interpreter, unless another semicolon was added at the end of the comment.

SELECT 1; --

Enable re-entering CSQL session commands with the UP cursor key (CUBRIDSUS-8646)

In the previous version, session commands could not be re-entered with the UP cursor key in the CSQL Interpreter. This is now possible.

csql> ;sc tbl
// The session command executed above can now be re-entered with the UP cursor key.
csql> ;sc tbl

Query would be changed incorrectly when it was too long or spanned multiple lines while executing queries from a file with the CSQL -i option (CUBRIDSUS-7848)

In the CSQL Interpreter, when executing queries from a file with the -i option, if a query was very long or spanned many lines, a trailing space at the end of the buffer (used to store a long query in segments) or at the end of a line was cut off. This altered the query, causing unintended values to be entered or an error to occur. This problem has been fixed.

A certain query was executed only when semicolon (;) was entered twice in CSQL (CUBRIDSUS-5963)

Previously, in an environment where the system parameter no_backslash_escapes was set to NO, when entering a query containing an escape character, as in "INSERT INTO t VALUES(1,''')", in the CSQL Interpreter, the query was executed only after a semicolon (;) was entered once more. This problem has been fixed.

CSQL terminated abnormally when viewing schema information after pressing Ctrl+C during Display Query Plan (CUBRIDSUS-8456)

Fix a problem in which the CSQL Interpreter terminated abnormally when viewing schema information with the ;sc <table> command after pressing Ctrl+C while Display Query Plan was enabled through ;info plan.

CSQL was abnormally terminated when executing a query once more after the termination of the database server (CUBRIDSUS-6256)

Previously, when executing a query in the CSQL Interpreter after terminating the database server, the error message "ERROR: Your transaction has been aborted by the system due to server failure or mode change." was outputted at first but when executing a query again, the CSQL Interpreter was abnormally terminated. This problem has been fixed.

CSQL Interpreter was abnormally terminated when executing a query that contains a special character in the name of a table or column in the CSQL for Windows (CUBRIDSUS-9054)

Fix a problem in which the CSQL Interpreter was abnormally terminated when executing a query that includes a special character, such as '%', in the name of a table or column in the CSQL Interpreter for Windows.

SELECT * FROM [as%];

Add query execution time to the output file of broker_log_runner (CUBRIDSUS-8237)

Modify to include the query execution time in the output file specified by the -o option when executing broker_log_runner.

// After the fix, broker_log_runner additionally outputs the prepare/execute/end tran times.
-------------- query -----------------
SELECT * FROM xx;
cci_prepare exec_time : 0.000
cci_execute exec_time : 0.000
cci_execute:1
---------- query plan --------------
...
---------- query result --------------
...
cci_end_tran exec_time : 0.000
// When an error occurred, the error message was outputted together.
-------------- query -----------------
INSERT INTO t1 VALUES (DATE '11/11/1994');
cci_prepare elapsed time : 0.000
replay_sqllog.txt:

server error : -493 Syntax: before ' VALUES (DATE '11/11/1994'); ' Unknown class "t1". insert into t1 values (date '11/11/1994') cci_end_tran elapsed_time : 0.000
-------------- query ------

Improve the performance of the loaddb utility when using the --error-control-file option (CUBRIDSUS-7424)

In the previous version, when executing the loaddb utility with the --error-control-file option, it was possible to transmit only one row to the database server at a time. The new version has been changed to send multiple rows at a time, improving the performance of the utility when the option was used.

In a test loading 500,000 rows into a table with error code -670 set in --error-control-file so that unique key violation errors would be ignored, the new version was 1.5 times faster than the previous one when 50,000 unique key violation errors occurred.

--error-control-file option of the loaddb utility would not operate normally (CUBRIDSUS-7484)

Previously, although the --error-control-file option was set so that a certain error could be ignored in the execution of the loaddb utility, the execution of the utility was terminated when the error occurred. This problem has been fixed.

cubrid loaddb -u dba -c 10000 -d tbl.ldb --error-control-file=error_lst testdb

Time: 04/04/12 20:05:47.066 - WARNING *** file ../../src/storage/heap_file.c, line 9678 CODE = -48 Tran = 1, EID = 16661
Accessing deleted object 2|607786|233.

loaddb was terminated when a warning occurred in the server (CUBRIDSUS-6647)

Previously, when a warning occurred on the server while executing loaddb, loaddb terminated, rolling back all uncommitted records. This has been changed so that loaddb keeps executing.

loaddb process was abnormally terminated when loading a data file with a CLOB column (CUBRIDSUS-6330)

Fixed a problem in which the loaddb process was abnormally terminated when executing the loaddb utility with an unloaddb data file containing a CLOB column.

Incorrect error message of the alterdbhost utility (CUBRIDSUS-7790)

Previously, when entering an incorrect host name in the cubrid alterdbhost utility, the incorrect message "No error message available." was output. The new version has been changed to output a message appropriate to the situation.

$ cubrid alterdbhost --host=my_wrong_host_name

Abnormal error when canceling the task with Ctrl+C while creating a database (CUBRIDSUS-8864)

Fix a problem in which the "*** FATAL ERROR *** Internal error: logical log page -9 may be corrupted." occurred when canceling the task by pressing Ctrl+C while executing the cubrid createdb.

Change the process status information when executing cubrid heartbeat status (CUBRIDSUS-5212)

The new version has been changed to output more detailed process status information when executing cubrid heartbeat status.

Process Status                                                            Before                  After
cub_server has been registered to the HA process (standby)                registered              registered_and_standby
cub_server has been registered to the heartbeat resource (to_be_standby)  registered              registered_and_to_be_standby
cub_server has been registered to the heartbeat resource (to_be_active)   registered_and_active   registered_and_to_be_active

Backup created immediately after database creation could not be restored by using the -d option (CUBRIDSUS-7527)

Previously, a cubrid backupdb backup volume created immediately after the database had been created could not be restored by executing cubrid restoredb with the -d option. This problem has been fixed.

cubrid backupdb -S demodb
csql -S demodb -c "create table x"
cubrid restoredb -d backuptime demodb

FATAL ERROR ***
No error message available.
Please consult error_log file = /home1/user1/CUBRID/log/demodb_restoredb.err for additional information
... ABORT/EXIT IMMEDIATELY ...<<<---

Server process would hang while restoring the database after a forced termination of the server process (CUBRIDSUS-7217)

Fix a problem in which the server process would hang while restoring the database when it was restarted after being forcibly terminated during a long transaction.

Option to output the status of brokers on a regular basis would malfunction (CUBRIDSUS-6413)

Fix an error in which the -s option of the cubrid broker status command, which outputs broker status periodically, did not operate according to the configured interval unless it was set to 1 second.

Database server process would sometimes be abnormally terminated when executing the cubrid checkdb utility in CS mode (CUBRIDSUS-7434)

Previously, the database server process would sometimes terminate abnormally when executing the cubrid checkdb utility in CS mode where an unusually large number of non-unique indexes were present. This problem has been fixed.

Lock timeout value that was configured in cubrid.conf could be decreased after the query editor is closed in CUBRID Manager (CUBRIDSUS-7462)

Executing the query editor in CUBRID Manager will set the value of lock timeout to 1. Closing the editor will cause the related CAS to reset the value of lock timeout with the value specified in the system parameter lock_timeout_in_secs.

At this time, the lock timeout value of the CAS was erroneously decreased to one-thousandth of the original value, causing other applications connected to that CAS to use the incorrect lock timeout. The CAS kept the incorrect value until it was restarted.

However, this error does not occur when the value of lock_timeout_in_secs is set to -1, its default value.

Error when executing compressed backup in CUBRID for 64-bit Windows (CUBRIDSUS-9253)

Fixed an error in which compressed backup through "cubrid backupdb -z" failed with the message "No error message available." in CUBRID for 64-bit Windows.

Restoring a database that backed up a volume containing a file exceeding 2G failed in CUBRID for Windows (CUBRIDSUS-7588)

Fixed a problem in which restoring a database that backed up a volume containing a file exceeding 2G failed with the messages "Trying to format disk volume xxx with an incorrect value [xxx] for number of pages." and "Restoredb cancelled or an error occurred." in CUBRID for Windows.

Modify to output the DB user name in uppercase in the CUBRID utilities (CUBRIDSUS-8198)

The new version has been changed to output the DB user name in uppercase in the cubrid utilities.

Error Message

Improve to leave information in the error log file when a critical error occurred while processing database logs (CUBRIDSUS-8652)

The new version has been modified to leave the information on critical errors that occur while processing database logs in the error log file.

Improve to output the table name, index name, and key values in a unique key violation error message (CUBRIDSUS-6885)

The new version has been modified to output the table name, index name, and key values in the error message about a unique key violation in the error log.

Improve an error message that is outputted when serial is created with incorrect statements (CUBRIDSUS-7119)

The error message output when a serial is created with an incorrect statement, such as using a reserved word as a serial name or using values out of the permitted range, has been improved to indicate the incorrect part.

CREATE SERIAL s START WITH 5 INCREMENT BY 3 cache -10;
-- An error message before fixed
In line 1, column 52,
ERROR: invalid create serial

CREATE SERIAL identifier {START WITH integer} {INCREMENT BY integer}
serial_min_max {CYCLE|NOCYCLE} {CACHE unsigned_integer|NOCACHE}
-- An error message after fixed
ERROR: In line 1, column 52 before '10; '
Syntax error: unexpected '-', expecting UNSIGNED_INTEGER

Improve the error message that is outputted when the index name is incorrect (CUBRIDSUS-7122)

The new version has been modified so that the error message output when an invalid index name is used in creating an index indicates the name that caused the error.

CREATE TABLE t(i int);
CREATE INDEX INDEX ON t(i);
ERROR: In line 1, column 14 before ' on t(i); '
Syntax error: unexpected 'index'

Improve to output an error message when the host IP is incorrect at database startup (CUBRIDSUS-6189)

The new version has been modified to output an error message to the server error log file if the IP mapped to the host name is incorrect when starting the database.

Improve the error messages output when the DDL statement block parameter or the block parameter for queries without a WHERE clause is set to YES, so that they describe the situation in detail (CUBRIDSUS-8096)

Previously, when the DDL statement block parameter block_ddl_statement or the parameter block_nowhere_statement, which blocks queries without a WHERE clause, was set to YES, the message "Authorization failure." was output. In the new version, this message has been improved to describe the situation in more detail.

DDL statement is not allowed by configuration (block_ddl_statement=yes).
Statement without WHERE clause is not allowed by configuration (block_nowhere_statement=yes).

Incorrect error message when renaming a partitioned table name with the name of other existing partitioned tables (CUBRIDSUS-7176)

Fixed a problem in which the name of a partitioned table in an error message was outputted incorrectly when the partitioned table was renamed with the name of other existing partitioned tables.

CREATE TABLE pt1(i int) PARTITION BY hash(i) PARTITION 5;
CREATE TABLE pt2(i int) PARTITION BY hash(i) PARTITION 5;
RENAME pt1 AS pt2;
-- Before the fix, the internal partition name was output when the query above was executed.
ERROR: Class "pt2__p__p0" already exists
-- After the fix, the table name is output normally.
ERROR: Class "pt2" already exists

Abnormal error codes and messages when connecting the database with the 2008 R2.2 CCI driver (CUBRIDSUS-8924)

Fixed a problem in which abnormal error codes and messages were outputted when connecting to a database with the 2008 R2.2 CCI driver.

// The example of normal error message
execute error - err_code : -670, err_msg : Operation would have caused one or more unique constraint violations.
// The example of abnormal error message
execute error - err_code : -2, err_msg : yyybOperation would have caused one or more unique constraint violations

Incorrect error messages when passing invalid arguments to the ADDDATE and SUBDATE functions (CUBRIDSUS-8437)

The new version has fixed a problem in which error messages not suited to the situation were output when invalid arguments were passed to the ADDDATE and SUBDATE functions.

SELECT ADDDATE('1991-01-01 00:00:00', -10000000);
// An abnormal error message below was outputted.
ERROR: before ' , -10000000); '
No error message available.
SELECT SUBDATE(SYSDATE, 9999999);
// An abnormal error message below was outputted.
ERROR: Execute: Query execution failure #931.

Incorrect error messages when creating an index on a SHARED attribute column (CUBRIDSUS-7132)

The new version has fixed a problem in which error messages not suited to the situation were output when an index was created on a SHARED attribute column.

CREATE TABLE t1(i INT SHARED 10);
CREATE INDEX ON t1(i);
// Error message before fixed
ERROR: before ' ); ' i is not defined.
// Error message after fixed
ERROR: Cannot create index on attribute "i", defined with a shared value.

Incorrect error messages when creating serial with invalid maximum or minimum values (CUBRIDSUS-7150)

Fixed a problem in which incorrect error messages were outputted when creating serial with invalid maximum or minimum values.

// For an invalid maximum value, the error "Minimum value is invalid" was output.
CREATE SERIAL s START WITH 3 INCREMENT BY 1 MAXVALUE 1;

// For an invalid minimum value, the error "Maximum value is invalid" was output.
CREATE SERIAL s START WITH 3 INCREMENT BY 1 MINVALUE 5;

Improve the server lock timeout error message to include information on all the other transactions that the timed-out transaction was waiting for (CUBRIDSUS-7272)

The new version has been improved so that the lock timeout error message includes information on all the other transactions whose termination the timed-out transaction was waiting for. Also, the lock_timeout_message_type parameter, which was used to configure the output format of the lock timeout message, is no longer used.

Your transaction (index 2, user1@host1|9808) timed out waiting on IX_LOCK lock on class tbl.
You are waiting for user(s) user1@host1|csql(9807), user1@host1|csql(9805) to finish.

Lock timeout errors would occur although query timeout occurred during lock waiting (CUBRIDSUS-7215)

Fix a problem in which a lock timeout error occurred instead of a query timeout error when a query timeout occurred during lock waiting.

Configuration, Build and Installation

Improve to allow the dynamic change of the system parameter used to configure whether to dump call stacks for specific error codes (CUBRIDSUS-5717)

Improve to allow the dynamic change of call_stack_dump_activation_list and call_stack_dump_deactivation_list, the system parameters used to configure whether to dump call stacks for specific error codes, while a database is running. Also, critical errors are now included in call_stack_dump_activation_list without separate configuration.

Allow changing the location of the unix domain socket file of the cub_master and cub_broker processes (CUBRIDSUS-6910)

In the new version, you can change the location of the unix domain socket file, which is created when the cub_master and cub_broker processes start, through the CUBRID_TMP environment variable in CUBRID for Linux. As before, unless CUBRID_TMP is set, the default location of the cub_master socket file is /tmp and the default location of the cub_broker socket file is $CUBRID/var/CUBRID_SOCK. The CUBRID_TMP environment variable lets you avoid the following two problems:

  1. The maximum length of the $CUBRID/var/CUBRID_SOCK path storing the unix domain socket file for cub_broker is 108 characters, and this limit can be exceeded when CUBRID is installed under a long path.
  2. The /tmp directory is used to store temporary files on Linux, and some administrators periodically clean it up.

Limited the maximum value of the system parameter data_buffer_size to 2G in the 32-bit version (CUBRIDSUS-6257)

In the 32-bit version, the maximum value of the system parameter data_buffer_size is limited to 2G.
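The 2 GB cap lines up with the range addressable by a signed 32-bit offset. A quick arithmetic check (illustrative only, not CUBRID code):

```java
public class BufferLimit {
    public static void main(String[] args) {
        // 2^31 bytes is the largest offset a signed 32-bit integer can express,
        // which is consistent with capping data_buffer_size at 2G in 32-bit builds.
        long limitBytes = 1L << 31;
        System.out.println(limitBytes);                          // 2147483648
        System.out.println(limitBytes / (1024L * 1024 * 1024));  // 2 (GB)
    }
}
```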

Database could not be created when the configured buffer size of the database server process was bigger than the physical memory of the machine (CUBRIDSUS-7763)

In this version, you can create a database even when the value of the system parameter data_buffer_size, which is used to configure the size of the data buffer cached into the memory by the database server process, is bigger than the physical memory capacity.

Server process would be abnormally terminated when the size of the system parameter data_buffer_size was configured to a value bigger than the available size of the system (CUBRIDSUS-6350)

This new version has fixed a problem in which the server process was abnormally terminated when the size of the system parameter data_buffer_size was set to a value bigger than the available size of the system.

Server process would be abnormally terminated when starting HA if the system parameter ha_node_list did not have the separator @ (CUBRIDSUS-6474)

The new version has fixed a problem in which the server process terminated abnormally when executing cubrid heartbeat start if the ha_node_list in cubrid_ha.conf did not have the separator @, which distinguishes the group name from the member host names, in the HA environment.

CUBRID service would not be terminated when the value of the ha_mode parameter was set to YES in the CUBRID for Windows (CUBRIDSUS-6982)

The new version has fixed a problem in which the CUBRID service would not be terminated when executing cubrid service stop if the value of the ha_mode parameter was set to YES in the CUBRID for Windows.

For reference, only CUBRID for Linux supports the HA feature.

Incorrect error message was outputted when executing a broker utility without the cubrid_broker.conf file (CUBRIDSUS-8932)

In the previous version, when starting a broker without the cubrid_broker.conf file, the broker failed to start with the abnormal error message "The socket path is too long (>108): /home/CUBRID/var/CUBRID_SOCK/". The new version has been fixed to output a proper error message.

Previously, when executing cubrid broker status and cubrid broker stop without the cubrid_broker.conf file, the incorrect message "cubrid broker is not running." was output; this has also been fixed to output a correct error message.

Increased the number of allowed concurrent accesses to the server from 1,024 to 10,000 (CUBRIDSUS-7233)

The maximum value of max_clients, a system parameter for setting the maximum number of concurrent accesses to the database server, has been increased from 1,024 to 10,000. Please note that memory usage will be increased as the value of max_clients is increased, whether or not the clients actually access the database.

Incorrect error message was outputted when the broker parameter value was incorrect (CUBRIDSUS-9280)

The new version has fixed a problem in which an inappropriate error message "Error: can't find cubrid_broker.conf" was outputted when the broker parameter value was incorrect.

SQL_LOG setting of each CAS would be unexpectedly changed when the SLOW_LOG of the broker parameter was dynamically changed (CUBRIDSUS-7592)

Previously, the SQL_LOG setting of each CAS would be unexpectedly changed when the SLOW_LOG of the broker parameter was changed by broker_changer. This problem has been fixed.

broker_changer query_editor SLOW_LOG ON

Expanded the format of writing the ACCESS_CONTROL_FILE that limits broker access (CUBRIDSUS-5449)

In writing the ACCESS_CONTROL_FILE, which limits the application clients that can access a broker, multiple IP list files can now be written in a single line or across multiple lines for the same database name and database user.

dbname:dbuser1:IPread.txt, IPwrite.txt
dbname:dbuser2:IPread.txt
dbname:dbuser2:IPwrite.txt
dbname:dbuser2:IPexternal.txt

CUBRID would not be installed with the Linux sh package when TMPDIR environment variable was set (CUBRIDSUS-8005)

The new version has fixed a problem in which CUBRID would not be installed with the Linux sh package when the value of the TMPDIR environment variable was set.

Dependencies error of the RPM package (CUBRIDSUS-7809)

The dependencies error that occurred when installing the RPM package has been fixed.

Modify to check all the necessary libraries when building CUBRID RPM (CUBRIDSUS-7611)

Some libraries that were previously omitted from the checks are now verified when building the CUBRID RPM.

Specify the build number in the version name of the "control panel > remove programs" screen after installing the Windows version (CUBRIDSUS-9282)

The build number has been specified in the version name of the "control panel > remove programs" screen after installing the Windows version. Previously, it was outputted as 9.0.0 but now it is outputted as 9.00.xxxx.

A warning occurred when building CUBRID source in Ubuntu (CUBRIDSUS-7812)

Fixed a problem in which a warning occurred when using automake 1.1.11.3 while executing configure in Ubuntu.

Other

SIGTERM sent to CAS would be delivered to another application (CUBRIDSUS-6693)

The new version has fixed a problem in which the SIGTERM signal the broker sent to CAS to restart CAS would be delivered to another application.

The number of rows affected during query execution would be outputted incorrectly (CUBRIDSUS-7135)

Previously, sometimes the number of rows affected by query execution was outputted incorrectly but this problem has been fixed.

INSERT INTO t1 SELECT ROWNUM FROM db_class;
// Before the fix, 1 was output
1 row affected.
// After the fix, the correct value is output
52 rows affected.

Cautions

Cautions for CUBRID 9.0 Beta

The DB volume is not compatible with the previous versions (CUBRIDSUS-5238)

As the DB volume is not compatible with versions before 9.0 Beta, it is necessary to migrate data by using cubrid unloaddb/loaddb. For more information, see Upgrade.

Cautions for 9.0 Beta and Previous Versions

Since the 2008 R4.1 version, the default value of CCI_DEFAULT_AUTOCOMMIT has been ON (CUBRIDSUS-5879)

The default value for the CCI_DEFAULT_AUTOCOMMIT broker parameter, which affects the auto commit mode for applications developed with CCI interface, has been changed to ON since CUBRID 2008 R4.1. As a result of this change, CCI and CCI-based interface (for PHP, ODBC and OLE DB) users should check whether or not the application's auto commit mode is suitable for this.

From the 2008 R4.0 version, the options and parameters that use the unit of pages were changed to use the unit of volume size (CUBRIDSUS-5136)

The options (-p, -l, -s) of the cubrid createdb utility, which specify the database volume size and log volume size in pages, will be removed. Use instead the options added in 2008 R4.0 Beta: --db-volume-size, --log-volume-size, --db-page-size, and --log-page-size.

To specify the database volume size of the cubrid addvoldb utility, use the --db-volume-size option added in 2008 R4.0 Beta instead of a page count.
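For example, the size-based options can be used as follows. The database name testdb, the locale argument en_US (required by createdb in CUBRID 9.0), and the sizes are illustrative assumptions:

# Create a database using size-based options instead of page counts
cubrid createdb --db-volume-size=512M --log-volume-size=256M --db-page-size=16K --log-page-size=16K testdb en_US

# Add a volume by size rather than by page count
cubrid addvoldb --db-volume-size=512M testdb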

Because the page-unit system parameters will be removed, it is recommended to use the new byte-unit system parameters instead. For details on the related system parameters, see below.

Be cautious when setting DB volume size if you are a user of a version before 2008 R4.0 Beta (CUBRIDSUS-4222)

From 2008 R4.0 Beta, the default data page size and log page size used when creating a database were changed from 4 KB to 16 KB. If you specify the database volume in pages, the resulting byte size may therefore differ from what you expect. With no options, earlier versions created a 100 MB database volume with a 4 KB page size; from 2008 R4.0, a 512 MB volume with a 16 KB page size is created.

In addition, the minimum available database volume size is limited to 20 MB; a database volume smaller than this cannot be created.

The default values of some system parameters changed starting from 2008 R4.0 (CUBRIDSUS-4095)

Starting from 2008 R4.0, the default values of some system parameters have been changed.

The default value of max_clients, which specifies the number of concurrent connections allowed by a DB server, and that of index_unfill_factor, which specifies the ratio of space reserved for future updates when creating an index page, have been changed. In addition, the new byte-unit system parameters have larger default values than their page-unit predecessors, so they use more memory by default.

Previous System Parameter     Added System Parameter       Previous Default Value   Changed Default Value (unit: byte)
max_clients                   -                            50                       100
index_unfill_factor           -                            0.2                      0.05
data_buffer_pages             data_buffer_size             100M (page size=4K)      512M
log_buffer_pages              log_buffer_size              200K (page size=4K)      4M
sort_buffer_pages             sort_buffer_size             64K (page size=4K)       2M
index_scan_oid_buffer_pages   index_scan_oid_buffer_size   16K (page size=4K)       64K

In addition, when a database is created using cubrid createdb, the minimum value of the data page size and the log page size has been changed from 1K to 4K.
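In cubrid.conf, the byte-unit parameters can be set directly. The following sketch simply restates the new default values listed in the table above; treat them as starting points to adjust for your workload, not as recommendations:

# cubrid.conf
data_buffer_size=512M
log_buffer_size=4M
sort_buffer_size=2M
index_scan_oid_buffer_size=64K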

Changed so that database services, utilities, and applications cannot be executed when the system parameter is incorrectly configured (CUBRIDSUS-5375)

Database services, utilities, and applications now refuse to start in the following cases: a system parameter not defined in cubrid.conf or cubrid_ha.conf is configured, the value of a system parameter exceeds its threshold, or the page-unit and byte-unit forms of the same parameter are configured simultaneously.

Database fails to start if the data_buffer_size is configured with a value that exceeds 2G in CUBRID 32-bit version (CUBRIDSUS-5349)

In the CUBRID 32-bit version, the database fails to start if data_buffer_size is configured with a value exceeding 2 GB. Note that because of OS limits, the value cannot exceed 2 GB in the 32-bit version.

The GLO class used in 2008 R3.0 and earlier is no longer supported (CUBRIDSUS-3826)

CUBRID 2008 R3.0 and earlier versions processed large objects with the Generalized Large Object (glo) class, but the glo class was removed in CUBRID 2008 R3.1. Later versions instead support the BLOB and CLOB (hereafter LOB) data types. For more information, see BLOB/CLOB Data Types.

Users of the glo class are advised to proceed as follows:

  • Save the GLO data as files, then modify applications and the DB schema so they no longer use GLO.
  • Migrate the database by using the unloaddb and loaddb utilities.
  • Load the files into LOB data according to the modified applications.
  • Verify that the modified applications operate normally.

For reference, if the cubrid loaddb utility loads a table that inherits from the GLO class or has a GLO class type, it stops loading and displays the error message "Error occurred during schema loading."

With the discontinued support of GLO class, the deleted functions for each interface are as follows:

Interface   Deleted Functions
CCI         cci_glo_append_data, cci_glo_compress_data, cci_glo_data_size, cci_glo_delete_data, cci_glo_destroy_data, cci_glo_insert_data, cci_glo_load, cci_glo_new, cci_glo_read_data, cci_glo_save, cci_glo_truncate_data, cci_glo_write_data
JDBC        CUBRIDConnection.getNewGLO, CUBRIDOID.loadGLO, CUBRIDOID.saveGLO
PHP         cubrid_new_glo, cubrid_save_to_glo, cubrid_load_from_glo, cubrid_send_glo

Recommendations for controlling services with the CUBRID utility in Windows Vista and higher (CUBRIDSUS-4186)

To control services using cubrid utility from Windows Vista and higher, it is recommended to start the command prompt window with administrative privileges.

If you run the cubrid utility from a command prompt that was not started with administrative privileges, it can still be executed with elevated privileges through the User Account Control (UAC) dialog box, but you will not be able to see the resulting messages.

The procedures for starting the command prompt window as an administrator in Windows Vista and higher are as follows:

  • Right-click [Start > All Programs > Accessories > Command Prompt].
  • Select [Run as administrator (A)]; when the privilege-elevation dialog box appears, click [Yes] to start the prompt with administrative privileges.

Specifying a question mark when entering connection information as a URL string in JDBC (CUBRIDSUS-3217)

In earlier versions, when entering connection information as a URL string in JDBC, property information was applied even if you did not enter a question mark (?). From CUBRID 2008 R3.0, the question mark is required by the syntax; omitting it causes an error. In addition, the colons (:) must be specified even if the connection information contains no username or password.

URL=jdbc:CUBRID:127.0.0.1:31000:db1:::altHosts=127.0.0.2:31000,127.0.0.3:31000 – Error
URL=jdbc:CUBRID:127.0.0.1:31000:db1:::?altHosts=127.0.0.2:31000,127.0.0.3:31000 – Normal

Port configuration is required if the protocol between the master and server processes is changed, or if two versions are running at the same time (CUBRIDSUS-3564)

Because the communication protocol between a master process (cub_master) and a server process (cub_server) has been changed, the master process of CUBRID 2008 R3.0 or later cannot communicate with the server process of a lower version, and the master process of a lower version cannot communicate with a server process of 2008 R3.0 version or later. Therefore, if you run two versions of CUBRID at the same time by adding a new version in an environment where a lower version has already been installed, you should modify the cubrid_port_id system parameter of cubrid.conf so that different ports are used by the different versions.
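For example, the newly added version's cubrid.conf can be given a port that does not collide with the older installation. The default cubrid_port_id is 1523; 1524 below is an arbitrary free port chosen purely for illustration:

# cubrid.conf of the newly added version
cubrid_port_id=1524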

Not allowed to include @ in a database name (CUBRIDSUS-2828)

If @ is included in a database name, it can be interpreted as specifying a host name. To prevent this, the cubrid createdb, cubrid renamedb, and cubrid copydb utilities no longer allow @ in a database name.