
Loading Database

Description

You can load a database by using the cubrid loaddb utility in the following situations:

  • Migrating a database from a previous version of CUBRID to a new version
  • Migrating a third-party DBMS database to a CUBRID database
  • Inserting a massive amount of data faster than with the INSERT statement

In general, the cubrid loaddb utility uses the files (schema definition, object input, and index definition files) created by the cubrid unloaddb utility.

Syntax

cubrid loaddb [ options ] database_name

[ options ]

-u | -p | -l | -v | -c | -s | -i | -d | -t

--user | --password | --load-only | --verbose | --periodic-commit | --schema-file | --index-file | --data-file | --table | --data-file-check-only | --estimated-size | --no-oid | --no-statistics | --ignore-class-file | --error-control-file | --no-logging

  • cubrid: An integrated utility for CUBRID service and database management.
  • loaddb: A utility that loads the files generated by the unload operation into a database. It can also be used to enter mass data into a database faster than the INSERT statement by loading an input file written by the user. Database loading is performed in standalone mode with DBA authorization.
  • options: A short-name option starts with a single dash (-) and a full-name option starts with a double dash (--). Options are case sensitive.
  • database_name: Specifies the name of the database into which the data is loaded.
Return Value

The return values of the cubrid loaddb utility are as follows:

  • 0: Success
  • Non-zero: Failure
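In a shell script, the return value can be checked directly. The following is a minimal sketch; run_loaddb is a stand-in function so the control flow is visible without a CUBRID installation:

```shell
# Hedged sketch: branch on the loaddb return value (0 = success, non-zero = failure).
# run_loaddb is a stand-in; with CUBRID installed it would be, for example:
#   cubrid loaddb -u dba -d demodb_objects newdb
run_loaddb() {
    return 0    # simulate a successful load here
}

if run_loaddb; then
    echo "load succeeded"
else
    echo "load failed" >&2
    exit 1
fi
```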
Input Files
  • Schema file (database-name_schema): A file generated by the unload operation; it contains schema information defined in the database.
  • Object file (database-name_objects): A file created by an unload operation. It contains information on the records in the database.
  • Index file (database-name_indexes): A file created by an unload operation. It contains information on the indexes defined in the database.
  • Trigger file (database-name_trigger): A file created by an unload operation. It contains information on the triggers defined in the database.
  • User-defined object file (user_defined_object_file): A file in table format written by the user to enter mass data. (For details, see How to Write Files to Load Database.)
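For illustration, a minimal user-defined object file might look like the following: a %class line names the target table and its columns, and each subsequent line is one record. The table name, column names, and values here are hypothetical; the exact syntax is described in How to Write Files to Load Database.

```
%class athlete (name gender nation_code)
'Jang Mi-Ran' 'F' 'KOR'
'Michael Phelps' 'M' 'USA'
```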
Options

The following options are available with the cubrid loaddb utility (options are case sensitive).

-u, --user
Enters the database user's account. The default value is PUBLIC.

-p, --password
Enters the database user's password.

-l, --load-only
Loads records without checking the statements and types included in the object file.

-v, --verbose
Displays detailed information about the data loading status on the screen.

-c, --periodic-commit
Commits the transaction every time the specified number of records has been entered.

-s, --schema-file
Specifies the schema file created by the unload operation and loads the schema.

-i, --index-file
Specifies the index file created by the unload operation and loads the indexes.

-d, --data-file
Specifies the data file created by the unload operation and loads the records.

-t, --table
Specifies the table name when the table name header is omitted in the data file to be loaded.

--data-file-check-only
Checks only the statements and types included in the data file; it does not load records.

--estimated-size
Specifies the expected number of records.

--no-oid
Loads records while ignoring the OID reference relationships included in the data file.

--no-statistics
Loads records without updating the database statistics information.

--ignore-class-file
Specifies a file that lists the classes to be ignored during loading.

--error-control-file
Specifies a file that describes how to handle specific errors occurring during data loading.

--no-logging
Loads the data file quickly because transaction logs are not stored; however, data cannot be recovered if an error occurs. Read the Remark section at the bottom of this page carefully before using this option.

Entering a user account (-u or --user)

The -u option specifies the user account of a database where records are loaded. If the option is not specified, the default value is PUBLIC.

cubrid loaddb -u admin -d demodb_objects newdb

Entering the password (-p or --password)

The -p option specifies the password of a database user who will load records. If the option is not specified, you will be prompted to enter the password.

cubrid loaddb -p admin -d demodb_objects newdb

Loading records without checking syntax (-l or --load-only)

The -l option loads data directly without checking the syntax for the data to be loaded. The following example shows how to load data included in demodb_objects to newdb.

If the -l option is used, loading speed increases because data is loaded without checking the syntax included in demodb_objects, but an error might occur.

cubrid loaddb -l -d demodb_objects newdb

Displaying the loading status information (-v or --verbose)

The following example shows how to display detailed information on the tables and records of the database being loaded while the database loading operation is performed. You can check the detailed information such as the progress level, the class being loaded and the number of records entered by using the -v option.

cubrid loaddb -v -d demodb_objects newdb

Configuring the commit interval (-c or --periodic-commit)

The following command uses the -c option to commit the transaction every time 100 records are entered into newdb. If the -c option is not specified, all records included in demodb_objects are loaded into newdb before the transaction is committed. If the -c option is used together with the -s or -i option, a commit is performed every time 100 DDL statements are loaded. The optimal commit interval depends on the data to be loaded; it is recommended to set the parameter of the -c option to 50 for schema loading, 1,000 for record loading, and 1 for index loading.

cubrid loaddb -c 100 -d demodb_objects newdb

Schema loading (-s or --schema-file)

The following statement loads the schema information defined in demodb into the newly created newdb database. demodb_schema is a file created by the unload operation and contains the schema information of the unloaded database. You can load the actual records after loading the schema information first by using the -s option.

cubrid loaddb -u dba -s demodb_schema newdb

 

Start schema loading.

Total 86 statements executed.

Schema loading from demodb_schema finished.

Statistics for Catalog classes have been updated.

The following statement loads the triggers defined in demodb into the newly created newdb database. demodb_trigger is a file created by the unload operation and contains the trigger information of the unloaded database. It is recommended to load the trigger information after loading the records.

cubrid loaddb -u dba -s demodb_trigger newdb

Index loading (-i or --index-file)

The following command loads the index information defined in demodb into the newly created newdb database. demodb_indexes is a file created by the unload operation and contains the index information of the unloaded database. You can create indexes after loading records by using the -i option together with the -d option.

cubrid loaddb -u dba -i demodb_indexes newdb

Data loading (-d or --data-file)

The following command loads the record information into newdb by specifying the data file or the user-defined object file with the -d option. demodb_objects is either an object file created by the unload operation or a user-defined object file written by the user for mass data loading.

cubrid loaddb -u dba -d demodb_objects newdb
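Putting the sections above together, a typical full reload loads the schema first, then the records, then the indexes, with triggers loaded after the records. The following is a sketch using the commit intervals recommended earlier; the file names follow the unloaddb naming convention and assume newdb already exists:

```
cubrid loaddb -u dba -c 50   -s demodb_schema  newdb   # schema first
cubrid loaddb -u dba -c 1000 -d demodb_objects newdb   # then records
cubrid loaddb -u dba -c 1    -i demodb_indexes newdb   # indexes after records
cubrid loaddb -u dba         -s demodb_trigger newdb   # triggers after records
```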

Specifying a table name to be loaded (-t or --table)

The following command specifies the table name after this option if a table name header is omitted in the data file to be loaded.

cubrid loaddb -u dba -d demodb_objects -t tbl_name newdb

Checking the syntax for the data to be loaded only (--data-file-check-only)

The following command checks the statements and types of the data contained in demodb_objects by using the --data-file-check-only option; it does not load any records.

cubrid loaddb --data-file-check-only -d demodb_objects newdb

Number of expected records (--estimated-size)

The --estimated-size option can be used to improve loading performance when the number of records to be loaded exceeds the default value of 5,000. That is, this option improves load performance by allocating a larger hash memory area for record storage.

cubrid loaddb --estimated-size 8000 -d demodb_objects newdb

Loading records while ignoring the reference relationship (--no-oid)

The following is a command that loads records into newdb ignoring the OIDs in demodb_objects.

cubrid loaddb --no-oid -d demodb_objects newdb

Loading records without updating statistics information (--no-statistics)

The following command does not update the statistics information of newdb after loading demodb_objects. It is especially useful when a small amount of data is loaded into a relatively large database; using this option can improve load performance.

cubrid loaddb --no-statistics -d demodb_objects newdb

Specifying the ignoring classes (--ignore-class-file)

You can specify a file that lists the classes to be ignored while loading records. The records of all classes except those specified in the file will be loaded.

cubrid loaddb --ignore-class-file=skip_class_list -d demodb_objects newdb
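For example, the ignore file can be created with a here-document. The class names below are hypothetical, and the file format is assumed to be one class name per line:

```shell
# Create a file listing classes (tables) whose records should be skipped.
# 'code' and 'history' are hypothetical class names; one name per line (assumed format).
cat > skip_class_list <<'EOF'
code
history
EOF

# The actual load (requires a CUBRID installation):
#   cubrid loaddb --ignore-class-file=skip_class_list -d demodb_objects newdb
```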

Specifying the error information file (--error-control-file)

This option specifies the file that describes how to handle specific errors occurring during database loading.

cubrid loaddb --error-control-file=error_test -d demodb_objects newdb

For the server error code name, see the $CUBRID/include/dbi.h file.

For error messages by error code (error number), see the number under $set 5 MSGCAT_SET_ERROR in the $CUBRID/msg/<character set name>/cubrid.msg file.

vi $CUBRID/msg/en_US/cubrid.msg

 

$set 5 MSGCAT_SET_ERROR

1 Missing message for error code %1$d.

2 Internal system failure: no more specific information is available.

3 Out of virtual memory: unable to allocate %1$ld memory bytes.

4 Has been interrupted.

...

670 Operation would have caused one or more unique constraint violations.

...

The format of a file that details specific errors is as follows:

  • -<error code>: Configures to ignore the error that corresponds to the <error code> (loaddb is continuously executed even when an error occurs while it is being executed).
  • +<error code>: Configures not to ignore the error that corresponds to the <error code> (loaddb is stopped when an error occurs while it is being executed).
  • +DEFAULT: Configures not to ignore errors from 24 to 33.

If no error file is specified with the --error-control-file option, the loaddb utility ignores errors 24 to 33 by default. These are warning errors indicating that there is not enough space in the database volume; if the assigned database volume runs out of space, a generic volume is created automatically.

The following example shows a file that details errors.

  • Because +DEFAULT is configured, the warning errors 24 to 33, which indicate insufficient DB volume space, are not ignored.
  • Error code 2 is not ignored because +2 is specified later, even though -2 is specified first.
  • -670 is specified to ignore error code 670, a unique constraint violation error.
  • #-115 is treated as a comment because it starts with #.

vi error_file

 

+DEFAULT

-2

-670

#-115 --> comment

+2

Remark

The --no-logging option lets loaddb load the data file quickly because it does not store transaction logs; however, it carries the risk that data cannot be recovered if an error occurs, such as an incorrect file format or a system failure. In that case, you must rebuild the database to solve the problem. Therefore, it is generally not recommended to use this option except when building a new database that does not require data recovery.