
Broker Logs

There are three types of logs that relate to the Broker: the access log, the error log, and the SQL log. Each log can be found in the log directory under the installation directory. You can change the directory in which these logs are saved through the LOG_DIR and ERROR_LOG_DIR parameters of the Broker environment configuration file (cubrid_broker.conf).
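The log-related parameters mentioned on this page all live in cubrid_broker.conf. A minimal, illustrative broker section might look like the following (the section name and the values shown are examples, not defaults):

```ini
# Illustrative excerpt of cubrid_broker.conf (values are examples only)
[%query_editor]
LOG_DIR             =log/broker/sql_log
ERROR_LOG_DIR       =log/broker/error_log
SQL_LOG             =ON
SQL_LOG_MAX_SIZE    =10000
LOG_BACKUP          =ON
```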

Checking the Access Log

The access log file records information on the application client and is saved under the name broker_name.access. If the LOG_BACKUP parameter is set to ON in the Broker environment configuration file, then when the Broker stops normally, the access log file is saved with the date and time at which the Broker stopped. For example, if broker1 stopped at 12:27 P.M. on June 17, 2008, an access log file named broker1.access.20080617.1227 is generated in the log/broker directory.

The following is an example and description of an access log file created in the log directory:

1 192.168.1.203 - - 972523031.298 972523032.058 2008/06/17 12:27:46~2008/06/17 12:27:47 7118 - -1

2 192.168.1.203 - - 972523052.778 972523052.815 2008/06/17 12:27:47~2008/06/17 12:27:47 7119 ERR 1025

1 192.168.1.203 - - 972523052.778 972523052.815 2008/06/17 12:27:49~2008/06/17 12:27:49 7118 - -1

  • 1 : ID assigned to the application server of the Broker
  • 192.168.1.203 : IP address of the application client
  • 972523031.298 : UNIX timestamp value when the client's request processing started
  • 2008/06/17 12:27:46 : Time when the client's request processing started
  • 972523032.058 : Unix timestamp value when the client's request processing finished
  • 2008/06/17 12:27:47 : Time when the client's request processing finished
  • 7118 : Process ID of the application server
  • -1 : No error occurred during request processing
  • ERR 1025 : An error occurred during request processing; the error details are located at offset=1025 of the error log file
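Because the field order is fixed, an access-log line can be parsed with a few lines of script. The following is a minimal sketch, assuming the whitespace-separated layout of the sample lines above; parse_access_line is a hypothetical helper, not a CUBRID utility:

```python
# Sketch of parsing one access-log line into its documented fields.
# Illustrative only; assumes the layout shown in the examples above.
def parse_access_line(line):
    parts = line.split()
    record = {
        "cas_id": int(parts[0]),       # ID assigned to the application server
        "ip": parts[1],                # IP address of the application client
        "start_ts": float(parts[4]),   # UNIX timestamp: request processing started
        "end_ts": float(parts[5]),     # UNIX timestamp: request processing finished
        "pid": int(parts[9]),          # process ID of the application server
    }
    # The last two fields are either "- -1" (no error) or "ERR <offset>",
    # where <offset> points into the error log file.
    record["error_offset"] = int(parts[11]) if parts[10] == "ERR" else None
    return record
```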
Checking the Error Log

The error log file records information on errors that occurred while processing the client's requests and is stored under the name broker_name_app_server_num.err.

The following is an example and description of an error log:

Time: 02/04/09 13:45:17.687 - SYNTAX ERROR *** ERROR CODE = -493, Tran = 1, EID = 38

Syntax: Unknown class "unknown_tbl". select * from unknown_tbl

  • Time : 02/04/09 13:45:17.687 : Time when the error occurred
  • - SYNTAX ERROR : Type of error (e.g. SYNTAX ERROR, ERROR, etc.)
  • *** ERROR CODE = -493 : Error code
  • Tran = 1 : Transaction ID. -1 indicates that no transaction ID is assigned.
  • EID = 38 : Error ID. This ID is used to find the SQL log related to the server or client logs when an error occurs during SQL statement processing.
  • Syntax... : Error message (An ellipsis ( ... ) indicates omission.)
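The header line of each entry follows a fixed pattern, so the error code, transaction ID, and EID can be pulled out with a regular expression. A minimal sketch (parse_error_header is a hypothetical helper, not part of CUBRID):

```python
import re

# Matches header lines such as:
# Time: 02/04/09 13:45:17.687 - SYNTAX ERROR *** ERROR CODE = -493, Tran = 1, EID = 38
HEADER = re.compile(
    r"Time: (?P<time>\S+ \S+) - (?P<type>.+?) \*\*\* "
    r"ERROR CODE = (?P<code>-?\d+), Tran = (?P<tran>-?\d+), EID = (?P<eid>\d+)"
)

def parse_error_header(line):
    m = HEADER.match(line)
    if m is None:
        return None
    return {
        "time": m.group("time"),       # time when the error occurred
        "type": m.group("type"),       # e.g. SYNTAX ERROR, ERROR
        "code": int(m.group("code")),  # error code
        "tran": int(m.group("tran")),  # transaction ID (-1 = none assigned)
        "eid": int(m.group("eid")),    # error ID used to find the related SQL log
    }
```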
Managing the SQL Log

The SQL log file records the SQL statements requested by the application client and is stored under the name broker_name_app_server_num.sql.log. The SQL log is generated in the log/broker/sql_log directory when the SQL_LOG parameter is set to ON. Note that the size of the generated SQL log file cannot exceed the value set for the SQL_LOG_MAX_SIZE parameter. CUBRID offers the broker_log_top, broker_log_converter, and broker_log_runner utilities to manage SQL logs. Each utility must be executed in the directory where the corresponding SQL log exists.

The following are examples and descriptions of SQL log files:

02/04 13:45:17.687 (38) prepare 0 insert into unique_tbl values (1)

02/04 13:45:17.687 (38) prepare srv_h_id 1
02/04 13:45:17.687 (38) execute srv_h_id 1 insert into unique_tbl values (1)

02/04 13:45:17.687 (38) execute error:-670 tuple 0 time 0.000, EID = 39

02/04 13:45:17.687 (0) auto_rollback

02/04 13:45:17.687 (0) auto_rollback 0

*** 0.000

02/04 13:45:17.687 (39) prepare 0 select * from unique_tbl

02/04 13:45:17.687 (39) prepare srv_h_id 1 (PC)

02/04 13:45:17.687 (39) execute srv_h_id 1 select * from unique_tbl

02/04 13:45:17.687 (39) execute 0 tuple 1 time 0.000

02/04 13:45:17.687 (0) auto_commit

02/04 13:45:17.687 (0) auto_commit 0

*** 0.000

  • 02/04 13:45:17.687 : Time when the application sent the request
  • (39) : Sequence number of the SQL statement group. If prepared statement pooling is used, it is uniquely assigned to each SQL statement in the file.
  • prepare 0 : Whether or not it is a prepared statement
  • prepare srv_h_id 1 : Prepares the SQL statement as srv_h_id 1.
  • (PC) : Output when the query plan stored in the plan cache is used
  • SELECT... : SQL statement to be executed. (An ellipsis ( ... ) indicates omission.) For statement pooling, the binding variable of the WHERE clause is represented as a question mark (?).
  • execute 0 tuple 1 time 0.000 : One row was returned; the time spent is 0.000 seconds
  • auto_commit/auto_rollback : Automatically committed or rolled back. The second auto_commit/auto_rollback is an error code. 0 indicates that the transaction has been completed without an error.
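The result lines of the form "execute ... tuple N time T" carry the row count and elapsed time, so they are the natural lines to extract when scanning a SQL log by script. A minimal sketch, assuming the layout of the sample entries above (parse_execute_result is a hypothetical helper):

```python
import re

# Matches SQL-log result lines such as:
# 02/04 13:45:17.687 (39) execute 0 tuple 1 time 0.000
# 02/04 13:45:17.687 (38) execute error:-670 tuple 0 time 0.000, EID = 39
RESULT = re.compile(
    r"^(?P<time>\d\d/\d\d \d\d:\d\d:\d\d\.\d+) "
    r"\((?P<seq>\d+)\) execute(?:_all)? (?P<err>error:-?\d+|\d+) "
    r"tuple (?P<tuples>\d+) time (?P<secs>\d+\.\d+)"
)

def parse_execute_result(line):
    m = RESULT.match(line)
    if m is None:
        return None
    return {
        "seq": int(m.group("seq")),          # sequence number of the SQL statement group
        "tuples": int(m.group("tuples")),    # number of rows processed
        "seconds": float(m.group("secs")),   # elapsed time in seconds
        "failed": m.group("err").startswith("error"),
    }
```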

The broker_log_top utility analyzes the SQL logs generated during a specific period. The analyzed SQL statements and their execution times are written to files in descending order of execution time: the SQL statements are stored in log.top.q and the execution-time results in log.top.res.

The broker_log_top utility is useful for analyzing long-running queries. The syntax is as follows:

broker_log_top [options] sql_log_file_list

options : {-t | -F  from_date | -T  to_date}

If the -t option is specified, the results are output per transaction.

SQL statements executed during a specific period can be analyzed by using the -F and -T options. The input format is MM[/DD[ hh[:mm[:ss[.msec]]]]], and any part enclosed in [] can be omitted. If a value is omitted, 01 is assumed for DD, and 0 is assumed for hh, mm, ss, and msec.

-- Set the search range to milliseconds

broker_log_top -F "01/19 15:00:25.000" -T "01/19 15:15:25.180" log1.log

 

-- The part where the time format is omitted is set to 0 by default. This means that -F "01/19 00:00:00.000" -T "01/20 00:00:00.000" is input.

broker_log_top -F "01/19" -T "01/20" log1.log
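The defaulting rule for omitted parts of the -F/-T argument can be sketched as follows (fill_time_filter is an illustrative helper that mimics the documented behavior; it is not CUBRID code, and it reads the fractional part digit-for-digit as milliseconds):

```python
# Expand an -F/-T time filter of the form MM[/DD[ hh[:mm[:ss[.msec]]]]]
# to a full "MM/DD hh:mm:ss.msec" value: omitted parts default to
# DD=01 and hh=mm=ss=msec=0, as described above.
def fill_time_filter(arg):
    month, _, rest = arg.partition("/")
    day, _, clock = rest.partition(" ")
    hh, mm, ss_ms = (clock.split(":") + ["0", "0", "0"])[:3]
    ss, _, msec = ss_ms.partition(".")
    return "%02d/%02d %02d:%02d:%02d.%03d" % (
        int(month), int(day or 1), int(hh or 0),
        int(mm or 0), int(ss or 0), int(msec or 0))
```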

If no option is specified, all logs are output per SQL statement.

The following shows the results of executing the broker_log_top utility on logs generated from November 11th to November 12th; the SQL statements are displayed in descending order of execution time. Month and day are separated by a slash (/) when the period is specified. Note that on Windows the pattern "*.sql.log" is not expanded, so each SQL log file must be listed explicitly, separated by spaces.

--Execution broker_log_top on Linux

% broker_log_top -F "11/11" -T "11/12" -t *.sql.log

 

query_editor_1.sql.log

query_editor_2.sql.log

query_editor_3.sql.log

query_editor_4.sql.log

query_editor_5.sql.log

 

--Executing broker_log_top on Windows

% broker_log_top -F "11/11" -T "11/12" -t query_editor_1.sql.log query_editor_2.sql.log query_editor_3.sql.log query_editor_4.sql.log query_editor_5.sql.log

When the example above is executed, the log.top.q and log.top.res files are generated in the directory where the analyzed logs are stored. In log.top.q, you can view each SQL statement and its line number; in log.top.res, you can view the minimum, maximum, and average execution time as well as the number of executions of each SQL statement.

--log.top.q file

[Q1]-------------------------------------------

broker1_6.sql.log:137734

11/11 18:17:59.396 (27754) execute_all srv_h_id 34 select a.int_col, b.var_col from dml_v_view_6 a, dml_v_view_6 b, dml_v_view_6 c , dml_v_view_6 d, dml_v_view_6 e where a.int_col=b.int_col and b.int_col=c.int_col and c.int_col=d.int_col and d.int_col=e.int_col order by 1,2;

11/11 18:18:58.378 (27754) execute_all 0 tuple 497664 time 58.982

.

.

[Q4]-------------------------------------------

broker1_100.sql.log:142068

11/11 18:12:38.387 (27268) execute_all srv_h_id 798 drop table list_test;

11/11 18:13:08.856 (27268) execute_all 0 tuple 0 time 30.469

 

-- log.top.res file

            max       min       avg       cnt(err)

-----------------------------------------------------

[Q1]        58.982    30.371    44.676    2 (0)

[Q2]        49.556    24.023    32.688    6 (0)

[Q3]        35.548    25.650    30.599    2 (0)

[Q4]        30.469    0.001     0.103  1050 (0)
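The per-query statistics in log.top.res can be reproduced from a list of execution times with a simple aggregation. A sketch under the assumption that the times have already been collected per query (summarize is a hypothetical helper; error counts are omitted):

```python
# Aggregate execution times per query into the max/min/avg/count figures
# that log.top.res reports. Input: {"Q1": [58.982, 30.371], ...}
def summarize(times_by_query):
    rows = {}
    for query, times in times_by_query.items():
        rows[query] = {
            "max": max(times),
            "min": min(times),
            "avg": sum(times) / len(times),
            "cnt": len(times),
        }
    return rows
```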

To save SQL logs created in log/broker/sql_log under the installation directory to a separate file, execute the broker_log_converter utility. The syntax of the broker_log_converter utility is as follows:

broker_log_converter  SQL_log_file  output_file

The following example shows that the query in the query_editor_1.sql.log file is converted into the query_convert.in file.

% broker_log_converter query_editor_1.sql.log query_convert.in

To re-execute the queries saved in a query file created by the broker_log_converter utility, execute the broker_log_runner utility. The syntax of the broker_log_runner utility is as follows. The example after the option table re-executes the queries saved in query_convert.in against demodb, assuming that the IP address of the Broker is 192.168.1.10 and its port number is 30000.

broker_log_runner options input_file

options : -I cas_ip -P cas_port -d dbname [-u dbuser [-p dbpasswd]] [-t num_thread] [-r repeat_count] [-Q] [-o result_file]

broker_log_runner Utility Options

Option            Description
----------------  ----------------------------------------------------------
-I broker_ip      IP address or host name of the CUBRID Broker
-P broker_port    Port number of the CUBRID Broker
-d dbname         Name of the database against which the queries are executed
-u dbuser         Database user name (default: public)
-p dbpasswd       Database password
-t num_thread     Number of threads (default: 1)
-r repeat_count   Number of times the queries are executed (default: 1)
-Q                Stores the query plan in the result file
-o result_file    Name of the file in which execution results are stored

% broker_log_runner -I 192.168.1.10  -P 30000 -d demodb -t 2 query_convert.in

cas_ip = 192.168.1.10

cas_port = 30000

num_thread = 2

repeat = 1

dbname = demodb

dbuser = public

dbpasswd =

exec_time : 0.001

exec_time : 0.000

0.000500 0.000500 -

 

% broker_log_runner -I 192.168.1.10 -P 30000 -d demodb -o result -Q query_convert.in

 …

% cat result.0

-------------- query -----------------

SELECT * FROM athlete where code=10099;

 

cci_execute:1

---------- query plan --------------

Join graph segments (f indicates final):

seg[0]: [0]

seg[1]: code[0] (f)

seg[2]: name[0] (f)

seg[3]: gender[0] (f)

seg[4]: nation_code[0] (f)

seg[5]: event[0] (f)

Join graph nodes:

node[0]: athlete athlete(6677/107) (sargs 0)

Join graph terms:

term[0]: (athlete.code=10099) (sel 0.000149768) (sarg term) (not-join eligible) (indexable code[0]) (loc 0)

 

Query plan:

 

iscan

    class: athlete node[0]

    index: pk_athlete_code term[0]

    cost:  fixed 0(0.0/0.0) var 0(0.0/0.0) card 1

 

Query stmt:

 

select athlete.code, athlete.[name], athlete.gender, athlete.nation_code, athlete.event from athlete athlete where (athlete.code= ?:0 )

 

 

---------- query result --------------

10099|Andersson Magnus|M|SWE|Handball|

-- 1 rows ----------------------------