
NBD Benchmark Results

1. About the Test

Outline

The NBD (NHN Internet Bulletin Board Application Database) Benchmark measures the performance of a DBMS used for Bulletin Board System (BBS) type web services. It places no restrictions on the hardware or DBMS configuration and does not factor in pricing. This makes it a useful benchmark for comparing the performance of database management systems on the same hardware configuration.

This NBD Benchmark test was conducted on November 29th, 2008 by Wongsae Choi of NHN Corp. to verify the performance of the CUBRID R1.1 database management system in comparison to OSS DBMS D1, Commercial DBMS D1, and Commercial DBMS D2.

Test Scenario

Configuration of Test Databases

For this test, small and medium-sized databases were used. The characteristics of each database are described in the table below.

Configuration of Test Databases

Configuration Unit and Expansion Rules of NBD

The table below defines the configuration unit of the NBD database in BBS units. The NBD database is expanded by combining the unit elements of the table. The expansion rules are as follows:

  • Step 1 is composed of one small BBS.
  • Up to Step 5, five medium-sized BBS and one small BBS are added at every step.
  • From Step 6, two large BBS, five medium-sized BBS and one small BBS are added at every step.

Configuration Unit and Expansion Rules of NBD
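The expansion rules above can be sketched as a small function that returns the BBS composition at a given step (the function name and the (large, medium, small) return shape are ours, not part of the NBD specifications):

```python
def nbd_composition(step):
    """Return (large, medium, small) BBS counts at a given NBD step.

    Follows the expansion rules above: Step 1 is one small BBS;
    steps 2-5 each add five medium-sized and one small BBS; from
    step 6 on, each step also adds two large BBS.
    """
    large, medium, small = 0, 0, 1  # Step 1: one small BBS
    for s in range(2, step + 1):
        if s >= 6:
            large += 2
        medium += 5
        small += 1
    return large, medium, small
```

For example, at Step 6 the database holds 2 large, 25 medium-sized, and 6 small BBS.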

Table Schema used in the test

The logical schema of the database established for the NBD Benchmark test is shown below.

Table Schema used in the test

In addition, a test was performed for a variant in which the field that stores the click count of a post is separated into its own table, to tune the performance of OSS DBMS D1. For OSS DBMS D1, performance varies considerably depending on whether the Query Result Cache feature can be used.

However, if the click count is kept in the post information table as defined in the current NBD specifications, the Query Result Cache feature of OSS DBMS D1 cannot work effectively, because the click count changes constantly and the cached result set must be invalidated on every change. Therefore, this test compares two schemas: one in which the click count field is separated into an independent table, and one in which the field is integrated into the post information table as defined in the NBD specifications.

The Schema Integration Model shown below is the schema model defined in the NBD specifications; that is, the click count field is managed as an attribute of the post information table (NBD_ARTICLE_INFO) in this model.

Schema Integration Model

The Schema Separation Model has the click count as a separate table to improve the performance of OSS DBMS D1. This allows queries that only access the post information table (NBD_ARTICLE_INFO) to use the Query Result Cache feature.

Schema Separation Model
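The effect of the Schema Separation Model can be sketched with an in-memory SQLite database. NBD_ARTICLE_INFO is the table named in the specifications; the count-table name NBD_ARTICLE_COUNT and the columns are hypothetical stand-ins, and the real NBD schema has more fields:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Separation model: the click count lives in its own table, so reads of
# NBD_ARTICLE_INFO are unaffected by click-count updates, and a query
# result cache over that table can stay valid.
cur.execute("CREATE TABLE NBD_ARTICLE_INFO "
            "(article_id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE NBD_ARTICLE_COUNT "
            "(article_id INTEGER PRIMARY KEY, click_count INTEGER)")

cur.execute("INSERT INTO NBD_ARTICLE_INFO VALUES (1, 'hello')")
cur.execute("INSERT INTO NBD_ARTICLE_COUNT VALUES (1, 0)")

# Reading a post touches only the cache-friendly info table ...
title = cur.execute("SELECT title FROM NBD_ARTICLE_INFO "
                    "WHERE article_id = 1").fetchone()[0]
# ... while the hot update is confined to the count table.
cur.execute("UPDATE NBD_ARTICLE_COUNT SET click_count = click_count + 1 "
            "WHERE article_id = 1")
conn.commit()
```

In the Integration Model, by contrast, the same UPDATE would hit NBD_ARTICLE_INFO itself and invalidate any cached results over it.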

The index configuration for each table is shown below. Indexes were configured to achieve the best performance for each DBMS; the DBAs of the CHINA data analysis team configured and tuned the indexes for Commercial DBMS D1, Commercial DBMS D2, and OSS DBMS D1.

Index Configuration

Workload

The NBD workload is defined as Type 1 and Type 2. Type 1 is the HOTSPOT READ model, in which views are concentrated on specific posts. In an Internet service, many users intensively view a few pages when new posts are published on a main page, or when a list of popular posts is published on specific pages; these cases belong to Type 1. Type 2 is the NAVIGATION model, in which access to posts is evenly distributed; Internet communities and blogs belong to Type 2. The table below illustrates the features of each workload type.

Features of Each Workload Type

The general constraints for the execution of workload in the NBD Benchmark are as follows:

  • The ACID properties (Atomicity, Consistency, Isolation, and Durability) of all transactions must be guaranteed.
  • The increased click count when reading a post must be reflected in the DB immediately, and an accurate number must be provided when it is retrieved by other transactions.
  • Lock timeouts or deadlocks may occur, but must affect fewer than 10% of all transactions.
  • The NBD Benchmark test results must include measured values for the items in Table 10 below, and the following requirements must be satisfied.
  • Measurement must be conducted during the steady-state period.
  • If one transaction out of a transaction mix fails, the next transaction is not executed and the corresponding service is counted as a failure.
  • The transaction failure rate must be below 10%, and the maximum CPU usage must be below 85%.
  • Measured values must be collected at least every 10 seconds.
  • Warm-up time may be excluded from the measurement time.
  • The expected response time is below 0.3 seconds.
  • The final performance is determined by the number of page views per second (PV/sec) processed by each system.

Of the two workloads defined in the NBD Benchmark, we test the HOTSPOT READ type. HOTSPOT READ models a workload concentrated on a few specific posts. A post published on a portal main page can receive intensive user clicks, so the DBMS must simultaneously read the post and increase its click count. The updates that increment the click count impose an intensive workload on the DBMS, leading to a performance bottleneck. This test simulates that situation and compares CUBRID with Commercial DBMS D1, Commercial DBMS D2, and OSS DBMS D1 in terms of system characteristics and performance. The table below summarizes the workload transactions.

Workload Summary

There are 6 HOTSPOT posts. (The average number of posts exposed on the main page for each Naver.com service is 5~6.)
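A single HOTSPOT READ service can be sketched as one unit of work that views a post chosen from the six hotspot posts and immediately reflects the increased click count, as the constraints above require. This is an in-memory stand-in, not the actual benchmark driver; in the real test each call is a DB transaction:

```python
import random

HOTSPOT_IDS = [1, 2, 3, 4, 5, 6]  # six hotspot posts, per the workload above
posts = {pid: f"post body {pid}" for pid in HOTSPOT_IDS}
click_counts = {pid: 0 for pid in HOTSPOT_IDS}

def hotspot_read():
    """One HOTSPOT READ service: view a random hotspot post and
    immediately increment its click count. The hot-row update on the
    shared counter is what creates the bottleneck the test measures."""
    pid = random.choice(HOTSPOT_IDS)
    body = posts[pid]          # read the post
    click_counts[pid] += 1     # reflect the click count immediately
    return pid, body, click_counts[pid]

# Drive 1000 services against the six hot posts.
for _ in range(1000):
    hotspot_read()
```

Because every service lands on one of only six rows, the counter rows become contention points under concurrency.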

Parameter Configurations for CUBRID

Parameter Configuration of CUBRID

Parameter Configurations of Commercial DBMS D1

Parameter Configuration of Commercial DBMS D1

Parameter Configurations of OSS DBMS D1

Parameter Configuration of OSS DBMS D1

Parameter Configurations of Commercial DBMS D2

Parameter Configuration of Commercial DBMS D2

Testing Methods

The test is performed three times consecutively, each run generating workload for 10 minutes through the SQLMAP interface, and the intermediate value is selected. The following conditions must be satisfied according to the Measuring Rules defined in Section 4.4 of the NBD specifications.

  • Performance is measured during the steady-state period, and the expected response time must be within 0.3 seconds.
  • The CPU usage of a DB server must be below 85% during the test.
  • The workload amount may be adjusted to achieve the optimum performance of each DBMS.
  • If one transaction out of a transaction mix fails, the next transaction is not executed and the corresponding service is counted as a failure.
  • The transaction failure rate must be below 10%.
  • Measured values must be collected at least every 10 seconds.
  • Warm-up time may be excluded from the measurement time.
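The selection rule above (three consecutive runs, take the intermediate value) can be sketched as follows; the run numbers are illustrative only, not measured values from this test:

```python
def select_result(runs):
    """Pick the intermediate (median) PV/sec of three consecutive
    10-minute runs, after checking the measuring rules above."""
    if len(runs) != 3:
        raise ValueError("requires exactly three consecutive runs")
    for r in runs:
        assert r["failure_rate"] < 0.10, "failure rate must be below 10%"
        assert r["max_cpu"] < 0.85, "CPU usage must be below 85%"
        assert r["avg_response_s"] <= 0.3, "response time within 0.3 s"
    return sorted(r["pv_per_sec"] for r in runs)[1]

# Illustrative numbers only:
runs = [
    {"pv_per_sec": 410.0, "failure_rate": 0.01, "max_cpu": 0.78, "avg_response_s": 0.12},
    {"pv_per_sec": 432.5, "failure_rate": 0.02, "max_cpu": 0.80, "avg_response_s": 0.11},
    {"pv_per_sec": 421.0, "failure_rate": 0.01, "max_cpu": 0.79, "avg_response_s": 0.13},
]
print(select_result(runs))  # → 421.0, the intermediate run
```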

The configuration of the SQLMAP interface is common to all DBMSs, as follows:

<settings
    cacheModelsEnabled="true"
    enhancementEnabled="true"
    lazyLoadingEnabled="true"
    maxRequests="512"
    maxSessions="128"
    maxTransactions="128"
    useStatementNamespaces="false"
    defaultStatementTimeout="0" />
<transactionManager type="JDBC">
  <dataSource type="DBCP">
    <property name="maxActive" value="100" />
    <property name="minIdle" value="10" />
    <property name="maxIdle" value="-1" />
    <property name="maxWait" value="60000" />
    <property name="poolPreparedStatements" value="true" />
  </dataSource>
</transactionManager>

How to Measure NBD Results

How to Measure NBD Results

Testing Machine Environment

This chapter briefly describes the configuration of the test bed used for this test. The test environment includes only DB servers, without high availability (HA) functionality. To maximize the workload on a DB server, four workload servers were configured, each supporting up to 40 simultaneous users; that is, 160 simultaneous users in total are allowed against the DB servers. Each workload server runs two workload generation programs, and each program generates 20 threads to load the DBMS. Since CUBRID supports separating the DB server from the broker, the broker is assumed to be separated from the DB server (that is, located on the standby server) in the replicated configuration.

Test Environment

2. Test Insight and Results

Performance of each Database System

The graph below displays the performance data for each DBMS when 160 simultaneous users access it.

Performance Analysis for each Database System

Relationship between Database Sizes and Performance

Performance Analysis for Database Sizes

CPU Usage for each Database System

The table below shows the CPU usage summary for all databases.

Test Configuration and Results

The CPU usage graph is shown below.

CPU Usage Analysis for each Database System

Resource Usage of Each Test

The resource usage results for CUBRID in Test #1:

CUBRID CPU Usage in Test #1

CUBRID Disk IO Performance in Test #1

CUBRID Memory Usage in Test #1

The resource usage results for CUBRID in Test #2:

CUBRID CPU Usage in Test #2

CUBRID Disk IO Performance in Test #2

CUBRID Memory Usage in Test #2

The resource usage results for Commercial DBMS D1 in Test #1:

Commercial DBMS D1 CPU Usage in Test #1

Commercial DBMS D1 Disk IO Performance in Test #1

Commercial DBMS D1 Memory Usage in Test #1

The resource usage results for Commercial DBMS D1 in Test #2:

Commercial DBMS D1 CPU Usage in Test #2

Commercial DBMS D1 Disk IO Performance in Test #2

Commercial DBMS D1 Memory Usage in Test #2

The resource usage results for Commercial DBMS D2 in Test #1:

Commercial DBMS D2 CPU Usage in Test #1

Commercial DBMS D2 Disk IO Performance in Test #1

Commercial DBMS D2 Memory Usage in Test #1

The resource usage results for Commercial DBMS D2 in Test #2:

Commercial DBMS D2 CPU Usage in Test #2

Commercial DBMS D2 Disk IO Performance in Test #2

Commercial DBMS D2 Memory Usage in Test #2

The resource usage results for OSS DBMS D1 in Test #1:

OSS DBMS D1 CPU Usage in Test #1

OSS DBMS D1 Disk IO Performance in Test #1

OSS DBMS D1 Memory Usage in Test #1

The resource usage results for OSS DBMS D1 in Test #2:

OSS DBMS D1 CPU Usage in Test #2

OSS DBMS D1 Disk IO Performance in Test #2

OSS DBMS D1 Memory Usage in Test #2

3. Conclusion

This performance test was conducted using variables such as database size, workload increase, and the cache functionality provided by each DBMS. We observed how each variable affects the four DBMSs, and drew some conclusions about their advantages and disadvantages based on the analysis.

In the small and medium-sized database benchmarking, the commercial DBMS D1 shows the best performance. 

CUBRID places second after Commercial DBMS D1 in the small and medium-sized database benchmarking. In the small database benchmarking, CUBRID's CPU usage stays below 30%, so it may show higher performance under heavier load. However, in the medium-sized database benchmarking, its CPU usage is about 80%; further analysis will be required.

Commercial DBMS D2 shows the highest CPU usage, reaching 100% in both the small and medium-sized databases.

The OSS DBMS D1 server shows CPU usage of 30% or lower in both the small and medium-sized databases. Even when the load is lower, there is no significant difference in its CPU usage; further analysis will be required.

4. Copyright

Copyright 2009 Search Solution Corporation. All Rights Reserved. 

This document is the intellectual property of Search Solution Corporation; unauthorized reproduction or distribution of this document, or of any portion of it, is prohibited by law.

This document is provided for information purposes only. Search Solution Corporation has endeavored to verify the completeness and accuracy of the information contained in this document, but does not take responsibility for possible errors or omissions. Responsibility for the use of this document, or for the results of its use, therefore falls entirely upon the user, and Search Solution Corporation makes no explicit or implicit guarantee in this regard.

Software products and merchandise mentioned in this document, including relevant URL information, are subject to the copyrights of their respective owners. The user is solely responsible for any consequences arising from failure to comply with applicable laws.

Search Solution Corporation may modify the details of this document without prior notice.

See also

  • CUBRID 8.4.0 Key Features
  • CUBRID vs. MySQL Benchmark Test Results for SNS Data and Workload
  • Increasing Database Performance by Query Tuning
  • CUBRID 8.4.0 vs. 8.3.1 Volume Space Reusability Comparison



