Written by CUBRID Community on 07/14/2017

One of the first things I stumbled upon when I started my first Node.js project was how to handle the communication between the browser (the client) and my middleware: a Node.js application using the CUBRID Node.js driver (node-cubrid) to exchange information with a CUBRID 8.4.1 database.

I was already familiar with AJAX (btw, thank God for jQuery!) but, while studying Node.js, I found out about the Socket.IO module and even found some pretty nice code examples on the internet... examples which were very easy to (re)use...

So this quickly became a dilemma: what to choose, AJAX or Socket.IO?

Obviously, as my experience was quite limited, first I needed more information... In other words, it was time to do some quality Google searching :)

There's a lot of information available on the topic and, obviously, one needs to filter out all the "noise" and keep what is really useful. To summarize, here's what I quickly found:

  1. Socket.IO (usually) uses a persistent connection between the client and the server (the middleware), so you can hit a maximum limit of concurrent connections, depending on the resources you have on the server side, while more async AJAX requests can be served with the same resources.
  2. With AJAX you can make RESTful requests. This means you can take advantage of the existing HTTP infrastructure, e.g. proxies, to cache requests and use conditional GET requests.
  3. There is more (communication) data overhead in AJAX compared to Socket.IO (HTTP headers, cookies, etc.).
  4. AJAX is usually faster than Socket.IO to "code"...
  5. When using Socket.IO, it is possible to have two-way communication, where each side, client or server, can initiate a request (see the sketch right after this list). With AJAX, only the client can initiate a request!
  6. Socket.IO has more transport options, including Adobe Flash.
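
To make point 5 concrete, here is a minimal, illustrative sketch (using the same socket.io 0.9-era API as the test code below) of the server pushing data to a connected client without any client request; the event name 'server_news' is just an example of mine:

io.sockets.on('connection', function (client) {
    // server-initiated push: the server talks first, no client request involved
    var timer = setInterval(function () {
        client.emit('server_news', 'news at ' + new Date());
    }, 5000);
    client.on('disconnect', function () {
        clearInterval(timer); // stop pushing once the client goes away
    });
});

// ...and on the client, simply listen for the pushed events:
socket.on('server_news', function (msg) {
    console.log(msg);
});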

Now, for my own application, what I was most interested in was the speed of making requests and getting data from the (Node.js) server!

Regarding the middleware's data communication with the CUBRID database: as ~90% of my data access was read-only, a good data caching mechanism is obviously a great way to go! But I'll talk about that next time.

So I decided to put their speed (AJAX vs. Socket.IO) to the test, to see which one is faster, at least in my hardware and software environment! My middleware was set up to run on an i5 processor, 8 GB of RAM and an Intel X25 SSD drive.

Seriously though, every speed test and, generally speaking, any performance test depends so much on your hardware and software configuration that it is always a great idea to try things in your own environment: rely less on the various information you find on the internet and more on your own findings!

The tests I decided to do had to meet the following requirements:

  • Test:
    • AJAX
    • Socket.IO persistent connection
    • Socket.IO non-persistent connections
  • Test 10, 100, 250 and 500 data exchanges between the client and the server
  • Each data exchange between the middleware server (a Node.js web server) and the client (a browser) is a 4 KB random data string (built by a random_string() helper; a sketch follows this list)
  • Run the server in release (not debug) mode
  • Use Firefox as the client
  • Minimize console message output, for both server and client
  • Do each test after a full client page reload
  • Repeat each test at least 3 times, to make sure the results are consistent
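
All of the tests below rely on a random_string() helper on the server to build the 4 KB payload. Its implementation wasn't part of the excerpts I kept, so here is just a minimal sketch of what it could look like; only the payload size really matters:

// A possible implementation of the random_string(length) helper used below;
// it builds a random alphanumeric string of the requested length.
function random_string(length) {
    var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
    var result = '';
    for (var i = 0; i < length; i++) {
        result += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return result;
}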

Testing Socket.IO, using a persistent connection

I created a small Node.js server to handle the client requests:

io.sockets.on('connection', function (client) {
    client.on('send_me_data', function (idx) {
        // reply with a 4KB random payload, echoing back the exchange index
        client.emit('you_have_data', idx, random_string(4096));
    });
});
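
For context, this handler sits inside a standard Socket.IO server bootstrap. Here is a sketch of what that setup could look like (socket.io 0.9-era API, matching the io.connect / io.sockets calls in the snippets; my actual server differs in its details):

var http = require('http');

// plain HTTP server, with socket.io attached on top of it
var server = http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type' : 'text/plain'});
    res.end('Socket.IO test server');
});
server.listen(8080);

// 'io' is what the connection handler shown above is registered on
var io = require('socket.io').listen(server);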

And this is the JS client script I used for the test:

var socket = io.connect(document.location.href);

socket.on('you_have_data', function (idx, data) {
    // measure the round-trip time of this exchange
    var end_time = new Date();
    total_time += end_time - start_time;
    logMsg(total_time + '(ms.) [' + idx + '] - Received ' + data.length + ' bytes.');
    // request the next exchange after a short pause, until countMax is reached
    if (idx++ < countMax) {
        setTimeout(function () {
            start_time = new Date();
            socket.emit('send_me_data', idx);
        }, 500);
    }
});
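
For completeness, here is the kind of scaffolding that kicks off the loop; the excerpt above omits it, so the exact values and the logMsg() helper shown here are reconstructed assumptions of mine:

// Reconstructed scaffolding (not part of the original excerpt):
var countMax = 100;            // number of data exchanges for this test run
var total_time = 0;            // accumulated round-trip time, in ms

function logMsg(msg) {         // minimal logger, keeps console output low
    document.getElementById('log').innerHTML += msg + '<br/>';
}

var start_time = new Date();   // time the first request...
socket.emit('send_me_data', 1); // ...and kick off the first exchange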

Testing Socket.IO, using NON-persistent connections

This time, for each data exchange, I opened a new Socket.IO connection.

The Node.js server code was similar to the previous one, but this time I sent the data back to the client immediately after connect, as a new connection was initiated for every data exchange:

io.sockets.on('connection', function (client) {
    // no request needed: push the payload as soon as the client connects
    client.emit('you_have_data', random_string(4096));
});

The client test code was:

function exchange(idx) {
    var start_time = new Date();
    // 'force new connection' tells socket.io (0.9-era) to open a brand-new
    // connection instead of reusing the already established one
    var socket = io.connect(document.location.href, {'force new connection' : true});

    socket.on('you_have_data', function (data) {
        var end_time = new Date();
        total_time += end_time - start_time;
        // tear the connection down after every single exchange
        socket.removeAllListeners();
        socket.disconnect();
        logMsg(total_time + '(ms.) [' + idx + '] - Received ' + data.length + ' bytes.');

        if (idx++ < countMax) {
            setTimeout(function () {
                exchange(idx);
            }, 500);
        }
    });
}

Testing AJAX

Finally, I put AJAX to the test...

The Node.js server code was, again, not that different from the previous ones:

res.writeHead(200, {'Content-Type' : 'text/plain'});
res.end('_testcb(\'{"message": "' + random_string(4096) + '"}\')');
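
These two lines live inside the request handler of a plain Node.js HTTP server; roughly like this (a reconstructed sketch, with the port matching the client code below):

var http = require('http');

http.createServer(function (req, res) {
    // wrap the 4KB payload in the _testcb callback the client expects (JSONP)
    res.writeHead(200, {'Content-Type' : 'text/plain'});
    res.end('_testcb(\'{"message": "' + random_string(4096) + '"}\')');
}).listen(8080);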

As for the client code, this is what I used to test:

function exchange(idx) {
    var start_time = new Date();

    $.ajax({
        url : 'http://localhost:8080/',
        dataType : "jsonp",
        jsonpCallback : "_testcb",
        timeout : 300,
        success : function (data) {
            // measure the round-trip time of this exchange
            var end_time = new Date();
            total_time += end_time - start_time;
            logMsg(total_time + '(ms.) [' + idx + '] - Received ' + data.length + ' bytes.');

            // request the next exchange after a short pause, until countMax is reached
            if (idx++ < countMax) {
                setTimeout(function () {
                    exchange(idx);
                }, 500);
            }
        },
        error : function (jqXHR, textStatus, errorThrown) {
            alert('Error: ' + textStatus + " " + errorThrown);
        }
    });
}

Remember, when coding AJAX together with Node.js, you need to take into account that you might be doing cross-domain requests and violating the same-origin policy; therefore you should use the JSONP-based format!
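
To make the JSONP mechanics a bit more concrete, here is roughly what happens under the hood (an illustration, based on jQuery's documented JSONP behavior):

// 1. jQuery injects a script tag pointing at the server, e.g.:
//      <script src="http://localhost:8080/?callback=_testcb"></script>
// 2. The server replies with executable JavaScript instead of plain JSON:
//      _testcb('{"message": "...4KB of random data..."}')
// 3. The browser executes it, which invokes the success handler with the
//    payload, sidestepping the same-origin restriction on XMLHttpRequest.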

Btw, as you can see, I quoted only the most significant parts of the test code, to save space. If anyone needs the full code, server and client, please let me know and I'll be happy to share it.

OK, it's time now to see what we got after all this work!

I have run each test for 10, 100, 250 and 500 data exchanges and this is what I got in the end:

Data exchanges   Socket.IO non-persistent (ms.)   AJAX (ms.)   Socket.IO persistent (ms.)
10               90                               40           32
100              900                              320          340
250              2,400                            800          830
500              4,900                            1,500        1,600

Looking into the results, we can notice a few things right away:

  1. For each type of test, the results scale quite linearly; this is good, as it shows that the results are consistent.
  2. The results clearly show that Socket.IO non-persistent connections perform significantly worse than the other two options.
  3. There doesn't seem to be a big difference between AJAX and persistent Socket.IO connections; we are talking about a difference of only a few milliseconds (see the per-exchange averages below). This means that if you can live with less than 10,000 data exchanges per day, for example, chances are high that the user won't notice any speed difference...
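
Dividing the totals by the number of exchanges makes this easier to see: at 500 exchanges, AJAX averages 1,500 / 500 = 3 ms per exchange and persistent Socket.IO 1,600 / 500 = 3.2 ms, while non-persistent Socket.IO needs 4,900 / 500 = 9.8 ms, roughly three times slower; establishing a fresh connection for every exchange clearly dominates its cost.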

The graph below illustrates the numbers I obtained in the tests:

[Graph: nodejs_socketio_ajax_performance.png]

...So what’s next...?

...Well, I have to figure out what kind of traffic I need to support and then re-run the tests for those numbers, this time excluding Socket.IO non-persistent connections, because it is obvious by now that I need to choose between AJAX and persistent Socket.IO connections.

And I also learned that, most probably, the difference in speed will not be as big as one might expect, at least not for a "small-traffic" web site, so I need to start looking into the other advantages and disadvantages of each approach/technology when choosing my solution!
