
Written by SeHun Park on 11/09/2021

- HASH SCAN


Hash Scan is a scan method for hash joins. It is applied to views and hierarchical queries. When a derived table such as a view is joined as the inner table, no index exists on its result set, so an index scan cannot be used; the inner data must instead be scanned repeatedly, and performance degrades as that data grows. Hash Scan is used in exactly this situation, for example when a query joins a base table to a view that aggregates with GROUP BY.

[Figure: Hash Scan vs. Nested Loop join]

The figure above shows the difference between a Nested Loop join and a Hash Scan when no index is available. With an NL join, the entire inner data set is scanned once for every row of the outer table. In contrast, a Hash Scan reads the inner data once while building the hash data structure and the outer data once while probing it, so the desired rows are found far more quickly.
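
To make the cost difference concrete, here is a minimal C sketch of the two strategies. This is not CUBRID source; all names are illustrative, the keys are plain ints, and the hash table is a bare chained table:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch only (not CUBRID source): join two int arrays
 * on equality.  nested_loop_join rescans the whole inner input once
 * per outer row: O(n*m).  hash_join scans the inner input once to
 * build a table and the outer input once to probe it: O(n+m). */

#define NBUCKETS 1024

struct entry { int key; struct entry *next; };

static int nested_loop_join(const int *outer, int n, const int *inner, int m)
{
    int matches = 0;
    for (int i = 0; i < n; i++)         /* for each outer row ...      */
        for (int j = 0; j < m; j++)     /* ... rescan the entire inner */
            if (outer[i] == inner[j])
                matches++;
    return matches;
}

static int hash_join(const int *outer, int n, const int *inner, int m)
{
    struct entry *buckets[NBUCKETS] = { NULL };
    int matches = 0;

    /* Build phase: one pass over the inner input. */
    for (int j = 0; j < m; j++) {
        struct entry *e = malloc(sizeof *e);  /* leaked; fine for a demo */
        e->key = inner[j];
        e->next = buckets[(unsigned) inner[j] % NBUCKETS];
        buckets[(unsigned) inner[j] % NBUCKETS] = e;
    }

    /* Probe phase: one pass over the outer input. */
    for (int i = 0; i < n; i++)
        for (struct entry *e = buckets[(unsigned) outer[i] % NBUCKETS];
             e != NULL; e = e->next)
            if (e->key == outer[i])
                matches++;

    return matches;
}

int main(void)
{
    int outer[] = { 1, 2, 3, 4 }, inner[] = { 2, 4, 6 };
    printf("NL: %d, hash: %d\n",
           nested_loop_join(outer, 4, inner, 3),
           hash_join(outer, 4, inner, 3));
    return 0;
}
```

The nested loop performs n × m comparisons, while the hash join touches each input roughly once (build plus probe), which is exactly the gap the figure illustrates.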

 

The rest of this article walks through the internal structure of Hash Scan, following the flow of its development.

 

 

 

- IN-MEMORY HASH SCAN


CUBRID's Hash Scan uses an in-memory, hybrid, or file hash data structure depending on the amount of data. First, let's look at the in-memory structure.

 

The advantage of the in-memory structure is that random access causes no performance degradation; its disadvantage is that the available memory is limited. Because of that limit it cannot be used in every case, but when the data fits it is the fastest of the three methods, and the absence of a random-access penalty makes it well suited to a chaining hash structure.

 

[Figure: In-memory hash table]

When hash key values collide, the new entry is linked in through the next pointer. The structure is simple and fast. Implemented as a file, however, it would suffer from random access and poor space utilization; the File Hash Scan section below discusses this in more detail. CUBRID performs an in-memory hash scan only up to a limited size, which can be changed with the max_hash_list_scan_size system parameter.
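
A toy version of this chaining behavior, with a crude entry budget standing in for the size limit, might look as follows. The parameter name max_hash_list_scan_size is CUBRID's; the code itself is an illustrative sketch, not CUBRID source:

```c
#include <stdio.h>
#include <string.h>

/* Toy chained hash table with a budget, in the spirit of the in-memory
 * hash scan and its max_hash_list_scan_size limit.  Keys and values
 * are assumed to outlive the table. */

#define NBUCKETS 256

struct he { const char *key; const char *val; struct he *next; };

struct htab {
    struct he *buckets[NBUCKETS];
    struct he pool[1024];   /* fixed arena standing in for the budget  */
    int used;               /* entries consumed so far                 */
    int limit;              /* refuse to grow past this                */
};

static unsigned long djb2(const char *s)
{
    unsigned long h = 5381;
    while (*s) h = h * 33 + (unsigned char) *s++;
    return h;
}

/* Returns 0 on success, -1 once the budget is exhausted; a real scan
 * would then fall back to the hybrid or file hash method. */
static int ht_insert(struct htab *t, const char *key, const char *val)
{
    if (t->used >= t->limit)
        return -1;
    struct he *e = &t->pool[t->used++];
    e->key = key;
    e->val = val;
    /* Collision handling: the new entry is simply linked in front of
     * the existing chain through the next pointer. */
    unsigned long b = djb2(key) % NBUCKETS;
    e->next = t->buckets[b];
    t->buckets[b] = e;
    return 0;
}

static const char *ht_lookup(const struct htab *t, const char *key)
{
    for (const struct he *e = t->buckets[djb2(key) % NBUCKETS];
         e != NULL; e = e->next)
        if (strcmp(e->key, key) == 0)
            return e->val;
    return NULL;
}

int main(void)
{
    static struct htab t = { .limit = 1024 };
    ht_insert(&t, "key1", "value1");
    ht_insert(&t, "key1", "value2");        /* same key: chained        */
    printf("%s\n", ht_lookup(&t, "key1"));  /* finds the newest entry   */
    return 0;
}
```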

 

At this stage, the work was less about implementing an in-memory hash data structure than about analyzing the optimizer and executor to decide which parts to modify. You can refer to the links below for details: the design discussion is in JIRA, and the code review results are in GIT.

 

 

JIRA: http://jira.cubrid.org/browse/CBRD-23665

GIT: https://github.com/CUBRID/cubrid/pull/2389

 

 

- HYBRID HASH SCAN


This method stores the OID (Object Identifier) of a record in a temp file, rather than the data itself, as the value in the in-memory hash data structure.
[Figure: In-memory hash table with temp file]

 

Because an OID is smaller than the data it identifies, the in-memory hash data structure can cover larger data sets. Lookups are somewhat slower than with the pure in-memory method, because the data must be read from the temp file at lookup time. This was the second scan method considered for Hash Scan. Check out the links below for more details.
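
A minimal sketch of the hybrid idea, assuming a temp file of fixed-size records where the "OID" is just a record number (illustrative, not CUBRID source):

```c
#include <stdio.h>

/* Hybrid hash scan in miniature: the hash table value is a small
 * record identifier standing in for a CUBRID OID; the row itself
 * stays in a temp file and is read only when a probe matches. */

struct oid { long rec_no; };                 /* which record in the file */
struct row { int key; char payload[60]; };

/* Build side: append the row to the temp file and return its "OID".
 * Only this small identifier would go into the in-memory hash table,
 * so the table stays compact even for wide rows. */
static struct oid spill_row(FILE *tmpf, const struct row *r)
{
    fseek(tmpf, 0, SEEK_END);
    struct oid o = { ftell(tmpf) / (long) sizeof *r };
    fwrite(r, sizeof *r, 1, tmpf);
    return o;
}

/* Probe side: resolving a match costs one extra temp-file read, which
 * is why this method is slower than the pure in-memory one. */
static int fetch_row(FILE *tmpf, struct oid o, struct row *out)
{
    if (fseek(tmpf, o.rec_no * (long) sizeof *out, SEEK_SET) != 0)
        return -1;
    return fread(out, sizeof *out, 1, tmpf) == 1 ? 0 : -1;
}

int main(void)
{
    FILE *tmpf = tmpfile();
    if (tmpf == NULL)
        return 1;

    struct row r = { 42, "wide row data lives in the temp file" };
    struct oid o = spill_row(tmpf, &r);  /* value kept in the hash table */

    struct row back;
    if (fetch_row(tmpf, o, &back) == 0)
        printf("key=%d payload=%s\n", back.key, back.payload);
    fclose(tmpf);
    return 0;
}
```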

 

JIRA: http://jira.cubrid.org/browse/CBRD-23828

GIT: https://github.com/CUBRID/cubrid/pull/2537

 

 

- FILE HASH SCAN


This is a scan method that uses a file-based hash data structure, implemented as an extendible hash.

 


[Figure: Extendible hash]

 

 

The diagram above shows how the extendible hash algorithm operates. When a bucket overflows, it is split. Because the structure grows by splitting buckets rather than by chaining overflow pages, the algorithm keeps bucket space utilization above 50%. Since one bucket is implemented as one page, the smallest unit of disk I/O, higher bucket space utilization means less disk I/O. For this reason, the file hash scan uses the extendible hash algorithm.
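
For readers unfamiliar with the algorithm, here is a compact extendible hashing sketch in C. It is illustrative only: real buckets are disk pages, not malloc'd structs, and CUBRID's actual implementation is in the PR linked below. The directory doubles only when the overflowing bucket's local depth equals the global depth:

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal extendible hashing sketch (not CUBRID source).  Each bucket
 * holds up to CAP keys; on overflow it is split in two, and half of
 * the directory entries that pointed at it are redirected. */

#define CAP 4

struct bucket { int depth, n, keys[CAP]; };

struct eh {
    int gdepth;          /* global depth: directory has 1<<gdepth slots */
    struct bucket **dir;
};

static unsigned hashk(int k) { return (unsigned) k * 2654435761u; }

static void insert(struct eh *h, int key);

static void split(struct eh *h, unsigned slot)
{
    struct bucket *old = h->dir[slot];

    if (old->depth == h->gdepth) {          /* directory must double    */
        unsigned size = 1u << h->gdepth;
        h->dir = realloc(h->dir, 2 * size * sizeof *h->dir);
        for (unsigned i = 0; i < size; i++)
            h->dir[size + i] = h->dir[i];
        h->gdepth++;
    }

    struct bucket *nb = calloc(1, sizeof *nb);
    old->depth++;
    nb->depth = old->depth;

    /* Redirect the directory entries whose new depth bit is set. */
    unsigned bit = 1u << (old->depth - 1);
    for (unsigned i = 0; i < (1u << h->gdepth); i++)
        if (h->dir[i] == old && (i & bit))
            h->dir[i] = nb;

    /* Redistribute the old bucket's keys between the two buckets.
     * Note: if every key were identical, no split could separate
     * them; that is the duplicate-key drawback discussed below. */
    int keys[CAP], n = old->n;
    for (int i = 0; i < n; i++) keys[i] = old->keys[i];
    old->n = 0;
    for (int i = 0; i < n; i++) insert(h, keys[i]);
}

static void insert(struct eh *h, int key)
{
    unsigned slot = hashk(key) & ((1u << h->gdepth) - 1);
    struct bucket *b = h->dir[slot];
    if (b->n == CAP) {                      /* overflow: split and retry */
        split(h, slot);
        insert(h, key);
        return;
    }
    b->keys[b->n++] = key;
}

int main(void)
{
    struct eh h = { 0, malloc(sizeof(struct bucket *)) };
    h.dir[0] = calloc(1, sizeof(struct bucket));
    for (int k = 0; k < 32; k++) insert(&h, k);
    printf("global depth after 32 inserts: %d\n", h.gdepth);
    return 0;
}
```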

 

 


[Figure: File hash scan]

 

 

This is how the extendible hash data structure is implemented in CUBRID. The directory file stores VPIDs, the page identifiers of the bucket pages, and each bucket is implemented as one page. The data within a bucket is kept sorted, so lookups use binary search.
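
Because the records in a bucket page are sorted, the in-bucket lookup is a plain binary search. A minimal illustrative sketch (keys simplified to ints; not CUBRID source):

```c
#include <stdio.h>

/* Illustrative only: records inside a bucket page are kept sorted, so
 * an in-bucket lookup costs O(log n) comparisons instead of O(n). */
static int bucket_find(const int *keys, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (keys[mid] == key)
            return mid;                 /* slot index within the page */
        if (keys[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;                          /* key is not in this bucket  */
}

int main(void)
{
    int page[] = { 3, 8, 15, 21, 42 };  /* sorted keys in one bucket  */
    printf("slot of 21: %d\n", bucket_find(page, 5, 21));
    return 0;
}
```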

 

One drawback of the extendible hash data structure is that it makes no exception for duplicate key values: if a bucket overflows because it is entirely filled with the same value, splitting cannot separate the entries, and nothing more can be stored. To handle this, a Duplicate Key (DK) Bucket is created and attached by chaining; once a key is duplicated beyond a certain threshold, its entries are moved to the DK bucket, as sketched below. The result is a file hash scan with excellent space utilization that can still store duplicate values flexibly. Visit the following links for a detailed explanation.
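
A rough sketch of the DK-bucket migration. This is illustrative only: the threshold value and all names here are made up, and the real page-level logic is in the PR linked below:

```c
#include <stdio.h>
#include <stdlib.h>

/* Duplicate Key (DK) bucket idea in miniature: once a single key
 * accounts for DK_THRESHOLD entries in a bucket, its entries move out
 * into a chained DK bucket, because splitting can never separate
 * identical keys. */

#define DK_THRESHOLD 4

struct dk_bucket {
    int key;                  /* the one duplicated key              */
    int n_values;             /* how many duplicates were moved here */
    struct dk_bucket *next;   /* chain of DK buckets                 */
};

struct bucket {
    int n;
    int keys[16];
    struct dk_bucket *dk;     /* chained DK buckets, if any          */
};

/* Move every occurrence of `key` out of the ordinary bucket into a
 * freshly chained DK bucket, freeing space for other keys. */
static void migrate_to_dk(struct bucket *b, int key)
{
    struct dk_bucket *dk = calloc(1, sizeof *dk);
    dk->key = key;
    dk->next = b->dk;
    b->dk = dk;

    int w = 0;
    for (int i = 0; i < b->n; i++) {
        if (b->keys[i] == key)
            dk->n_values++;            /* counted into the DK bucket  */
        else
            b->keys[w++] = b->keys[i]; /* kept in the ordinary bucket */
    }
    b->n = w;
}

int main(void)
{
    struct bucket b = { 6, { 7, 7, 7, 7, 5, 9 }, NULL };
    migrate_to_dk(&b, 7);   /* 7 appears DK_THRESHOLD times */
    printf("bucket holds %d keys; DK bucket holds %d copies of %d\n",
           b.n, b.dk->n_values, b.dk->key);
    return 0;
}
```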

 

 

JIRA: http://jira.cubrid.org/browse/CBRD-23816

GIT: https://github.com/CUBRID/cubrid/pull/2781

 

 

- HASH SCAN for Hierarchical Queries


A hierarchical query has a special limitation: after the join, lookups must still be performed across the hierarchy levels. Because of this, index scans cannot be used for hierarchical queries that contain joins (for example, a CONNECT BY query that also joins two tables). That is exactly the situation Hash Scan was built for, so it has been extended to hierarchical queries as well. Check the links below for more details.

 

JIRA: http://jira.cubrid.org/browse/CBRD-23749

GIT: https://github.com/CUBRID/cubrid/pull/2520

 

 

- HASH JOIN


The in-memory hash scan is available in CUBRID 11, and the file hash scan will be applied in CUBRID 11.2, which will be released soon. The hash join feature itself is still under development: it adds a new join method to the optimizer, which currently supports only Nested Loop join and Sort Merge join. The CUBRID development team is planning a broader overhaul of the optimizer so that it can generate better execution plans, and the hash join method will be added as part of that work. Until hash join is added, the execution plan cannot show whether a hash scan is used; instead, you can check for it in the trace information.

 

[Figure: Trace output]

 


- HASH SCAN Performance


In situations where a hash scan applies, query performance is now incomparably faster than before.

 

[Figure: Performance of hash scan]

 

Performance improves dramatically over the previous behavior in the cases described above: subqueries joined as the inner table, and hierarchical queries with joins. CUBRID continues to analyze the causes of other slow cases and to fold the resulting improvements into query performance. Among these are rewriter improvements such as view merging and subquery unnesting, and the view merging work is currently in progress. In the next article, we will look at how a DBMS transforms queries and why rewrite techniques such as view merging and subquery unnesting are necessary.

