Hyperspace Search

Keywords and metadata hold essential information about data. Lexical, or keyword-based search (also called Classic Search) leverages this information by matching keywords and values to find related items. This process identifies objects with shared characteristics or patterns. Lexical search does not attempt to understand the meaning and context behind the words, but strictly focuses on finding exact matches of the search terms in the documents or datasets.

Lexical Search is a relatively lightweight method that minimizes the computational investment required to identify similar items, enabling fast retrieval. It is commonly used in a wide variety of applications.

Hyperspace enhances search performance through smart indexing and by implementing the search logic on a designated cloud processing unit. Hyperspace's similarity search engine employs an inverted index that supports various data field types, ensuring O(1) data access. Search performance is further improved by providing cardinality hints at the configuration level.
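As a rough illustration of the idea (not Hyperspace's internal implementation), the sketch below builds an inverted index over hypothetical metadata fields in Python and answers an exact-match query with average O(1) lookups per term:

```python
from collections import defaultdict

# Hypothetical documents with keyword/metadata fields (values are made up).
docs = {
    1: {"color": "red",  "category": "shoes"},
    2: {"color": "blue", "category": "shoes"},
    3: {"color": "red",  "category": "bags"},
}

# Inverted index: (field, value) -> set of matching document ids.
# A hash map gives average O(1) lookup per search term.
inverted_index = defaultdict(set)
for doc_id, fields in docs.items():
    for field, value in fields.items():
        inverted_index[(field, value)].add(doc_id)

# Lexical match: documents where color == "red" AND category == "shoes".
matches = inverted_index[("color", "red")] & inverted_index[("category", "shoes")]
print(matches)  # {1}
```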

Vector Search is a common technique for finding similar items using vector representations. It relies on vector embedding, which turns data into high-dimensional vectors that capture its essential traits. Instead of traditional keyword or exact matching, Vector Search identifies similar items by measuring the closeness, or similarity, between these vectors. This approach is used in a wide variety of applications, such as recommendation systems and content search.
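For intuition, the following minimal sketch compares made-up embedding vectors with cosine similarity; the item names and vector values are illustrative only, and in practice the vectors would come from an embedding model:

```python
import numpy as np

# Hypothetical embeddings: each item is represented as a dense vector
# produced by some embedding model (values here are made up).
item_vectors = {
    "red running shoes": np.array([0.9, 0.1, 0.3]),
    "blue leather bag":  np.array([0.2, 0.8, 0.5]),
}
query_vector = np.array([0.85, 0.15, 0.35])  # embedding of the search query

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity between vectors, not keyword overlap, determines relevance.
for name, vec in item_vectors.items():
    print(name, round(cosine_similarity(query_vector, vec), 3))
```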

Hyperspace offers advantages in Vector Search latency, as it is optimized for cases where CPU-based search performs poorly.

The straightforward approach to vector search, in which the search is performed exhaustively and exactly, is called brute-force K-Nearest Neighbors (KNN). In this approach, the query vector is compared to all dataset points to find the k closest neighbors according to a distance metric. While brute-force KNN is simple to implement and effective for small datasets, it is computationally intensive: the time required for distance calculations grows linearly with the dataset size, making it inefficient for large-scale applications.
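A minimal sketch of brute-force KNN on toy data (random vectors, Euclidean distance) shows why the cost scales linearly with the dataset size, since every vector is scored against the query:

```python
import numpy as np

def brute_force_knn(query, dataset, k):
    """Exact KNN: compare the query against every vector in the dataset."""
    distances = np.linalg.norm(dataset - query, axis=1)  # distance to all points
    nearest = np.argsort(distances)[:k]                  # indices of the k closest vectors
    return nearest, distances[nearest]

rng = np.random.default_rng(0)
dataset = rng.random((10_000, 128))   # toy dataset: 10k vectors of dimension 128
query = rng.random(128)

indices, dists = brute_force_knn(query, dataset, k=5)
print(indices, dists)
```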

An alternative approach is "Approximate Nearest Neighbors", or ANN, which aims to reduce the computational load of KNN search through sophisticated approximations. ANN techniques use smart methods to quickly narrow down the search space and identify points that are likely to be close to the query point. This results in much faster query times, especially in high-dimensional spaces, at the cost of a small amount of accuracy. Hyperspace supports various ANN methods.
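For illustration only, the sketch below builds an HNSW graph index with the open-source hnswlib library, one common ANN method; it is not Hyperspace's own ANN implementation, and the data is random toy data:

```python
import hnswlib
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((10_000, 128)).astype(np.float32)
query = rng.random(128).astype(np.float32)

# Build an HNSW graph index (one common ANN method).
index = hnswlib.Index(space="l2", dim=128)
index.init_index(max_elements=data.shape[0], ef_construction=200, M=16)
index.add_items(data, np.arange(data.shape[0]))

# ef trades accuracy for speed at query time: higher ef -> better recall, slower queries.
index.set_ef(50)

labels, distances = index.knn_query(query, k=5)  # approximate 5 nearest neighbors
print(labels, distances)
```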

Hybrid Search combines vector and lexical searches, offering a versatile solution that caters to various types of applications and delivers the best of both worlds.

Hybrid search allows the use of semantic search for understanding the context and meaning behind the search terms, as well as keyword search for exact matching of specific terms.

Hyperspace's engine allows hybrid search that merges full Lexical Search and Vector Search. This combination leverages the strengths of each search type, offering a more accurate and comprehensive search experience.

In the following sections, we discuss different approaches to combining vector and classic search and outline how Hyperspace handles them.

The Hybrid Pre-Filtering Approach

In the pre-filtering approach, documents are first filtered by keyword and metadata matching. The resulting subset is then subjected to more complex and computationally intensive search methods, such as vector search or machine learning-based ranking. By narrowing down the search space before applying sophisticated algorithms, pre-filtering hybrid search improves performance and reduces computational load, making it particularly useful for large-scale and real-time search applications.

For KNN vector search, the pre-filtering approach optimizes the query latency without reducing its recall. This approach also makes it possible to control the number of results, since the KNN step that determines the number of results is performed last.
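A minimal sketch of the pre-filtering flow, using toy data structures rather than the Hyperspace API: lexical filtering first shrinks the candidate set, and exact KNN scoring runs only on the survivors, so the number of returned results stays under control.

```python
import numpy as np

# Toy corpus: each document has metadata and an embedding vector (values made up).
docs = [
    {"id": 0, "category": "shoes", "vector": np.array([0.9, 0.1])},
    {"id": 1, "category": "bags",  "vector": np.array([0.8, 0.2])},
    {"id": 2, "category": "shoes", "vector": np.array([0.1, 0.9])},
    {"id": 3, "category": "shoes", "vector": np.array([0.7, 0.3])},
]
query_vector = np.array([0.85, 0.15])

def pre_filter_knn(docs, query_vector, category, k):
    # Step 1: lexical/metadata filtering narrows the search space.
    candidates = [d for d in docs if d["category"] == category]
    # Step 2: exact KNN runs only over the filtered candidates.
    candidates.sort(key=lambda d: np.linalg.norm(d["vector"] - query_vector))
    return [d["id"] for d in candidates[:k]]  # exactly k results if enough candidates pass the filter

print(pre_filter_knn(docs, query_vector, category="shoes", k=2))  # [0, 3]
```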

While the pre-filtering approach works well with accurate KNN, it suffers from inherent limitations when combined with ANN methods. In particular, it may reduce recall in the case of graph-based methods such as HNSW. This occurs because the filtering stage of the hybrid search removes database documents that do not satisfy the query conditions. In graph-based approximations, some of the filtered-out data points may be needed in order to reach relevant points in the graph; see the example below.

The example above demonstrates that without filtering, the relevant result can be reached by hopping through the graph from the query vector, and no results are lost. When filtering is applied (right image), the graph is no longer connected and the relevant result can no longer be reached.
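The same effect can be reproduced with a tiny reachability check. This is a simplified stand-in for graph traversal in HNSW-like indexes; the graph and the filter below are made up for illustration. Removing filtered nodes can cut the only path from the entry point to a relevant result.

```python
from collections import deque

# Toy proximity graph: node -> neighbors. "q" is the traversal entry point,
# "t" is the relevant result, and "b" is the bridge node between them.
graph = {"q": ["a", "b"], "a": ["q"], "b": ["q", "t"], "t": ["b"]}

def reachable(graph, start, target, allowed):
    """Breadth-first search restricted to nodes that pass the filter."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nbr in graph[node]:
            if nbr in allowed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

print(reachable(graph, "q", "t", allowed=set(graph)))        # True: no filtering, "t" is reachable
print(reachable(graph, "q", "t", allowed={"q", "a", "t"}))   # False: bridge node "b" was filtered out
```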

The Post-Filtering Approach

In the post-filtering approach, the best-matching documents are first retrieved using vector search and then filtered using keyword and metadata matching. This method is generally less resource-efficient when combined with KNN algorithms, as it requires applying the computationally intensive KNN algorithm over the whole data space. By contrast, when combined with ANN algorithms, this efficiency gap shrinks significantly thanks to the efficiency of the ANN algorithm.

In terms of accuracy, the post-filtering method does not suffer from the recall loss that the pre-filtering method causes with ANN algorithms, but it offers less control over the number of final candidates: filtering occurs after the N best candidates have been selected and may reduce their number to below the required number of results.
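For contrast, here is a sketch of the post-filtering flow under the same toy assumptions: vector search selects the top k candidates first, and the metadata filter applied afterwards may leave fewer than the requested number of results.

```python
import numpy as np

# Toy corpus with made-up metadata and embedding vectors.
docs = [
    {"id": 0, "category": "shoes", "vector": np.array([0.9, 0.1])},
    {"id": 1, "category": "bags",  "vector": np.array([0.82, 0.18])},
    {"id": 2, "category": "shoes", "vector": np.array([0.1, 0.9])},
    {"id": 3, "category": "shoes", "vector": np.array([0.7, 0.3])},
]
query_vector = np.array([0.85, 0.15])

def post_filter_search(docs, query_vector, category, k):
    # Step 1: vector search over the whole corpus selects the k best candidates.
    ranked = sorted(docs, key=lambda d: np.linalg.norm(d["vector"] - query_vector))
    top_k = ranked[:k]
    # Step 2: metadata filtering may discard some of them,
    # so fewer than k results can remain.
    return [d["id"] for d in top_k if d["category"] == category]

print(post_filter_search(docs, query_vector, category="shoes", k=2))  # [0]: the nearest candidate (doc 1) was filtered out
```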

Hybrid K Nearest Neighbors (KNN)

For Hybrid search that combines accurate KNN (brute force) with metadata filtering, Hyperspace uses the pre-filtering approach. In this approach, documents are first filtered using lexical search. Vector similarity is then calculated only for documents that pass this initial filtering. For KNN, this approach optimizes the query latency without reducing its recall.

|                         | Pre-Filtering with KNN | Post-Filtering with KNN |
| ----------------------- | ---------------------- | ----------------------- |
| Controlled No. Results  | Yes                    | No                      |
| Recall                  | No impact on recall    | No impact on recall     |
| Latency                 | Slightly better        | Slightly lower          |

Hybrid Approximate Nearest Neighbors (ANN)

While the pre-filtering approach optimizes latency without reducing search recall for brute-force KNN, for Approximate Nearest Neighbors (ANN) it can lead to a sparse graph and a reduction of recall, as explained earlier.

To avoid this outcome, Hyperspace uses the post-filtering approach for ANN, in which the matching is performed before the metadata filtering. This approach optimizes query recall at the expense of latency.

|                         | Pre-Filtering with ANN | Post-Filtering with ANN |
| ----------------------- | ---------------------- | ----------------------- |
| Controlled No. Results  | Yes                    | No                      |
| Recall                  | Recall can be lower    | No impact on recall     |
| Latency                 | Slightly better        | Slightly lower          |