Uploading Data to a Collection

Data points of all types are uploaded into a Hyperspace Collection as documents and stored under the identifier you specify during upload, as described below. Data can be uploaded in batches or as single documents, as follows.

Uploading a Single Document

Use the following command to upload a single document –

hyperspace_client.add_document(document, collection_name)

Where –

  • document – Represents the document to upload. It must be of type dictionary, and its structure must match the database schema configuration file.

  • collection_name – Specifies the name of the Collection into which to load the document.
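As a sketch, a single-document upload might look like the following. The field names ("Id", "title", "embedding"), the vector dimension, and the collection name are illustrative assumptions; use the fields defined in your own schema configuration file:

```python
import random

# Hypothetical document; field names must match your schema configuration file
document = {
    "Id": "doc-001",                                     # assumed id field
    "title": "example product",                          # assumed metadata field
    "embedding": [random.random() for _ in range(128)],  # assumed vector field
}

# Assumes hyperspace_client is already initialized and the Collection exists:
# hyperspace_client.add_document(document, "my_collection")
```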

Uploading a Batch of Documents

Data can be uploaded in batches by converting the data points to document objects before uploading. The basic data point object in the Hyperspace database is a document of type dictionary.

To upload a batch of documents into a Collection –

For verification purposes, we recommend that you upload data to a Collection in batches of documents, each of which has the structure specified in the data schema configuration file.

The following code snippet builds a list of documents in a temporary variable named batch and then uploads each batch using –

response = hyperspace_client.add_batch(batch, collection_name)

The following example builds batches of 250 random documents for Hybrid Search. Each random document is appended to the current batch; once the batch reaches 250 documents, it is uploaded to the Hyperspace Collection.

Copy the following code snippet –

BATCH_SIZE = 250
batch = []
for i, document in enumerate(documents):
    batch.append(document)
    if (i + 1) % BATCH_SIZE == 0:
        # Upload a full batch and start a new one
        response = hyperspace_client.add_batch(batch, collection_name)
        batch.clear()

# Upload any remaining documents in the final, partial batch
if batch:
    response = hyperspace_client.add_batch(batch, collection_name)
hyperspace_client.commit(collection_name)

Where –

  • document – Represents the document to upload. It must be of type dictionary, and its structure must match the database schema configuration file.

  • BATCH_SIZE – Specifies the number of documents in a batch.

  • commit – Required for vector search only. Perform commit only after the data upload is complete.

In this method, each document is automatically assigned an identifier.

Optimizing the batch size can improve the data upload speed. Larger batches upload faster, but in case of an upload failure (e.g., a mismatch between a document and the data schema), the whole batch must be re-uploaded.
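To experiment with different batch sizes, the batching loop can be factored into a small helper. This is a sketch only; iter_batches is not part of the Hyperspace client API:

```python
def iter_batches(documents, batch_size=250):
    """Yield successive lists of at most batch_size documents."""
    batch = []
    for doc in documents:
        batch.append(doc)
        if len(batch) == batch_size:
            yield batch
            batch = []  # start a fresh list so yielded batches stay intact
    if batch:  # final, partial batch
        yield batch

# Usage sketch (assumes an initialized client):
# for batch in iter_batches(documents, batch_size=500):
#     hyperspace_client.add_batch(batch, collection_name)
# hyperspace_client.commit(collection_name)
```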

To manually assign Ids to documents, copy the following code snippet –

BATCH_SIZE = 250
batch = []
for i, data_point in enumerate(documents):
    data_point["Id"] = str(i)  # assign an identifier that is unique per Collection
    batch.append(data_point)
    if (i + 1) % BATCH_SIZE == 0:
        response = hyperspace_client.add_batch(batch, collection_name)
        batch.clear()

if batch:
    response = hyperspace_client.add_batch(batch, collection_name)
hyperspace_client.commit(collection_name)

Where –

  • Id – Represents the id field of the documents. The field must be defined in the Database Schema Configuration file.

  • i – Specifies the identifier assigned to each uploaded document, which must be unique per Collection. Any identifier can be used as long as it is unique.

This step is optional. If no id field is defined in the data schema configuration file, an automatic Id is assigned during upload.
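Since any unique value can serve as the identifier, a UUID string can be used instead of the loop index. A minimal sketch, assuming the same "Id" and "title" field names as above:

```python
import uuid

documents = [{"title": f"document {n}"} for n in range(3)]

# Assign a globally unique string identifier to each document
for doc in documents:
    doc["Id"] = str(uuid.uuid4())

ids = [doc["Id"] for doc in documents]
```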
