Batching upserts

Vector Databases for Embeddings with Pinecone

James Chapman

Curriculum Manager, DataCamp

Upserting limitations

 

  1. Rate of requests
  2. Size of requests

 

  • Batching: breaking requests up into smaller chunks

 

[Figure: Pinecone rate limits table]

1 https://docs.pinecone.io/reference/quotas-and-limits#rate-limits

Defining a chunking function

import itertools

def chunks(iterable, batch_size=100):
    """Yield successive tuples of up to batch_size items from iterable."""
    it = iter(iterable)
    chunk = tuple(itertools.islice(it, batch_size))
    while chunk:
        yield chunk
        chunk = tuple(itertools.islice(it, batch_size))
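
As a quick sanity check, a hypothetical list of 250 vectors (the variable name and dummy values below are illustrative) splits into batches of 100, 100, and 50:

# Hypothetical data: 250 (id, values) tuples
vectors = [(f"vec{i}", [0.1, 0.2, 0.3]) for i in range(250)]

for batch in chunks(vectors):
    print(len(batch))  # 100, 100, 50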

Sequential batching

  • Splitting requests into chunks and sending them one at a time
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index('datacamp-index')

# Upsert one chunk at a time
for chunk in chunks(vectors):
    index.upsert(vectors=chunk)
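
Each index.upsert() call blocks until Pinecone acknowledges that chunk, so the chunks are sent strictly one after another.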

Pros:

  • Stays within rate and size limits

Cons:

  • Really slow!

Parallel batching

  • Splitting requests and sending them in parallel
pc = Pinecone(api_key="YOUR_API_KEY", pool_threads=30)

# Send each chunk as an asynchronous request, then wait for all of them to finish
with pc.Index('datacamp-index', pool_threads=30) as index:
    async_results = [
        index.upsert(vectors=chunk, async_req=True)
        for chunk in chunks(vectors, batch_size=100)
    ]
    [async_result.get() for async_result in async_results]
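
The .get() calls block until each asynchronous upsert has completed, so the final list comprehension acts as a barrier that waits for every parallel request. As a minimal sketch, assuming each response exposes an upserted_count field like the Pinecone Python client's UpsertResponse, you could total the results to confirm everything landed:

# Wait for every async request and total the upserted vectors
responses = [async_result.get() for async_result in async_results]
total_upserted = sum(response.upserted_count for response in responses)
print(f"Upserted {total_upserted} vectors across {len(responses)} chunks")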

Let's practice!

