partners/qdrant: DRAFT QdrantVectorStore.aadd_texts POC benchmark #26795
Conversation
@Anush008 wdyt?

This also includes the time spent generating embeddings, correct?

Right.

@Anush008 using

hey team! while this is in draft, could you open it against your own fork, and reopen the PR against the main project when it's ready for our team's review?

If we remove the embedding generation, is the difference in performance worth the code duplication?

That's actually what I'm asking. Embedding is only one of the pipeline stages; even if we leave it out, aadd_texts can still send a few batches in parallel. Whether it's worth it, I don't know. Here's an example of parallel execution from another project (if you've ever heard of it ;)). UPD: although it turns out that code doesn't actually process batches concurrently.
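
The "send a few batches in parallel" idea could look roughly like the sketch below. This is not the PR's code; the `AsyncQdrantClient` usage, the `page_content` payload, and the batch size are assumptions for illustration.

```python
# A minimal sketch of overlapping per-batch upserts, assuming pre-computed
# embeddings and qdrant_client's AsyncQdrantClient; not the PR's actual code.
import asyncio
import uuid

from qdrant_client import AsyncQdrantClient, models


async def upsert_batches_concurrently(
    client: AsyncQdrantClient,
    collection_name: str,
    texts: list[str],
    embeddings: list[list[float]],
    batch_size: int = 64,
) -> None:
    # Build one point per text with its pre-computed vector.
    points = [
        models.PointStruct(
            id=str(uuid.uuid4()),
            vector=vector,
            payload={"page_content": text},
        )
        for text, vector in zip(texts, embeddings)
    ]
    batches = [points[i : i + batch_size] for i in range(0, len(points), batch_size)]
    # Each upsert awaits network I/O, so gather() overlaps the requests
    # instead of sending the batches one after another.
    await asyncio.gather(
        *(
            client.upsert(collection_name=collection_name, points=batch)
            for batch in batches
        )
    )
```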

Here I want to boost QdrantVectorStore.aadd_texts performance with a truly async implementation.
This is just a test in the form of a benchmark. It also includes a LocalAIEmbeddings implementation that allows bulk requests #22666.
Needs openai>=1.3 to run #22399.
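
For context, a benchmark of this kind could look roughly like the sketch below. Everything here is an illustrative assumption rather than the PR's actual script: a local Qdrant at :6333, a 1536-dim collection, and OpenAIEmbeddings with text-embedding-3-small standing in for the LocalAIEmbeddings variant mentioned above.

```python
# Rough benchmark sketch: time the sync add_texts path against the async
# aadd_texts path on the same corpus. Assumes a running Qdrant instance
# and an OpenAI API key in the environment; not the PR's actual benchmark.
import asyncio
import time

from langchain_openai import OpenAIEmbeddings
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient, models

texts = [f"synthetic document {i}" for i in range(1_000)]

client = QdrantClient(url="http://localhost:6333")
client.create_collection(
    collection_name="aadd_texts_bench",
    vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
)
store = QdrantVectorStore(
    client=client,
    collection_name="aadd_texts_bench",
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
)

# Sync path: texts are embedded and upserted batch by batch.
start = time.perf_counter()
store.add_texts(texts)
print(f"add_texts:  {time.perf_counter() - start:.2f}s")

# Async path: the point of the PR is for this to overlap I/O instead of
# merely wrapping the sync implementation.
start = time.perf_counter()
asyncio.run(store.aadd_texts(texts))
print(f"aadd_texts: {time.perf_counter() - start:.2f}s")
```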