Core features:
- 21MB default install vs 80-171MB alternatives
- 33x faster token chunking than popular alternatives
- Supports multiple chunking strategies: token, word, sentence, and semantic
- Works with all major tokenizers (transformers, tokenizers, tiktoken)
- Zero external dependencies for basic functionality
Technical optimizations:
- Uses tiktoken with multi-threading for faster tokenization (see the sketch after this list)
- Implements aggressive caching and precomputation
- Running mean pooling for efficient semantic chunking
- Modular dependency system (install only what you need)
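For the multi-threading point above, here is a minimal illustration of batch tokenization with tiktoken; it shows the general mechanism, not Chonkie's actual internals:

```python
# Illustration only: tiktoken can tokenize a batch of texts across threads.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
texts = ["first document ...", "second document ...", "third document ..."]

# encode_batch fans the work out over worker threads.
token_lists = enc.encode_batch(texts, num_threads=8)
print([len(tokens) for tokens in token_lists])
```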
Benchmarks and code: https://github.com/bhavnicksm/chonkie
Looking for feedback on the architecture and performance optimizations. What other chunking strategies would be useful for RAG applications?
I've been hoping to find an ultra light-weight chunking library that can do things like very simple regex-based sentence/paragraph/markdown-aware chunking with minimal additional dependencies.
The more complicated stuff is the effective bin-packing problem that emerges depending on how many different contextual sources you have.
[1] https://gist.github.com/LukasKriesch/e75a0132e93ca989f8870c4...
I just removed one sentence at a time from the left until there was a jump in the embedding distance. Then repeated for the right side.
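For anyone curious, that boundary trimming looks roughly like this in code; the embedding model and the jump threshold here are my own assumptions, not the gist's exact values:

```python
# Sketch: drop sentences from the left until the chunk embedding "jumps",
# i.e. the last removed sentence was carrying meaning the rest doesn't.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def trim_left(sentences, jump_threshold=0.15):  # threshold is illustrative
    prev = model.encode(" ".join(sentences))
    for i in range(1, len(sentences)):
        cur = model.encode(" ".join(sentences[i:]))
        if cosine_distance(prev, cur) > jump_threshold:
            # Removing sentence i-1 caused the jump, so keep it and stop.
            return sentences[i - 1:]
        prev = cur
    return sentences

# Mirror the same loop from the right to trim the other side.
```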
I hope that you will stick with Chonkie for the journey of making the 'perfect' chunking library!
Thanks again!
1) what
Edit: Also, from the same table, it seems that only this library was run after warming up, while others were not. https://github.com/bhavnicksm/chonkie/blob/main/benchmarks/R...
Algorithmically, there's not much difference in TokenChunking between Chonkie and LangChain or any other TokenChunking algorithm you might want to use. (except LlamaIndex; I don't know what mess they made to end up with a 33x slower algo)
If you only want TokenChunking (which I don't fully recommend), then rather than Chonkie or LangChain, just write your own for production :) At the very least, don't install 80MiB packages just for TokenChunking; Chonkie is 4x smaller than them.
That's just my honest response... And these benchmarks are just the beginning; future optimizations to SemanticChunking should push its speed-up from the current ~2.5x even higher.
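To make "write your own" concrete, a bare-bones token chunker really is only a few lines. This is just a sketch using tiktoken with illustrative chunk and overlap sizes, not Chonkie's implementation:

```python
# Minimal token chunker: encode once, slice the token list with overlap,
# decode each window back to text. Sizes here are arbitrary examples.
import tiktoken

def token_chunk(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        chunks.append(enc.decode(window))
        if start + chunk_size >= len(tokens):
            break
    return chunks
```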
I’m using o1-preview for chunking, creating summary subdocuments.
Thanks for responding, I'll try to make it easier to use something like that in Chonkie in the future!
Chunking is easily where all of these problems die beyond PoC scale.
I’ve talked to multiple code generation companies in the past week — most are stuck with BM25 and taking in whole files.
But, it's on the roadmap, so please hold on!
I have a particular max token length in mind, and I have a tokenizer like tiktoken. I have a string and I want to quickly find the maximum length truncation of the string that is <= target max token length.
Does chonkie handle this?
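For what it's worth, that's straightforward to do directly with tiktoken; a sketch (not necessarily a Chonkie API):

```python
# Truncate a string to at most max_tokens tokens: encode once, slice the
# token list, decode the prefix back to text.
import tiktoken

def truncate_to_tokens(text: str, max_tokens: int) -> str:
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    if len(tokens) <= max_tokens:
        return text
    # Decoding a truncated token list can cut mid-word, and re-encoding the
    # result may shift the count slightly, so treat this as an approximation.
    return enc.decode(tokens[:max_tokens])
```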
Is that what you meant?
Given a list of sentences, find the largest in-order group of sentences that fits into a max token length such that the sentences have a natural coherence.
In my case I used a fuzzy token limit, and the chunker would choose a smaller group of sentences that fit into a single paragraph or a single common structure instead of cramming in every possible sentence until it ran out of room. It would also go over the limit when that was beneficial.
A simple example: with an alphabetized set, instead of making one chunk that ran from the A items through part of the B items, it would end at the A items with tokens to spare, or, if it only took an extra 10%, it would finish the B items. Most of the time it just decided to end chunks at paragraph boundaries instead of continuing into the middle of the next one.
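Roughly, a packing heuristic like that could look like this; the boundary indices, the 10% slack, and the "mostly full" cutoff are illustrative assumptions rather than the exact rules described above:

```python
# Greedy sentence packing with a fuzzy token limit: prefer ending a chunk at
# a natural boundary (e.g. a paragraph end), and allow ~10% overflow to do so.
def pack_sentences(sentences, token_counts, boundaries, max_tokens, slack=0.10):
    # boundaries: indices i where a natural break follows sentence i.
    chunks, current, current_tokens = [], [], 0
    for i, (sentence, n) in enumerate(zip(sentences, token_counts)):
        if current and current_tokens + n > max_tokens:
            if i in boundaries and current_tokens + n <= max_tokens * (1 + slack):
                # Go slightly over the limit to end cleanly at the boundary.
                current.append(sentence)
                chunks.append(current)
                current, current_tokens = [], 0
                continue
            chunks.append(current)
            current, current_tokens = [], 0
        current.append(sentence)
        current_tokens += n
        if i in boundaries and current_tokens >= max_tokens * 0.7:
            # End early at a boundary with tokens to spare ("mostly full").
            chunks.append(current)
            current, current_tokens = [], 0
    if current:
        chunks.append(current)
    return chunks
```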
edit: Get some Moo Deng jokes in the docs!
Memory footprint of the chunking itself would vary widely based on the dataset, and it's not something we tested for... usually other providers don't test it either, as long as it doesn't bust up the computer/server.
If saving memory during runtime is important for your application, let me know! I'd run some benchmarks for it...
Thanks!
Think of it as if ChatGPT (or other models) didn't just have the unstructured knowledge baked into their weights during training, but also an extra DB on the side with specific structured knowledge that it can look up on the fly.
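If it helps to see the idea in code, here's a toy version of that side lookup; the facts, the embedding model, and the question are all made up for illustration:

```python
# Toy "DB on the side": embed stored facts, find the ones closest to a
# question, and paste them into the prompt as extra context.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
facts = [
    "Invoice #1042 was paid on 2024-03-02.",
    "The Austin warehouse closed in June.",
    "Support hours are 9am-5pm CET.",
]
fact_vectors = model.encode(facts)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = model.encode(question)
    sims = fact_vectors @ q / (np.linalg.norm(fact_vectors, axis=1) * np.linalg.norm(q))
    return [facts[i] for i in np.argsort(-sims)[:k]]

context = "\n".join(retrieve("When was invoice 1042 paid?"))
prompt = f"Use this context to answer:\n{context}\n\nQuestion: When was invoice 1042 paid?"
```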
The benchmark numbers are massaged to look really impressive, but under scrutiny the improvements are under 1.86x compared to the leading product, LangChain, according to a further page describing the measurements. It claims to beat it on all aspects, but where the results are close, the author's library was benchmarked after warm-up while the others were not, so the numbers are not comparable. The author acknowledged this but didn't change the methodology to provide a direct comparison.
The author is Bhavnick S. Minhas, an early career ML engineer with both research and industry experience and very prolific with his GitHub contributions.