We took an invention disclosure PDF (https://drive.google.com/file/d/1ySzQgbNZkC5dPLtE3pnnVL2rW_9...) containing an IRR-vs-frequency graph and asked GPT: "From the graph, at what frequency is the IRR maximized?" We originally tried this with gpt-4o, and while writing this post re-ran it on the newer natively multimodal o4-mini-high. After a 30-second thinking pause, it asked for clarifications, then churned out buggy code, pulled data from the wrong page, and still couldn't answer the question. We wrote up the full story with screenshots here: https://docs.morphik.ai/blogs/gpt-vs-morphik-multimodal.
We got frustrated enough to try fixing it ourselves.
We built Morphik to do multimodal retrieval over documents like PDFs, where images and diagrams matter as much as the text.
To do this, we use ColPali-style embeddings, which treat each document page as an image and generate multi-vector representations. These embeddings capture layout, typography, and visual context, so retrieval can return a whole table or schematic rather than just nearby tokens. Combined with vector search, this lets us retrieve the exact pages containing the relevant diagrams and pass them as images to the LLM. With that in place, even an 8B Llama 3.1 vision model running locally can answer the question!
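For the curious, retrieval over these multi-vector embeddings is scored with late interaction (MaxSim): each query token vector picks its best-matching page patch, and the maxima are summed. A toy sketch in PyTorch (shapes and names are illustrative, not Morphik's internals):

    import torch
    import torch.nn.functional as F

    def maxsim(query_vecs: torch.Tensor, page_vecs: torch.Tensor) -> float:
        # query_vecs: (n_query_tokens, dim); page_vecs: (n_patches, dim).
        # For each query token, take its best match among the page's patch
        # vectors, then sum those maxima to score the page.
        sims = query_vecs @ page_vecs.T
        return sims.max(dim=1).values.sum().item()

    # Stand-ins for real ColPali outputs: 16 query-token vectors and a few
    # pages, each embedded as ~1024 patch vectors (rows L2-normalized).
    query_vecs = F.normalize(torch.randn(16, 128), dim=-1)
    pages = [F.normalize(torch.randn(1024, 128), dim=-1) for _ in range(3)]

    scores = [maxsim(query_vecs, p) for p in pages]
    best_page = max(range(len(scores)), key=scores.__getitem__)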
Early pharma testers hit our system with queries like "Which EGFR inhibitors at 50 mg showed ≥ 30% tumor reduction?" We returned the right tables and plots, but hit a bottleneck: we couldn't join the dots across multiple reports. So we built a knowledge graph: we tag entities in both text and images, normalize synonyms (Erlotinib → EGFR inhibitor), infer relations (e.g. administered_at, yields_reduction), and stitch everything into a graph. Now a single query can traverse that graph across documents and surface a coherent, cross-document answer along with the correct pages as images.
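To make the construction step concrete, here's a toy version with networkx (the synonym table and facts are hardcoded for illustration; the real pipeline infers them with an LLM):

    import networkx as nx

    # Toy synonym table; in practice an LLM plus an ontology does this.
    CANONICAL = {"erlotinib": "EGFR inhibitor", "gefitinib": "EGFR inhibitor"}

    def normalize(entity: str) -> str:
        return CANONICAL.get(entity.lower(), entity)

    G = nx.MultiDiGraph()

    # One extracted fact per (document, subject, relation, object, chunk).
    facts = [
        ("report_a.pdf", "Erlotinib", "administered_at", "50 mg",
         "Erlotinib was administered at 50 mg once daily..."),
        ("report_b.pdf", "Erlotinib", "yields_reduction", ">= 30% tumor reduction",
         "Patients on erlotinib showed >= 30% tumor reduction..."),
    ]
    for doc, subj, rel, obj, chunk in facts:
        s, o = normalize(subj), normalize(obj)
        G.add_edge(s, o, relation=rel, source=doc)
        # Keep the supporting text chunk on each node for query time.
        for node in (s, o):
            G.nodes[node].setdefault("chunks", []).append(chunk)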
To illustrate that, and just for fun, we built a graph of 100 of Paul Graham's essays here: https://pggraph.streamlit.app/ You can search for nodes (e.g. startup, sam altman, paul graham) and see the corresponding connections. In our system, we create graphs and store the relevant text chunks along with the entities, so at query time we can extract the relevant entity, search the graph, and pull in the text chunks of all connected nodes, which improves cross-document queries.
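Continuing the toy sketch above, the query path is: normalize the entity mentioned in the query, find its node, and pull the chunks attached to it and every connected node:

    # Resolve the query entity to its canonical node, then gather the
    # text chunks stored on it and all neighboring nodes.
    entity = normalize("Gefitinib")        # -> "EGFR inhibitor"
    context_chunks = []
    for node in [entity, *nx.all_neighbors(G, entity)]:
        context_chunks.extend(G.nodes[node].get("chunks", []))
    # context_chunks now spans report_a.pdf and report_b.pdf, so the LLM
    # sees the dose and the outcome together.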
For longer or multi-turn queries, we added persistent KV caching, which stores the intermediate key-value states from the transformer's attention layers. Instead of recomputing attention over the whole prefix from scratch every time, we reuse the cached states, speeding up repeated queries and letting us handle much longer context windows.
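The gist, in Hugging Face transformers terms (a minimal sketch of the caching idea with an illustrative model choice; the persistence across requests is the part we added on top):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

    # Pay the attention cost of the long shared context exactly once.
    ctx = tok("<long retrieved context>", return_tensors="pt")
    with torch.no_grad():
        out = model(**ctx, use_cache=True)
    cache = out.past_key_values  # per-layer key/value states

    # A follow-up only runs attention over its new tokens; the mask must
    # cover past + new positions.
    q = tok(" Q: at what frequency is IRR maximized?", return_tensors="pt")
    mask = torch.ones(1, ctx.input_ids.shape[1] + q.input_ids.shape[1], dtype=torch.long)
    with torch.no_grad():
        out2 = model(input_ids=q.input_ids, attention_mask=mask,
                     past_key_values=cache, use_cache=True)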
We’re open‑source under the MIT Expat license: https://github.com/morphik-org/morphik-core
Would love to hear your RAG horror stories (what worked, what didn't) and any feedback on Morphik. We're here for it.
Not quite; you should clarify a bit more. The README says this about the license:
"Certain features - such as Morphik Console - are not available in the open-source version. Any feature in the ee namespace is not available in the open-source version and carries a different license. Any feature outside that is open source under the MIT expat license."
Does such a project exist?
https://github.com/rmusser01/tldw
I first built a POC in Gradio and am now rebuilding it as a FastAPI app. The media processing endpoints work, but I'm still tweaking media ingestion to allow for syncing to clients (the idea is to allow for a client-first design). The GitHub repo doesn't show any of the recent changes, but if you check back in 2-3 weeks, I think I'll have the API version pushed to the main branch.
Since people will be curious, one lesser thing I used this for is a diary/assistant, and it's nice to have the peace of mind that I can dump my innermost thoughts without any concern for oversharing.
Curious about the suitability of this for PDFs of conference presentation slides vs. academic papers. Is it sensitive or tunable to such distinctions?
Looking for tests/validation; are they all in the evaluation folder? A pharma example would be great.
Thank you for documenting the telemetry. I appreciate the ee commercialization dance :)
Graph creation and entity resolution are both tunable with overrides: you can specify domain-specific prompts and examples (will add a pharma example!) (https://docs.morphik.ai/python-sdk/create_graph#parameters). I tried to paste code inline, but it formatted badly; a rough sketch follows, and the docs page has the details.
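The parameter names and override schema below follow my reading of that docs page; treat the values as illustrative and check the link for the exact signature:

    from morphik import Morphik

    db = Morphik("morphik://<owner>:<token>@<host>")  # connection URI elided

    graph = db.create_graph(
        name="pharma_trials",
        filters={"category": "pharma"},
        prompt_overrides={
            "entity_extraction": {
                "examples": [
                    {"label": "Erlotinib", "type": "MEDICATION"},
                    {"label": "EGFR", "type": "TARGET"},
                ],
            },
            "entity_resolution": {
                "examples": [
                    {"canonical": "EGFR inhibitor",
                     "variants": ["Erlotinib", "Gefitinib"]},
                ],
            },
        },
    )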
Minor nitpick, but the README for your ui-component project under ee says:
"License This project is part of Morphik and is licensed under the MIT License."
However, your ee folder has an "enterprise" license, not the MIT license.
For the metadata extraction, we save it as a Column(JSONB) on each document, which allows it to be changed on the fly.
Although I keep wondering if it would have been better to use something like mongodb for this part, just because it's more natural.
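For reference, the pattern is just this in SQLAlchemy (model and column names illustrative):

    from sqlalchemy import Column, String
    from sqlalchemy.dialects.postgresql import JSONB
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Document(Base):
        __tablename__ = "documents"
        id = Column(String, primary_key=True)
        # Schemaless per-document metadata: new keys can appear at any
        # time without a migration, and JSONB containment keeps it
        # queryable, e.g.
        #   SELECT * FROM documents
        #   WHERE doc_metadata @> '{"category": "pharma"}';
        doc_metadata = Column(JSONB, default=dict)

Postgres's JSONB operators and GIN indexes get you a lot of that mongodb-style flexibility without running a second datastore.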
Please let me know if you have questions and how it works out for you.
If you're using plain .txt files, then plain RAG built on top of any vector database can suffice, depending on your queries: if they directly reference the text (or can be made to), similarity search is good enough. If they are cross-document, setting a high number of chunks for plain RAG to retrieve might also do a good job.
If you have tables, images, etc., then using a better extraction mechanism (maybe unstructured, or another document processor) and then creating the embeddings can also work well.
I'd say if docs are simple, then just building your own pipeline on top of a vector db is good!
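For example, the whole "plain RAG on a vector db" route can be this small (chromadb with its default embedder; purely illustrative):

    import chromadb

    client = chromadb.Client()
    col = client.create_collection("docs")

    # Ingest: one entry per text chunk; Chroma embeds with its default model.
    chunks = ["...chunk 1...", "...chunk 2...", "...chunk 3..."]
    col.add(ids=[f"chunk-{i}" for i in range(len(chunks))], documents=chunks)

    # Query: retrieve generously for cross-document questions (capped here
    # at the toy corpus size), then hand the chunks to the LLM as context.
    hits = col.query(query_texts=["which trials used 50 mg doses?"], n_results=3)
    context = "\n\n".join(hits["documents"][0])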
I'd be happy to report back after some testing. We're looking to optimize more of this soon, as speed is somewhat of a missing piece at the moment.