./llama.cpp/llama-cli -hf unsloth/DeepSeek-V3.1-GGUF:UD-Q2_K_XL -ngl 99 --jinja -ot ".ffn_.*_exps.=CPU"
More details on running + optimal params here: https://docs.unsloth.ai/basics/deepseek-v3.1
Was that document almost exclusively written with LLMs? I looked at it last night (~8 hours ago) and it was riddled with mistakes; the most egregious was that the "Run with Ollama" section had instructions for how to install Ollama, but then the shell commands were actually running llama.cpp - a mistake probably no human would make.
Do you have any plans on disclosing how much of these docs are written by humans vs not?
Regardless, thanks for the continued release of quants and weights :)
```
./llama.cpp/llama-gguf-split --merge \
  DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
  merged_file.gguf
```
Ollama only accepts merged GGUFs (not split ones), hence the command.
All docs are made by humans (primarily my brother and me), just sometimes there might be some typos (sorry in advance)
I'm also uploading Ollama-compatible versions directly so `ollama run` can work (it'll take a few more hours)
But in the docs I see things like
```
cp llama.cpp/build/bin/llama-* llama.cpp
```
Wouldn't this explain that? (Didn't look too deep)
```
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
  -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli llama-server
cp llama.cpp/build/bin/llama-* llama.cpp
```
but then Ollama is above it:
```
./llama.cpp/llama-gguf-split --merge \
  DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
  merged_file.gguf
```
I'll edit the area to say you first have to install llama.cpp
There is a way to convert to Q8_0, BF16, or F16 without compiling llama.cpp, and it's enabled if you use `FastModel` rather than `FastLanguageModel`
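Roughly, that path looks like this (a minimal sketch; the model name is just a placeholder, and you should double-check the exact method names and supported quantization methods against the Unsloth docs for your installed version):
```
from unsloth import FastModel

# Placeholder model; FastModel (rather than FastLanguageModel) is what enables
# the llama.cpp-free export path described above.
model, tokenizer = FastModel.from_pretrained("unsloth/gemma-3-4b-it")

# Export to GGUF without a local llama.cpp build (Q8_0 here; BF16/F16 also work).
model.save_pretrained_gguf("model_gguf", tokenizer, quantization_method="q8_0")
```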
Essentially I try `sudo apt-get`; if that fails, plain `apt-get`; and if that also fails, it just fails. We need `build-essential cmake curl libcurl4-openssl-dev`
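A rough sketch of that fallback order (not the actual code, which is linked below):
```
import subprocess

PACKAGES = ["build-essential", "cmake", "curl", "libcurl4-openssl-dev"]

def install_build_deps():
    # Try `sudo apt-get` first, then plain `apt-get`; if both fail, give up.
    for prefix in (["sudo", "apt-get"], ["apt-get"]):
        try:
            subprocess.run(prefix + ["install", "-y", *PACKAGES], check=True)
            return
        except (subprocess.CalledProcessError, FileNotFoundError):
            continue
    raise RuntimeError("Could not install build dependencies; please install them manually.")
```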
See https://github.com/unslothai/unsloth-zoo/blob/main/unsloth_z...
Imo it's best to just depend on the required fork of llama.cpp at build time (or not) according to some configuration. Installing things at runtime is nuts (especially if it means modifying the existing install path). But if you don't want to do that, I think this would also be an improvement:
- see if llama.cpp is on the PATH and already has the requisite features
- if not, check /etc/os-release to determine distro
- if unavailable, guess distro class based on the presence of high-level package managers (apt, dnf, yum, zypper, pacman) on the PATH
- bail, explain the problem to the user, and give copy/paste-friendly instructions at the end if we managed to figure out where we're running (a rough sketch of the detection follows)
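Something like this, as a rough sketch of the middle two steps (assuming the PATH check for llama.cpp has already come up empty):
```
import shutil

def detect_distro():
    # Step 2: /etc/os-release is the canonical place to identify the distro.
    try:
        with open("/etc/os-release") as f:
            info = dict(line.rstrip().split("=", 1) for line in f if "=" in line)
        return info.get("ID", "").strip('"') or None
    except FileNotFoundError:
        pass
    # Step 3: otherwise guess the distro class from the package manager on PATH.
    for manager, distro_class in [("apt", "debian"), ("dnf", "fedora"),
                                  ("yum", "rhel"), ("zypper", "suse"),
                                  ("pacman", "arch")]:
        if shutil.which(manager):
            return distro_class
    return None  # unknown: bail and print manual instructions instead
```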
Is either sort of change potentially agreeable enough that you'd be happy to review it?
(1) Removed and disabled sudo
(2) Installing via apt-get will ask user's input() for permission
(3) Added an error if llama.cpp fails, with instructions to manually compile llama.cpp
I would just ask the user to install the package, and _maybe_ show the command line to install it (but never run it).
That said, it does at least seem like these recent changes are a large step in the right direction.
---
* in terms of what the standard approach should be, we live in an imperfect world and package management has been done "wrong" in many ecosystems, but in an ideal world I think the "correct" solution here should be:
(1) If it's an end user tool it should be a self contained binary or it should be a system package installed via the package manager (which will manage any ancillary dependencies for you)
(2) If it's a dev tool (which, if you're cloning a cpp repo & building binaries, it is), it should not touch anything systemwide. Whatsoever.
This often results in a README with manual instructions to install deps, but there are many good automated ways to approach this. E.g. for C++ this is a solved problem with Conan Profiles. However, that might incur significant maintenance overhead for the Unsloth guys if it's not something the ggml guys support. A dockerised build is another potential option here, though that would still require the user to have some kind of container engine installed, so still not 100% ideal.
(2) I might make the message on installing llama.cpp more informative - i.e. instead of redirecting people to the docs on manual compilation (https://docs.unsloth.ai/basics/troubleshooting-and-faqs#how-...), I might actually print out a longer message in the Python cell itself
Yes we're working on Docker! https://hub.docker.com/r/unsloth/unsloth
That will be nice too, though I was more just referring to simply doing something along the lines of this in your current build:
docker run conanio/gcc11-ubuntu16.04 make clean -C llama.cpp etc etc...
(likely mounting & calling a sh file instead of passing individual commands)
---
Although I do think getting the ggml guys to support Conan (or monkey patching your own llama conanfile in before building) might be an easier route.
Quietly installing stuff at runtime is shady for sure, but why not if I consent?
1. So I added a `check_llama_cpp` which checks whether llama.cpp already exists and uses the prebuilt one: https://github.com/unslothai/unsloth-zoo/blob/main/unsloth_z...
2. Yes I like the idea of determining distro
3. Agreed on bailing - I was also thinking of doing a Python input() with a 30-second waiting period for apt-get, if that's ok? We tell the user we will apt-get some packages (only if apt exists) (no sudo), and after 30 seconds it'll just error out (rough sketch below)
4. I will remove sudo immediately (ie now), and temporarily just do (3)
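The 30-second input() idea from (3) could look roughly like this (a Unix-only sketch, since it waits on stdin via select):
```
import select
import sys

def confirm_apt_get(timeout=30):
    # Tell the user exactly what will run, then wait up to `timeout` seconds
    # for Enter; otherwise error out and point at the manual-compile docs.
    print("About to run `apt-get install ...` (no sudo). Press Enter to continue.")
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    if ready:
        sys.stdin.readline()
        return True
    raise RuntimeError("No confirmation received; please compile llama.cpp manually (see the docs).")
```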
But more than happy to fix this asap - again sorry on me being dumb
- Determine, via the algorithm above, the command that has to be run.
This does most of the work a user would otherwise have to do to figure out what has to be installed on their system.
- Ask whether to run the command automatically.
This allows the “software should never install dependencies by itself” crowd to say no and figure out further steps, while allowing people who just want it to work to get on with their task as quickly as possible (who do you think there are more of?).
I think it would be fine to print out the command and force the user to run it themselves, but it would bring little material gain at the cost of some of your users’ peace (“oh no it failed, what is it this time ...”).
Please, please, never silently attempt to mutate the state of my machine, that is not a good practice at all and will break things more often than it will help because you don't know how the machine is set up in the first place.
But yes agreed there won't be any more random package installs sorry!
You just fail and print a nice error message telling the user exactly what they need to do, including the exact apt command or whatever that they need to run.
(1) Removed and disabled sudo
(2) Installing via apt-get will ask user's input() for permission
(3) Added an error if llama.cpp fails, with instructions to manually compile llama.cpp
Again apologies on my dumbness and thanks for pointing it out!
I was thinking about whether I can do it during the pip install or via setup.py, which would do the apt-get instead.
As a fallback, I'll probably for now remove shell executions and just warn the user
Some people may prefer using whatever llama.cpp is in $PATH; it's okay to support that, though I'd say doing so may lead to more spam from confused noob users - they may just have an outdated version lurking in $PATH.
Doing so makes the unsloth wheel platform-dependent; if this is too much of a burden, then maybe you can just package the llama.cpp binary and have it on PyPI, like how the scipy guys maintain https://pypi.org/project/cmake/ on PyPI (yes, you can `pip install cmake`), and then depend on it (maybe in an optional group, I see you already have a lot due to cuda shit).
I'm still working on it, but sadly I'm not a packaging person so progress has been nearly zero :(
From how I interpreted it, he meant you could create a new Python package; this would effectively be the binary you need.
In your current package, you could depend on the new one, and through that - pull in the binary.
This would let you easily decouple your package from the binary, too - so it'd be easy to update the binary to the latest even without pushing a new version of your original package
I've maintained release pipelines before and handled packaging in a previous job, but I'm not particularly into the python ecosystem, so take this with a grain of salt: an approach would be
Pip packages:
* Unsloth: current package, prefers using unsloth-llama, and uses llama.cpp on PATH as a fallback (with an error msg as the final fallback if neither exists, recommending installing unsloth-llama) - rough sketch after the list
* Unsloth-llama: new package which only bundles the llama cpp binary
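A rough sketch of how the main package could resolve the binary under that split (`unsloth_llama` and its `binary_path()` helper are hypothetical names here, not an existing package):
```
import shutil
from importlib.util import find_spec

def find_llama_cli():
    # Prefer the binary shipped by the hypothetical companion package.
    if find_spec("unsloth_llama"):
        import unsloth_llama
        return unsloth_llama.binary_path()  # hypothetical helper it would expose
    # Fall back to whatever llama.cpp build is already on PATH.
    path = shutil.which("llama-cli")
    if path:
        return path
    raise RuntimeError(
        "No llama.cpp found; `pip install unsloth-llama` or build llama.cpp manually."
    )
```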
I was trying to see if I could pre-compile some llama.cpp binaries then save them as a zip file (I'm a noob sorry) - but I definitely need to investigate further on how to do python pip binaries
But I think similarly for uv we need a setup.py for packaging binaries (more complex)
Try to find a prebuilt binary and download it.
See if you can compile from source if a compiler is installed.
If no compiler: prompt to install one via sudo apt, explaining why; also give the option to abort and have the user install a compiler themselves.
This isn't perfect, but it limits the cases where prompting is necessary (rough sketch below).
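A rough sketch of that ordering (the prebuilt-download step is stubbed out, since where to fetch from is a separate question):
```
import shutil

def try_download_prebuilt():
    # Stub: real code would fetch a matching release artifact for this platform.
    return False

def plan_llama_cpp_install():
    if try_download_prebuilt():
        return "use prebuilt binary"
    if shutil.which("cc") or shutil.which("g++") or shutil.which("clang"):
        return "compile from source"
    return "prompt user about installing a compiler (or let them abort)"
```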
The current solution hopefully is in between - ie sudo is gone, apt-get will run only after the user agrees by pressing enter, and if it fails, it'll tell the user to read docs on installing llama.cpp
Usually you don't make assumptions about the host OS; just try to find the things you need and, if you can't, fail, ideally with good feedback. If you want to provide the "hack", you can still do it, but ideally behind a flag, `allow_installation` or something like that. That is, if you want your code to reach broader audiences.
But I do agree that maybe, for better security, PyPI should check for shell commands and warn
But I'm working on more cross platform docs as well!
I'm working with the AMD folks to make the process easier, but it looks like first I have to move off from pyproject.toml to setup.py (allows building binaries)
I chose (1) since it was mainly for ease of use for the user - but I agree it's not a good idea sorry!
:( I also added a section to manually compile llama.cpp here: https://docs.unsloth.ai/basics/troubleshooting-and-faqs#how-...
But I agree I should remove apt-gets - will do this asap! Thanks for the suggestions :)
I think that you have removed sudo, so this is nice; my suggestion is pretty similar to that of pxc (basically determine the different distros and handle them accordingly)
I wonder if we will ever get a working universal package manager on Linux. To me flatpak genuinely makes the most sense, even sometimes for CLI, but flatpak isn't built for CLI, unlike snap, which supports both CLI and GUI - though snap is proprietary.
I agree on handling different distros - sadly I'm not familiar with others, so any help would be appreciated! For now I'm most familiar with apt-get, but would 100% want to expand out!
Interesting will check flatpak out!
I doubt its efficacy here; it might be more useful if you provide a whole Jupyter / browser GUI, but a lot of us run it just from the CLI, so I doubt flatpak is the right fit.
I didn't mean to say that flatpak was the right tool for this job, I seriously don't know too much to comment and so I'd prefer if you could ask someone definitely experienced regarding it.
My reasoning for flatpak was chunking support (which I think is rare in AppImage) and easier GPU integration (I think) compared to Docker, though my reasoning might be flawed since flatpak mostly isn't used for CLI tools.
probably not, because LLMs are a little more competent than this
nixos is such a great way to expose code doing things it shouldn't be doing.
but when it was failing on my original idea, it kept trying dumb things that weren't really even nix after a while.
165GB will need a 24GB GPU + 141GB of RAM for reasonably fast inference, or a Mac
You guys already do a lot for the local LLM community and I appreciate it.
llama.cpp also unfortunately cannot quantize matrices whose dimensions are not a multiple of 256 (e.g. 2880)
https://www.tbench.ai/leaderboard
Looks like it doesn't get close to GPT-5, Claude 4, or GLM-4.5, but still does reasonably well compared to other open weight models. Benchmarks are rarely the full story though, so time will tell how good it is in practice.
That said, there are plenty of other benchmarks that disagree with these. From my experience most of these benchmarks are trash. Use the model yourself, apply your own set of problems, and see how well it fares.
I also publish my own evals on new models (using coding tasks that I curated myself, without tools, rated by a human with rubrics). Would love for you to check it out and give your thoughts:
Example recent one on GPT-5:
https://eval.16x.engineer/blog/gpt-5-coding-evaluation-under...
All results:
I don't consider myself super special. I think it should be doable to create a benchmark that beats me having to test every single new model.
"We totally promise that when we run your benchmark against our API we won't take the data from it and use to be better at your benchmark next time"
:P
If you want to do it properly you have to avoid any 3rd party hosted model when you test your benchmark, which means you can't have GPT5, claude, etc. on it; and none of the benchmarks want to be 'that guy' who doesn't have all the best models on it.
So no.
They're not secret.
Your api key is linked to your credit card, which is linked to your identity.
…but hey, you're right.
Lets just trust them not to be cheating. Cool.
We made objective systems turn out subjective answers… why the shit would anyone think objective tests would be able to grade them?
https://artificialanalysis.ai/models/deepseek-v3-1-reasoning
Qwen3-Coder-480B-A35B-Instruct
GLM4.5 Air
Kimi K2
DeepSeek V3 0324 / R1 0528
GPT-5 Mini
Thanks for any feedback!
Its knowledge seems to be lacking even compared to GPT-3.
No idea how you'd benchmark this though.
That is the point of these small models. Remove the bloat of obscure information (address that with RAG), leaving behind a core “reasoning” skeleton.
Seems more user-friendly to bake it in.
https://openrouter.ai/openai/gpt-oss-120b and https://openrouter.ai/deepseek/deepseek-chat-v3.1 for the same providers is probably better, although gpt-oss-120b has been around long enough to have more providers, and presumably for hosters to get comfortable with it / optimize hosting of it.
Let's hope not, because gpt-oss-120B can be dramatically moronical. I am guessing the MoE contains some very dumb subnets.
Benchmarks can be a starting point, but you really have to see how the results work for you.
I'm considering buying a Linux workstation lately and I want it full AMD. But if I can just plug an NVIDIA card via an eGPU card for self-hosting LLMs then that would be amazing.
Though I have to ask: why two eGPUs? Is the LLM software smart enough to be able to use any combination of GPUs you point it at?
llama.cpp probably is too, but I haven't tried it with a bigger model yet.
llama-server.exe -ngl 99 -m Qwen3-14B-Q6_K.gguf
And then connect to llama.cpp via browser at localhost:8080 for the WebUI (it's basic but does the job; screenshots can be found on Google). You can connect more advanced interfaces to it because llama.cpp actually has an OpenAI-compatible API.
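For example, any OpenAI-style client can talk to it (a small sketch; llama-server ignores the API key, and the model name is just a label for whatever model it loaded):
```
from openai import OpenAI

# llama-server listens on port 8080 by default and exposes /v1 endpoints.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="Qwen3-14B-Q6_K",  # whatever model llama-server was started with
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```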
There's also ROCm. That's not working for me in LM Studio at the moment. I used that early last year to get some LLMs and Stable Diffusion running. As far as I can tell, it was faster before, but Vulkan implementations have caught up or something, so much so that the mucking about often isn't worth it. I believe ROCm is hit or miss for a lot of people, especially on Windows.
While the memory bandwidth is decent, you do actually need to do matmuls and other compute operations for LLMs, which, again, it's pretty slow at.
[0]: https://old.reddit.com/r/LocalLLaMA/comments/1dcdit2/p40_ben...
[1]: https://developer.nvidia.com/cuda-gpus
or this:
```
<|tool_calls_begin|><|tool_call_begin|>execute_shell<|tool_sep|>{"command": "pwd && ls -la"}<|tool_call_end|><|tool_calls_end|>
```
Prompting it to use the right format doesn't seem to work. Claude, Gemini, GPT-5, and GLM 4.5 don't do that. To accommodate DeepSeek, the tiny agent that I'm building will have to support all the weird formats.
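For what it's worth, tolerating that format isn't much code. A rough sketch (the marker strings are taken from the snippet above and may not match the model's exact special tokens, and the regex is naive about nested JSON braces):
```
import json
import re

# Pull (name, arguments) pairs out of DeepSeek-style tool-call markers.
TOOL_CALL = re.compile(
    r"<\|tool_call_begin\|>(?P<name>.+?)<\|tool_sep\|>(?P<args>\{.*?\})<\|tool_call_end\|>",
    re.DOTALL,
)

def parse_tool_calls(text):
    return [
        {"name": m.group("name").strip(), "arguments": json.loads(m.group("args"))}
        for m in TOOL_CALL.finditer(text)
    ]

example = (
    '<|tool_calls_begin|><|tool_call_begin|>execute_shell<|tool_sep|>'
    '{"command": "pwd && ls -la"}<|tool_call_end|><|tool_calls_end|>'
)
print(parse_tool_calls(example))
# [{'name': 'execute_shell', 'arguments': {'command': 'pwd && ls -la'}}]
```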
One of the funnier info scandals of 2025 has been that only Claude was even close to properly trained on JSON file edits until o3 was released, and even then it needed a bespoke format. Geminis have required using a non-formalized diff format by Aider. It wasn't until June that Gemini could do diff-string-in-JSON better than 30% of the time, and until GPT-5 that an OpenAI model could. (Though v4a, as OpenAI's bespoke edit format is called, is fine because it at least worked well in tool calls. Gemini's was a clown show: you had to post-process regular text completions to parse out any diffs.)
I thought the APIs in use generally interface with backend systems supporting logit manipulation, so there is no need to reject and re-infer anything; it's guaranteed right the first time because any token that would be invalid has a 0% chance of being produced.
I guess for the closed commercial systems that's speculative, but all the discussion of the internals of the open source systems I’ve seen has indicated that and I don't know why the closed systems would be less sophisticated.
There is a substantial performance cost to nuking logits; the open source internals discussion may have glossed over that for clarity (see github.com/llama.cpp/... below). The cost is very high, so the default in the API* is to not artificially lower other logits, and to only do that if the first inference attempt yields a token that is invalid under the compiled grammar.
Similarly, I was hoping to be on target w/r/t what strict mode is in an API, and am sort of describing the "outer loop" of sampling
* blissfully, you do not have to implement it manually anymore - it is a parameter in the sampling params member of the inference params
* "the grammar constraints applied on the full vocabulary can be very taxing. To improve performance, the grammar can be applied only to the sampled token..and nd only if the token doesn't fit the grammar, the grammar constraints are applied to the full vocabulary and the token is resampled." https://github.com/ggml-org/llama.cpp/blob/54a241f505d515d62...
For OpenAI, you can just pass in the json_schema to activate it, no library needed. For direct LLM interfacing you will need to host your own LLM or use a cloud provider that allows you to hook in, but someone else may need to correct me on this.
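With the OpenAI SDK that looks roughly like this (the model name is a placeholder for anything that supports structured outputs):
```
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": "Name a city and its population."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "city_info",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "population": {"type": "integer"},
                },
                "required": ["city", "population"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # constrained to match the schema
```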
If anyone is using anything other than Outlines, please let us know.
Some people on reddit (very reliable source, I know) are saying it was trained on a lot of Gemini, and I can see that. For example, it does that annoying thing Gemini does now where, when you use slang or really any informal terms, it puts them in quotes in its reply.
Haven't used Gemini much, but the time I used it, it felt very academic and theoretical compared to Opus 4. So that seems to fit. But I'll have to do more evaluation of the non-Claude models to get a better idea of the differences.
It raises a funny "innovator's dilemma" that might happen. Where an incumbent has to serve chatty consumers, and therefore gets little technical/professional training data. And a more sober workplace chatbot provider is able to advance past the incumbent because they have better training data. Or maybe in a more subtle way, chatbot personas give you access to varying market segments, and varying data flywheels.
https://github.com/pchalasani/claude-code-tools/tree/main?ta...
I'm not confident enough to say it's as good as claude opus or even sonnet, but it seems not bad!
I did run into an api error when my context exceeded deepseek's 128k window and had to manually compact the context.
Some Chinese next-gen AI chips are coming, and DeepSeek is working with them to get its flagship model trained on such domestic chips.
Interesting times ahead! Just imagine what it could do to NVIDIA's share price when DeepSeek releases a SOTA new model trained without using NVIDIA chips.
Anyway, from my experience, if China really has advanced AI chips for SOTA models, I am sure the propaganda machine will go all out; look how they boasted about the Huawei CPU that's two generations behind Qualcomm and TSMC
I'm not a programmer, though - engineering manager.
While system prompting is the easy way of limiting the output in a somewhat predictable manner, have you tried setting `max_tokens` when doing inference? For me that works very well for constraining the output: if you set it to 100 you get very short answers, while if you set it to 10,000 you can get very long responses.
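E.g. with any OpenAI-compatible endpoint (URL and model name below are placeholders):
```
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="deepseek-v3.1",  # placeholder
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet."}],
    max_tokens=100,  # caps the response length; raise it for longer answers
)
print(resp.choices[0].message.content)
```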
I'll have to see how things go with this model after a week, once the hype has died down.
$0.56 per million tokens in — and $1.68 per million tokens out.
But I am still disappointed with the price increase.
I asked Kimi K2 for an account of growing up in my home town in Scotland, and it was ridiculously accurate. I then asked it to do the same for a similarly sized town in Kerala. ChatGPT suggested that while it was a good approximation, K2 got some of the specifics wrong.
https://brokk.ai/power-ranking?version=openround-2025-08-20&...
*pricing (deepseek-chat / deepseek-reasoner):
1M INPUT TOKENS (CACHE HIT): $0.07
1M INPUT TOKENS (CACHE MISS): $0.56
1M OUTPUT TOKENS: $1.68
wait until you find out that China is also acting the same way toward the rest of the world (surprise Pikachu face)
Dark propaganda opposed to what, light propaganda? The Chinese model being released is about keeping China down?
You seem very animated about this, but you would probably have more success if you tried to clarify this a bit more.
What would be incredible is China sticking the landing to a third-sector economy. Plenty of countries have industrialized over the past century, only a handful became true service economies.
It's theft all the way down, son
Drawing the line only now... saying this is where copying stops being okay doesn't seem very fair. No AI company is really in a position to whine about it from my POV (ignoring any lawyer POV). Cue the world's smallest violin
BTW DeepSeek has contributed a lot, with actual white papers describing in detail their optimizations. How are the rest of the American AI labs doing in contributing research and helping one another advance the field?