Main deal breaker for me when I tried it was I couldn't talk to multiple models at once, even if they were remote models on OpenRouter. If I ask a question in one chat, then switch to another chat and ask a question, it will block until the first one is done.
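For what it's worth, the blocking is a client-side design choice, not an API limitation: OpenAI-compatible endpoints like OpenRouter happily serve concurrent requests. A minimal sketch of dispatching two chats at once (dummy coroutines stand in for the real HTTP calls, and the model names are hypothetical):

```python
import asyncio
import time

async def chat(model: str, prompt: str) -> str:
    # Stand-in for a real HTTP call to an OpenAI-compatible endpoint;
    # the sleep simulates a slow model response.
    await asyncio.sleep(0.2)
    return f"{model}: reply to {prompt!r}"

async def main() -> list[str]:
    start = time.monotonic()
    # Both requests are in flight simultaneously, so neither chat
    # blocks the other -- total wall time is ~0.2s, not ~0.4s.
    replies = await asyncio.gather(
        chat("model-a", "first question"),
        chat("model-b", "second question"),
    )
    assert time.monotonic() - start < 0.35
    return replies

replies = asyncio.run(main())
```

A chat UI built this way only needs one in-flight task per conversation to avoid the head-of-line blocking described above.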
Also Tauri apps feel pretty clunky on Linux for me.
All of them, or this one specifically? I've developed a bunch of tiny apps for my own usage (on Linux) with Tauri (the largest is maybe 5-6K LoC) and they always felt snappy to me, mostly doing all the data processing with Rust and the UI part with ClojureScript+Reagent.
I met the team late last year. They’re based out of Singapore and Vietnam. They ghosted me after promising to have two follow-up meetings, and were unresponsive to any emails, like they just dropped dead.
Principles and manifestos are a dime a dozen. It matters if you live by them or just have them as PR pieces. These folks are the latter.
I stumbled upon Jan.ai a couple of months ago while considering a similar app approach. I was curious because Jan.ai went well beyond what I had assumed were hard limitations.
I haven’t tried Jan.ai yet; I see it as an implementation, not a solution.
… which seems particularly strange considering that the cloned GitHub repository is 1.8 GiB and swells to 4.8 GiB after running «make build» – I tried to build it locally (the build failed anyway).
It is startling that a relatively simple UI frontend can add 3 GiB+ of build artefacts alone – that is the scale of a Linux kernel build.
That's a tall claim.
I've been selling a macOS and iOS private LLM app on the App Store for over two years now, that is:
a) fully native (not electron.js) b) not a llama.cpp / MLX wrapper c) fully sandboxed (none of Jan, Ollama, or LM Studio are)
I will not promote. Quite shameless of you to shill your electron.js based llama.cpp wrapper here.
> I accept every challenge to prove that HugstonOne is worth the claim.
I expect your review.
I’ll remind you,
> If you looking for privacy there is only 1 app in the whole wide internet right now, HugstonOne (I challenge everyone to find another local GUI with that privacy).
Heck, if you look at the original comment, it clearly states it’s macOS and iOS native:
> I've been selling a macOS and iOS private LLM app on the App Store for over two years now, that is: > a) is fully native (not electron.js) b) not a llama.cpp / MLX wrapper c) fully sandboxed (none of Jan, Ollama, LM Studio are)
How do you expect it to be both native and cross-platform? Isn’t HugstonOne Windows-only?
So, what are your privacy arguments? Don’t move the goalposts.
Now for real, I wish to meet more people like you, I admire your professional way of arguing, and I really wish you all the best :)
And HugstonOne is for Windows; what of it?
It's not open source, has no license, runs on Windows only, and requires an activation code to use.
Also, the privacy policy on their website is missing[2].
Anyone remotely concerned about privacy wouldn't come near this thing.
Ah, you're the author, no wonder you're shilling for it.
Great to hear! Since you care so much about privacy, how can I get an activation code without sending any bytes over a network or revealing my email address?
Llama.cpp's built-in web UI.
I tried downloading your app, and it's a whopping 500 MB. What takes up the most disk space? The llama-server binary with the built-in web UI is like a couple MBs.
>the app is a bit heavy as is loading llm models using llama.cpp cli
So it adds the unnecessary overhead of reloading all the weights into VRAM on each message? On some larger models that can take up to a minute. Or do you somehow stream input/output from an attached CLI process without restarting it?
What in the world are you trying to say here? llama.cpp can run completely locally and web access can be limited to localhost only. That's entirely private and offline (after downloading a model). I can't tell if you're spreading FUD about llama.cpp or are just generally misinformed about how it works. You certainly have some motivated reasoning trying to promote your app which makes your replies seem very disingenuous.
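For reference, keeping llama.cpp localhost-only is a single flag. A minimal sketch (the model path is hypothetical; `llama-server` is the HTTP server that ships with llama.cpp and hosts its built-in web UI):

```shell
# Bind the server and its web UI to the loopback interface only, so
# nothing is reachable from other machines; once the model file is on
# disk, no traffic leaves the host.
llama-server -m ./models/example-7b-q4_k_m.gguf \
  --host 127.0.0.1 \
  --port 8080
```

The web UI is then available at http://127.0.0.1:8080 from the local machine and nowhere else.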
- Cloud Integration: Connect to OpenAI, Anthropic, Mistral, Groq, and others
- Privacy First: Everything runs locally when you want it to
I'm trying Jan now and am really liking it - it feels friendlier than the Ollama GUI.
I mean, it's not like people enjoy the lovely smell of cash burning and bias their opinions heavily toward it... or is it like that?
I captured loopback and noticed Ollama returning an HTTP 403 forbidden message to Jan.
The solution was to set these environment variables:
OLLAMA_HOST=0.0.0.0
OLLAMA_ORIGINS=*
Here are the rest of the steps:
- Jan > Settings > Model Providers
- Add new provider called "Ollama"
- Set API key to "ollama" and point to http://localhost:11434/v1
- Ensure variables above are set
- Click "Refresh" and the models should load
Note: Even though an API key is not required for local Ollama, Jan apparently doesn't consider it a valid endpoint unless a key is provided. I set mine to "ollama" and then it allowed me to start a chat.
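The fix above boils down to two environment variables and a restart. A minimal sketch, assuming Ollama's default port 11434 and that you launch `ollama serve` by hand rather than via a service manager:

```shell
# OLLAMA_HOST=0.0.0.0 makes Ollama listen on all interfaces instead of
# only 127.0.0.1; OLLAMA_ORIGINS='*' lets Jan's requests pass the
# origin check that was returning HTTP 403.
export OLLAMA_HOST=0.0.0.0
export OLLAMA_ORIGINS='*'

# Restart the server so the variables take effect, then point Jan at
# the OpenAI-compatible endpoint: http://localhost:11434/v1
# (sanity check: curl http://localhost:11434/v1/models)
```

If Ollama runs as a systemd service, the variables have to be set in the unit's environment instead of your shell.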
Can't make it work with the Ollama endpoint.
I think this is the problem, but they're not focusing on it: https://github.com/menloresearch/jan/issues/5474#issuecommen...
I first used Jan some time ago and didn’t really like it, but it has improved a lot since, so I encourage everyone to try it; it’s a great project.