Launch HN: Cactus (YC S25) – AI inference on smartphones
Hey HN, Henry & Roman here. We're building Cactus (https://cactuscompute.com/), an AI inference engine designed specifically for phones.

We're seeing a major push towards on-device AI, and for good reason: on-device AI decreases latency from >1sec to <100ms, guarantees privacy by default, works offline, and doesn't rack up a massive API bill at scale.

Also, tools and agentic designs make small models genuinely capable beyond what benchmarks suggest. This is corroborated by papers like https://arxiv.org/abs/2506.02153, and we see model labs like DeepMind pushing aggressively into smaller models with the Gemma 3 270M and 308M releases. We found Qwen3 600M to be great at tool calls, for instance.

Some frameworks already try to solve this, but at my previous job we found they struggled in production compared to research and playground settings:

- They optimise for modern devices, but 70% of phones in use today are low- to mid-budget.

- Bloated app bundle sizes and battery drain are serious concerns for users.

- GPU inference on phones drains the battery unacceptably; NPUs are preferred, but few phones have them for now.

- Some are platform-specific, requiring different models and workflows for different operating systems.

At Cactus, we've written the kernels and inference engine for running AI locally on any phone from the ground up.

Cactus is designed for mobile devices and their constraints. Every design choice (energy efficiency, accelerator support, quantization levels, supported models, weight format, context management) was determined by this. We also provide minimalist SDKs for app developers to build agentic workflows in 2-5 lines of code.
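To give a rough feel for the developer experience, here's an illustrative Kotlin sketch of what such a workflow could look like. The names (CactusLM, Tool, generate, fetchWeather) are placeholders we made up for this example, not the actual SDK surface; check the repo for the real API.

    // Illustrative sketch only; class and method names are hypothetical placeholders.
    val lm = CactusLM.init(model = "qwen3-600m-int8")   // small model that handles tool calls well
    val weather = Tool(name = "get_weather") { city: String -> fetchWeather(city) }  // fetchWeather() is your own code
    val reply = lm.generate("Do I need an umbrella in London today?", tools = listOf(weather))
    println(reply.text)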

We made a Show HN post when we started the project to get the community's thoughts (https://news.ycombinator.com/item?id=44524544). Based on your feedback, we built Cactus bottom-up to solve those problems, and are launching the Cactus Kernels, Cactus Graph and Cactus Engine, all designed for phones and tiny devices.

CPU benchmarks for Qwen3-600M-INT8:

- 16-20 toks/sec on Pixel 6a / Galaxy S21 / iPhone 11 Pro

- 50-70 toks/sec on Pixel 9 / Galaxy S25 / iPhone 16.

- Time-to-first-token is as low as 50ms depending on prompt size.

On NPUs, we see Qwen3-4B-INT4 run at 21 toks/sec.

We are open-source (https://github.com/cactus-compute/cactus). Cactus is free for hobbyists and personal projects, with a paid license required for commercial use.

We have a demo app on the App Store at https://apps.apple.com/gb/app/cactus-chat/id6744444212 and on Google Play at https://play.google.com/store/apps/details?id=com.rshemetsub....

In addition, there are numerous apps using Cactus in production, including AnythingLLM (https://anythingllm.com/mobile) and KinAI (https://mykin.ai/). Collectively they run over 500k weekly inference tasks in production.

While Cactus can be used on all Apple devices, including MacBooks, thanks to their shared design, for computers and AMD/Intel/Nvidia hardware generally please use HuggingFace, llama.cpp, Ollama, vLLM, or MLX. They're built for those platforms, support x86, and are all great!

Thanks again, please share your thoughts, we’re keen to understand your views.

cco · 1 hour ago
I've been using Cactus for a few months, great product!

Makes it really easy to plug and play different models on my phone.

If anybody is curious what a Pixel 9 Pro is capable of:

Tokens: 277 · TTFT: 1609ms · 9 tok/sec

qwen2.5 1.5b instruct q6_k

Sure, here's a simple implementation of the Bubble Sort algorithm in Python:

    def bubble_sort(arr):
        n = len(arr)
        for i in range(n):
            # Flag to detect any swap in current pass
            swapped = False
            for j in range(0, n - i - 1):
                # Swap if the element found is greater than the next element
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    swapped = True
            # If no swap occurs in the inner loop, the array is already sorted
            if not swapped:
                break

    # Example usage:
    arr = [64, 34, 25, 12, 22, 11, 90]
    bubble_sort(arr)
    print("Sorted array is:", arr)

This function sorts the array in ascending order using the Bubble Sort algorithm. The outer loop runs n times, where n is the length of the array. The inner loop runs through the array, comparing adjacent elements and swapping them if they are in the wrong order. The swapped flag is used to detect if any elements were swapped in the current pass; if none were, the array is already sorted and the loop can exit early.

Thanks for the kind words! We've actually improved performance since then; follow the instructions on the core repo.

The same model should run roughly 3x faster on the same phone.

These improvements are still being rolled out to the SDKs, though.

pzo · 52 minutes ago
FWIW, they changed the license two weeks ago from Apache 2.0 to non-commercial. I understand they need to pay the bills, but they lost my trust with that move. I'll stick with react-native-ai [0], which is an extension of the Vercel AI SDK but also does local inference on edge devices.

[0] react-native-ai.dev

Understandable. To explain, though: Cactus is still free for personal and small projects if you fall into that category. We're early and would definitely consider your concerns about the license in our next steps, thanks.

For fear of having dang show up and scold me, I'll just add the factual statement that I will never ever believe any open-source claim in any Launch HN ever. I can now save myself the trouble of checking, because I can be certain it's untrue.

I already knew to avoid "please share your thoughts," although I guess I am kind of violating that one by even commenting.

It's absolutely fine to share your thoughts; that's the point of this post. We want to understand where people's heads are at, since that determines our next decisions. What do you really think? I'm genuinely asking, so I don't think the mods will react.

Going open source for the PR and then switching to non-open licensing is a cowardly, bullshit move.

https://github.com/cactus-compute/cactus/commit/b1b5650d1132...

Use open source and stick with it, or don't touch it at all, and tell any VC shitheels saying otherwise to pound sand.

If your business is so fragile or unoriginal that it can't survive being open source, then it will fail anyway. If you make it open source, embrace the ethos and build community, then your product or service will be stronger for it. If the big players clone your work, you get instant underdog credibility and notoriety.

Thanks for sharing your thoughts. Honestly, I'd be annoyed too, and it might sound like an excuse, but our circumstances were quite unique; it was a difficult decision at the time, being an open-source contributor myself.

It's still free for the community; it's just that corporations need a license. Should we make this clearer in the license?

Does it incorporate a web search tool?

It can incorporate any tool you want. This company's app uses exactly that feature; you can download it and get a sense of it before digging in: https://anythingllm.com/mobile
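As a rough sketch of how that could be wired up (Kotlin-style pseudocode; Tool, CactusLM, generate, and searchWeb are hypothetical names, and the search backend is something you supply yourself):

    // Hypothetical sketch: you bring the search backend, tool calling does the rest.
    val lm = CactusLM.init(model = "qwen3-600m-int8")
    val webSearch = Tool(
        name = "web_search",
        description = "Search the web and return the top results as plain text"
    ) { query: String -> searchWeb(query) }   // searchWeb() is your own HTTP call to any search API

    val answer = lm.generate("Who won the match last night?", tools = listOf(webSearch))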
How many GB does an app packaged with Qwen3 600M + Cactus take up?

e.g. if I built a basic LLM chat app with Qwen3 600M + Cactus, what's the total app size?

About 400MB if you ship the model as an asset. However, you can also have the app download the model post-install; the Cactus SDKs support this, as well as the agentic workflows you'd need.
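A minimal sketch of the download-on-first-launch pattern, assuming hypothetical helper names (downloadModel, loadModel, showDownloadProgress); the real SDK calls will differ:

    // Keep the app bundle small: fetch weights after install instead of shipping them as an asset.
    // downloadModel() and loadModel() are placeholder names, not the actual SDK functions.
    val modelPath = downloadModel(
        id = "qwen3-600m-int8",
        onProgress = { pct -> showDownloadProgress(pct) }   // surface progress in your own UI
    )
    val lm = loadModel(modelPath)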
The first picture on the Android app store page shows Claude Haiku as the model.

Thanks for noticing! The app is just a demo for the framework, so devs can compare the open-source models against frontier cloud models and make a decision. We've removed the comparison now, so those screenshots indeed need to be updated.

How does this startup plan to make money?

Cactus is free for hobbyists and personal projects, but we charge a small fee for commercial use, which comes with more features relevant to enterprises.