Show HN: Open source alternative to Perplexity Comet
Hey HN, we're a YC startup building an open-source, privacy-first alternative to Perplexity Comet.

Unlike a bunch of others, there's no invite system – you can download it today from our website or GitHub: https://github.com/browseros-ai/BrowserOS

--- Why bother building an alternative? We believe browsers will become the new operating systems, where we offload much of our work to AI agents. But these agents will have access to all your sensitive data – emails, docs, on top of your browser history. Open-source, privacy-first alternatives need to exist.

We're not a search or ad company, so no weird incentives. Your data stays on your machine. You can use local LLMs with Ollama. We also support BYOK (bring your own keys), so no $200/month plans.

Another big difference vs Perplexity Comet: our agent runs locally in your browser (not on their server). You can actually watch it click around and do stuff, which is pretty cool! Short demo here: https://bit.ly/browserOS-demo

--- How we built it? We patch Chromium's C++ source code with our changes, so we have the same security as Google Chrome. We also have an auto-updater for security patches and regular updates.

Working with Chromium's 15M lines of C++ has been another fun adventure that I'm writing a blog post on. Cursor/VSCode break at this scale, so we're back to using grep to find stuff and make changes. Claude Code works surprisingly well too.

Building the binary takes ~3 hours on our M4 Max MacBook.

--- Next? We're just 2 people with a lot of work ahead (Firefox started with 3 hackers, history rhymes!). But we strongly believe that a privacy-first browser with local LLM support is more important than ever – since agents will have access to so much sensitive data.

Looking forward to any and all comments!

The demo buying toothpaste shows the difficulty of these tasks. Toothpaste itself was very underspecified, and it essentially chose randomly from a huge list. Some tasks may have past actions that could help guide the agent; others won't have any to inform it. Failure cases abound -- maybe the toothpaste you previously bought is no longer available. Then what? Ultimately, how much time did this particular example save, since you need to double-check the result anyway? This is what doomed Alexa for the purchasing experience that Amazon assumed it would enable in the first place.

I think it'd be better to show more non-trivial examples where the time savings are clear and the failure cases are minimized... or even better, show how it's going to recover from those failure cases. Do I get a bespoke UI for the specific problem? Talk to it via chat?

This whole world is non-trivial. Good luck!

Great points! For sure, the whole agentic browser space is still super early.

We are also just getting started and trying to narrow down on a high-value niche use-case.

There are a few repetitive, boring use cases where the time savings could be meaningful -- one example: Walmart third-party sellers routinely (multiple times a day) check the prices of competitor products to price their own products appropriately. This could easily be automated with current agentic browsers.

But in reality, this would much more consistently be automated by a single Playwright script.
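Something like this rough Python sketch would do it (the product URL and price selector are just placeholders; real pages need more care):

    # Rough sketch: check a competitor's price with Playwright.
    # The URL and CSS selector below are made-up placeholders.
    from playwright.sync_api import sync_playwright

    COMPETITOR_URL = "https://www.walmart.com/ip/example-competitor-item"  # placeholder
    PRICE_SELECTOR = "span[itemprop='price']"  # placeholder, varies per page

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(COMPETITOR_URL)
        price_text = page.locator(PRICE_SELECTOR).first.inner_text()
        print(f"Competitor price: {price_text}")
        browser.close()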
True, there are plenty of libs already available to do such automation if you are a dev (or can hire one).

But for non-technical folks, agentic browsers seem like a good UX for building these and many more automations.

Is there a way to hook BrowserOS up as a sub-agent for an orchestration agent/system?
Yes! We want to do this.

We were thinking of implementing the MCP protocol in the browser, so the browser can be an MCP server (exposing a bunch of tools -- navigation, click, extract) that you connect to your agent. Would that work?
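A rough sketch of that idea using the MCP Python SDK -- the tool bodies here are stubs standing in for real calls into the browser:

    # Sketch only: the browser exposed as an MCP server with a few tools.
    # Uses the MCP Python SDK's FastMCP helper; tool bodies are stubs.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("BrowserOS")

    @mcp.tool()
    def navigate(url: str) -> str:
        """Open a URL in the active tab."""
        return f"navigated to {url}"  # stub; real version drives the browser

    @mcp.tool()
    def click(element_index: int) -> str:
        """Click the element with the given index from the page snapshot."""
        return f"clicked element {element_index}"  # stub

    @mcp.tool()
    def extract() -> str:
        """Return a text snapshot of the current page."""
        return "<page text would go here>"  # stub

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default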

What is your use case? Happy to chat on Discord!

> --- How we built it? We patch Chromium's C++ source code with our changes, so we have the same security as Google Chrome. We also have an auto-updater for security patches and regular updates.

So you rebuild your browser on every Chromium release? Because that's the risk: often changes go into Chromium with very innocent-looking commit messages that are only released from embargo 90 days later in their CVE reference.

Good question. So far we have been building on top of the Chromium release that Google Chrome is based on.
This is similar to the Chrome extension nanobrowser. https://github.com/nanobrowser/nanobrowser
I would prefer this as a browser extension, not as its own browser application.
We would've preferred to build this as a browser extension too.

But we strongly believe that for building a good agent co-pilot we need a bunch of changes at the Chromium C++ level. For example, Chromium has an accessibility tree for every website, but doesn't expose it as an API to Chrome extensions. Having access to the accessibility tree would greatly improve agent execution.

We are also building a bunch of changes in C++ for agents to interact with websites -- functions like click, and elements addressed by index. You can inject JS to do this, but it is 20-40X slower.
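For a taste of the data involved, here's a rough sketch that dumps the accessibility tree over the Chrome DevTools Protocol using Playwright -- just to illustrate what an agent wants; our changes expose this natively in C++ instead:

    # Sketch: dump Chromium's accessibility tree over CDP using Playwright.
    # Only to illustrate the data an agent needs; not how we do it in C++.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com")

        cdp = page.context.new_cdp_session(page)
        tree = cdp.send("Accessibility.getFullAXTree")

        for node in tree["nodes"][:10]:  # first few nodes for brevity
            role = node.get("role", {}).get("value")
            name = node.get("name", {}).get("value")
            print(node["nodeId"], role, name)

        browser.close()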

Would this be possible for Firefox?
IIRC, Firefox's WebExtensions API does not provide access to the accessibility tree either.
I'm not GP, but I agree that if your goal is to empower the end user and protect him from corporate overlords, then Firefox is a more logical choice to fork from.
Could you upstream that change in order to make it an extension in the future? I think people would not value it any less.
We don't mind upstreaming. But I don't think Google Chrome/Chromium wants to expose it as an API to Chrome extensions; otherwise they would've done this a long time ago.

From Google's perspective, extensions are meant to be lightweight applications with restricted access.

I'm not really interested in AI agents for my web browser, but it would be pretty cool to see a fork of Chromium available that, aside from being de-Googled, relaxes all the "restricted access" to make it more fun to modify and customize the way you guys are. Just a thought: there may be more of a market for the framework than the product :)

See Sciter. A very cool, super lightweight alternative to Electron, but unfortunately it seems like a single developer project and I could never get any of the examples to run.

https://sciter.com/

Yes, we want to do this too! We'll expose much richer APIs.

What use cases do you have in mind? Like scraping?

I mean you could build the agent with a first-principles understanding of the DOM instead of just hacking it together with the accessibility tree.
We had this exact thought as well: you don't need a whole browser to implement the agentic capabilities; you can implement the whole thing with the limited permissions of a browser extension.

There are plenty of zero-day exploit patches that Google immediately rolls out, not to mention all the other features that Google doesn't push to Chromium. I wouldn't trust a random open-source project for my day-to-day browser.

Check out rtrvr.ai for a working implementation; we are an AI Web Agent browser extension that meets you where your workflows already are.

Brave Browser (70M+ users) has validated that a Chromium fork can be a viable path. And it can in fact provide better privacy and security.

A Chrome extension is not a bad idea either. Just saying that owning the underlying source code has some strong advantages in the long term (being able to use C++ for the a11y tree, DOM handling, etc. -- which is 20-40X faster than injecting JS from a Chrome extension).

Congrats!

How are you planning to make the project sustainable (from a financial, and dev work/maintenance pov)?

Thank you!

The plan is to sell licenses for an enterprise version of the browser, same as other open-source projects.

My guess is it's just an Electron app or Chromium wrapper with an Ollama wrapper to talk to it (there are plenty of free, open-source libs to control browsers).
We are a Chromium "wrapper".

But we are much more performant than other libs (like Playwright) that are written in JS, as we implement a bunch of changes at the Chromium source code level -- for example, we are currently implementing a way to build the enriched DOM tree required for agent interactions (click, input text, find element) directly at the C++ level.

We also plan to expose those APIs to devs.
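Nothing is finalized yet, but here's a hypothetical sketch of the shape that dev-facing API could take (module, names, and fields are all made up):

    # Hypothetical sketch of a dev-facing API -- module, names, and shapes
    # are made up, just to illustrate the enriched DOM tree idea.
    from dataclasses import dataclass

    @dataclass
    class DomNode:
        index: int       # stable index agents use to refer to the element
        tag: str         # e.g. "button", "input"
        text: str        # visible text / accessible name
        clickable: bool
        editable: bool

    # Imagined usage (hypothetical module and functions):
    #   import browseros
    #   tree = browseros.get_dom_tree(tab_id=1)   # list[DomNode], built in C++
    #   target = next(n for n in tree if n.clickable and "Add to cart" in n.text)
    #   browseros.click(tab_id=1, index=target.index)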

“Just” is a four-letter word :)

When someone in their infinite wisdom decides to refactor an API and deprecate the old one, it creates work for everyone downstream.

Maybe as an industry we can agree to do this every so often to keep the LLMs at bay for as long as possible. We can take a page out of the book of the maintainers of moviepy for shuffling their APIs around; it definitely keeps everyone on their toes.

No wireless. Less space than a Nomad.
You don't have to guess; it's open source.
Would love to see this show up on homebrew!
Oooh, that's a nice idea! We'll look into doing that!
Making a homebrew recipe is super easy, and you can definitely find an example to draw from that's "shaped" like your app. Highly recommend.
What's the roadmap looking like for Linux?

I don't have Mac or Windows.

This is on our radar; we plan to have it ready by early next week!

Still a team of 2 people, so a bunch of things are on our plate.

Is BrowserOS-OS on the roadmap?

(Will you ever make a better FydeOS, or if you're laser-focused, perhaps be open to sharing some with them, so they could?)

Yes! We are excited to build BrowserOS-OS! I think with agents the whole UX can be re-imagined.

I'll check out FydeOS!

This is very exciting given the rumor that OpenAI will be launching a (presumably not open source) browser of their own this summer. I've joined your Discord, so will try it soon and report back there. Congrats on launching!
Thank you!

Browser wars have begun.

> that OpenAI will be launching a (presumably not open source) browser of their own this summer.

For sure, it won't be open source. I bet in some parallel world, OpenAI would be a non-profit and actually open-source their AI :)

Do you have any benchmarks to share like Halluminate's Web Bench?
We're working on it! Should have it pretty soon!
> our agent runs locally in your browser (not on their server)

That's definitely a nice feature. Did you measure the impact on laptop battery life in a typical scenario (assuming there is such a scenario at this early stage)?

The agent running by itself shouldn't impact battery life; it is similar to a lightweight Chrome extension, and if you think about it, it's an agent browsing the web like a human would :)

If you run LLMs locally (using Ollama) and use that in our browser, that would impact battery life for sure.

This looks like a great project.

What are the system requirements? And shouldn't they be listed on your website?

We support Mac (Apple silicon and Intel) and Windows.

Hardware requirements are minimal (same as Google Chrome) if you bring your own API keys for the agents and are not running LLMs locally.

What is the default BrowserOS model? Is it local, and if so, what inferencing server are you using?
The default model is Gemini.

You can bring your own API keys and change the default to any model you like.

Or better, run a model locally using Ollama and use that!
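For reference, anything that speaks the OpenAI-style API can point at a locally running Ollama -- a rough sketch, assuming you've pulled a model such as llama3:

    # Sketch: talk to a locally running Ollama model through its
    # OpenAI-compatible endpoint. Assumes `ollama serve` is running and
    # a model has been pulled (e.g. `ollama pull llama3`).
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's local endpoint
        api_key="ollama",  # required by the client library, ignored by Ollama
    )

    resp = client.chat.completions.create(
        model="llama3",
        messages=[{"role": "user", "content": "Summarize this page in one line."}],
    )
    print(resp.choices[0].message.content)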

The default is a remote Gemini call?
Yes, right now we're using the Gemini API through our LiteLLM server (we can't expose the API key on the client side).

We are working on a smaller, fine-tuned model too, which will be the default soon! It should be much faster and more precise at navigation tasks.
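Roughly, the server-side call looks like this LiteLLM sketch (the model name is illustrative; the GEMINI_API_KEY lives only in the server's environment, never in the browser):

    # Sketch of what runs server-side: LiteLLM routes the request to Gemini,
    # reading GEMINI_API_KEY from the server's environment, so the key never
    # ships to the client. The model name is illustrative.
    from litellm import completion

    resp = completion(
        model="gemini/gemini-1.5-flash",
        messages=[{"role": "user", "content": "Find the search box on this page."}],
    )
    print(resp.choices[0].message.content)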

So would this or any AI browser go out and fetch a list of the best deals for my trip to Iceland? Show me all the options it has found for flights, hotels, and car rentals, with the cheapest/best prices and all the details (fly out of and into, with times), and even let me pay for each item on the same page where I asked? It could also group the overall best deal with details, and then I could just click to pay instantly, or make some edits.
It seems that the next evolution of SEO will be to skip the SE and simply O for the LLMs.
We just started cooking; very soon you should be able to do this!