That being said, they essentially took the IETF draft I worked on for a while [1] and also my Rust implementation [2]. They built a thin wrapper [3] around my implementation and now call it "Kagi’s implementation of Privacy Pass". I think giving me some credit would have been in order. IETF work and work on open-source software is mostly voluntary, unpaid, and often happens outside of working hours. It's not motivating to be treated like that. Kagi, you can do better.
[1] https://datatracker.ietf.org/doc/draft-ietf-privacypass-batc... [2] https://github.com/raphaelrobert/privacypass [3] https://github.com/kagisearch/privacypass-lib/blob/e4d6b354d...
Indeed, this is the intended interpretation of "Kagi's implementation of Privacy Pass" - we're talking about building out the server infrastructure, the UX, the browser extensions, the mobile applications, the Orion browser integration, the support and documentation, the Tor service, etc. The cryptography is obviously an extremely important piece, but it is far from the only piece.
As other commenters have noted, the code in question is MIT licensed [1] and we're pulling it in as a standard dependency [2], it's not like we've gone out of our way to obscure its origin. The MIT license does not require us to do anything more.
That said, I can understand the author wanting more visible attribution, and that's very reasonable. We'll add a blurb to the blog post acknowledging his contribution to Kagi's deployment of Privacy Pass.
[1] https://github.com/raphaelrobert/privacypass/blob/main/LICEN...
[2] https://github.com/kagisearch/privacypass-lib/blob/e4d6b354d...
PS: If you want to go above and beyond, you can spell my last name right in the blog post – it's Robert, not Roberts.
That was on me, fixed!
I’ve never had any of my open source software used, and I typically license it with MIT, so I’m curious how other groups and organizations actually comply with the license.
If someone wants attribution or something then they should use a license that requires that thing.
I’m under no obligation to thank someone for holding a door for me; if I fail to do so it does not mean that person should switch to a different door-holding license in the future. It just means I’m a bit of a jerk.
When lifting an entire (permissive licensed) implementation it’s good form to say thanks.
For FOSS, on the other hand, licenses are a well-established thing. And developers have free rein to pick a license for their code, and they very commonly pick MIT...totally of their own volition. Which strips them of all privileges. It's like writing a book and explicitly setting it into the public domain. If that's what you want to do, that's great, but very commonly I don't think it's what developers actually want to do.
In the world of copyright, the long-standing legal default is for the author to own their work for a certain amount of time, whether or not the copyright is explicitly claimed. Because making public domain the legal default would be utterly insane.
I guess what I'm saying here is my beef isn't with entities that choose to be jerks—that's annoying and always gonna happen to some extent—it's more with the all-too-common decision to use the MIT License. And when I see people complain about it...I understand the sentiment but I also can't help but think that the folks complaining had it coming and it was totally avoidable.
I like that the Kagi folks stepped up and thanked you when you requested it, and I like that you wrote this code and made it available. But going around the internet trying to get explicit thanks seems more like the norm breaking here.
Which, as far as I can tell, they haven't done. Their MIT licence claims their own copyright, and there's no reference to the library used in the readme.
https://github.com/kagisearch/privacypass-lib/blob/main/LICE...
https://github.com/kagisearch/privacypass-lib/blob/main/READ...
Usually, apps that use MIT-licensed libs also include a notice in a user-facing way. Google Maps, for instance, has an (albeit hidden) section in its settings menu referencing at least one MIT-licensed library.
As soon as you run cargo build, the source code is fetched, including the original license. That's better than a settings menu with just a license!
https://github.com/kagisearch/privacypass-lib/blob/83c9be8cb...
>Copyright (c) <year> <copyright holders>
>Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
>*The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.*
>[...]
> This repository contains the source code of the core library implementing the Privacy Pass API used by Kagi.
Yeah... that doesn't feel great. Though I do think the folks at Kagi would be open to more accurately reframing that as "core library implementing a Crystal Lang wrapper for raphaelrobert/privacypass". It's likely unintentional; they were probably just focused on getting it working and didn't get someone to reread this stuff.
I wish this extension would integrate better with the browser by automatically understanding the context. That is, if I'm in a "regular" mode it'll use my session, but if I'm in a "private browsing" mode (`browser.extension.inIncognitoContext`) it'll use Privacy Pass to authenticate me, without me having to explicitly do anything about it.
(I don't use Orion, as there's no GNU/Linux version.)
The reason it's become so rare is most companies in this space (heck tons of tech companies period) have used a business model of offering a thing to one group of users and then turning around and selling the results of that thing to another group of users, where the latter group is the one actually driving your revenue. This by default almost assumes a hostility towards the former group because their interests will of course be at odds with the interests of the latter group.
What's refreshing about Kagi and other new tech companies is they have dumped this model in favor of having just one group that they serve and drive revenue from (ie. the 'old' model).
It’s hard to make money by charging a lot to a small group of people since now you’re dealing with anti-network effects. Doubling the price of a product will likely more than halve your user base.
If you try to build a network of paid users, you lose because you'll be run over by 'free' competitors monetizing indirectly.
HBO used this model way back when. It’s been a lasting business.
Yeah, the ad-supported model has its problems, but it also makes the internet way more accessible. If we think about it, companies and people with more money are basically subsidizing these services for everyone else. They're the ones seeing the ads that keep the lights on for users who can't afford to pay.
If everything was subscription only, a ton of people like students, low income families, people in developing countries would be shut out. "Free" services, even with their flaws, create a kind of digital subsidy. It's not perfect, but it means way more people can use these tools.
There's no reason why a subscription model could not also be used to subsidize people who can not pay, other than that companies are structured to extract as much as possible (by law, if they are public).
There are good network effect arguments about why this strategy can be effective, not simply 'altruistic.'
Ads simply make the extraction happen across the board, except that the ad model somewhat privileges technical users who know how to circumvent ads.
I'm a firm believer in this, but we need more people to join in.
And it already works to some degree.
I've now had a working search engine for almost 3 years.
My last 3 jobs (9 years) haven't forced me to use Windows.
I can chat and organize events without Facebook knowing.
And it is not like the quality has gone down either. My choices have mostly given me better experiences in a number of ways.
Edit:
If more people start
- advocating for better hardware and software,
- canceling subscriptions and memberships when it becomes clear they are reducing value or increasing price,
- building skills, both to get independent from their current cloud (so you can move around, or at least have a credible possibility of doing so)
- and, as individuals, to get better jobs,
then I think things will change.
For inspiration: at least here in Norway, with several gym memberships, if you cancel they will quickly approach you with good offers, and they can get really good: I got several months free, a friend got offered free months and a sizable gift card.
Bonus: if more people join in this will get picked up by Wall Street and they will begin punishing this nonsense too ;-)
My base salary has doubled and I enjoy my work a lot more now that I don't have to accept all kinds of MS shenanigans to play a part in how I work.
Having a working search engine shouldn't be underestimated either: living from 2012 to 2022 knowing that search used to be a solved problem but wasn't anymore was really annoying.
Directly through activist investors and shareholder groups (which nowadays usually are institutional investors) who vote to change company policies, fire the CEO, or in some cases fire the whole board.
This is not true and it’s not what fiduciary duty means. Stop repeating it, it’s really dumb.
Companies very frequently do not monetize things that they could under the guise of “building brand recognition” or “establishing a user base”. It’s even as easy as “raising the price will alienate customers we think are important to long term revenue”.
It’s trivial to justify not extracting maximum price and public companies do it all of the time.
Look at Costco’s business model if you want an example
Or would the idea be to only subsidize students and not poor adults?
It would be one thing if we had, like, a national "verify I'm on SNAP or equivalent" API.
Not every product category is amenable to such business models but many are.
[1] To be fair, Discord likely sells user data to advertisers to make additional money.
Imagine a utopian world where you just pay per site visit, and in return all companies selling stuff don't have an inflated advertising budget and free market effects force them to pass the savings on to you, meaning the net cost increase for you is zero. And as a side-effect, quality products float to the top, since you hear of them mostly by word-of-mouth, meaning products compete on value-per-dollar.
Sadly, human psychology and economics don't work that way, haha. We pay what the market will bear, and increasing sales via a torrent of ads is cheaper than increasing the value-per-dollar ratio of the product.
The problem (other than the obvious privacy and noise issues) is that it's not a neutral subsidy. It introduces a lot of biases.
Since advertisers are subsidizing the platform, they tilt the content toward things they want and away from messages they don't. Messages that criticize advertisers' products (which include things like governments and political ideologies, since those are advertisers too) are de-emphasized and marginalized.
Since impressions / clicks / eyeballs are the goal, an inherent bias is introduced toward emotionally triggering and/or addictive or hypnotic content. The reason social media for example is so divisive and negative is that this keeps people engaged by triggering simple powerful emotions.
The service is subsidized by "whale players" who regularly spend a lot of cash, but there are a lot of freeloaders (to entertain the whales and to build brand popularity).
A cheap/free game supercharges network effects to amass players, each of which incrementally adds value to every other player. Most players will never directly pay enough to offset their own cost to the game maker. However, they will create a real community that draws in a small number of whale players who will directly pay for themselves and indirectly pay for all of the free players.
Not so different from the two-sided markets on Facebook and Instagram.
I would generally agree that that's the "default".
However, there are cases where two sides of a market need an intermediary with which they can both independently transact, and a net benefit of that interaction is felt on both sides. The key is to construct the solution such that the intermediary depends on the goodwill of both sides of the market.
I think Kagi is somewhat flipping the script. By "taking" data from publishers for free, they are then selling it to readers at a cost. However, there is a trade off. Kagi needs to make sure publishers continue to make their content available so that it can be searchable, or used in their Assistant product. In order to do that, they need to do the opposite of what Google is doing by trying to sequester traffic on Google.com: Kagi's best interest is to make sure that they provide good value to both sides.
Indeed, using the Assistant product, the way it is structured, I very often find myself clicking through to the referenced original sources and not just consuming the summarized content.
How this evolves over time, from a product design standpoint, will be interesting to watch.
The main driver of hostility to users is due to ad-based business models. I think we would see a much more healthy internet if we had regulation which prohibited companies from choosing ads based on any information associated with the user that the ad is shown to. That is, any data collected in the past and any data associated with the session and request must not be taken into account when choosing the ad; two requests by different users in different locations should have the exact same ad probability distributions.
I know we are never getting this because it would kill or severely harm the business models of some of the most profitable businesses in the world.
Incentives aligned. Happy customers. Good businesses. Maybe you only get 60% gross margins, or, gasp, 40% gross margins. But so much less toxic.
We commenced work on Orion for Linux yesterday.
Arc, another Webkit-based browser, has an interesting implementation combining Profiles and Arc Spaces[2]. Instead of switching between windows, you switch between "Spaces" in the sidebar that are linked to a profile.
[0] https://addons.mozilla.org/firefox/addon/multi-account-conta...
I might consider demoing Orion on Linux even if it doesn't have container tabs, but at this time I wouldn't consider a full switch without that feature.
Tor has its flaws and criticisms, but it's really not on Kagi to fix them. With the combination of tor and their privacy pass, Kagi has gone further in allowing their paid users access to their services than anyone else.
Disclaimer: Not associated with Kagi in any way other than being a very happy user.
(Privacy Pass in fact doesn't make sense outside of an anonymizing transport, which makes the current announcement an exercise in marketing, at best)
This kind of thinking is pervasive in the discussion of privacy-enhancing technologies. It might not make sense against the most sophisticated attacker, but it lays the groundwork for a more complex system that will be able to stand up to one.
Allowing more users will provide herd privacy at the token generation phase. Searches being decoupled from the user account's primary key offers privacy in all kinds of scenarios, comparable with a browser private tab.
> This kind of thinking is pervasive in the discussion of privacy enhancing technologies
It is in the RFC:
> Origin-Client, Issuer-Client, and Attester-Origin unlinkability requires that issuance and redemption events be separated over time, such as through the use of tokens that correspond to token challenges with an empty redemption context (see Section 3.4), or that they be separated over space, such as through the use of an anonymizing service when connecting to the Origin.
https://datatracker.ietf.org/doc/html/rfc9576

I wish my Kagi t-shirt could say the same. Bottom hem unraveled on the second wash, and so it's been consigned to the sleep and yard work shirts. They issued me a coupon for a free shirt as a replacement, but it's yet to ship.
The search engine works great for me. I will almost certainly renew my subscription when it's time to. Glad to see them continually delivering user-benefiting features.
User A asks Kagi for tokens. Kagi says "sure, here's 500 tokens". If Kagi then logs the 500 tokens it just gave to user A, it will know, if any of those tokens is redeemed at a later date, that it was assigned to user A?
Of course if Kagi just doesn't retain this data, then yeah all is good because the token itself is only marked as valid, not valid and given to user A on date Y, but....that's it right? Or am I misunderstanding something?
> The main building block of our construction is a verifiable oblivious pseudorandom function (VOPRF)
I am not sure how well tested that primitive is, but it definitely appears to be more than the server handing clients tokens and then pretending not to know who it gave them to.
The referenced paper: https://petsymposium.org/popets/2018/popets-2018-0026.pdf
Update: On some thought, with the approach of the server providing a common authorization token, there is no guarantee to the client that the server is actually providing a common token and not just simply providing a unique identifier to each user. Thus, Privacy Pass's cryptography ensures that the client knows that it is still anonymous.

Update 2: But what guarantee exists that the server doesn't generate a unique public key (i.e. public-private key pair) for each user and thus defeat anonymity that way?

Update 3: They use zero-knowledge proofs to prove that all tokens are signed by the same private key. From their paper: "The work of Jarecki et al. [18] uses a non-interactive zero-knowledge (NIZK) proof of discrete log equality (DLEQ) to provide verification of the OPRF result to the user. Their construction is hence a ‘verifiable’ OPRF or VOPRF and is proven secure in the random-oracle model. We adapt their construction slightly to use a ‘batch’ DLEQ proof allowing for much more efficient verification; in short this allows a user to verify a single NIZK proof that states that all of their tokens are signed by the same private key. This prevents the edge from using different key pairs for different users in an attempt to launch a deanonymization attack; we give more details in Section 3.2."
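To make that last update concrete, here is a toy, runnable Chaum-Pedersen DLEQ proof in Python, the primitive the batch proof builds on. This is a sketch under simplifying assumptions: an integer group mod a Mersenne prime stands in for the elliptic-curve group real deployments use, and the batching over many tokens is omitted.

    import hashlib
    import secrets

    p = 2**127 - 1                 # Mersenne prime; group Z_p^*, exponents mod p-1
    g = 3

    def H(*vals):
        data = b"|".join(str(v).encode() for v in vals)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

    # Server's long-term key k and public commitment Y = g^k
    k = secrets.randbelow(p - 1)
    Y = pow(g, k, p)

    # A client's blinded token element M, and the server's evaluation Z = M^k
    M = pow(g, secrets.randbelow(p - 1), p)
    Z = pow(M, k, p)

    # Server proves log_g(Y) = log_M(Z) without revealing k
    t = secrets.randbelow(p - 1)
    A, B = pow(g, t, p), pow(M, t, p)
    c = H(g, Y, M, Z, A, B)        # Fiat-Shamir challenge
    s = (t - c * k) % (p - 1)

    # Client verifies that the same k links (g, Y) and (M, Z)
    assert A == pow(g, s, p) * pow(Y, c, p) % p
    assert B == pow(M, s, p) * pow(Z, c, p) % p
    print("DLEQ verified: token evaluated under the advertised key")

The batch version described in the paper folds many (M, Z) pairs into one such proof, so checking a single proof convinces the client that every token was signed under the same key.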
# Client
r = random_blinding_factor()
x = client_secret_input()
x_blinded = blind(x, r)
# Server
y_blinded = OPRF(k, x_blinded)
# Client
y = unblind(y_blinded, r)
So you end up with y = OPRF(k, x). But the server never saw x and the client never saw k.

This feels like the same kind of unintuitive cryptography as homomorphic encryption.
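For anyone who wants to poke at it, here's a toy, runnable version of that exchange in Python. A sketch under stated assumptions: a multiplicative group mod a Mersenne prime stands in for the elliptic curve real deployments use, hash-to-group is naive, and the "verifiable" part (the DLEQ proof) is left out.

    import hashlib
    import math
    import secrets

    p = 2**127 - 1                      # Mersenne prime; work in Z_p^*

    def hash_to_group(x: bytes) -> int:
        return int.from_bytes(hashlib.sha256(x).digest(), "big") % p or 1

    k = secrets.randbelow(p - 1)        # server's key; never leaves the server

    # Client
    x = b"client secret input"
    r = secrets.randbelow(p - 1)
    while math.gcd(r, p - 1) != 1:      # r must be invertible mod the group order
        r = secrets.randbelow(p - 1)
    x_blinded = pow(hash_to_group(x), r, p)   # reveals nothing about x

    # Server
    y_blinded = pow(x_blinded, k, p)    # applies k without ever seeing x

    # Client
    y = pow(y_blinded, pow(r, -1, p - 1), p)  # unblind

    assert y == pow(hash_to_group(x), k, p)   # y = OPRF(k, x)
    print("client computed OPRF(k, x); server never saw x")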
> When an internet challenge is solved correctly by a user, Privacy Pass will generate a number of random nonces that will be used as tokens. These tokens will be cryptographically blinded and then sent to the challenge provider. If the solution is valid, the provider will sign the blinded tokens and return them to the client. Privacy Pass will unblind the tokens and store them for future use.
> Privacy Pass will detect when an internet challenge is required in the future for the same provider. In these cases, an unblinded, signed token will be embedded into a privacy pass that will be sent to the challenge provider. The provider will verify the signature on the unblinded token, if this check passes the challenge will not be invoked.
Here is how I see it:
1. The user generates a token/nonce => T
2. The user blinds the token with secret blinding factor b => Blinded token TB = T*b
3. The user sends the blinded token for signing. The server signs it and returns it to the user => Signed blinded token TBS = Sign(TB)
4. The user unblinds the token (this does not break the signature) => Signed Unblinded token TS = TBS/b
5. The user sends TS for its search query.
The server signed TB, then received TS. Even if it logged that TB = user, it cannot link TS to TB, because it does not know the blinding factor b. Thus, it cannot link the search query with TS to the user.

The whole idea is that the server does not know WHICH client a token belongs to. It doesn't generate the tokens.
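Here is that five-step flow as runnable Python, using a textbook RSA blind signature for concreteness. This is illustrative only: Kagi's deployment uses OPRF-based tokens rather than RSA, and the primes below are toy-sized.

    import math
    import secrets

    # Server's RSA keypair (the 10,000th and 100,000th primes; toy sizes only)
    p, q = 104729, 1299709
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))   # private signing exponent

    # 1. User generates a token T
    T = secrets.randbelow(n)

    # 2. User blinds it with secret factor b: TB = T * b^e mod n
    b = secrets.randbelow(n)
    while math.gcd(b, n) != 1:
        b = secrets.randbelow(n)
    TB = (T * pow(b, e, n)) % n

    # 3. Server signs the blinded token: TBS = TB^d mod n
    #    (TB is all the server ever sees, or can log, at this step)
    TBS = pow(TB, d, n)

    # 4. User unblinds: TS = TBS / b = T^d mod n, a valid signature on T
    TS = (TBS * pow(b, -1, n)) % n

    # 5. User redeems (T, TS); server verifies the signature...
    assert pow(TS, e, n) == T
    # ...but cannot match TS back to the TB it logged, since it never learns b.
    print("token accepted; TS is unlinkable to TB")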
Should support some crypto currency (probably monero), and something like GNU Taler if that technology ever becomes usable.
This feature looks like it narrows the gap a bit though.
Nice work
Are the conversion fees too high?
When I try to go into billing in Kagi I just get forwarded to Stripe. Does Stripe process the crypto payments?
edit: on desktop, on the page where you choose your plan, scroll to the bottom and look for the link to paying with OpenNode (btc lightning)
Yeah routing can suck. First timers should use a lightning wallet with built in LSP support like ZEUS, BitKit, Phoenix, etc. Then routing is a non-issue.
I think it's better to use another cryptocurrency like Monero than having to rely on a centralized service just to get a half-decent user experience.
Not requiring custodians is just one part, having to rely on third-party services for basic functionality is another.
It's almost only Bitcoin that has the absurdly expensive fees.
At the same time, privacy pass is a very foreign concept to me. If they are transferable between devices, one could generate a couple and resell them over some other medium (even in person).
Privacy Pass unties the searches from the user account and payment information.
To avoid fingerprinting by config, have a page where the community can share and vote on best configs, then clone and use a popular one that suits your needs.
So while it does add a data point that could help track you, it's not defeating the whole point.
I'm not one of the people that has been concerned about that, but I'm curious to what extent this alleviates those concerns among those that have had them.
I am; it's mind-blowing to me that anyone would log in to a search engine (yes, I know how many do it, now).
After a brief verification of the system, I'm pretty sure I'll sign up, now
I honestly feel like any major free search engine is probably doing more to try to track you anyway.
And if you’re going to search something you want to be anonymous, you can just like use another search engine. I honestly haven’t run into the situation where I needed to.
I do worry that some day someone will be able to see how often I forget basic syntax for some JavaScript or Python method - or how often I can’t be bothered to type out a full domain and just search to navigate to it - but that’s a price I’m also willing to pay.
Again, not sure on how the tokens are proven legit without ever sharing them, but there's probably some ~~zero-knowledge proof~~ stuff going on that covers that.
Edit: Not zero-knowledge proof. Seems to be Blind Signature?
It solves the problem of using a paid service without compromising customers' privacy, which is a breakthrough. The rest are different problems, and they are universal issues with various existing solutions, as you already pointed out.
The very cool thing is that this is the case even if the server tries to misbehave during their phase. This means that users only need to trust the client software, which we open sourced: https://github.com/kagisearch/privacypass-extension
Some posters are mentioning blind signatures, and indeed Privacy Pass can utilise these as a building block. To be precise, however, I should mention that for Kagi we use "Privately Verifiable Tokens" (https://www.rfc-editor.org/rfc/rfc9578.html#name-issuance-pr...) based on "oblivious pseudorandom functions" (OPRFs), which in my personal view are even cooler than blind signatures
Anywho, the person I replied to seemed to be willing and able to go a technical level deeper than the article, and that's something I'm also interested in reading. It sounds like they'd be allowed :)
https://petsymposium.org/popets/2018/popets-2018-0026.php ("Privacy Pass: Bypassing Internet Challenges Anonymously")
I think Cloudflare implemented the same thing? At least the HN comments link to the same paper,
https://news.ycombinator.com/item?id=19623110 ("Privacy Pass (cloudflare.com)", 53 comments)
> For this reason, it is highly recommended to separate token generation and redemption in time, or “in space” (by using an anonymizing service such as Tor when redeeming tokens, see below).
Sure, Tor will randomize the space. But what about the time? I then went to "see below" and didn't see anything relevant. Or is the idea that, with sufficient request volume, clients mask each other in time?
Also, Tor will only randomize the space insofar as you keep re-establishing a session; the circuit remains static for the duration of a session, afaik. And re-establishing a session takes like 10 seconds. So is it really randomizing the space?
One token request can produce N tokens. We have it configured where N = 500, so most users will be requesting more tokens fairly infrequently.
I have built blind signature authentication stuff before (similar to privacy pass) and one thing I’m curious about is how you (will) handle multi device access?
I understand you probably launched with only unlimited search users in order to mitigate the same user losing access to their tokens on a different device. But any ideas for long term plans here? When I built these systems in the past, I always had to couple it with E2EE sync. Not only can that be a pain for end users, but you can also start to correlate storage updates with blind search requests.
Either way, this is amazing, and I'm gonna be even more excited to not just trust Kagi, but verify that I don't need to trust y'all. Congrats.
[1] https://www.petsymposium.org/2018/files/papers/issue3/popets... [2] https://en.m.wikipedia.org/wiki/Ecash
I remember Safari as the only browser that implemented it natively, but I guess Orion has it now too.
From Tor docs [0]:
> Add-ons, extensions, and plugins are components that can be added to web browsers to give them new features. Tor Browser comes with one add-on installed: NoScript. You should not install any additional add-ons on Tor Browser because that can compromise some of its privacy features.
How does Kagi square this with Privacy Pass, which requires a browser extension rejected by Tor [1]? Did Kagi analyze whether it is possible to bucket users of Tor into two distinct groups depending on whether the extension is installed? Do I need to trust another organization other than the Tor project to keep the signing keys for the extension safe? Was there any outreach to the Tor community at all prior to releasing this feature?
It's great that they're Torifying the service, but depending on a 3rd party extension is not ideal.
[0] https://support.torproject.org/glossary/add-on-extension-or-...
[1] https://gitlab.torproject.org/tpo/applications/tor-browser/-...
Ok, let's look at the source.
curl -L https://addons.mozilla.org/firefox/downloads/file/4436183/kagi_privacy_pass-1.0.2.xpi > /tmp/extension.xpi
unzip /tmp/extension.xpi -d /tmp/extension
cd /tmp/extension
Alright, here's some nice, clean, easy-to-read Javascript. Nice! Wait, what's that?

// ./scripts/privacypass.js
/*
* Privacy Pass protocol implementation
*/
import init, * as kagippjs from "./kagippjs/kagippjs.js";
...
// load WASM for Privacy Pass core library
await init();
I opened ./kagippjs/kagippjs.js and was, of course, greeted with a WASM binary.

I personally would not install unknown WASM blobs in Tor browser. Source and reproducible build, please!
Let's continue.
// get WWW-Authenticate HTTP header value
let origin_wwwa_value = "";
const endpoint = onion ? ONION_WWWA_ENDPOINT : WWWA_ENDPOINT;
try {
const resp = await fetch(endpoint, { method: "GET", headers: { 'X-Kagi-PrivacyPass-Client': 'true' } });
origin_wwwa_value = resp.headers.get("WWW-Authenticate");
} catch (ex) {
if (onion) {
// this will signal that WWWA could not fetch via .onion
// the extension will then try normally.
// if the failure is due to not being on Tor, this is the right path
// if the failure is due to being on Tor but offline, then trying to fetch from kagi.com
// won't deanonymise anyway, and will result in the "are you online?" error message, also the right path
return origin_wwwa_value;
}
throw FETCH_FAILED_ERROR;
}
What?? If the Onion isn't reachable, you make a request to the clearnet site? That will, in fact, deanonymize you (although I don't know if Tor browser will Torify `fetch` calls made in extensions). You don't want Tor browser making clearnet requests just because it couldn't reach the .onion! What if the request times out while it's bouncing between the 6 relays in the onion circuit? Happens all the time.

The extension is open-source [1], including the Rust code that produces the WASM [2]. You should be able to produce a bit-compatible binary from these repos, and if not, please file a bug!
Do we know what fraction of Kagi users access it through Tor?
Safe-search or not, just transfer both result lists and make the client show only the one you want. The same could be done with languages, where you at least get the results for the bigger ones. Blacklists would hide your blocked crap sites. It may even be possible to implement the ranking adjustments to some extent.
Client-side filtering would put more load on the server and search sources, but I hope the cost increase is tolerable. Blacklisting and reordering could be virtually free. This could make Privacy Pass available to many more users who don't have overly complex account rules.
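A minimal sketch of what that client-side pass could look like, in Python. Everything here (field names, response shape) is invented for illustration; nothing implies Kagi's actual API.

    # Server returns one unpersonalized result list; the client applies the
    # user's blocklist and rank adjustments locally, so preferences never
    # leave the machine.
    results = [
        {"url": "https://www.pinterest.com/pin/123", "rank": 1},
        {"url": "https://example.org/deep-dive", "rank": 2},
        {"url": "https://blogspam.example.com/listicle", "rank": 3},
    ]
    blocked = {"pinterest.com", "blogspam.example.com"}
    adjust = {"example.org": -2}   # negative values boost a domain upward

    def domain(url: str) -> str:
        return url.split("/")[2].removeprefix("www.")

    visible = [r for r in results if domain(r["url"]) not in blocked]
    visible.sort(key=lambda r: r["rank"] + adjust.get(domain(r["url"]), 0))
    print([r["url"] for r in visible])   # blocked domains gone, boosts applied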
As far as I understand, the client sends some information A to the server, the server applies some private key X and returns the output B to the client, which then generates tokens C from the output.
If the server uses a different X for every user and then when verifying just checks the X of every user to see which one is valid, couldn’t the server know who created the token?
I think that's the best conceptual overview of a crypto protocol I've ever seen.
And you can validate this, if you try to issue a Privacy Pass search without a private token, you'll get a `WWW-Authenticate` header that kicks off the handshake, and that should be the same for all users for a given epoch (month). E.g.
curl -v -H 'X-Kagi-PrivacyPass-Client: true' 'https://kagi.com/search?q=test'
Or does the extension validate this and the correct value is hardcoded in the extension like stebalien suggested?
A malicious server could maintain separate key pairs for users it wanted to track, but you can't do it for every user because 1) it'd be clear from the WWW-Authenticate header changing, and 2) you'd have to validate tokens against every key, which would quickly get too slow to work.
fetch(new Request("https://kagi.com/search?q=test", { method: "GET", headers: new Headers({ "X-Kagi-PrivacyPass-Client": true })})).then((r) => console.log(r.headers.get("www-authenticate")))
A simple response in the body to something like <https://kagi.com/privacypass> would be easier to check.

And you answered someone else:
> It is also something anyone else could do to keep us honest :)
While true I believe for such a feature making it as easy as possible for your users to check independently is just better.
> Plus, if you don't trust the service to not issue special key pairs to track you, you probably won't trust us to not do the same publishing the key material.
You could publish it on some sort of blockchain to make sure it can’t be changed and is public for everyone, right?
> A malicious server could maintain separate key pairs for users it wanted to track, but you can't do it for every user because 1) it'd be clear from the WWW-Authenticate header changing, and 2) you'd have to validate tokens against every key, which would quickly get too slow to work.
Makes sense, thanks for explaining!
Your understanding is correct, that's definitely something we could do. It is also something anyone else could do to keep us honest :)
My understanding is, it's analogous to writing a note to your manager.
That note is a random number written in ink your manager can't actually read; all they can do with that note is sign it. They ask God (used here to represent math itself) how to sign this note, and God gives them a unique signature that also theoretically cannot be used to calculate the number that's written. This signature also proves what you're authorized to do. And then your manager hands the note back to you.
The note's sole function past that point is so you can point to the signature thereon and say "this signature proves I can do this, that, etc."
Thank you so much, I am 100% stealing this
P.S.: I don't use the Kagi app on Android.
It's madness - how is it market fairness when iOS literally forces you to use Google? I know Google is paying Apple to do exactly that, but it's so beyond anti-consumer I can't believe it.
I think government issued digital identities should also use this.
I want better search results and am willing to pay for them, but not at the cost of linking all my searches to my identity.
Also happy to see they're adding tor support.
I feel like I might hit the default limit of 2000 searches per month; it's not far off.
This is not intended as criticism, just inquisitive.
Since we have no idea who is issuing search requests in Privacy Pass mode, if there was no limits on token issuance, you could simply generate infinite tokens and give them out (or use them as part of some downstream service), and we'd have no other recourse for rate-limiting to prevent abuse.
Setting a high, but reasonable limit on issuance helps prevent abuse, and if you run out of tokens, you can reach out to support@kagi.com and we'll reset your quota.
It feels like they picked a number no user should hit, while keeping it low enough that people can't pass Kagi out "free" to all their friends.
What are you hoping to gain with that?
I do value privacy, but I wouldn't pay extra for more private search results. I might pay extra for __better__ search results, but that's hard to measure.
Just curious if anyone has had a legitimately great experience with this product and can communicate its benefits. Bonus points if you're in software dev.
I tried kagi after finally getting sick of some of my google results. Kagi was able to deliver on some of those results.
It's not like, shockingly better results. I do think they're better on average, but I'm not sure.
However, in the cases where I couldn't find what I was looking for on google and could on kagi, well, that's a binary result. I'll take the success and not the failure.
I was surprised by how much better I found the UI. That's actually the thing that sold me on the subscription to begin with. Going in, I would not have expected UI to sell me on such a thing.
I have since customized it somewhat; there are sites I usually really like results from, which are upranked, and sites I don't care for, which are downranked. I feel like this has led to an even better experience, but I haven't gone back to Google to compare.
Claude does not have native web access and this has been my only real issue with Claude. I've just converted to Kagi to test, and having Claude with web access is a huge QoL upgrade.
I would recommend at least giving it a try to see if you notice a difference. For my job, the monthly fee more than pays for itself
https://i.imgur.com/PQNm1Yc.png
I want a search engine to return useful results. Right now, google has been captured by revenue generating results. It wouldn't be so bad if useful results were making money, but that doesn't seem to be the case.
Image is of the first two pages of results side by side if a "page" is a "fold" in my browser window.
I tried a different search on iPhone to be sure. The first result was on the 3rd screen.
When did they start doing that? How do people use that crap?
I just tried it on Duck Duck Go and the first result was from Apple.com
On google I get 3 rows of "Products" but the first real result is also apple.com
It’s disheartening to think the great progress we’re making in this sector could be undermined in a few seconds against any company's efforts with a trivial backdoor.
I haven't looked closely enough at this token thingy Kagi is doing but it seems on the surface like it might scratch the itch by letting them decouple the accepting-payment part of their service from the providing-results part such that they know that you've paid, but not which payer you are.
[1] https://www.techradar.com/computing/cyber-security/proton-ma...
The good news is that while the NSA will absolutely be tracking everything you search for while using Kagi, they also do the exact same thing with every other search engine you use, so what difference does it make?
[1] https://techinformed.com/uk-government-orders-apple-to-hand-...
I’d love to assume this will never happen, I’m just concerned that even if it did I’d never find out - Because unfortunately the more popular this service gets for bad actors, the more of a target it becomes for the government with identification of users.
I guess as a search engine, we could assume the government may leave them well alone and still just focus on content creators.
Cryptography is a literal godsend for people living under oppressive regimes.
Or are you saying the method is designed to look secure but there’s an intentional weakness that makes tracking possible?
I don't think, however, that this means we need to give up on crypto entirely. Just... be aware of the threat model for what you're encrypting.
I'm curious if anyone knows: are companies like Google and Microsoft making more than $10/mo/user? We often talk about paying with our data, but it is always unclear how much that data is worth. Kagi does include some numbers over here[0], but they seem a tad suspicious. The claim is Google makes $23/mo/user, and this would make their service a good value, but the calculation of $76bn US ad revenue (2023) and $277 per user annually gives 274m users. That's close to 80% of the US population, but I thought Google search had about 90% global share. And I doubt that all ad revenue is coming from search. Does anyone know the real numbers? Googling, I get inconsistent answers, and also answers based on different conditions and aggregations. But what we'd be interested in here is purely Google /search/ and not anything else.
[0] https://help.kagi.com/kagi/why-kagi/why-pay-for-search.html
edit: Even just the ability to rank, pin, and block domains alone is crazy useful. I never need to see Pinterest in any image search results again. If I see a crappy blog spam site, I just block it and it never shows up again. It feels like these are basic, fundamental features that every search engine should have had a long time ago. It's pretty sad that Kagi is getting so much praise for doing things that really should have been standard for at least a decade (not sad in any negative way toward Kagi, but because our standards and expectations for search have dropped this low).
1) There is a marginal payment overhead. I'd assume $0.50-0.75, bringing their take down to $9-ish.
2) It's a fairly niche product with a still-small userbase. ~40k users at ~$9/mo = $360k/mo (I know there are $5/mo users and $25/mo users, but I'd assume there are far more $5/mo and $10/mo users than $25/mo users).
3) They have to keep the service running 24/7/365, so you have to hire devs either across multiple time-zones or compensate them enough to be OK fighting fires at 2am.
$5 a month for fewer than 10 searches a day is clearly not a good deal. $10 a month might be worth it for some, but an extra $15 a month on top of that for AI results is kind of crazy.
I don't know Kagi's financials, but this is usually the case for a lot of products with a smaller customer base. For example, a block of Kraft cheddar will be a lot cheaper than an equivalent-sized block from an organic local dairy. There's always a customer base that is willing to pay for a differentiating feature or value.
I'm satisfied paying for it because the product works well and saves me time. I can't say the same for a lot of the random $10 impulse buys I make in a month.
You can also choose if you want the chat to RAG search results into the context for additional info, and then cite those sources. To me, replacing a Claude/ChatGPT subscription with $15 on top of a company I already like, while also getting a bunch of other models was a no-brainer.
You can just quickly write !ai to the toolbar and you have a deepseek chat open. Or !sum to summarize the current page or video.
But kagi does a good job too, indeed.
Kagi saves me much more than $10 of time every month. I definitely don't regret the subscription cost. Their LLM thing (append "?" to your internet search query) is worth more than that on its own.
They're my portal to the web. It's less like an optional web service (like a streaming service), and it feels more like I'm paying for them to be my ISP.
some anecdotal data:
11/2024: 183 searches
12/2024: 360
1/2025: 376
2/2025: already at 222
Will definitely (happily) have to upgrade to the $10 plan. It's been great.
Currently I'm debating with myself if I should go for the $10 plan. I'm all down for supporting kagi, but surprisingly I didn't use as many searches as I thought.
Back-of-the-envelope:
- 2tn searches per year.
- US is 20% of all searches.
- US revenue is $76bn
$76bn / (2tn * 0.2) = $0.19 / search
So, getting 300 searches for less than $0.02 per search sounds like a pretty good deal.
There's also the matter of Google search quality being increasingly bad, while Kagi's is consistently... okay. They also have a lot of nice features, like being able to change the weight of different sites in your list of results.
You can split your searches with search engine shortcuts on the desktop, and the search engine quickbar on mobile.
When I still was on the starter plan, I used Kagi whenever I had a search that if I use google, I know I will:
- get a bunch of listicles and AI slop (Kagi downranks and bundles these)
- get a bunch of AI images (again, Kagi clearly labels and downranks these)
- have to do multiple google searches for, but can instead use Quick Answer for
- will get a bunch of Reddit pre-translated results for
- technical / scientific questions, because of the sites I can uprank/downrank/block
I used google for things like:
- highest building in the world
- $bandname Wikipedia / Discogs
- name of thing I can't remember but have the approximate word for
You get the idea.
For me, I use Kagi only at home for personal use. And most months, I don't exceed 300. Of course, if I included work-related searches, then yes, 10 searches a day wouldn't get me far.
I also would be interested in a service like this for attestation on other sites. Device attestation has chilling privacy implications, but if you could have a paid service with a presumably trusted entity like Kagi attest that you are a legitimate user (but hide your identity), maybe more of the Internet could be browsed anonymously, while still minimizing spam.
I get why many sites currently block Tor and VPN users, or even users in incognito or without a phone number, as the Internet is essentially unusable without anti-spam measures. That said, I do think anonymity has its place (especially for browsing, even if commenting weren't allowed), and maybe ideas like this could allow for anonymity without the Internet being riddled with spam.
The browser creates embeddings of the user query, then sends the embeddings to the server.

To complete a search, the server, being a machine, does not really need text to understand what a user wants. A series of numbers, like LLM embeddings, is totally fine (actually, it might even be better, because embeddings map similar words closely; Duck and Bird have similar embeddings, for example).

On the privacy side, LLM embeddings are a bunch of numbers. Even if the embeddings are associated with a user, other people cannot make meaning out of them. Therefore the user's privacy is preserved.
What do you think?
[1] https://arxiv.org/abs/1204.2136 [2] https://arxiv.org/abs/2210.03458
Unfortunately, as described, such a solution would only satisfy a somewhat meaningless notion of privacy. Specifically, the embeddings by definition contain potentially private information about the user, revealing things like "I'm asking about birds" to use your example. Even though it might "compress" the query in a slightly lossy way, it would still reveal a great deal of information about the query.
A true solution to this problem would require something like differential privacy and adding noise to the embeddings. However, the noise required would (likely) end up destroying too much information from the embedding to preserve accuracy of the LLM.
1. Embeddings are very close to a reversible function. It's not hard to take an embedding and recover the semantically closest query using the source LLM (see the sketch after this list).
2. We already don't log any queries from the users. I'm aware this has to be taken on faith from Kagi users. But you can believe that not having any user query data at all anywhere is a significant speed bump for us in feature development, bug tracking and roadmapping.
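On the first point, a toy illustration of that near-reversibility in Python. A sketch under loud assumptions: a bag-of-words vector stands in for a real LLM embedding, and the candidate list stands in for a large-scale nearest-neighbor search over plausible queries.

    from collections import Counter
    import math

    VOCAB = sorted(set("which bird is that duck pond best rust http library".split()))

    def embed(text):
        # Stand-in embedder: word counts over a tiny vocabulary
        counts = Counter(text.lower().split())
        return [counts[w] for w in VOCAB]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    # The "private" embedding the server receives instead of the query text
    intercepted = embed("which bird is that")

    # The server inverts it by nearest-neighbor search over candidate queries
    candidates = ["duck pond", "best rust http library", "which bird is that"]
    guess = max(candidates, key=lambda q: cosine(embed(q), intercepted))
    print(guess)   # -> "which bird is that"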
Also I just tried it and you can't really search for porn
I gave a hundred searches to some normies I know and they told me that they save them to use when Google can't find what they want because Kagi works better (!) so they hoard the searches as backups to Google.
All I know is that I haven't used Google on purpose for a year and it's really turned into an eyesore
I could probably get better results by crawling the web myself at home.
I invite you to look at the number of threads of people complaining about Cloudflare locking their perfectly normal browser out of a stupid amount of the Internet and then carefully consider how you would circumvent those anti-bot controls
This doesn't make sense. Is this saying they're worried about a user on a Trial or Starter plan using someone else's tokens? That can still happen when tokens are disabled for Trial and Starter plan users: the Trial or Starter plan user can use tokens generated by someone on an unlimited plan.
> We are working on enabling this feature for Trial and Starter plans, which have access to a limited number of monthly searches. Therefore, they risk a worse user experience if their generated tokens are lost (for example, due to uninstalling the extension)
If they don't track limits and billing at generation time, then there's no "risk [of] a worse user experience if their generated tokens are lost".
This ""risk [of] a worse user experience if their generated tokens are lost" is a logical reason to not enable privacy tokens for Trial and Starter plans. The risk of "users on this plan could redeem more tokens than the limit of searches allowed on their plan" is not a logical reason to not enable privacy tokens for Trial and Starter plans.
Edit: Each account is limited to 2000 tokens per month, unless they email support to request more. (from the FAQ near the bottom).
Still seems like there could be a secondary market for resold tokens, though. Not just as a money making system, but possibly as a privacy initiative? If enough accounts pooled tokens into a shared pool and withdrew a random one from it each time, it would be a further safeguard.
> allowing users to send a small configuration with every request (language, region, safe-search) to automatically customize your search experience to some extent. However, we currently believe this would quickly result in a significant loss of anonymity for you and for other users.
> For manual search settings customization, you can always use bangs in your search query to enable basic settings for a specific query.
https://help.kagi.com/kagi/privacy/privacy-pass.html
So I guess you _could_ share them, but only with so many people or so many searches.
When I'm in a rush this forces me to fall back to Google which often doesn't provide good results for my queries, which is unfortunate
It generally resolves itself in under a minute, but it is still a mildly irritating availability issue that wasn't present earlier. Maybe something to do with load balancing? No clue.
It is clearly stated in the blog post :)
We even considered variations of having some settings preserved in local storage and impact of that on anonymity. Ultimately decided that was not worth it.
Check the FAQ section (towards the end) for full details and analysis:
So for example there could be a built in "developers" preset that might make domains useful to coding higher ranked (and down rank or block things like stack overflow clones). Etc etc.
Basically this could allow a smaller amount of customisation with less ability to identify a specific user.
I also use Orion and I do like the idea someone else had of integrating an option for Kagi Privacy mode into the "incognito" tabs specifically as an option!
Though those still get passed to the server, and your combination of personalization settings is likely to be globally unique, and it's almost certainly unique among the subset of users that are paranoid enough about their privacy not to store preferences in their session... But still.
Starting at $5 a month seems very reasonable to me for a non-essential premium search experience.
> Regional, student, annual...discounts are not possible because we are not currently making any profit to discount it off from.
Lovely!
The cynic in me wants to stop myself from becoming a fanboy because we all know how the story of tech startups goes and it's bound to repeat itself again; the optimist in me still wants to believe that there can be forces of good on the web.
Not exactly sure what you're getting at, but:
"Using CSS, you can fully customize Kagi's search and landing pages from your Appearance settings." : https://help.kagi.com/kagi/features/custom-css.html
EDIT: Seems like it works via https://en.wikipedia.org/wiki/Blind_signature
I don’t know the cost or liability of that at scale.
=================================
From their blog:
>As standardized in [2 - 4], the Privacy Pass protocol is able to accommodate many “architectures.” Our deployment model follows the original architecture presented by Davidson et al. [1], called “Shared Origin, Attester, Issuer” in § 4 of [2].
From [2] RFC 9576 § 3.3 "Privacy Goals and Threat Model" :
>Clients explicitly trust Attesters to perform attestation correctly and in a way that does not violate their privacy. In particular, this means that Attesters that may be privy to private information about Clients are trusted to not disclose this information to non-colluding parties. Colluding parties are assumed to have access to the same information; see Section 4 for more about different deployment models and non-collusion assumptions. However, Clients assume that Issuers and Origins are malicious.
And From [2] RFC 9576 § 4.1 "Shared Origin, Attester, Issuer" :
>As a result, attestation mechanisms that can uniquely identify a Client, e.g., requiring that Clients authenticate with some type of application-layer account, are not appropriate, as they could lead to unlinkability violations.
Womp womp :(
This is not genuinely private in any meaningful sense of the term. Kagi plays the role of all three parties, and even relies on the very thing section 4.1 says is not appropriate: to use mechanisms that can uniquely identify a client. They utilize a client's session token: "In the case of Kagi’s users, this can be done by presenting their Kagi session cookie to the server."
Frankly, that blog post is disingenuous at best, and malicious at worst.
=================================
I want to be wrong here. Where am I wrong? What am I missing?
> In this model, the Attester, Issuer, and Origin share the attestation, issuance, and redemption contexts.
I haven't read the RFC in detail, but I believe this is where the nuance is: When you enable the privacy pass setting in the extension/browser the redemption context is changed relative to the attestation context by removing the session cookie, to just the information sent by the browser for someone who is not logged in. What remains is your IP address and browser fingerprinting, which can be countered by using Tor.
In the RFC's architecture, the request flow is like so:
1. CLIENT sends anonymous request to ORIGIN
2. ORIGIN sends token challenge to CLIENT
3. CLIENT uses its identity to request token from ISSUER/ATTESTER
4. ISSUER/ATTESTER issues token to CLIENT
5. CLIENT sends token to ORIGIN
You can see how the ISSUER/ATTESTER can identify the client as the source of the "anonymous request" to the ORIGIN because the ISSUER, ATTESTER and ORIGIN are the same entity, so it can use a timing attack to correlate the request to the ORIGIN (1.) with the request to the ISSUER/ATTESTER (3.).
However you can also see that if a lot of time passes between steps (1.) and (3.), then such an attack would be infeasible. Reading past your quote from RFC 9576 § 4.1., it states:
> Origin-Client, Issuer-Client, and Attester-Origin unlinkability requires that issuance and redemption events be separated over time, such as through the use of tokens that correspond to token challenges with an empty redemption context (see Section 3.4), or that they be separated over space, such as through the use of an anonymizing service when connecting to the Origin.
In Kagi's architecture, the "time separation" requirement is met by making the client generate a large batch of tokens up front, which are then slowly redeemed over a period of 2 months. The "space separation" requirement is also satisfied with the introduction of the Tor service.
There is some more discussion in RFC 9576 § 7.1. "Token Caching" and RFC 9577 § 5.5. "Timing Correlation Attacks".
One question you may have is: Why wasn't this solution used in the RFC?
This can be understood if you look at the mentions of "cross-ORIGIN" in the RFC. The RFC was written by Cloudflare, who envisioned its use across the whole Internet. Different ORIGINs would trust different ISSUERs, and tokens from one ORIGIN<->ISSUER network might not work in another ORIGIN<->ISSUER network. This made it infeasible for clients to mass-generate tokens in advance, as a client would need to generate tokens across many different ISSUERs.
Of course, adoption was weak and there ended up being only one ISSUER - Cloudflare, so they adopted the same architecture as Kagi where clients would batch generate tokens in advance (batch size was only 30 tokens though).
RFC 9576 § 7.1. also mentions a "token hoarding" attack, which Cloudflare felt particularly threatened by. Cloudflare's Privacy Pass system worked in concert with CAPTCHAs. Users could trade a completed CAPTCHA for a small batch of tokens, allowing a single CAPTCHA completion to be split into multiple redemptions across a longer time period.
However, rudimentary "hoarding"-like attacks were already in use against CAPTCHAs through "traffic exchanges". Opening up another avenue for hoarding through Privacy Pass would have only exacerbated the problem.
The ISSUER and ATTESTER are different roles. As previously quoted, "Clients explicitly trust Attesters to perform attestation correctly and in a way that does not violate their privacy." The RFC is explicit that, when all of the roles are held by the same entity, the attestation should not rely on unique identifiers. But that's exactly what a session cookie is.
>You can see how the ISSUER/ATTESTER can identify the client as the source of the "anonymous request" to the ORIGIN because the ISSUER, ATTESTER and ORIGIN are the same entity, and therefore it can use a timing attack to correlate the request to the ORIGIN (1.) with the request to the ISSUER/ATTESTER (3.).
No timing or spacing attack is needed here. If I have to provide Kagi with a valid session cookie in order to get the tokens, then they already have a unique identifier for me. There is no guarantee that Kagi is not keeping a 1-to-1 mapping of session cookies to ISSUER keypairs, or that Kagi could not, if compelled, establish distinct ISSUER keypairs for specific session cookies.
Very true, but again, the RFC describes a completely different threat model with much stronger guarantees. The Kagi threat model:
- Does not provide Issuer-Client unlinkability
- Does not provide Attester-Origin unlinkability
In particular, the model does not assume a malicious Issuer and requires the Client have some level of trust in the Issuer. The Client trusts the Issuer with their private billing information but does not trust the Issuer with their search activity.
The RFC explicitly guarantees the Issuer cannot obtain any of the Client's private information.
That said, I will point out that this Issuer-Client unlinkability issue can be solved by introducing a 3rd-party service, or will go away once Kagi starts accepting Monero payments.
> There is no guarantee that Kagi is not keeping a 1-to-1 mapping of session cookies to ISSUER keypairs, or that Kagi could not, if compelled, establish distinct ISSUER keypairs for specific session cookies.
Also completely valid, but also not something Kagi claims to guarantee. They believe the extension should be responsible for guarding against issuer key partitioning. I don't think it's implemented currently, but it shouldn't be too hard, especially since they currently use only 1 keypair.
If the Client, Attester, and Origin are all a single party (Kagi), then it follows from that threat model that Kagi does not provide Kagi-Client unlinkability, no?
Further, this is not what Kagi has advertised in the blog post:
> What guarantees does Privacy Pass offer?
>
> As used by Kagi, Privacy Pass tokens offer various security properties (§ 3.3, of [2]).
Kagi are explicitly stating that they provide the guarantees of § 3.3. They even use more plain language:
> Generation-redemption unlinkability: Kagi cannot link the tokens presented during token redemption (i.e. during search) with any specific token generation phase. *This means that Kagi will not be able to tell who it is serving search results to*, only that it is someone who presented a valid Privacy Pass token.
>
> Redemption-redemption unlinkability: Kagi cannot link the tokens presented during two different token redemptions. This means that *Kagi will not be able to tell from tokens alone whether two searches are being performed by the same user*.
As it stands, Kagi cannot meaningfully guarantee those things, because the starting point is the client providing a unique identifier to Kagi.
>That said, I will point out that this Issuer-Client unlinkability issue can be solved by introducing a 3rd-party service or when Kagi starts accepting Monero payments.
Sure, but at that point, there is no need for any of the Privacy Pass infrastructure in the first place.
>Also completely valid, but also not something Kagi claims to guarantee.
I disagree. Their marketing here is "we can't link your searches to your identity, because cryptography."
>They believe the extension should be responsible for guarding against issuer key partitioning. I don't think it's implemented currently, but it shouldn't be too hard, especially since they currently use only 1 keypair.
If Kagi is going to insist on being the attester and on requiring uniquely identifiable information as the basis for issuing tokens, then yes, the only way to even try to confirm that they're not acting maliciously is to keep track not only of distinct keypairs, but also of public and private metadata blocks within the tokens, and to share all of that data (in a trustworthy manner, of course) with other confirmed Kagi users. And if a user doesn't understand all of the nuances that would entail, or all of the nuances just discussed here, and instead just trusts the Kagi-written client implicitly? Then it's all just privacy theater.
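For what it's worth, the keypair half of that auditing is easy to sketch; the genuinely hard part is the "share all of that data in a trustworthy manner" step. A hypothetical client-side check, with the consensus set assumed to arrive over some trusted out-of-band channel (gossip, a transparency log, keys pinned in the extension):

    use std::collections::HashSet;

    struct IssuanceAudit {
        /// Issuer public keys this client was ever issued tokens under.
        seen_keys: HashSet<Vec<u8>>,
        /// Keys corroborated by other Kagi users via some out-of-band
        /// channel (assumed trustworthy for this sketch).
        consensus_keys: HashSet<Vec<u8>>,
    }

    impl IssuanceAudit {
        /// Returns true if this issuance looks like key partitioning:
        /// we were issued under a key nobody else reports seeing.
        /// A full audit would also have to cover the public and private
        /// metadata blocks within the tokens themselves.
        fn looks_partitioned(&mut self, issuer_pubkey: &[u8]) -> bool {
            self.seen_keys.insert(issuer_pubkey.to_vec());
            !self.consensus_keys.contains(issuer_pubkey)
        }
    }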
Kagi does not provide Kagi-Client unlinkability as the Client's payment information allows Kagi to trivially determine the identity of the Client. Kagi does provide Search-Client unlinkability (what the RFC calls Origin-Client unlinkability). More formally: If we assume Kagi cannot derive any identifying information from the privacy token (which I understand you dispute), then given any two incoming search requests, Kagi would not be able to determine whether those two requests came from the same Client or two different Clients.
> Kagi are explicitly stating that they provide the guarantees of § 3.3. They even use more plain language:
Not 100% sure I am understanding you correctly, but if you are claiming that Kagi promises all the unlinkability properties in § 3.3, I would say that would be unfair, since they explicitly deny this in the FAQ at the bottom of the post.
I think they are citing that section as they reference several definitions from it in the text that follows.
> > Generation-redemption unlinkability: Kagi cannot link the tokens presented during token redemption (i.e. during search) with any specific token generation phase. *This means that Kagi will not be able to tell who it is serving search results to*, only that it is someone who presented a valid Privacy Pass token.
> >
> > Redemption-redemption unlinkability: Kagi cannot link the tokens presented during two different token redemptions. This means that *Kagi will not be able to tell from tokens alone whether two searches are being performed by the same user*.
>
> As it stands, Kagi cannot meaningfully guarantee those things, because the starting point is the client providing a unique identifier to Kagi.
These specific unlinkability properties are satisfied, provided the earlier assumption holds: that the token itself carries no identifiable information.
> Sure, but at that point, there is no need for any of the Privacy Pass infrastructure in the first place.
Kagi Privacy Pass in combination with a 3rd party can achieve a level of privacy that cannot be matched by architectures that don't involve Privacy Pass or some other exotic cryptography.
I claim that a 3rd-party service + Kagi Privacy Pass meets all unlinkability properties in the RFC (except Attester-Origin, for obvious reasons). Additionally, it guarantees confidentiality of the search request and response against malicious middleboxes, given that the assumption about the token holds and that the user has access to a trusted proxy.
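As a rough sketch of what that looks like from the client side: the search request carries only the Privacy Pass token and reaches Kagi via the proxy, so Kagi sees payment identity but not traffic origin, and the proxy sees traffic origin but not identity. This assumes reqwest with its `socks` feature and a local Tor SOCKS proxy; the endpoint URL and token encoding are illustrative, and the `PrivateToken` authentication scheme is the one defined in RFC 9577:

    async fn search_via_proxy(token_b64: &str, query: &str) -> reqwest::Result<String> {
        let client = reqwest::Client::builder()
            // e.g. a local Tor SOCKS proxy; any trusted proxy works
            .proxy(reqwest::Proxy::all("socks5h://127.0.0.1:9050")?)
            .build()?;

        client
            .get("https://kagi.com/search") // illustrative endpoint
            .query(&[("q", query)])
            // RFC 9577 "PrivateToken" HTTP authentication scheme
            .header("Authorization", format!("PrivateToken token={token_b64}"))
            .send()
            .await?
            .text()
            .await
    }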
> I disagree. Their marketing here is "we can't link your searches to your identity, because cryptography."
Disagreement acknowledged. And yes, that quote is a fairly accurate summary of the marketing!
> If Kagi is going to insist on being the attester and on requiring uniquely identifiable information as the basis for issuing tokens, then yes, the only way to even try to confirm that they're not acting maliciously is to keep track not only of distinct keypairs, but also of public and private metadata blocks within the tokens, and to share all of that data (in a trustworthy manner, of course) with other confirmed Kagi users. And if a user doesn't understand all of the nuances that would entail, or all of the nuances just discussed here, and instead just trusts the Kagi-written client implicitly? Then it's all just privacy theater.
Yeah, I'm glad you are willing to say it at least; a lot of stuff these days is security theatre, people just kinda stick their heads in the sand I guess? I'm still hoping that people will realize that SSL has long been in need of a successor, and frankly BGP needs a complete rework too. It's also surprising to me that people are still willing to use Linux distros, although realistically modern computing as a whole is rotten at its core. At least PGP is still alive, but it has its problems too...
Your claim is a bit like saying "it's impossible to encrypt mail, the government wouldn't allow it". But PGP still exists.