What really matters isn't how secure this is on an absolute scale, or how much one can trust Apple.
Rather, we should weigh this against what other cloud providers offer.
The status quo for every other provider is: "this data is just lying around on our servers. The only thing preventing an employee from accessing it is that it would be a violation of policy (and might be caught in an internal audit)." Most providers also carve out several cases where they can look at your data, for support, debugging, or analytics purposes.
So even though the punchline of "you still need to trust Apple" is technically true, this is qualitatively different, because what would need to happen for Apple to break its promises here is so much more drastic. For other services to leak your data, all it takes is one employee doing something they shouldn't. For Apple, it would require a deliberate compromise of the entire stack at the hardware level.
This is much harder to pull off and more difficult to hide, and therefore Apple's security posture is qualitatively better than that of Google, Meta, or Microsoft.
If you want to keep your data local and trust no one, sure, fine; then you don't need to trust anyone at all. But presuming you (a) are going to use cloud services and (b) care about privacy, Apple has a compelling value proposition.
No. Apple has a proposition that /may/ be better than the current alternatives.
What they are doing here is, of course, making any kind of subversion a hell of a lot harder, and I welcome that. It also serves as a strong signal that they want to protect my data. To me this definitely makes them by far the most trusted AI vendor at the moment.
History has shown, at least to date, that Apple has been a good steward. They're as good a vendor to trust as anyone. Given that a huge portion of their brand has been built on "we don't spy on you", the second they do, they lose all credibility, so they have a financial incentive to keep protecting your data.
Apple have never been punished by the market for any of these things. The idea that they will "lose credibility" if they livestream your AI interactions to the NSA is ridiculous.
What kind of targeted advertising am I getting from Apple as a user of their products? Genuinely curious. I'll wait.
The rest of your comment may be factually accurate, but it isn't relevant for "normal" users, only those hyper-aware of their privacy. Don't get me wrong, I appreciate knowing this detail, but you also need to realize that there are degrees of privacy.
https://support.apple.com/guide/iphone/control-how-apple-del...
In the App Store and Apple News, your search and download history may be used to serve you relevant search ads. In Apple News and Stocks, ads are served based partly on what you read or follow. This includes publishers you’ve enabled notifications for and the type of publishing subscription you have.
Also, full disk encryption is opt-in for macOS. But the answer isn't that Apple wants you to be insecure; they probably just want to make it easier for their users to recover data if they forget a login password or backup password they set years ago.
> real-time location data
Locations are end-to-end encrypted.
"If you have a Mac with Apple silicon or an Apple T2 Security Chip, your data is encrypted automatically."
The non-removable storage is, I believe, encrypted using a key specific to the Secure Enclave, which is cleared on factory reset. APFS does allow for other levels of protection, though (such as protecting a significant portion of the system with a key derived from the initial password/passcode, which is only available while the screen is unlocked).
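To make the layering concrete, here's a minimal sketch of that kind of key hierarchy in Python. This is illustrative only, not Apple's actual implementation: real Data Protection entangles the derivation with a device-unique key inside the Secure Enclave, and all names and parameters here are made up.

```python
# Conceptual sketch of a Data Protection-style key hierarchy (not Apple's
# actual implementation): per-file keys are wrapped by a class key, and the
# class key is itself wrapped by a key derived from the user's passcode, so
# file keys are only usable while the device is "unlocked".
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def derive_passcode_key(passcode: bytes, salt: bytes) -> bytes:
    # On real hardware this derivation is also entangled with a device-unique
    # key inside the Secure Enclave; here it's just a KDF over the passcode.
    return Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passcode)


def wrap(key: bytes, wrapping_key: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(wrapping_key).encrypt(nonce, key, None)


def unwrap(blob: bytes, wrapping_key: bytes) -> bytes:
    return AESGCM(wrapping_key).decrypt(blob[:12], blob[12:], None)


# Setup: a class key protects every per-file key; the passcode protects the class key.
salt = os.urandom(16)
passcode_key = derive_passcode_key(b"123456", salt)
class_key = AESGCM.generate_key(bit_length=256)
wrapped_class_key = wrap(class_key, passcode_key)  # stored at rest

file_key = AESGCM.generate_key(bit_length=256)
wrapped_file_key = wrap(file_key, class_key)       # stored alongside the file

# "Unlock": re-derive the passcode key, unwrap the class key, then file keys.
unlocked_class_key = unwrap(wrapped_class_key, derive_passcode_key(b"123456", salt))
assert unwrap(wrapped_file_key, unlocked_class_key) == file_key
```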
Apple does first-party advertising for two relatively minuscule apps.
Facebook and Google power the majority of the world's online advertising, have multiple data sharing agreements, widely deployed tracking pixels, allow for browser fingerprinting and are deeply integrated into almost all ecommerce platforms and sites.
> allows officials to collect material including search history, the content of emails, file transfers and live chats
> The program facilitates extensive, in-depth surveillance on live communications and stored information. The law allows for the targeting of any customers of participating firms who live outside the US, or those Americans whose communications include people outside the US.
> It was followed by Yahoo in 2008; Google, Facebook and PalTalk in 2009; YouTube in 2010; Skype and AOL in 2011; and finally Apple, which joined the program in 2012. The program is continuing to expand, with other providers due to come online.
https://www.theguardian.com/world/2013/jun/06/us-tech-giants...
If the NSA had that info, why go through the trouble?
To defend the optics of a backdoor that they actively rely on?
If Apple and the NSA are in cahoots, it's not hard to imagine them anticipating this kind of event and leveraging it for plausible deniability. I'm not saying this is necessarily what happened, but we'd need more evidence than just the first-party admission of two parties that stand to gain from privacy theater.
E2E. Might not be applicable for remote execution of AI payloads, but it is applicable for most everything else, from messaging to storage.
Even if the client hardware and/or software is also an actor in your threat model, that can be eliminated, or at least mitigated, with at least one verifiably trusted piece of equipment. Open hardware is an alternative, and some states build their entire hardware stack to eliminate such threats. If you have at least one piece of trusted equipment, mitigations are possible (e.g. an external network filter).
Is it? I guess this really depends. For E2E storage (e.g. as offered by Proton with openpgpjs), what metadata would be of concern? File size? File type cannot be inferred, and file names could be encrypted if that's in your threat model.
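For illustration, here's roughly what encrypting file names client-side looks like. This is a hedged sketch using AES-GCM from the Python cryptography package, not Proton's actual OpenPGP-based scheme; the key handling is simplified, and ciphertext length still leaks the approximate name length.

```python
# Illustrative only: encrypt file names client-side so the storage provider
# only ever sees opaque tokens. Proton's real scheme uses OpenPGP (openpgpjs);
# this sketch just demonstrates the metadata-hiding idea with AES-GCM.
import base64
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # would really be derived from the user's keys


def encrypt_filename(name: str) -> str:
    nonce = os.urandom(12)
    blob = nonce + AESGCM(key).encrypt(nonce, name.encode(), None)
    return base64.urlsafe_b64encode(blob).decode()


def decrypt_filename(token: str) -> str:
    blob = base64.urlsafe_b64decode(token)
    return AESGCM(key).decrypt(blob[:12], blob[12:], None).decode()


token = encrypt_filename("tax-return-2024.pdf")
print(token)                    # what the server stores and sees
print(decrypt_filename(token))  # what the client displays
```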
Whom the data is shared with can be used to infer its intent.
But I feel that, in this context (communication/metadata inference), that is missing the trees for the forest.
Just make absolutely sure you trust your government when using an iDevice.
It always has to be operated by a sponsor in the state who holds the encryption keys, does the actual deployments, etc.
The same applies to Azure/AWS/Google Cloud's China regions and any other compute services you might think of.
All the pearl-clutching about Apple doing business in China is ridiculous. Who would be better off if Apple withdrew from China? Sure, talldayo would sleep better knowing that Apple had passed their purity test; I guess that's worth a lot, right? God knows consumers in China would be much better off without the option to use iPhones or any other Apple devices. Their privacy and security are better protected by domestic phones, I'm sure.
Seriously, what exactly is the problem?
There are other, less nefarious reasons for in-country storage laws like this. One is to stop other countries from subpoenaing the data.
But it's also so China gets the technical skills from helping you run it.
This sentence stings right now. :-(
There is such a thing as threat modeling. The fact that your model only stops some threats, and not all threats, doesn't mean that it's theater.
Well, to be honest, theater is a pretentious word in this context. A better word would be shitshow.
(I've never heard of a firewall that claims it filters _some_ packets, or an antivirus that claims it protects against _some_ viruses.)
But any such software must be publicly verifiable, otherwise it cannot be deemed secure. That's why they publish each version in a transparency log, which is verified by the client and (more hand-wavily) by a public brain trust of researchers.
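As a rough sketch of what "verified by the client" means: the client refuses to talk to a node whose attested software measurement isn't present in the published log. This is not Apple's actual PCC log format or API; every structure and value below is hypothetical.

```python
# Hypothetical sketch of the client-side check: the measurement reported by
# the server's attestation must appear in a published, append-only
# transparency log. This illustrates why "publish every release" makes
# silent substitution harder; it is not Apple's real log format.
import hashlib


def verify_release(attested_measurement: str, log_entries: list[dict]) -> bool:
    """Accept a server only if its attested software measurement is logged."""
    logged = {entry["measurement"] for entry in log_entries}
    return attested_measurement in logged


# Example: a toy log with two published releases (values are made up).
log = [
    {"release": "os-1.0", "measurement": hashlib.sha256(b"image-1.0").hexdigest()},
    {"release": "os-1.1", "measurement": hashlib.sha256(b"image-1.1").hexdigest()},
]

measurement_from_attestation = hashlib.sha256(b"image-1.1").hexdigest()
assert verify_release(measurement_from_attestation, log)

# A real client would also verify the log's signature and an inclusion proof
# (Merkle path) so the log operator can't show different entries to different clients.
```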
This is also just a tired take. The same thing could be said about passcodes on their mobile products or full disk encryption keys for the Mac line. There'd be a massive loss of goodwill, and legal liability, if they subverted the technologies they claim make their devices secure.
A simple example of the sort of legal agreement I'm talking about, is a trust. A trust isn't just a legal entity that takes custody of some assets and doles them out to you on a set schedule; it's more specifically a legal entity established by legal contract, and executed by some particular law firm acting as its custodian, that obligates that law firm as executor to provide only a certain "API" for the contract's subjects/beneficiaries to interact with/manage those assets — a more restrictive one than they would have otherwise had a legal right to.
With trusts, this is done because that restrictive API (the "you can't withdraw the assets all at once" part especially) is what makes the trust a trust, legally; and therefore what makes the legal (mostly tax-related) benefits of trusts apply, instead of the trust just being a regular holding company.
But you don't need any particular legal impetus in order to create this kind of "hold onto it and don't listen to me if I ask for it back" contract. You can just... write a contract that has terms like that; and then ask a law firm to execute that contract for you.
Insofar as Apple have engaged with some law firm to in turn engage with a hosting company; where the hosting company has obligations to the law firm to provide a secure environment for the law firm to deploy software images, and to report accurate trusted-compute metrics to the law firm; and where the law firm is legally obligated to get any image-updates Apple hands over to them independently audited, and only accept "justifiable" changes (per some predefined contractual definition of "justifiable") — then I would say that this is a trustworthy arrangement. Just like a trust is a trust-worthy arrangement.
That statement above also applies to Google. There is no way to prevent indirect data sharing with Apple or Google.
Trust us, we are liars. /s
There is a lot of navel gazing in these comments about "the perfect solution", but we all know (or should know) that perfect is the enemy of good enough.
Because, as someone who has worked at a few telcos, I can assure you that your phone data and triangulated location data are stored, analysed, and provided to intelligence agencies. And likewise, this applies to ISPs.
Also, many are unaware of, or unable to determine, who or what will own their data before purchasing a device. One only accepts the privacy policy after tapping sign in... and is it really practical to expect people to figure this out by themselves when buying a phone? That's why regulation needs to step in and ensure the right decisions are present by default.
Or, if you prefer, you can just look at Google's code and verify that the operating system you put on your phone is made with the code you looked at.
I don't trust Apple - in fact, even the people we trust the most have told us soft lies here and there. Trust is a concept like an integral - you can only get to "almost" and almost is 0.
So you can only trust yourself. Period.
Your future self definitely can't trust your past self. And vice versa. If your future self has a stroke tomorrow, did your past self remember to write a living will? And renew it regularly? Will your future self remember that password? What if the kid pukes on the carpet before your past self writes it down?
Your current self is not statistically reliable. Andrej Karpathy administered an ImageNet challenge to himself, with his brain as the machine: he got about 95%.
I'm sure there are other classes of self-failure.
The PCC model doesn't guarantee they can't backdoor themselves, but it does make it more difficult for them.
You have fundamentally misunderstood PCC.
It sucks, but what are you going to do for society? Tell them all to sell their iPhones, punk out the NSA like you're Snowden incarnate? Sometimes saving yourself is the only option, unfortunately.
It stands to reason that that control is a prerequisite for "security".
Apple does not delegate its own "security" to someone else, a "steward". Hmmm.
Yet it expects computer users to delegate control to Apple.
Apple is not alone in this regard. It's common for "Big Tech", "security researchers" and HN commenters to advocate for the computer user to delegate control to someone else.
>for them to run different software than they say they do.
They don't even need to do that. They don't need to do anything different than they say.
They already are saying only that the data is kept private from <insert very limited subset of relevant people here>.
That opens the door wide for them to share the data with anyone outside of that very limited subset. You just have to read what they say, and also read between the lines. They aren't going to say who they share with, apparently, but they are going to carefully craft what they say so that some people get misdirected.
What's desperately missing on the client side is a switch to turn this off. At the moment, it's really unclear which Apple Intelligence requests are processed locally and which are sent to the cloud.
The only sure way to know/prevent it a priori is to... enter flight mode, as far as I can tell?
After the fact, there's a request log in the privacy section of System Preferences, but it's really convoluted to read (due to all of the cryptographic proofs, which I have absolutely no tools to verify at the moment, and honestly have no interest in).
Harder to attack, sure, but no outside validation. Apple's not saying "we can't access your data," just "we're making it way harder for bad guys (and rogue employees) to get at it."
That doesn't make PCC useless, by the way. It clearly establishes that Apple misled customers if there is any intentionality in a breach, or that Apple was negligent if they do not immediately provide remedies upon notification of a breach. But that's much more a "raising the cost" kind of thing than a technical exclusion. Yes, if you can get Apple, as an organisation, to want to get at your data, and you use an iPhone, they absolutely can.
Key extraction is difficult but not impossible.
Refer to the never-ending clown show that is Intel's SGX enclave for examples of this.
https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...
https://security.apple.com/documentation/private-cloud-compu...
"Explain it like I'm a lowly web dev"
I think having a description of Apple's threat model would help.
I was thinking that open source would help with their verifiable privacy promise. Then again, as you've said, if Apple controls the root of trust, they control everything.
But essentially it is trying to get to the end result of “if someone commandeers the building with the servers, they still can’t compromise the data chain even with physical access”
1. Stateless computation on personal user data - a property of the application
2. Enforceable guarantees - a property of the application; Nitro Enclaves attestation helps here
3. No privileged runtime access - maps directly to the no administrative API access in the AWS Nitro System platform
4. Non-targetability - a property of the application
5. Verifiable transparency - a mix of the application and the platform; Nitro Enclaves attestation helps here
To be a little more concrete: (1 stateless) You could write an app that statelessly processes user data, and build it into a Nitro Enclave. This has a particular software measurement (PCR0) and can be code-signed (PCR8) and verified at runtime (2 enforceable) using Nitro Enclave Attestation. This also provides integrity protection. You get (3 no access) for "free" by running it in Nitro to begin with (from AWS - you also need to ensure there is no application-level admin access). You would need to design (4 non-targetable) as part of your application. For (5 transparency), you could provide your code to researchers as Apple is doing.
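For anyone curious what the verification side of (2) looks like in practice, here's a minimal sketch of checking PCR0/PCR8 from a Nitro attestation document in Python with cbor2. The expected PCR values are placeholders, and the COSE signature and certificate-chain validation against the AWS Nitro attestation root CA are deliberately omitted, so treat this as a sketch rather than a complete verifier.

```python
# Minimal sketch of the verifier side: parse the Nitro attestation document
# (a COSE_Sign1 structure) and compare PCR0 (enclave image measurement) and
# PCR8 (signing-certificate measurement) against expected values. A real
# verifier must also validate the COSE signature and the certificate chain
# up to the AWS Nitro attestation root CA; that part is omitted here.
import cbor2

EXPECTED_PCRS = {
    0: "aa" * 48,  # placeholder: SHA-384 measurement of the enclave image file
    8: "bb" * 48,  # placeholder: measurement of the image's signing certificate
}


def check_pcrs(attestation_doc: bytes) -> bool:
    obj = cbor2.loads(attestation_doc)
    if isinstance(obj, cbor2.CBORTag):   # some encodings wrap COSE_Sign1 in tag 18
        obj = obj.value
    # COSE_Sign1 is a 4-element array: [protected, unprotected, payload, signature]
    protected, unprotected, payload, signature = obj
    doc = cbor2.loads(payload)            # the payload is itself a CBOR map
    pcrs = doc["pcrs"]                    # {index: bytes}
    return all(pcrs[i].hex() == expected for i, expected in EXPECTED_PCRS.items())
```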
(I work with AWS Nitro Enclaves for various security/privacy use cases at Anjuna. Some of these resemble PCC and I hope we can share more details about the customer use cases eventually.)
Some sources:
- NCC Group Audit on the Nitro System https://www.nccgroup.com/us/research-blog/public-report-aws-...
- Nitro Enclaves attestation process: https://github.com/aws/aws-nitro-enclaves-nsm-api/blob/main/...
What about other staff and partners and other entities? Why do they always insert qualifiers?
Edit: Yeah, we know why. But my point is they should spell it out, not use wording that is on its face misleading or outright deceptive.
Oh I've heard of Apple, they're the company that sued Corellium for letting researchers study iPhone security too well.
No source code, no accountability.
Posting things like you did, unprovoked, when nobody is talking about it and it has nothing to do with the post itself, is fucking weird, and I'm tired of seeing it happen and nobody calling out how fucking weird it is. It happens a lot on posts about iCloud or Apple Photos or AI image generation. Why are you posting about child porn scanning and expressing a negative view of it for no reason? Why is that what you're trying to talk about? Why is it on your mind at all? Why do you feel it's ok to post about shit like that as if you're not being a fucking creep by doing so? Why do you feel emboldened enough to think you can say or imply shit and not catch any shit for it?
Also worth mentioning that if that had shipped, it would only have taken effect if you uploaded images to iCloud.
There is still the opt-in Communication Safety {3} that tries to interdict sending or receiving media containing nudity if enabled, but Apple doesn’t get notified of any hits (and assuming I’m reading it right the parent doesn’t even get a notification unless the child sends one!).
1: https://archive.ph/x6z0K (WIRED article)
2: https://support.apple.com/en-us/102651 (Adv Data Protection)
3: https://support.apple.com/en-us/105069 (Comm Safety)
They cannot be trusted any more. These "Private Compute" schemes are blatant lies. Maybe even scams at this point.
Learn more — https://sneak.berlin/20201112/your-computer-isnt-yours/
Is this the ideal situation? No, probably not. Should Apple do a better job of communicating that this is happening to users? Yes, probably so.
Does Apple already go overboard to explain their privacy settings during setup of a new device (the pages with the blue "handshake" icon)? Yes. Does Apple do a far better job of this than Google or Microsoft (in my opinion)? Yes.
I don't think anyone here is claiming that Apple is the best thing to ever happen to privacy, but when viewed via the lens of "the world we live in today", it's hard to see how Apple's privacy stance is a "scam". It seems to me to be one of the best or most reasonable stances for privacy among all large-cap businesses in the world.
It's not unique to the app; the article is just wrong. It's unique to the /developer/, which is much less specific.
However, it also has a LOT of speculation, with statements like "It seems this is part of Apple’s anti-malware (and perhaps anti-piracy)" and "allowing anyone on the network (which includes the US military intelligence community) to see what apps you’re launching" and "Your computer now serves a remote master, who has decided that they are entitled to spy on you."
However, without this feature (which seems pretty benign to me), wouldn't the average macOS user be actually exposed to more potential harm by being able to run untrusted or modified binaries without any warnings?