Note that this is a slight simplification; I'm assuming the omitted detail is irrelevant to understanding the topic:
There are a few different keys [0] that can be chosen at this level of the encryption pipeline. The default one makes data available after first unlock, as described. But, as the developer, you can choose a key that, for example, makes your app's data unavailable any time the device is locked. Apple uses that one for the user's health data, and maybe other extra-sensitive stuff.
[0]: https://support.apple.com/guide/security/data-protection-cla...
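For anyone curious what choosing one of these keys looks like in practice, here's a minimal sketch using the standard Foundation file-protection APIs (the function names are just illustrative):

    import Foundation

    // Minimal sketch: write a file with the strictest protection class
    // (Complete), so its per-file key is evicted whenever the device locks,
    // not just before first unlock. saveSensitiveData is a hypothetical helper.
    func saveSensitiveData(_ data: Data, to url: URL) throws {
        try data.write(to: url, options: [.atomic, .completeFileProtection])
    }

    // The protection class of an existing file can also be changed after the fact.
    func upgradeProtection(atPath path: String) throws {
        try FileManager.default.setAttributes(
            [.protectionKey: FileProtectionType.complete],
            ofItemAtPath: path
        )
    }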
> The Mail app database (including attachments), managed books, Safari bookmarks, app launch images, and location data are also stored through encryption, with keys protected by the user’s passcode on their device.
> Calendar (excluding attachments), Contacts, Reminders, Notes, Messages, and Photos implement the Data Protection entitlement Protected Until First User Authentication.
I can confirm that when they say "keys protected by the user's passcode" they mean "protected with class A or B". The most shameful omissions there in my opinion are Messages and Photos, but location data is (from a law enforcement perspective) obviously a big one.
0: https://help.apple.com/pdf/security/en_US/apple-platform-sec...
Edit: Additionally, as to your API question, the system provides notifications for when content is about to become unavailable, allowing an app developer to flush data to disk:
https://developer.apple.com/documentation/uikit/uiapplicatio...
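If it helps, here's a rough sketch of observing those notifications from UIKit; the flush/resume helpers are hypothetical stand-ins for whatever your app actually does:

    import UIKit

    // Rough sketch: flush state before protected files become unreadable,
    // and resume once they are readable again.
    final class ProtectedDataObserver {
        private var tokens: [NSObjectProtocol] = []

        init() {
            let center = NotificationCenter.default
            tokens.append(center.addObserver(
                forName: UIApplication.protectedDataWillBecomeUnavailableNotification,
                object: nil, queue: .main
            ) { _ in
                flushPendingWrites()   // last chance to write before keys are evicted
            })
            tokens.append(center.addObserver(
                forName: UIApplication.protectedDataDidBecomeAvailableNotification,
                object: nil, queue: .main
            ) { _ in
                resumeNormalOperation()
            })
        }
    }

    // Hypothetical placeholders for app-specific logic.
    func flushPendingWrites() { /* app-specific */ }
    func resumeNormalOperation() { /* app-specific */ }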
I am not aware of APIs for securely clearing your app's memory (aside from lower level, more manual APIs). This may be one of those cases that relies mostly on sandboxing for protection. I also imagine it's hard to circumvent sandboxing without rebooting. But I'm making a lot of guesses here.
Similarly, if you’re in a situation where you cannot guarantee your phone’s security because it’s leaving your possession, and you’re sufficiently worried, again, power off fully.
This is a terrible idea. When you're crossing a border you have to submit to the rules of entry. If one of those rules is that you let them create an image of your phone with all of its contents, that's the rule. If you say no, then, if you're lucky, you get to turn around and return to where you came from. If you're not lucky, then you get to go to jail.
What needs doing is the ability to make a backup and then a way to reconcile that backup with the contents of a device at a later date. That is, I should be able to back up my phone to my home computer (or cloud I guess) and then wipe my phone or selectively delete contents. Then I travel abroad, take photos and movies, exchange messages with people, and so on. Then when I get home I should be able to restore the contents of my phone that were deleted without having to wipe all the new stuff from the trip.
Let’s assume “get back on the plane and leave” is not a viable option.
> GrapheneOS provides users with the ability to set a duress PIN/Password that will irreversibly wipe the device (along with any installed eSIMs) once entered anywhere where the device credentials are requested (on the lockscreen, along with any such prompt in the OS).
In a border interrogation scenario, isn't that just likely to get you arrested for destroying evidence?
You surprised me today.
Now, if you do it, hats off, and even more if you can hire a lawyer and get justice done, but in that case you definitely are not "a normal person".
I don't leave the US.
Additionally, SCOTUS ruled in 2022 (Egbert v Boule) that someone who has had their Fourth Amendment rights violated by CBP agents is not entitled to any damages unless Congress clearly defines a punishment for the violation by a federal agent.
I believe in most countries, customs can inspect your luggage. They can’t force you to reveal information that they’re not even certain you have.
In your situation, the best idea is to simply have a wiped device. A Chromebook, for example, allows you to log in with whatever credentials you choose, including a near-empty profile.
this isn't a very useful way to think about it.
they can definitely search your luggage, obviously, but the border guards/immigration officials/random law enforcement people hanging around/etc can also just deny non-citizens entry to a country, usually for any or no reason.
there are documented cases of Australia[0] demanding to search the phones of even citizens entering the country, and US CBP explicitly states they may deny entry to non-citizens who don't give them the password; while they can't deny entry to citizens, they state they may seize the device and then do whatever they want to it[1].
0: https://www.theguardian.com/world/2022/jan/18/returning-trav...
1: https://www.cbp.gov/travel/cbp-search-authority/border-searc...
They can. And if you refuse, they can do a lot of very unpleasant things to you. It might be against the local law, but it wouldn't really matter in a lot of countries.
you can't be forced to remember a password you "forgot"...
biometric authentication is not always your friend
No, but the border agents also aren't required to let you into the country. (Generally unless you are a citizen.)
So border agents are very different from the general laws of the country: while there may be legal protections around what they can force you to do, there are far fewer protections around when you have the right to cross the border (other than entering countries where you are a citizen).
There are steganographic methods to hide your stuff. You can also use burners on either side of the border crossing and keep your main line clean. But bringing a device full of encrypted data (even if it's just your regular photo collection) that you refuse to unlock will probably be suspicious.
I know that there are times when there are no reasons for suspicion and people get stopped anyway. The border agent didn't like your look, or racism, or an order came down from on high to stop everyone from a particular country and annoy them. If that's the case, it's probably still best to not have a lot of incriminating evidence on your person, encrypted or not.
Or, with GrapheneOS, you give them the duress password, on the understanding that you will have to set the device up from scratch IF you ever see it again.
I'd travel with a different device, honestly. I can get a new-in-box android device for under £60 from a shop, travel with that, set it up properly on the other side, and then either leave it behind or wipe it again.
I chose it because it's from a mainstream provider (Nokia), readily available, and running a supported version of Android (12).
If you want to install a custom ROM, you can get an older flagship (Galaxy S9) and flash it for about the same price.
My point is if your threat model is devices seized at border, then a burner phone is far more suitable for you than a reboot.
The advantage of a burner phone is that it can't contain anything important, because you've never put anything important on it, or connected it to any system that contains important data. So it doesn't really matter if it's compromised, because the whole point of a burner is that it's so unimportant you can burn it the moment it so much as looks at you funny.
My wife works for the government in a low level role that involves some amount of travel to local authorities (other major areas in Scotland). She has a phone, and strict instructions to never carry it across the border of many countries (as a blanket policy). They're told they'll be provided a device for travelling and not to install any work apps on it. It's basic security - don't travel with information that you can lose control over.
If your threat model is “I think my provider and a nation state are colluding to target me” you probably wouldn’t be posting on HN about it.
1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?
2. If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?
Bonus question: my Android phone would ask for my passcode (can't unlock with fingerprint or face) if it thinks it might be left unattended (a few hours without moving etc.), just like after rebooting. Is it different from "Before First Unlock" state? (I understand Android's "Before First Unlock" state could be fundamentally different from iPhone's to begin with).
I think the reason is to make sure anything in RAM is wiped completely clean. Things like the passcode should be handled in the Secure Enclave (which the encryption keys held in RAM are derived from), but a reboot would wipe those too, plus any other sensitive data that might still be in memory.
As an extra bonus, I suppose iOS does integrity checks on boot too, so this could be a way to trigger those as well. Seems to me like a reboot is a "better safe than sorry" approach, which isn't a bad one.
Typically yeah, I think you're right. But I seem to recall reading that iOS does some special stuff when shutting down/booting related to RAM but of course now I cannot find any source backing this up :/
The important distinction is that, before you unlock your phone for the first time, there are no processes with access to your data. Afterwards, there are, even if you’re prompted for the full credentials to unlock, so an exploit could still shell the OS and, with privilege escalation, access your data.
Before first unlock, even a full device compromise does nothing, since all the keys are on the <flavor of security chip> and inaccessible without the PIN.
Because the state of the phone isn't clean - there is information in RAM, including executing programs that will be sad if the disk volume their open files are stored on goes away.
If your goal is to get to the same secure state the phone is in when it first starts, why not just soft reboot?
I wonder if this explains why the older iPhone I keep mounted to my monitor to use as a webcam keeps refusing to be a webcam so often lately and needing me to unlock it with my password...
From there, you can enable single app mode to lock it into the app you're using for the webcam (I use Camo).
1. Getting there reliably can be hard (see the age-old discussions about zero-downtime OS updates vs rebooting), even more so if you must assume malware may be present on the system (how can you know that all that’s running is what you want to be running if you cannot trust the OS to tell you what processes are running?)
2. It may be faster to just reboot than to carefully bring back stuff.
I’ve also heard people re-purpose old phones (with their batteries disconnected, hopefully) as tiny home servers or informational displays.
The big issue with most platforms out there (x86, multi-vendor, IBVs etc.) is you can't actually trust what your partners deliver. So the guarantee or delta between what's in your TEE/SGX is a lot messier than when you're apple and you have the SoC, SEP, iBoot stages and kernel all measured and assured to levels only a vertical manufacturer could know.
Most devices/companies/bundles just assume it kinda sucks and give up (TCG Opal, TPM, BitLocker: looking at you!) and make most actual secure methods optional so the bottom line doesn't get hit.
That means (for Android phones) your baseband and application processor, boot rom and boot loader might all be from different vendors with different levels of quality and maturity, and for most product lifecycles and brand reputation/trust/confidence, it mostly just needs to not get breached in the first year it's on the market and look somewhat good on the surface for the remaining 1 to 2 years while it's supported.
Google is of course trying hard to make the ecosystem hardened, secure and maintainable (it has been feasible to get a lot of patches in without having to wait for manufacturers or telcos for extended periods of time), including some standards for FDE and in-AOSP security options, but in almost all retail cases it is ultimately up to the individual manufacturer of the SoC and of the integrated device to make it actually secure, and most don't, since there is not a lot of ROI for them. Even Intel's SGX is somewhat of a clown show...
Samsung does try to implement their own, for example; I think KNOX is both the brand name for the software side as well as the hardware side, but I don't remember if that was strictly Exynos-only.
The supply chain for UEFI Secure Boot has similar problems, especially with the PKI and the rather large supply-chain attack surface. But even if that weren't such an issue, we still get "TEST BIOS DO NOT USE" firmware on production mainboards in retail. Security (and cryptography) is hard.
As for what the difference is between BFU/AFU etc., imagine it like this: essentially, some cryptographic material is no longer available to the live OS. Instead of hoping it gets cleared from all memory, it is a lot safer to assume it might be messed with by an attacker and to drop all keys and reboot the device to a known disabled state. That way, without a user present, the SEP will not decrypt anything (and it would take a SEPROM exploit to start breaking into the thing - nothing the OS could do about it, nor someone attacking the OS).
There is a compartmentalisation where some keys and keybags are dropped when locked, hard locked and BFU locked; the main difference between them is the amount of stuff that is still operational. It would suck if your phone stopped working as soon as you locked it (no more notifications, no more background tasks like email and messaging, no more music, etc.).
On the other hand, it might be fine if everything that was running at the time of the lock-to-lockscreen keeps running, but no new crypto is allowed during the locked period. That means everything keeps working, but if an attacker were to try to access the container of an app that isn't open, it wouldn't work - not because of some permissions, but because the keys aren't available and the means to get the keys is cryptographically locked.
That is where the main difference lies with more modern security, keys (or mostly, KEKs - key encryption keys) are a pretty strong guarantee that someone can only perform some action if they have the keys to do it. There are no permissions to bypass, no logic bugs to exploit, no 'service mode' that bypasses security. The bugs that remain would all be HSM-type bugs, but SEP edition (if that makes sense).
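A concrete way for an app developer to see that "the keys just aren't there" model is the keychain accessibility classes. A minimal sketch, assuming the standard Security framework APIs:

    import Foundation
    import Security

    // Sketch: store a secret so it is only readable while the device is unlocked.
    // While locked, SecItemCopyMatching fails not because of a permission check,
    // but because the class key that wraps the item simply isn't available.
    func storeSecret(_ secret: Data, account: String) -> OSStatus {
        let baseQuery: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrAccount as String: account
        ]
        SecItemDelete(baseQuery as CFDictionary) // replace any existing item

        var addQuery = baseQuery
        addQuery[kSecValueData as String] = secret
        addQuery[kSecAttrAccessible as String] =
            kSecAttrAccessibleWhenUnlockedThisDeviceOnly
        return SecItemAdd(addQuery as CFDictionary, nil)
    }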
Apple has some sort of flowchart to see what possible states a device and the cryptographic systems can be in, and how the assurance for those states work. I don't have it bookmarked but IIRC it was presented at Black Hat a year or so ago, and it is published in the platform security guide.
come on dude. they're doing it by default, for > billion people, with their army of lawyers sitting around waiting to defend lawsuits from shitty governments around the world.
In Slovenia, devices have to be turned off the moment they are seized from their owner, prior to putting them into airplane mode.
WHY the hell don't those actions require a passcode or bio authentication??
How often do muggers carry foil pouches, even in first-world countries? Certainly not in places where there's a mugging almost every week. Some way to track the device on its way to wherever they strip it for parts would be more helpful than not being able to track it at all.
It feels like something that needs to be as easy as possible, for safety reasons if not anything else.
Now what I'd like to see is an extension of their protocol that is used to locate iPhones that would also let them accept a "remote wipe" command, even when powered down.
iPhones are still trackable while powered off, at least for a while.
Theft Detection Lock uses AI, your device's motion sensors, Wi-Fi and Bluetooth to detect if someone unexpectedly takes your device and runs away. If Theft Detection Lock detects that your device is taken from you, it automatically locks your device's screen to protect its content.
The downside is that you need to manually disable the automation if you actually wish to use airplane mode (and also remember to re-enable it when done).
I can remember to disable the shortcut whenever I fly and need to enable it.
If they pop my SIM (my provider doesn't use eSIMs...) then there's a PIN on it to prevent use in another device.
The operating theory is that higher management at Apple sees this as a layer of protection. However, word on the street is that members of actual security teams at Apple want it to be unencrypted for the sake of research/openness.
“I also downloaded an older kernel where Apple accidentally included symbols and manually diffed these versions with a focus on the code related to inactivity reboot. The kernel has three strings relating to the feature:” sounds like a little luck there for sure !
How many people are using their phones for some other purpose for which they want their phones to never reboot? And what are they actually doing with their phones?
If you're using the iPhone as some type of IoT appliance, either time limit would be disruptive. But if you e.g. enable Guided Access, the phone will stay unlocked and so shouldn't reboot.
If you're using the iPhone as a phone, who the heck doesn't touch their phone in 24 hours? Maybe if you're on some phone-free camping trip and you just need the iPhone with you as an emergency backup—but in that case, I don't think Inactivity Reboot would be particularly disruptive.
Maybe Apple will lower the window over time?
For people who don’t leave the house that often and have other Apple devices, this suddenly becomes much more frequent.
This feature is not at all related to wireless activity. The law enforcement document's conclusion that the reboot is due to phones wirelessly communicating with each other is implausible. The older iPhones before iOS 18 likely rebooted due to another reason, such as a software bug.
Moreover, you'd have to have some inhibitory signal to prevent everybody's phones restarting in a crowded environment, but any such signal could be spoofed.
That means it's going to be extremely difficult to disable this even if iOS is fully compromised.
Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.
However, considering Apple's excellent track record on these kinds of security measures, I would not at all be surprised to find out that a next-generation iPhone would have the SEP force a reboot without the kernel's involvement.
What this does is reduce the window of time (to three days) between when an iOS device is captured and when a usable* kernel exploit is developed.
* there is almost certainly a known kernel exploit out in the wild, but the agencies that have it generally reserve using them until they really need to - or they’re patched. If you have a captured phone used in a, for example, low stakes insurance fraud case, it’s not at all worth revealing your ownership of a kernel exploit.
Once an exploit is “burned”, they distribute them out to agencies and all affected devices are unlocked at once. This now means that kernel exploits must be deployed within three days, and it’s going to preserve the privacy of a lot of people.
Create a timer function to run a shutdown on whatever time interval you choose. Change shutdown to "restart".
https://support.apple.com/en-us/105121
> With Screen Time, you can turn on Content & Privacy Restrictions to manage content, apps, and settings on your child's device. You can also restrict explicit content, purchases and downloads, and changes to privacy settings.
https://support.apple.com/en-us/111795
> Guided Access limits your device to a single app and lets you control which features are available.
https://support.apple.com/guide/apple-configurator-mac/start...
If you were using an iPad as a home control panel, you'd probably disable the passcode on it entirely - and I believe that'd disable the inactivity reboot as well.
> Apple's Next Device Is an AI Wall Tablet for Home Control, Siri and Video Calls
https://news.ycombinator.com/item?id=42119559
via
> Apple's Tim Cook Has Ways to Cope with the Looming Trump Tariffs
There are literally emails from police investigators spreading word about the reboots, which state that the device goes from a state where they can extract data (AFU) to one where they can't get anything out of it (BFU).
It's a bit pointless, IMHO. All cops will do is make sure they have a search warrant lined up to start AFU extraction right away, or submit warrant requests with urgent/emergency status.
True. I wonder if they've considered the SEP taking a more active role in filesystem decryption. If the kernel had to be reauthenticated periodically (think oauth's refresh token) maybe SEP could stop data exfiltration after the expiry even without a reboot.
Maybe it would be too much of a bottleneck; interesting to think about though.
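To make the refresh-token analogy concrete, here's a purely hypothetical sketch - none of these types or calls correspond to any real SEP interface, it's just the shape of the idea:

    import Foundation

    // Hypothetical: the SEP hands the kernel a short-lived lease on an opaque
    // key handle and simply stops renewing it after the window closes.
    struct KeyLease {
        let keyHandle: UUID     // opaque handle, never the raw key
        let expiresAt: Date
        var isValid: Bool { Date() < expiresAt }
    }

    protocol SecureElement {
        // Returns a fresh lease only if user presence was verified recently enough.
        func renewLease(previous: KeyLease) -> KeyLease?
    }

    func decrypt(_ ciphertext: Data, lease: inout KeyLease, sep: SecureElement) -> Data? {
        if !lease.isValid {
            guard let renewed = sep.renewLease(previous: lease) else {
                return nil  // lease expired and SEP refuses: no further decryption
            }
            lease = renewed
        }
        return ciphertext // placeholder for decryption via the leased handle
    }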
If the kernel is compromised, this is pointless I think. You could just "fake it".
SEP is already very active in filesystem encryption. The real important thing is evicting all sensitive information from memory. Reboot is the simplest and most effective, and the end result is the same.
But I don't know where the idea of disabling a reboot timer came from. I'm simply saying that now you have to have a kernel exploit on hand, or expect to have one within three days - a very tall order indeed.
We (the public) do not know if SEP can control nRST of the main cores, but there is no reason to suspect that it cannot.
This is still being actively researched. I have no evidence, but would not be surprised to find out that a SEP update has been pushed that causes it to pull RAM keys after the kernel panic window has closed.
* This may have been changed since the last major writeup came out for the iPhone 11.
If you were to allow a user to change it, you'd have to safeguard the channel by which the user's desired delay gets pushed into the SE against malicious use, which is inherently hard because that channel must be writable by the user. Therefore it opens up another attack surface by which the inactivity reboot feature itself might be attacked: if the thief could use an AFU exploit to tell the SE to only trigger the reboot after 300 days, the entire feature becomes useless.
It's not impossible to secure this - after all, changing the login credentials is such a critical channel as well - but it increases the cost to implement this feature significantly, and I can totally see the discussions around this feature coming to the conclusion that a sane, unchangeable default would be the better trade-off here.
Then why not simply hardcode some fixed modes of operation? Just as an example, a forced choice between 12, 24, 48, or a maximum of 72 hours. You can't cheat your way into convincing the SE to set an unlimited reset timer. I'm sure there must be a better reason.
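Something like this is all I mean - a purely hypothetical illustration of a hardcoded whitelist of windows, not any real API:

    // Hypothetical: the secure element only ever accepts one of a few hardcoded
    // windows, so no attacker-supplied delay can reach it.
    enum InactivityRebootWindow: UInt32 {
        case hours12 = 12, hours24 = 24, hours48 = 48, hours72 = 72
    }

    func requestedWindow(_ raw: UInt32) -> InactivityRebootWindow {
        // Anything outside the whitelist falls back to the strictest setting.
        InactivityRebootWindow(rawValue: raw) ?? .hours12
    }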
Plus, vulnerability often follows complexity, whether it's human-written validation logic being attacked for six months in a lab somewhere in Israel or an overly complex UX exposed to some soccer mom in Minneapolis.
Save money. Save headaches. K.I.S.S.
SpringBoard is the process that shows the home screen, and does part of the lifecycle management for regular user apps. (i.e. if you tap an icon, it launches the app, if you swipe it away in the app switcher, it closes the app)
It is restarted to make certain changes take effect, like the system language. In the jailbreaking days, it was also restarted to make certain tweaks take effect. Of course, it can also just crash for some reason (which is likely what is happening to you)
> Turns out, the inactivity reboot triggers exactly after 3 days (72 hours). The iPhone would do so despite being connected to Wi-Fi. This confirms my suspicion that this feature had nothing to do with wireless connectivity.
Now, a hacker/state who has penetrated a device can upload data from the local device to a CNC server.
But that seems risky as you need to do it again and again. Or do they just get into your device once and upload everything to CNC?
Here’s some info about how some spyware works: