Having the LED control exposed through the firmware completely defeats this.
> All cameras after [2008] were different: The hardware team tied the LED to a hardware signal from the sensor: If the (I believe) vertical sync was active, the LED would light up. There is NO firmware control to disable/enable the LED. The actual firmware is indeed flashable, but the part is not a generic part and there are mechanisms in place to verify the image being flashed. […]
> So, no, I don’t believe that malware could be installed to enable the camera without lighting the LED. My concern would be a situation where a frame is captured so the LED is lit only for a very brief period of time.
The LED should be connected to the camera's power, or maybe the camera's "enable" signal. It should not be operable via any firmware in any way.
The LED should also be driven through a one-shot trigger (a transistor plus a capacitor) so that it lights up for at least, say, 500 ms no matter how short the input pulse is. This would keep single-frame captures from going unnoticed.
Doing that, of course, would add a few cents to the BOM, and quite a bit more in being paranoid, well, I mean, customer-centric.
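For scale (component values purely illustrative, not a real design): an RC time constant of R·C = 100 kΩ × 10 µF = 1 s would comfortably cover that 500 ms floor.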
Not a great laptop otherwise, but that was pretty good!
My current notebook, manufactured in 2023, has a very thin bar above the screen for the camera, so I need a thin, U-shaped attachment for the slider, which is hard to find.
[1]: https://www.printables.com/model/2479-webcam-cover-slider
There is no physical microphone cover there, is there?
Also, loudspeakers can act as microphones, too.
In other words, paranoia gets exhausting in modern times.
(And my smartphone has a replaceable battery for that reason, so I can at least sometimes enjoy potentially surveillance-free time)
I've been using it daily for 3 years, for watching movies and as my main notebook while traveling.
It's not at all abandonware or e-waste.
No shit. How is the current state btw?
I suppose still not ready to be a daily driver to replace my normal phone?
I'd say that depends on your definition of daily driver and/or how many compromises you're willing to make. I occasionally see members at my larger hackerspace running around with those or other seemingly "unfit" hardware and not complaining too much about it ;)
As for the phone features, their reliability depends on the reliability of the modem firmware, which has always been shaky.
Apart from the inconvenience, it was somehow liberating knowing there was no microphone physically active.
The other explanation is that one of the contacts who was part of the conversation did something directly related to thing X, the thing you spoke about, or did something the algorithm has seen other people do that relates to X, and you got shown ads based on your association with that person.
I've also worked at a FAANG and never seen proof of such claims anywhere in the code, and with the number of people working there who care deeply about these issues, I'd expect it to have leaked by now if this were happening but siloed...
People have been making claims like this since at least the early 90s, about TV back then, and no one ever credibly claims to have worked on something like this. I've worked with purchased ad data and I've never seen this data or anything that implies it exists. It seems far more likely that it's a trick of memory. You ignore most ads you see, but you remember the ones that relate to odd topics that interest you.
It's going to happen sooner or later and people will accept it, just like they accepted training of AI models on copyrighted works without permission, or SaaS, or AWS/PaaS, or sending all their photos to Apple/Google (for "backup").
For this to work, hangouts.google.com had to not send the HTTP header that blocks iframing, but thankfully, if you make up a URL, the 404 page served on that domain does not include that header.
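The frame-blocking header in question would presumably have been X-Frame-Options (modern pages often use a Content-Security-Policy frame-ancestors directive instead); a quick way to check what any given URL actually sends, sketched with Python's requests library:

```python
# Illustrative check of which responses carry a frame-blocking header.
# The URLs are just the ones discussed above; any URL works.
import requests

for url in (
    "https://hangouts.google.com/",                        # real page
    "https://hangouts.google.com/definitely-not-a-page",   # made-up URL -> 404
):
    r = requests.get(url)
    print(r.status_code,
          r.headers.get("X-Frame-Options"),
          r.headers.get("Content-Security-Policy"))
```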
So a webcam hack that lets them watch my 16 year old daughter study would also let them watch her sleeping, getting dressed, and making out with her boyfriend.
My laptop is in my bedroom in winter, right now, because it's one of the smallest rooms and I can heat it easily. I use it in other parts of the house in the other seasons. I do have a sliding cover on the camera. I bought it years ago. The main issue is the microphone.
And yeah, if they had access to my webcam, they would just see a guy staring into the screen or walking back and forth in the room.
Nobody who is themselves sane is going to judge another for random crap they say when they think themselves alone.
You can also use an LED as a light sensor.
and I also came across a YT vid of a console that used a piezoelectric speaker for motion sensing.
I wonder if you could use a track pad to pick up sound.
All phones are suspect. We should go back to only carrying pagers.
I would not be surprised if the same is true for some other manufacturers, too, but I can only speak definitively to Macs.
The issue is that lids close too tightly now, so anything thicker than a piece of tape ends up focusing all the pressure applied to the closed lid on that one spot in the glass, since the cover holds the display slightly off the base of the laptop when closed.
Not as pretty as a custom cover but cost-effective and can generally be done in under a minute with common office supplies (post-it + scissors) which has its own advantages.
There’s been no damage to the screen from the adhesive although occasionally I’ve had to clean the residual adhesive with 70% IPA, but nothing worse than the typical grime that most laptop monitors pick up.
Possible. I have one IPS monitor with a spot on the screen where the color is pale. I had a post-it note there, and I guess something bad happened when I tore it off.
I've never tried them on a matte or coated screen though.
Wiring it in like this is suboptimal because you might never see the LED light up if a still photo is surreptitiously captured. This has been a problem before: illicit captures that happen so quickly the LED is lit too briefly to notice.
Controlling the LED programmatically from isolated hardware like this is better, because then you can light up the LED for long enough to make it clear to the user something actually happened. Which is what Apple does -- three seconds.
The only time that isolated hardware approach is beneficial in terms of cost is when you already have to have that microcontroller there for other reasons, and the cost difference we are talking about is on the order of a few cents max.
I don't see why they should be mutually exclusive
Firmware programming should require physical access, like temporarily installing a jumper, or pushing some button on the circuit board or something.
(I don't want to suggest signed images, because that's yet another face of the devil).
it sounds like Apple is doing something similar to what you suggest.
Regardless, that's a pretty strong claim. I'd love to learn more if you have a link that can back you up!
:00 Photo Booth window open
:03 Camera LED lights up
:05 First image displayed
That might make it harder to develop a hack, but one would hope that if the hardware team tied the LED to a hardware signal, it would not matter if the firmware were reflashed.
You need some logic to enforce things like a minimum LED duration that keeps the LED on for a couple seconds even if the camera is only used to capture one brief frame.
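For illustration only, here is roughly the behavior that logic has to implement, sketched in Python (the whole point of this thread being that the real enforcement should live in hardware, not firmware; the 3-second figure is just the one discussed here):

```python
import time

class MinOnTimeLed:
    """Keeps the LED asserted for a minimum time after the camera goes active."""

    def __init__(self, drive_led, min_on_s=3.0):
        self.drive_led = drive_led    # callback that actually drives the LED
        self.min_on_s = min_on_s
        self.hold_until = 0.0
        self.camera_on = False

    def set_camera(self, on: bool):
        self.camera_on = on
        if on:
            self.drive_led(True)
            self.hold_until = time.monotonic() + self.min_on_s
        self._update()

    def tick(self):                   # call periodically
        self._update()

    def _update(self):
        # The LED may only go dark once the camera is off AND the hold time has elapsed.
        if not self.camera_on and time.monotonic() >= self.hold_until:
            self.drive_led(False)

led = MinOnTimeLed(lambda on: print("LED", "on" if on else "off"))
led.set_camera(True)    # a single brief frame...
led.set_camera(False)   # ...still leaves the LED lit until 3 s have passed
```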
I have a script that takes periodic screenshots of my face for fun and I can confirm the LED stays on even if the camera only captures one quick frame.
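Such a script can be as small as this (a sketch using OpenCV; the commenter's actual script is of course unknown, and the interval and filenames are made up):

```python
import time
import cv2  # pip install opencv-python

INTERVAL_SECONDS = 600  # illustrative: one snapshot every 10 minutes

while True:
    cap = cv2.VideoCapture(0)      # open the default webcam (LED should light here)
    ok, frame = cap.read()         # grab a single frame
    cap.release()                  # release immediately; the LED hold time is enforced below the OS
    if ok:
        cv2.imwrite(time.strftime("face-%Y%m%d-%H%M%S.jpg"), frame)
    time.sleep(INTERVAL_SECONDS)
```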
A custom PMIC for what's known as the forehead board was designed that has a voltage source that is ALWAYS on as long as the camera sensor has power at all. It also incorporates a hard (as in, tie-cells) lower limit for PWM duty cycle for the camera LED so you can't PWM an LED down to make it hard to see. (PWM is required because LED brightness is somewhat variable between runs, so they're calibrated to always have uniform brightness.)
On top of this the PMIC has a counter that enforces a minimum on-time for the LED voltage regulator. I believe it was configured to force the LED to stay on for 3 seconds.
This PMIC is powered from the system rail, and no system rail means no power to the main SoC/processor so it's impossible to cut the 3 seconds short by yoinking the power to the entire forehead board.
tl;dr On Macbooks made after 2014, no firmware is involved whatsoever to enforce that the LED comes on when frames could be captured, and no firmware is involved in enforcing the LED stay on for 3 seconds after a single frame is captured.
0: https://www.usenix.org/system/files/conference/usenixsecurit...
[1] https://support.apple.com/guide/security/hardware-microphone...
We have no way of verifying that anything they said in that document is true.
That said, I still use "Nanoblock" webcam covers and monitor for when either the camera or microphone are activated.
Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby. But naked images of Abby can head off into the ether and be propagated more or less forever, turn up on hate sites, be detrimental to careers etc.
If your threat model is leaking company secrets then sure, microphone bad, as is anything having access to any hardware on your machine.
So sure, maybe people ought to be more concerned about microphones as well, rather than instead.
> Nobody but Abby and Ben care if Ben is caught admitting he cheated on Abby.
That destroys families, standing within a community, and very often careers.
> chats and email, browsing history, etc are all much more likely to result in harm if leaked than a recording of you innocently in your home.
This is far less of an intrusion for most people than recording what they are actually doing in their own home IRL. People know that information can be hacked, they don't expect and react quite differently to someone actually watching them.
> That destroys families, standing within a community, and very often careers.
Yes, but it doesn't stay on the internet forever in quite the same way.
Now I get to some extent what you're saying - aren't the consequences potentially worse from other forms of information leak?
Maybe. It depends on how you weight those consequences. I'd put (for example) financial loss due to fraud enabled by hacking my accounts as far less important than someone spying on me in my own home. Even if they didn't use that to then extort me, and were using the footage for ... uh ... personal enjoyment. I think a lot of people will feel the same way. The material consequences might be lesser, but the psychological ones not so much. Not everything is valued in dollars.
It's also known that people are not very good at assessing risk. People are more worried about dying at the hands of a serial killer than about dying in a car crash or slipping in the shower. I feel you're underplaying the psychological harm of having all of your data crawled through by a creep (that would include all of your photos, sites visited, messages sent, everything).
All I can really say is that if someone gained access to my machine, the camera would be the least of my concerns. That's true in nearly every context (psychological, financial, physical, etc).
I presume the reason behind this is that video is much more likely to be re-shared. Sending Bob a zip of someone's inbox is unlikely to be opened, and even less likely to be shared with strangers. But send Bob a video of Alice, and he might open it. Heck, he might not know what the video is until he opens it. So even if he is decent, he might still see it. And if he is less decent and shares it, strangers are much more likely to actually view it.
It's not just about nudity and extortion, but someone having access to watch you, whenever they feel like, in your safe space. That sense of violation that people also feel when (for instance) they have been the victim of burglary - the missing stuff is often secondary to the ruined sense of security. There's a vast difference between leaving your curtains open and having someone spying on you from inside your own home.
Is it rational to put this above other concerns? That's a whole different debate and not one I'm particularly interested in. But it explains why people are concerned about cameras over 'mere' data intrusion.
This isn't true at all, even for private citizens. Your friends, parents, children, and colleagues are all likely to care.
When people are extorted for these kinds of things it's usually catfishing that leads to sexual acts being recorded. That's not related to cybersecurity.
edit: s/baked/naked/ :D
I may be the oddball here, but that 3 second duration does not comfort me. The only time I would notice it is if I am sitting in front of the computer. While someone snapping a photo of me while working is disconcerting, it is not the end of the world. Someone snapping photos while I am away from the screen is more troublesome. (Or it would be if my computer was facing an open space, which it doesn't.)
The exploit mitigations to prevent you from getting an initial foothold.
The sandboxing preventing you from going from a low-privileged to a privileged process.
The permissions model preventing unauthorized camera access in the first place.
The kernel hardening to stop you from poking at the co-processor registers.
etc. etc.
If all those things have failed, the last thing to at least give you a chance of noticing the compromise, that's that LED. And that's why it stays on for 3 seconds, all to increase the chances of you noticing something is off. But things had to have gone pretty sideways before that particular hail-mary kicks in.
And, of course, covers are an option.
[1] https://www.businessinsider.com/lenovo-thinkshutter-laptops-...
- The LED is in parallel, but with the sensor voltage supply, not the chip
- Camera sensor idle voltage = low voltage for the LED (be it with stepping if needed)
- Camera sensor active voltage = high voltage for the LED (again, stepping if needed)
- little capacitor that holds enough charge to run the LED for ~3 seconds after camera goes back to idle voltage.
Good luck hacking that :)
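Rough sizing for that hold-up capacitor (numbers purely illustrative, not from any real design): to keep an LED drawing ~5 mA lit for 3 seconds while the voltage droops by no more than 1 V, you need roughly C = I·t/ΔV = 5 mA × 3 s / 1 V ≈ 15 mF, i.e. on the order of 15,000 µF, so either a fairly chunky capacitor or a much lower LED current.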
If the LEDs come from a different supplier one day, who is going to make sure they're still within the spec for staying on for 3 seconds?
(And yes, I have long since parted ways with Apple)
Edit:
And to add on: That capacitor needs time to charge so now the LED doesn't actually come on when the sensor comes on, it's slightly delayed!
Agreed, however, that the LED should be controlled by the camera sensor idle vs. active voltage.
You'll pardon us all if we don't really believe you, because a) there's no way for any of us to verify this, and b) Apple lied about it before, claiming the LED was hard-wired in, blah blah, same thing, except it turned out it was software-controlled by the camera module's firmware.
The LED being "hard-wired" is a tricky statement to make, and I actually wasn't aware Apple has publicly ever made a statement to that effect. What I can say is that relying on the dedicated LED or "sensor array active" signal some camera sensors provide, while technically hard-wired in the sense there is no firmware driving it, is not foolproof.
Source?
EDIT: It’s not just a capacitor, it’s a full custom chip, that can’t be software-modified, that keeps the light on for 3 seconds. https://news.ycombinator.com/item?id=42260379
If you successfully compromise the host OS and also the secure enclave firmware, that might be enough to let you turn on the camera (without vsync) and reconstruct the correct image via later analysis... but at that point you have committed tens of millions to the hack (so you'd better not overuse it or it'll get noticed & patched).
> no firmware is involved in enforcing the LED stay on for 3 seconds after a single frame is captured.
I think it's simpler to assume that most devices can be hacked and the LED indicator isn't infallible than to always keep in mind which device lines are supposed to be safe and which ones aren't.
https://www.tomsguide.com/phones/iphones/iphone-16s-a18-chip...
Maybe switch a pre-charged capacitor onto the LED whenever the circuit is activated? A "minimum duty cycle" for the LED might help solve this.
On my ThinkPad it’s instead painted with a red dot. Because, obviously, the conventional meaning of a red dot appearing on a camera is “not recording”.
My Latitude 7440 has a physical slider switch that covers the camera, in addition to turning it off in a software-detectable way (it shows "no signal" and not just a black screen once the slider is about 50% covering the lens). My only criticism of this is that it's subtle and at a glance hard to tell the difference between open and closed, but I guess you just get used to the slider being to the right.
I was just testing and the white LED comes on when I open something that wants to use the camera, even when the cover is closed. This seems like a useful way to detect something (eg malware) trying to use the camera, and is a good reason to not bluntly cut power to the entire camera module.
There's just as valid an argument for doing the same for phones. How many phones ship with camera covers, and how many users want them?
You can get a stick on camera cover for $5 or less if you want one. I have them on my laptops but not on my phone. They came in packs of 6 so I have several left.
In some over-engineered world, when the camera cover is engaged the webcam video feed would be replaced by an image of the text "Slide camera cover open" (in the user's language) and an animation showing the user how to do so.
https://www.youtube.com/watch?v=k6AsIqAmpeQ&t=1145s
And adding 2+2, the man being interviewed (Nirav Patel) is the same man who replied to my comment (HN user nrp), i.e. the man who actually did the overengineering.
If you rewind to 17:03, he talks about the changes of what the switch does (previously: USB disconnection, now: as he described in grandparent comment).
That said, I really can't comment on how durable it is. I only remove the cover about a half dozen times a year.
(It could also be contention between thickness of the display vs enterprise customer sensitivity to cameras)
There's also the scenario where the LED or the connections to it simply fail. If the circuit doesn't account for that, then boom, now your camera can function without the light being on.
Can't think of any other pitfalls, but I'm sure they exist. Personally, I'll just continue using the privacy shutter, as annoying as that is. Too bad it doesn't do anything about the mic input.
There are a LOT of pitfalls still (what if you manage to pull power from the entire camera sub-assembly?); this was a fun one to threat-model.
For one the energy to take a picture is probably enough to power a light for a noticeable amount of time.
And if it isn't, a capacitor that absorbs energy and only allows energy through once it's full would allow the light to remain on for a couple of seconds after power subsides.
About being slow, I suppose it does run Windows and its infamous 'Defender'.
I thought this was a solved problem, like, decades ago? At least I remember even the first gen MacBooks having accurate battery percentages, and it’s a more vague memory but my PowerBook G4 did too I think.
Same for your Windows idea...
To put it simply: the reported charge level is usually just a lookup from the battery voltage (measured not under load).
I do not know whether the battery is actually experiencing that sudden loss in charge, nor do I care, because in practice the end result is the same...
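As a toy illustration of what such a lookup can look like (the voltage/percentage pairs are generic Li-ion ballpark values, not anyone's real calibration data):

```python
# Illustrative voltage -> state-of-charge lookup with linear interpolation.
OCV_TABLE = [  # (open-circuit voltage in V, percent)
    (3.00, 0), (3.45, 10), (3.60, 25), (3.70, 50),
    (3.85, 70), (4.00, 85), (4.20, 100),
]

def soc_from_voltage(v: float) -> float:
    if v <= OCV_TABLE[0][0]:
        return 0.0
    if v >= OCV_TABLE[-1][0]:
        return 100.0
    for (v0, p0), (v1, p1) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if v0 <= v <= v1:
            # interpolate between the two surrounding table entries
            return p0 + (p1 - p0) * (v - v0) / (v1 - v0)
    return 0.0

print(soc_from_voltage(3.78))  # ~61% with these made-up points
```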
Also, you'd want a hold-up time for the light (a few seconds at least), as taking a picture would otherwise flash it for only 1/60 of a second or so.
Even so, this doesn't solve the whole attack vector. How long should the light stay on after the camera goes to standby before a user considers it a nuisance? 5 seconds? So if I turn my back for longer than that, I'm out of luck anyway.
The anti-TSO measure would be a hardware serial counter with a display on the camera. Each time the camera is activated, the number is incremented, effectively forming a camera odometer. Then if my previous value does not match the current value, I know it's been activated outside of my control.
Maybe the Air?
Threads filled with inaccurate posts like that are a large part of the reason that educating the general populace on security issues is so difficult.
These warnings have hysteresis and logging. They don't disappear the moment you close the device, and you can see which app is using which device.
...and no, ambient light sensor handles the true tone and brightness. It's not the camera.
They briefly saw the LED flash.
But it was not on for any length of time and you could miss it.
This stuff should be completely in hardware, and sensible - stay on for a minimum time, and have a hardware cutoff switch.
Really nasty world they've made for themselves, blackmailing, extorting and generally controlling other people (mostly women and girls, but some men too) with threats of releasing compromising material.
There's no standard I know of that, like "Secure EFI / Boot" (or whatever the exact name is), locks down the API of peripheral firmware and could statically verify that said API doesn't allow for unintended exploits.
That being said, imagination vs. reality: the Turing tarpit has to sit higher in the chain than the webcam firmware when flashing new firmware via internal USB was the exploit method.
(Source: I architected the feature)
> Macbooks manufactured since 2014 turn on the LED whenever any power is supplied to the camera sensor, and force the LED to remain on for at least 3 seconds.
That convinced me originally I think, good old days! I'd almost forgotten about it. The way you phrased it, it sounded like 50% OS concern to me.
But if the cam and LED really share a power supply, and the LED is always on without any external switch, good then!
(No, I’m not actually worried about this, I’m far too unimportant for anyone to make a targeted attack against)
edit: looks easily bypassed https://github.com/cormiertyshawn895/RecordingIndicatorUtili...
My guess is that, assuming the most basic and absolute physical design, the light would flash for silly things like booting, upgrading firmware, checking health, or stuff like that.
In this case I was referring to false positives to the user.
This would mean we can't update the firmware without causing the user some paranoia.
Also: would an app requesting permission to use the camera itself send some power to the camera to verify it is available? In a similar vein, what about checking whether the camera is available before even showing the user the button to use it?
Maybe there are solutions to this; I'm just pointing out some reasons they may have gone the software route instead of the hardware route.
Somebody here has also mentioned Apple using the camera for brightness and maybe color temperature measurement, for which they wouldn't want to enable the LED (or it would effectively always be on).
That doesn't automatically make that a good tradeoff, of course; I'd appreciate such a construction.
That is not true. MacBooks have separate light sensors. And the camera physically cannot activate without the LED lighting up and a notification from the OS. People say a lot of stupid things in the comments…
https://github.com/Hermann-SW/imx708_regs_annotated?tab=read...
Likely UX over security and privacy.
LED, no LED, who cares, plastic is blocking the lens. Move the cover away, say hi on Zoom, wave, turn the camera back off, cover on, and stay with audio only, as with most meetings :)
"Job done boss!"
That's it. That's what happens. Nobody ever reviews anything in the general industry. It's extremely rare for anyone to raise a stink internally about anything like this, and if they do, they get shouted down as "That's more expensive" even if it is in every way cheaper, or "We'll have to repeat this work! Are you saying Bob's work was a waste of time and money!?" [1]
[1] Verbatim, shouted responses I've received for making similar comments about fundamentally Wrong things being done with a capital W.
I feel really dirty calling lawyers the good guy here, but ...
Other organizations, like law enforcement, are also ambivalent about this.
The easy solution, of course, is a folded business card or piece of tape. But tbh I'm not surprised they didn't implement that approach, and likely deliberately.
This definitely happened on Macs in the past too, then they went into damage-control mode. Not only did Apple have access to turn off the LED while the camera was filming, but there was also a "tiny" company no one had ever heard of that happened to have the keys allowing the LED to be switched off too. Well, "tiny company" / NSA, cough cough, maybe.
After that they started saying, as someone commented, that it requires a firmware update to turn the LED off.
My laptop has a sticker on its camera since forever and if I'm not mistaken there's a famous picture of the Zuck where he does the same.
I've got bridges to sell to those who believe that the LED has to be on for the camera to be recording.
SPEAKE(a)R: Turn Speakers to Microphones for Fun and Profit
It is possible to manipulate the headphones (or earphones) connected to a computer, silently turning them into a pair of eavesdropping microphones - with software alone. The same is also true for some types of loudspeakers. This paper focuses on this threat in a cyber-security context. We present SPEAKE(a)R, a software that can covertly turn the headphones connected to a PC into a microphone. We present technical background and explain why most of PCs and laptops are susceptible to this type of attack. We examine an attack scenario in which malware can use a computer as an eavesdropping device, even when a microphone is not present, muted, taped, or turned off. We measure the signal quality and the effective distance, and survey the defensive countermeasures.
[0] https://arxiv.org/abs/1611.07350 (you also need to plug the speaker directly, mostly limiting it to headphones and laptop speakers)
What's notable about this paper is only that they demonstrate it as a practical attack, rather than just a neat fun fact of audio engineering.
As a fun fact, an LED can also be used as a photometer. (You can verify this with just a multimeter, an LED, and a light source.) But I doubt there's any practical attack using a monitor as a photosensor.
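For the curious, the trick can also be done with a microcontroller or a Raspberry Pi by timing how fast the reverse-biased LED's junction capacitance discharges under light. A rough sketch, assuming the LED's cathode is on GPIO 17 and its anode is on ground (pin choice and wiring are illustrative only):

```python
import time
import RPi.GPIO as GPIO

PIN = 17
GPIO.setmode(GPIO.BCM)

def light_decay_time() -> float:
    # Reverse-bias the LED to charge its junction capacitance.
    GPIO.setup(PIN, GPIO.OUT)
    GPIO.output(PIN, GPIO.HIGH)
    time.sleep(0.001)
    # Float the pin and time how long the charge takes to leak away:
    # brighter light -> larger photocurrent -> shorter decay time.
    GPIO.setup(PIN, GPIO.IN)
    start = time.monotonic()
    while GPIO.input(PIN):
        if time.monotonic() - start > 1.0:  # give up in the dark
            break
    return time.monotonic() - start

while True:
    print(f"decay: {light_decay_time() * 1000:.1f} ms (smaller = brighter)")
    time.sleep(0.5)
```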
Not only is it common knowledge, it's how drive-thru kiosks work!
Source: I used to test microphone/speakers for a kiosk OEM.
Not knowing much about how soundcards work, I imagine it would be feasible to flash some soundcards with custom firmware to use the speaker port for input without the user knowing.
Example https://m.youtube.com/watch?v=1NNP6AFkpjs
:-)
I've seen some theatrical DJs bring a cheap pair, snap them in half, and then use them like a "lollipop." Crowd eats it up. Even older school: using a telephone handset: https://imgur.com/a/1fUghXY
That said the most sensitive information is what we already willingly transmit: search queries, interactions, etc. We feed these systems with so much data that they arguably learn things about us that we're not even consciously aware of.
Covering your camera with tape seems like a totally backwards assessment of privacy risk.
Depends on how you look in underwear.
It depends on the person, I don't think you could gain much from me? I don't say credit card numbers out loud, I don't talk about hypothetical crimes out loud. I don't say my wallet seed phrases out loud. I also don't type in my passwords. Yes you could probably find out what restaurant I'm ordering delivery for, but other than that I suppose my conversations are really boring.
For the mic, perhaps you could disable it by plugging an unconnected TRRS plug into the audio jack. I'm not sure how low-level the switching of the microphone source is when you do this, so maybe it's not a good method.
It did most likely physically damage it forever, but at least I now know it's OFF for good.
If you have a case on your phone, it's a lot less destructive too, since you can just stuff the Sugru into the microphone hole in the case. The case I was using was soft rubber, so it was easy enough to pop out the corner of the case to be able to use the microphone for a call.
That wasn't my daily phone at the time though, so I'm not sure how well it would work in reality. I could see myself forgetting to pop out the case when I get a call and the other person just hanging up before I realised what was going on.
It also doesn't work on every phone. I tried the same thing on a Pixel 5 but blocking the mic hole did nothing; that phone uses an under-screen speaker, so maybe there is something similar going on with the mic.
https://support.apple.com/en-ca/guide/security/secbbd20b00b/...
I use the -disu flags
https://mic-lock.com/products/copy-of-mic-lock-3-5mm-metalli...
This still doesn't stop a program from switching the input from external back to the internal mics though afaik
Also, on Qubes OS, everything runs in VMs and you choose explicitly which one has access to the microphone and camera (none by default). The admin VM has no network.
I haven't seen any microphone integrated in the processor.
Yet
Are there examples of using IMUs to get audio data you could point to? A quick search didn't reveal anything.
And there's this post, which includes an audio clip: https://goughlui.com/2019/02/02/weekend-project-mma8451q-acc...
Why?
They are very low-level inputs and generally need a pre-amp just to get the signal out of the microphone. Conceptually, though, they are there, so maybe someone can get it to work.
For video, it is extortion. For microphone, it's much harder.
Nor is there on any free system for which you didn't make every hardware component yourself, as well as audit the executable of the compiler with which you compiled every executable. (You did self-compile everything, hopefully?)
If the components follow standards and have multiple independent implementations, you can be reasonably confident it's not backdoored in ways that would require cooperation across the stack. At least you raise the cost bar a lot. Whereas for a vertically integrated system, made by a company headquartered in a jurisdiction with a national security law that permits forcing companies to secretly compromise themselves, the cost of compromise is so low that it would be crazy to think it hasn't been done.
Purchased music is DRM free. Streaming music was never DRM free, since you arguably do not "own" music that you have not purchased. Though I'm sure record labels would love if they could get DRM back on purchased music again.
But this is a pretty extremist take. Just because a company doesn't push source code and you can't deterministically have 100% certainty, doesn't mean you can't make any assertions about the software.
To refuse to make any claims about software without source is as principled as it is lazy.
Imagine an engineer brought to a worksite without blueprints: can they do no work at all? OK, good for you, but there are engineers who can.
Reversing the software is table stakes for assurance work already so suggesting source is a requirement just doesn’t match reality.
Which is to say, every system in actual widespread use. All such CPUs, GPUs, storage devices, displays, etc. run closed microcode and firmware. It'd be funny if it wasn't so profoundly sad.
And even if they didn't, the silicon design is again, closed. And even if it wasn't closed, it's some fab out somewhere that manufactures it into a product for you. What are you gonna do, buy an electron microscope, etch/blast it layer by layer, and inspect it all the way through? You'll have nothing by the end. The synchrotron option isn't exactly compelling either.
I'll just highlight this excerpt of your own words for you, and usher you to evaluate whether your position is even internally consistent.
Trusting someone to do the right thing when you purchase is different from trusting them not to tamper with things remotely in the future. Companies can change management, humans can change their minds. The time factor is important.
There sure is a difference in threat model, but I don't think the person I was replying to appreciates that, which is kind of what triggered my reply.
For example, I completely trust Emacs maintainers, as I have yet to see any malice or dark patterns coming from them. The same applies to other free and open source software I use on a daily basis. These projects respect my privacy, have nothing to hide, and I have no problem trusting them.
On the other hand, I see more and more dark patterns coming from Apple, say when signed out of their cloud services. They pour millions into their privacy ads, but I do not trust them to act ethically, especially when money is on the table.
Does this not make sense?
That being said, I have seen "patterns" with open source software as well, so I'm hesitant to agree on trusting it. But that's a different problem.
I also know how little hardware, microcode and firmware can be trusted, so that doesn't help either.
Only outstanding individuals such as Jia Tan.
There are actual compromises caught this way too, it's not (entirely) just for show. A high-profile example would be Kaspersky catching a sophisticated data exfiltration campaign at their own headquarters: https://www.youtube.com/watch?v=1f6YyH62jFE
So it is definitely possible, just maybe not how you imagine it being done.
If the attacker has little to lose (e.g. because they're anonymous, doing this massively against many unsuspecting users etc.), the chance of them eventually succeeding is almost certain.
Lenovo put a little physical switch—they call it "ThinkShutter"—that serves to physically obstruct the webcam lens to prevent recording. It's supposed to have only two positions: lens obstructed or not. But if the user accidentally slides it halfway, you can still record video with the lens unobstructed but somehow the webcam LED turns off. It's because the ThinkShutter actually moves 2 pieces of plastic: 1 to cover the lens, 1 to cover the LED. But the piece covering the LED blocks it first, before the other piece of plastic blocks the lens. I discovered this accidentally yesterday while toying with a X1 Carbon... I am reporting it to Lenovo.
They fail to develop a reliable webcam indicator, and patch that with some half-assed attempt at physical view obstruction. The whole approach is a demonstration of bad engineering and unreliability. And that's just the part that became public.
The windows are there just to make the humans inside more comfortable, similar to how many people would be more comfortable without a camera pointed at them.
Flashing firmware is a big hill to climb for bad guys in most people's worlds.
These usually get neither an LED nor a switch, and unlike cameras can't easily be covered, nor pointed away from potentially sensitive topics/subjects.
Also, getting a voice sample in the first place gets significantly easier that way: Not everybody publishes video or audio recordings of themselves online.
Which reminds me, to strengthen your point: it doesn't achieve 100% keystroke recognition, but there is work[1] on keylogging via audio, and 93% accuracy via Zoom-quality audio streams is concerning enough for me.
Lots of ThinkPads have «Microphone is muted» LED. Not exactly what's requested (and is bound to a software mute/unmute shortcut), but it's better than nothing regarding state of machine being observable with a quick glance.
echo 1 | sudo tee /sys/devices/platform/thinkpad_acpi/leds/platform\:\:micmute/brightness
is enough to turn the LED on without muting the mic.
For example, I'd not be happy having my voice auto-transcribed by some malware as I authenticate to my bank providing my SSN etc. (which as an authentication method is of course horribly insecure, but that's a different discussion).
Of course, this will vary from person to person, but as mentioned above, just being able to mechanically cover a camera when required makes it less of an issue for me.
If someone drains my accounts, I'm definitely screwed.
You ring your bank and it's reversed almost instantly. Your photos on the internet you have no way of doing anything about them, they are there forever.
A young Danish woman had nude photos leaked by an old boyfriend. She had her friend take better pictures and posted those herself so now no one can find the original photos. Not suggesting that as a solution, but I thought it was a pretty fun response.
This never really registered with me before a former colleague commented on the nonsense of people putting tape over their cameras. If an attacker has access to your camera or microphone, then they have access to pretty much everything else. The difference in damage is negligible; it's already total for most.
If people are truly concerned about the camera in their laptop, then keep the computer in a dedicated room and shut it down when you're done (or close the lid, if it's a laptop).
Sure, it's kind of dumb that the LED is software-controlled and that there's no physical switch for turning off the microphone, but even having those things done correctly doesn't negate the amount of damage someone could do when they have that kind of access to your devices.
This is obviously incorrect.
There is lots of software that can get access to your camera/microphone but not have access to anything else, like browser-based applications. And on Mac even locally installed applications are limited; getting access to user data directories requires a separate permission grant from webcam access.
You might also simply have nothing incriminating on your machine.
An exception to that rule is if they have hardware switches for turning off the power supply to the camera and microphone.
Currently, I am very happy with my Framework, where the LED is hardwired into the power supplied to the camera[1].
[0]: https://en.wikipedia.org/wiki/Optic_Nerve_(GCHQ)
[1]: https://community.frame.work/t/how-do-the-camera-and-microph...
Can we use it to indicate additional information?
Can we make it standard with the other LEDs?
Can we dim it so it's more pleasant to use at night or make it a customisable colour?
I'm sure plenty of other questions take you down the same path, and you've just destroyed one of the LED's most useful functions.
Hanlon's Razor: Don't attribute to malice what can be attributed to stupidity
Chesterton's Fence (worth googling as it's a nice little parable, or it might be a derivation of the parable, can't quite remember): you can boil it down to: if you don't know what something does, assume it serves a purpose until you've figured out what purpose it used to serve. In this case I'm implying these people are playing the part of wanting to change the fence without knowing its purpose.
I'll bet it went something like this: As originally specified, the user need was "LED privacy indicator for the webcam." Product management turns that into two requirements:
1) LED next to webcam.
2) LED turns on and off when webcam turns on and off.
Requirement 1 gets handed to the EEs, and requirement 2 gets handed to the firmware engineers. By the time a firmware engineer gets assigned the job of making the LED turn on and off, the hardware designers are already 1 or 2 board spins in. If the firmware engineer suggested that we revise the board to better fit the intention of the user needs, one of two things will happen:
1) They'll get laughed out of the room for suggesting the EEs and manufacturing teams go through another cycle to change something so trivial.
2) They'll get berated by management because it's "not the engineers' place to make decisions about product requirements."
Of course this is all spitballing. I've definitely never been given a requirement that obviously should have been a hardware requirement. I've definitely never brought up concerns about the need to implement certain privacy and security-critical features in hardware, then been criticized for that suggestion. And I've definitely never, ever written code that existed for the sole purpose of papering over bad product-level decision making.
Nope, never. Couldn't be me.
On the very first home-made monitoring camera I built with a Raspberry Pi 1 and the camera module, you could disable the LED in the config.
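(If I recall correctly, on those early Pis it was a one-liner in /boot/config.txt: `disable_camera_led=1`.)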
So it seems to be an old pattern. It would definitely make the most sense to focus on privacy and make the LED hard-wired, but here we are.
Nobody sane would hopefully design it this way. :)
In late 2014 was the last big webcam vulnerability "hype" I remember [1], which led to a wave of media attention, webcam covers, vendor statements that LED-control is / will be hard-wired etc.
I'm more interested in how this big wave of attention impacted future laptop designs (like my cheap HP here, which has a built-in camera cover)
[1]: https://www.usenix.org/conference/usenixsecurity14/technical...
It would be good to keep that LED ON well after the camera switches off. (Not sure what the minimum would be without causing an inconvenience, but how about 15 minutes? Long enough to educate users to worry about their privacy, and perhaps take breaks between video calls!) Just a thought.
Many Lenovo laptops have that, even including their gaming line (it's actually even better and more convenient on those, thanks to the larger size available).
It doesn't solve the problem this article talks about, but if that's something that worries you, I would still trust that more than most (and it's a lot less weird-looking than taping your camera).
(I personally just leave the tape there all the time, because if I need to videoconference, I’d rather connect my mirrorless camera with a much better lens and sexy bokeh.)
[0] https://shop.eff.org/products/laptop-camera-cover-set-ii
Cameras and microphones and write enable must have physical switches, not software ones. When will people learn?
Never.
Me, I unplug the camera and mike when not in use.
Seeing the webcam actually vanish from the list of devices is very nice. :D
Your preferences are not everybody's. Personally, I'd be totally fine with a camera and microphone LED that is guaranteed to activate whenever there is power/signal flowing from either.
> Me, I unplug the camera and mike when not in use.
That's a bit hard to do on a laptop that has both built in.
The Framework laptops have two tiny switches near the camera that physically turn off the mic and camera, and it presumably wouldn't be difficult for other manufacturers to follow suit if enough people cared.
I used to design airplane parts and systems. A guarantee isn't worth squat. Being able to positively verify it is what works.
You're right that I don't use a laptop for videoconferencing. I wouldn't use the builtin mike and camera anyway, as a 5 cent microphone can make it hard for the other party to understand you. I use a semi pro mike. If you're in business, I recommend such a setup.
> Being able to positively verify it is what works.
How do you positively verify that a device only contains the microphones you're aware of?
For type A connectors that is only 1,500 cycles. Mini USB connectors raise that to 5,000 cycles. Micro USB and USB-C raise it to 10,000.
For a type A, just plugging and unplugging twice a day every workday would reach 1,500 cycles in a little over 3 years.
What I do now for things that I'm going to plug/unplug a lot where the thing is expensive enough that I don't want to risk the connector wearing out before I'm ready to replace the thing is use a short extension cable or an inexpensive hub. The extension cable or hub can be relatively permanently connected and the thing that is frequently plugged/unplugged connects to that.
I feel like people were pleading for this when people were getting ratted and began taping over their cameras, and the tiny number of laptop manufacturers just ignored what would be a cheap easy change. Eventually, people just accepted that it must be impossible to install a switch. I couldn't ever think of any motivation for a lack of a switch other than government pressure, so I've always assumed that the cameras and microphones are backdoored.
I don't get how "some tape" became the standard solution for these thousand dollar devices.
Black electrical tape was also the solution for the blinking 12:00 on consumer VCRs.
Different persons learn this at different times (or never).
But then market dynamics come into play, as well as the current state of the legal code / enforcement.
I do have a laptop and it has a physical cover I can slide into place. Short of black Blu-Tack, I've not got a decent option for the mic though.
Privacy and security risks of the future loom big.
what would be even better is PHYSICAL HARDWARE POWER SWITCHES for microphones, speakers, and webcam
this ought to be a manufacturer regulation, no more ridiculousness
Each should have their own switch, otherwise they will group them all into one "privacy mode" switch that also includes something you basically can't live without. Like the keyboard doesn't work in privacy mode or something. Plus, I'd like to be able to leave some of these off by default, only switching them on when I want to use that feature.
I imagine a company good at design (e.g. Apple) could make these small, elegant and easy to use.
This is capitalism.
If there are any independent phone makers listening - I would switch from iPhone to Android if it had switches like these.
https://pine64.org/documentation/PinePhone/Privacy_switches/
I looked at pictures of the phone in their store, and cannot see the same switches as are shown on the page you linked to. I'll assume they're under the back cover or something. I hope at some point they'll move these to a quickly-accessible location on the outside. I'll leave them some feedback to that effect as well.
I... don't? Depends on the company, but I trust that my company has no override for the hardware-based LED on my Mac, or for the software-based microphone indicator. If they did, I would consider it highly scandalous for Apple.
I keep covering them up with bits of paper (because like most people, I don't trust LEDs or switches) that look ugly and invariably get blown off by a gust of wind and have to be reapplied when moving.
It just seems like at some point around 2010 some cabal decided that every device with a screen needs to have a camera facing the user and a microphone.
The whole point of a laptop is to be able to move around and travel with it.
FWIW you can still encounter laptops without webcams (MNT Reform comes to mind), and you can also choose to disable or dynamically load/unload the kernel modules for them on Linux distros and BSDs.
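(For USB webcams that's typically the uvcvideo module, so something like `sudo modprobe -r uvcvideo` unloads it until you explicitly load it again.)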
We should always assume that everything is possible in the digital world. And act accordingly.
EDIT: I keep a piece of black electric tape over any of my notebook's webcams.
This.
Some of the Linux webcam drivers have had the option to specify the behavior of the LED via a parameter since way back, including turning it completely off.
I remember this was the case ~20 years ago.
One example (look for the led-option) https://www.kernel.org/doc/html/v5.1/media/v4l-drivers/phili...
This is straight from the documentation:
"But with: `leds=0,0`the LED never goes on, making it suitable for silent surveillance"
Deliverable computer security is something which does _NOT_ exist. If somebody is telling you otherwise, they are trying to sell you something.
Less bad scenario: you need to strip down software (including the SDK) and hardware to the bare minimum, and aim for excruciating simplicity that is still able to do a good enough job (this is currently mostly denied by the "planned obsolescence" and "overkill complexity" from Big Tech, or brain-damaged "standards/SDKs"). Maybe, then, and I say maybe, you may have a little chance, but usually that chip (CPU) you are using is full of backdoors (or convenient bugs) anyway.
As an extra feature, firmware could hook up to the software that uses the webcam to send a "hang up" signal when I turn off the webcam. That way I don't have to find some button in whatever software the person I'm having a call with wants to use — I just "turn off" my webcam.
Personally I didn't think Lenovo's later keyboards were too bad. The one on my T490s was wonderful. However since my work moved to the T14s series, the keyboards have become terrible. The key movement range is too low now, and the feel is crap. It's too bad because Lenovo was the last holdout which still had decent keyboards. The T14s is also bad in other ways, the body got thinner but the screen got a lot thicker and heavier so it's actually worse to carry than the T490s.
Anyway, on topic: I'm not surprised these cam controller firmwares can be hacked. It's very specific to the controller though.
However, most people I know that care about privacy close the cam door anyway, or put a sticker over it. I use the SpyFy. https://spy-fy.com/collections/webcam-covers . Good luck hacking that.
What worries me a lot more is the microphone. It doesn't have a light, and it's really hard to block. A simple sticker won't do much. These things are super sensitive. I can literally hear myself talking in the other room with the right boost settings.
I swapped its keyboard with an x220 one, which is the thing to do if you are into the older thinkpad KB feel.
Ideally, the same should also be true of the microphone pre-amp, with its own LED separate from the camera one.
Of course, if everyone does that, attackers will just start pulsing Thinklights and seeing if anything enumerates, I suppose.
adding an LED implant control by flashing a USB, internally on an "8051-based" CPU, where the value's dependence is a feature of a dynamic memory allocator.
going one step further with cron tasks scheduled at irregular intervals would be interesting.
used to be able to do the inverse with an old TV set using an RFID controller back in the early '90s.
See: Therac-25
There are some of these out there, from major brands (HP?). Asus seems to have more. ( https://rehack.com/reviews/best-laptops-without-webcams/ ) They tend to be workstation grade, sometimes gaming, machines at higher price points. For new laptops, see if you can customize it out on their site.
While searching for one on amazon/ebay stinks, you can find ones without webcam (doublecheck for integrated microphone status in product details too though) by looking manually for terms like "no webcam"... vendors usually don't want returns due to surprises so it will be mentioned in the product title.
links: https://laptopwithlinux.com/laptops-without-webcam/?currency...
"I saw something in the news, so I copied it. I put a piece of tape — I have obviously a laptop, personal laptop — I put a piece of tape over the camera. Because I saw somebody smarter than I am had a piece of tape over their camera."
https://www.npr.org/sections/thetwo-way/2016/04/08/473548674...
Is there anyone who doesn't do this?
As others mentioned, Apple has had either good circuit design or now 'attestation' (which has other concerns, but that's more of a state-actor worry).
That said, it reminds me of the fun reversal of how, a decade or so ago, Windows Phone 'lost' the ability to get the hot app Snapchat because they did not want to give apps the ability to 'detect' a screenshot command, in the name of privacy. Now we have Copilot on Windows, and LinkedIn tells me when I've screenshotted a post as a notification.
Well who's laughing from within a tinfoil Faraday cage now?
WTF? Am I the only one who thinks webcams shouldn't even need accessible firmware in the first place? I have one which is over 2 decades old and it has worked perfectly fine (albeit at a low resolution) since the day it was bought, with no need for any firmware updates.
In reality, remote code execution should be considered game over, end of story. Trying to obfuscate to hide that fact just ends up creating more unknown places for malware to persistently hide. The same knowledge that allows one to write new camera firmware also allows one to verify it on every boot. Meanwhile the camera model that hasn't been publicly documented is an ever-present black box.
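A toy sketch of what "verify it on every boot" could look like, assuming you already have a way to dump the camera firmware (the path and known-good digest below are placeholders):

```python
import hashlib
import sys

FIRMWARE_DUMP = "/path/to/camera-firmware.bin"                 # placeholder
KNOWN_GOOD_SHA256 = "replace-with-your-own-measured-digest"    # placeholder

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    digest = sha256_of(FIRMWARE_DUMP)
    if digest != KNOWN_GOOD_SHA256:
        sys.exit(f"camera firmware changed: {digest}")
    print("camera firmware matches the known-good hash")
```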
I don't see why this is the first thing you think of, when the infinitely more obvious thing to point out is that the indicator LED should be impossible to address and be connected in series with the power pin of the camera instead. Case in point, most other comments in this very discussion thread.
Conversely, your comment (to me) reads like you're trying to derail conversation and argue in favor of weakening device security in whatever flavor you find compelling. Very intellectually honest of you to present those ideas this way.
> you're trying to derail conversation and argue in favor of weakening device security
No, I'm arguing in favor of analyzing security in terms of device owners rather than manufacturers. "Security" isn't simply some singular property, but is rather in the context of a specific party [1]. It's certainly possible to build hardware that verifies running software and also doesn't privilege the manufacturer with an all-access pass. Just no manufacturers have done it, because centralizing control in their favor is easier.
[0] even this case is borderline. Your series LED suggestion isn't likely to work because it will drop at least 1.6 V and constrain the current draw of the camera. Also, if the firmware can be reprogrammed such that it can take pictures using very low average current draw, you haven't actually solved the problem. Alternatively, an LED in parallel with the power supply will require at least an additional resistor (if not a diode and a capacitor), which costs real money in the eyes of a design engineer working at consumer volumes.
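(For scale, with purely illustrative numbers: an indicator LED hung across a 3.3 V sensor rail needs a current-limiting resistor of roughly R = (3.3 V − 2.0 V) / 2 mA ≈ 650 Ω.)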
[1] eg how the TSA that drones on about "security", while they're actually making individual travelers less secure from having to unpack and splay their belongings out, making them easy targets for immediate or later theft. They're not talking about your security, they're talking about their operation's security.