Unfortunately the vast majority of people do their jobs poorly. The entire industry is set up to support people doing their jobs poorly and to make doing your job well hard.
If I deploy digital signage the only network access it should have is whitelisted to my servers' IP addresses and it should only accept updates that are signed and connections that have been established with certificate pinning.
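For the "only accept updates that are signed" piece, here is a minimal sketch using Ed25519 from the cryptography package. The key handling and payload layout are placeholder assumptions, not anything a real signage vendor ships; in practice the public key would be baked into the device image rather than generated at runtime:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-ins for the vendor's offline signing key and the device's pinned copy.
    signing_key = Ed25519PrivateKey.generate()
    device_pinned_pubkey = signing_key.public_key()

    def apply_update(payload: bytes, signature: bytes) -> None:
        # Refuse to touch the payload unless the signature verifies.
        try:
            device_pinned_pubkey.verify(signature, payload)  # raises on mismatch
        except InvalidSignature:
            raise SystemExit("update rejected: bad signature")
        # ...only now is it safe to unpack/install the payload...

    update = b"firmware v2"
    apply_update(update, signing_key.sign(update))  # accepted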
This makes it nearly impossible for a remote attacker to mess with it. Look at the security industry that has exploded from the rise of IoT. There's signage out there (replace with any other IoT/SCADA/deployed device) with open ports and default passwords, I guarantee it.
IoT is just a computer, but it's also a computer that you neglect even more than the servers/virtual machines you're already running poorly.
People don't want to accept this, or might even be affronted by it.
There are some places doing things well - but it's the vast minority of companies out there, because you are not incentivised to do things well.
"Best practises" or following instructions from vendors does not mean you are doing things well. It means you are doing just enough that a vendor can be bothered to support. Which in a lot of cases is unfettered network access.
"Sorry you can't deploy our signs because we haven't deployed our custom LoRa towers in your area" is just gonna get laughs.
Perhaps not in a practical or educational sense, but in the real world of people with non-cryptographic or security-related jobs, a certificate is a PITA that goes beyond the functional requirements.
I have seen many insecure building automation systems that are maintained by reclassified HVAC technicians. The movies about hackers taking over an elevator are entirely accurate.
Companies are being incredibly lazy (at our expense), and the author states this obliquely:
>virtually the entire software landscape has been designed with the assumption of internet connectivity
It's not that companies are being lazy at our expense; it's that nobody wants to pick up the bill. If you write something to work against an online system, the fact it is online implies it adheres to some standard that you can work with, so solving the problem for one online client creates an artifact that is likely applicable to many clients.
Air-gapped systems drift. They get bespoke. They get very out of date. So you have the two practical problems of labor: (a) the product created solves the problem here, today, but nobody else benefits from repurposing that solution and (b) the developer isn't gaining as many transferrable skills for the next gig, and they know it, and so the developers who are willing to do the air-gapped work are harder to find and more expensive.
(I believe this is also the reason you see air-gap a lot more often in government security and banks: they can afford to retain talent past the current project with the certitude there will be more projects in the future).
That's a feature, not a bug.
Almost the entire downfall of the modern tech industry can be attributed to two things: greed, and the fetishization of "scale."
Not everything has to scale. Not everything should scale. Scale is too often used as an excuse to pinch pennies. If your business model only works at massive scale, then your business model might be broken. (Not always, but more often than most people think.)
It's a sucker's play to take the gig at price X, work on it for a year or two, and then get tossed to the curb when the project wraps, with the only skills growth to show for it a combination of those ineffable fundamentals ("everything Turing-complete is fundamentally equivalent") that are useful forever (but can be picked up on any job) and some knowledge of Bob's House of Air-Gapped Machines' circa-1997 Flash install that their in-house kiosk infrastructure ran on.
There are jobs that'll pay for that Flash experience, but they're a lot harder to find than if Bob's House had been using some modern web architecture and you'd picked up, say, AWS experience.
But I won't say that the designing engineer was bad at their job, I would say that the product manager was bad at their job... but probably got promoted, because the company made a bigger profit and delivered faster because security didn't get any attention.
And that's why we need regulation, because "this product is secure" is not easily and cheaply verifiable and carries no penalties for being incorrect. The market can't tell, so everything is a lemon.
And don’t get me wrong: I’ve had managers that made it impossible to do the actual development job well. But it’s still my responsibility to do my job well so I escalated that. Most times I caused changes to improve things. If not I quit the job.
Personal accountability doesn’t just evaporate when someone else passes on bad orders. It’s not a fun position to be in but I think if engineers in general actually take responsibility for their own work, and confront management if that’s the source of issues, then that would improve things.
If you let yourself be pushed around into doing subpar work for deadlines you’re just signalling that it’s ok.
And yet over 200 motherboards and laptops have their secure boot root of trust key set to a long-ago-leaked example key from a development kit, named "DO NOT TRUST - AMI Test PK" [1]
The firmware industry at large just ain't good at this stuff.
(Of course from the perspective of the firmware industry, they can make a non-internet-connected heating timer or a washing machine control board that will work fine and reliably with no software updates, for 25+ years - while us PC software cowboys make software so bad crashes are just a fact of life, and bug fix/security updates are a daily occurrence. So the firmware industry isn't all bad - only when they start putting things onto the internet.)
BTW, I looked at the board and noted that Bosch doesn’t even make the controller. They get it from Diehl Controls, an OEM who only makes appliance controllers.
So why aren't their employers investing in educating their devs & PMs about security? (rhetorical - we all know why)
If you're using (say) Python in your client code, call SSLSocket.getpeercert() and check if your company's domain is in the subjectAltName:
* https://docs.python.org/3/library/ssl.html#ssl.SSLSocket.get...
You can ensure it is a valid cert from a valid public CA (like Let's Encrypt) instead of doing your own private CA (which you would specify with SSLContext.load_verify_locations()).
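A rough sketch of that check, where signage.example.com is just a placeholder for your own domain. Note that ssl.create_default_context() together with server_hostname already verifies the hostname against the public CA store, so the explicit subjectAltName check is belt-and-braces:

    import socket
    import ssl

    EXPECTED_HOST = "signage.example.com"  # placeholder for your company's domain

    ctx = ssl.create_default_context()  # validates against the public CA store
    with socket.create_connection((EXPECTED_HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=EXPECTED_HOST) as tls:
            cert = tls.getpeercert()
            # subjectAltName is a tuple of (type, value) pairs, e.g. ("DNS", ...)
            sans = [v for t, v in cert.get("subjectAltName", ()) if t == "DNS"]
            if EXPECTED_HOST not in sans:
                raise SystemExit("refusing to talk to server: SANs were %s" % sans)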
Could you elaborate what you mean by this? It seems to me that your comment just highlights another set of problems that should (in theory) motivate people to think more clearly about the ways their system communicates with the internet.
I don't see where you disagree with the blog author. Or are you saying that it's fundamentally impossible to improve security in internet-connected systems because people are not equipped to do so?
Ideally updates should come from a central source internal to the organisation, one that has been vetted and approved by the organisation itself. Clearly CrowdStrike knows this, and that's why they offer N, N-1, N-2 updates for their Falcon sensor.
It's easier to remote into a box and just pull updates from the internet though.
Granted I have not had dozens of jobs, but the only place I have worked where security was treated as the first-class issue that it is (and where this type of CrowdStrike incident probably wouldn't have happened) is at one of the largest financial services companies in the world. And it did not hamper development; it actually improved it, because you couldn't make stupid mistakes like relying on externally hosted CDN content for your backend app. But for people that don't do their job well, it's a pain. "Hey, why doesn't my docker image build on a prod machine, why can't I download from docker hub?"
Ex - you say this:
> Hospital computers should not be connected to the internet.
But then you immediately jump onwards, as though what you've said is obvious common sense - but I don't think it is.
Can you explain to me why you believe hospital computers shouldn't be connected to the internet, and then discuss and weigh the downsides of NOT connecting them?
Because right there, I think, that comment exposes the exact mindset the author was discussing... no obvious appreciation or understanding of the situation, just an ill-informed off-the-cuff remark.
Can you tell me how you plan to implement cross facility dosage tracking for patients?
Can you let me know how you're going to send CT scans or x-rays to the correct expert?
Can you tell me how that patient's records are going to be updated, how billing is going to be generated, or how their personal doctor is going to be notified of their condition?
I can think of a LOT of reasons that hospital computers really should be connected to a network. Maybe not every computer, maybe not every network, but even that distinction appears to be far beyond the thought you put into it before immediately saying "Hospital computers should not be connected to the internet".
You're basically making the author's point for him here.
Why has everyone seemingly discarded these ideas or forgotten? Yes, it is a pain to manage. Industry could reduce this pain, but investing in good security isn't profitable.
Blame the profit incentive. Blame the VCs (especially the people who own this website)
* It's much much easier to secure a network when you completely disallow client-to-client communication and block all communication to clients not initiated by them.
* Trusting the client that attackers can physically access is a recipe for disaster.
* Because VPNs are just an application on the internet.
VPNs and VLANs are technologies that allow this. I think 'Zero Trust Architecture' wonks have done a disservice to the industry. If your 'zero trust' app has a bug then your device is (probably) exposed directly to the internet, naked.
If you layer your security - starting with the bare minimum of VLANs, VPNs, network segregation, etc. then you can layer on top zero trust technologies.
What ends up happening is that people build their own pseudo-VPN with user space applications that network together a bunch of machines existing over the internet, potentially exposing dozens of new internal networks to malware vectors.
And this is where I fundamentally disagree with the author. Nothing you've listed requires access to the internet - it requires access to a network.
It's a lot easier to just deploy everything and set the firewall to any-any and go home because it's working.
Like the author says, it's hard and difficult to find the right level, but to scoff at the simplest of advice of "it shouldn't be on the internet" is giving up.
Ok - now what? I don't understand the disagreement you seem to think you have.
That network inevitably requires a connection to the outside world or those exact same features I listed above stop working. So you're just shifting blame without an answer...
So continue with your path - now I have an X-Ray machine that's connected to a network. The router on that network still has to connect to the internet to facilitate functional use of the machine, so let's assume crowdstrike is running there - tell me how your advice of "Don't connect it to the internet" is meaningful here?
I have an expert radiologist who I want to confer with on my patient's x-ray, he's in a different state - what is your advice? How is my problem solved with a banal "Just don't connect it!"?
If you don't want to route over the internet, you could use dark fibre, MPLS, etc.
Connecting to a network should not automatically get you routable access to the internet. A lot of headaches from cyber attacks would go away if this were the principle. But for a lot of people connecting to the "network" means unfettered "Internet" connectivity.
I've worked at places that have used dark fibre and MPLS and had no security problems, as that's not an easy attack vector. Moving to SD-WAN means sites are now connected to the _Internet_ and a simple DDoS can take your branch down.
This may or may not be important to you, but the general advice of keeping off the Internet as much as possible unless you need it is good advice (and a vendor not offering central deployment of updates is a pretty poor excuse).
There is some pretty solid documentation on this, and there has been for some time; the knowledge has simply been lost or discarded because it was considered 'arcane' or 'restrictive'.
There were times when infrastructure had devel/testing/production environments with staged rollouts and deployment.
Production had only the minimal access, with admin config routable only to a private network, hidden behind the frontend cluster. Things were hard for admins and hackers alike.
There were at one point gated networks and the idea of militarized and demilitarized zones, router-level firewalls, outgoing connection limiting. Centralized logging (nah, don't do that, just run your apps in a pod and forget your security; forensic recovery of your app is dead by the next deployment (probably twice over today) already) and many, many more things.
We bought the newthink of 'web security' as the true way to build our infra. When we see it fall apart on a blue Friday afternoon, do we look back to see the bigger picture? No, we can't take responsibility for the weakness, because any suggestion of personal responsibility that requires work is out of the question.
Connected to network ≠ connected to Internet.
If you're suggesting that the hundreds of thousands of healthcare providers just in the US all get together and lease some dark fiber -- there's zero chance that happens.
Besides, doctors routinely use resources on the public internet.
I am suggesting that not all computers in a hospital need to have an Internet-accessible connection.
Nor do all applications on a PC even need to be Internet-accessible: if you're using Windows you can have an application icon that does not launch a program locally, but rather initiates an RDP session to a terminal server (TS) and runs the program there, while the app window is displayed locally (like X11 forwarding). The TS is dual-homed, so the app that runs there can (e.g.) connect to the patient records network, while the local PC that is just being used as a display is not on that sensitive network.
Certainly it's doable: other commenters point out that Sweden and Poland have systems like this. But they are also relatively small countries, and presumably their governments have decided to foot the bill for most or all of it. The idea of the US doing something like that is unfortunately laughable, and you can be sure healthcare institutions aren't going to pay for it.
I am expecting (health) IT staff to make an assessment on whether a system needs to (a) have Internet access at all, or (b) have Internet access perhaps through a filter/proxy.
There are entire classes of devices that should not / do not need to be able to reach www.google.com or 1.1.1.1. My DC HVAC, PDU, and IPMI networks have network connectivity, but they do not have Internet connectivity.
Many countries solve this by having a separate network for hospitals, but it is not the only way.
In general, it is a trade-off between security and convenience. Yes, you can't send an e-mail without an Internet connection (well... not easily). But do you need to? From the computer that controls the MRI machine? Or is it just easier to say "we need Internet because updates"?
Of course you can.
Scheduling an urgent care appointment is connected to my account at the hospital network. When I step into the hospital and get tests done, everything is automatically uploaded to a web portal where I can view it, and doctors can easily forward my test results to other facilities. A lot of the imaging work is actually done at third-party facilities, but the results still show up in my medical records, presumably having been forwarded.
When my appointment begins my doctor can look up any comments I left when scheduling the appointment so she knows why I'm there.
Is it possible that all the different buildings and facilities that are part of the hospital network I belong to - which extends across multiple counties in my state - could all be running on their own private, isolated, air-gapped network, with medical records manually transferred over to the web portal via sneakernet?
Sure, but no one is going to do that.
Or, you could… you know… talk to her.
I work for a healthcare company that runs several hospitals and primary care clinics. When you become a patient, you're given a little notebook and a branded pen so that in between appointments, the patient writes down every little health question and problem they have. When the patient shows up for the appointment, the little notebook is reviewed by the doctor.
Convenience should not always trump security.
At my child's pediatrician I can upload images through the web portal and have a nurse call me back any time of the day or night. If there are any follow-up questions at my child's next appointment, his doctor has full access to all communications that happened through both text message and the web portal.
This kind of 24-hour digitally connected healthcare access was a huge boon as a first-time parent and made life a lot easier, especially during incidents like when my son woke up at 3:00 in the morning screaming and then projectile vomited 6 ft across his room (which, by the way, was nothing to worry about and apparently completely normal... so long as it just happened once).
We should reduce the security of the entire healthcare system because you can't adult?
I much prefer being able to send a quick message to a healthcare provider on my phone and have them get back to me same or next day.
Goalposts moved.
> This kind of 24-hour digitally connected healthcare access was a huge boon as a first-time parent and made life a lot easier
Your moved goalposts pretend that we don't also offer a 24-hour medical hotline.
I commented that my hospital network has the ability for me to see test results, communicate with medical providers asynchronously, and lets healthcare providers communicate what I said to them with each other.
Your counter proposal was a notepad and a pen.
Those two things have very different feature sets.
A site not directly hooked up to the main hospital system where I can add notes of things I want to talk about during my next visit, sure, maybe that is a decent proposal to trade off security vs convenience.
> We should reduce the security of the entire healthcare system because you can't adult?
Do you expect patients dealing with depression or anxiety to be able to keep track of that notepad? How about patients who don't have secure housing, or who live with an abusive partner?
Or anyone who is elderly and just forgets things now and then.
Or anyone who is not neurotypical and has issues with memory?
As you mentioned in your other comment however, this presumes a certain mindset in people where they are willing to plan upfront and are mindful of the dependencies their software needs. As you say, just pulling whatever from Docker hub is certainly easier.
Internally hosted repositories also allow you to pull and install updates at your own pace, possibly days after they have been released upstream. So if a patch is borked you won't be affected.
But then how will doctors google the patients' symptoms?
If your answer is "they should already know all that is required to do their job without looking it up online", then consider whether you hold yourself to the same standard. I don't.
Having the hospital admin and some machines connected outwards seems like a recipe for killing patients.
Lists of medication side effects, dosing guidelines and so forth have been common throughout the industry almost since its very inception. Indeed, there are books going back thousands of years, across multiple cultures around the world, that are just reference guides for medical practitioners.
google ai: remove heart
doctor: ok lets schedule the operation!
Yes - but I don't think it's that hard. 90% of the work to be more secure than most out there is easy. It just requires expertise and for people to change how they work.
Instead people spend $bn on cyber security when you can get 97% of the way there by following good standards and knowing your systems.
I am by no means perfect, I spent all day Friday fixing hundreds of machines manually that had BSOD'd from CrowdStrike. In this case the vendor had made it impossible to do my job well because they offered zero control on how these updates are rolled out - there is no option to put them through QA first. Unlike the sensor itself, which we do roll out gradually after it has been proofed in QA.
Rather ironically said appliance (that basically acts as a man-in-the-middle in remote access) prevents me from going the last mile in securely configuring my systems. I would not be surprised if the appliance self-updates, but I'm not sure.
Regardless, you could make the case that practices like these do not improve overall security, but instead just cost a large amount of money with which you could instead hire three security-minded engineers.
The question is where to lay the blame for the crap, and how to change that.
I would love to see the author's "lists" turned into a table of sorts, and then any given piece of software could be rated by how many situations on each list it works in without modification, works in with trivial config tweaks, works in with more elaborate measures, or cannot work in. Turn the whole table green and your software is more attractive to certain environments.
But there are also cases where the software could perfectly well run in air-gapped systems but people are unwilling to put in the work (for some reason or another). For example, everyone could run their own Docker image mirror that only contains images that are actually needed and pulls them from upstream with some delay. Docker allows you to pull images from your own registry. But not everyone is willing to operate their own registry. (A sketch of the delay policy is below.)
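For what it's worth, the stock registry:2 image can run as a pull-through cache, and the "delay" policy could be as simple as the following sketch. list_upstream_tags and the seven-day window are hypothetical placeholders, not a real API:

    from datetime import datetime, timedelta, timezone

    QUARANTINE = timedelta(days=7)  # don't mirror anything newer than this

    def tags_safe_to_mirror(upstream_tags):
        # Keep only tags that have survived the quarantine window upstream.
        # upstream_tags is assumed to be an iterable of (tag, published_at)
        # pairs, e.g. from a hypothetical list_upstream_tags() helper.
        cutoff = datetime.now(timezone.utc) - QUARANTINE
        return [tag for tag, published in upstream_tags if published <= cutoff]

    # Example: only "1.0" is old enough to mirror.
    now = datetime.now(timezone.utc)
    print(tags_safe_to_mirror([("1.0", now - timedelta(days=30)),
                               ("1.1", now - timedelta(hours=2))]))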
I guess Sjunet can be seen as an industry-wide air-gapped environment. I'd say it improves security, but at a smaller cost than each organization having its own air-gapped network with a huge allowlist.
"Zero Trust Architecture" and not thinking to deeply about the extent to which you're not actually removing overall trust from the system, just shifting and consolidating much of it from internal employees to external vendors.
I'm not even thinking about CS here. It's curious to see what the implications for individual agency seem to become when the "Zero Trust" story is allowed to play out - not by necessity but because it's "the way we do things now".
(As the wiki page you linked notes, the concept is older and there are certainly valuable lessons there. I am commenting on the "ZTA" trend kicked off by NIST. I bet the NSA are happy about warm reception of the message from industry...)
In practice, no big company follows any of those practices. So, yeah, anything that's derived from "Zero Trust Architecture" is wrong from its inception.
>The worst IT outage ever!
>>The worst IT outage so far.
Why?
They can just as effectively use (e.g.) Nessus/Rapid7/Qualys to do security sweeps of that network as any other.
At my last job we had an IoT HVAC network that we regularly scanned from a dual-homed machine where the on-network devices could not get to the general Internet (no gateway).
There is future tech running on ancient software stacks. There is no safe way to put it on the net directly.
AWS was an example in the article. Easy to get a fixed IP? True. Getting a fixed IP for outgoing traffic? Not that easy anymore - AWS is nice, but for many applications it just isn't a solution.
Post-its with passwords are the most classical example, but removing internet access from an entire institution is just gonna lead to people bringing their own mobile networked devices and does honestly sound like a completely braindead idea.
If Sjunet is managed as a number of interconnected air-gapped networks then I for sure find that more secure than an Internet-connected network. The attacker surely still has vectors in, but whole classes of common attacks are mitigated.
Even if it is just "one big intranet" it is still better than one big intranet with one really good ((zero) trust me bro!) firewall to the Internet.
Various levels of zero trust principles can easily be applied within Sjunet. That makes it better in my eyes.
For critical infrastructure I find this an important step. In the end security relies on us stupid humans. And it is easier to manage an airgap. It is the number of things we do afterwards to bypass it which is the problem.
The idea of an intranet is still sound. But private does not mean secure. It is just a security layer. The next question is whether you run it fully open. Are the rooms locked? Do you require 802.1X certificates for connectivity? Are all ports open for all clients/hosts? Do you have a sensible policy for your host configuration? Have you segmented the network even further? Etc., etc.
So your point is still valid, for sure! You should secure it like the public Internet, i.e. treat it as a hostile environment. That is the important takeaway.
My point is that it should not be used as an argument against a private network. For large critical infrastructure such as hospitals it makes good sense. It is an added layer for the attacker to overcome - it is not security theater. For some the hassle might not be worth the while, but that is then the trade-off, as with all forms of security.
It ain't binary, but discussions often end up like that. Done right it can be additive. Done wrong it just adds pain and agony.
We all dread the security theatre. I boldly claim this ain't it.
Yes, because we all know how secure the things on the public Internet are. /s
Nobody's saying that a private network doesn't have to be properly secured, you're fighting a strawman argument
Some good-looking ideas almost always result in beneficial implementations, some good-looking ideas almost always result in bad implementations.
If the "good" idea has some bad implementations as well as some good implementations (like the swedish network example?) then perhaps you shouldn't dismiss the "good" idea so quickly
So people choosing to create a new network are, with high confidence, going to end up with networks that are substantially worse at moving bits around cost effectively than the internet. The reality that they are inconvenient and expensive is built in once the deliberate choice is made to avoid the internet. It might be worth the cost, but the cost comes with the idea.
The same argument was made against seat belts in cars and bicycle/motorcycle helmets. IMHO this argument is rarely good. A false sense of security should not be addressed by removing protection.
> provides an excuse to bad security policies
It should not be used as an excuse, but bad policies in an air-gapped network are less bad than bad policies in an Internet-connected one. I doubt policies will quickly improve as soon as you connect to the Internet.
That's a (highly predictable) implementation problem of HSCN, not a problem with the idea. These complaints boil down to the same old thing: stupidly written law setting a (potentially) good policy up for failure.
It's a network that interconnects county offices, town halls and such, giving them access to the central databases where citizens' personal information are stored. It's what is used when e.g. changing your address with the government, getting a new ID card, registering a child or marriage etc.
As far as I know, the "Źródło" app runs on separate, "airgapped" computers, with access to the internal network but not the internet, using cryptographic client certificates (via smart cards) for authentication.
Are the latest patches security updates?
A bit like tor but without all the creepy stuff I guess.
If there are, a bridge could be made, willingly or not. OFC it's more secure than having everything on the internet.
What a tongue twister for non danish speaking people :D
(Source: I speak Danish as a second language. I used to think Georgian was the language with the most consecutive consonants but then I learned how little the Danes respect their vowels so now I know better)
> Sundheds data nettet
Sund-hed is "sound-ness" (or even "sound-hood"), i.e. health.
> The health data network
Eyetwister
See https://www.diskusjon.no/blogs/entry/878-orddeling-en-engels...
If the two networks are entirely separate, and they absolutely must be, then there's no reason for addressing concerns of one to influence the other one iota. (Except that certain OSes might have baked-in assumptions about things like the 127/8 network, so you'd have to work around those.)
Every tool and die shop in your neighborhood industrial park contains CNC machines with Ethernet ports that cannot be put on the Internet. Every manufacturing plant with custom equipment, conveyor lines and presses and robots and CNCs and pump stations and on and on, use PLC and HMI systems that speak Ethernet but are not suitable for exposure to the Internet.
The article says:
> In other words, the modern business computer is almost primarily a communications device.
> There are not that many practical line-of-business computer systems that produce value without interconnection with other line-of-business computer systems.
which ignores the entirety of the manufacturing sector as well as the electronic devices produced by that sector. Millions of embedded systems and PLCs produce value all day long by checking once every millisecond whether one or more physical or logical digital inputs have changed state, and if so, changing the state of one or more physical or logical digital outputs.
There's no need for the resistance welder whose castings were built more than a century ago, and whose last update was to receive a PLC and black-and-white screen for recipe configurations in 2003 to be updated with 2024 security systems. You just take your clipboard to it, punch in the targets, and precisely melt some steel.
Typically, you only connect to machines like this by literally picking up your laptop and walking out to the machine with an Ethernet patch cable. If anything beyond that, I expect my customers to put them on a firewalled OT network, or bridge between information technology (IT) and operations technology (OT) with a Tosibox, Ixon, or other SCADA/VPN appliance.
Now perhaps you're not working on anything someone might want to exploit, but PLCs are often found in critical infrastructure as well as high-end manufacturing facilities, which makes them attractive targets for malicious actors - whether because attackers are attempting to exploit critical infrastructure or to infect a poorly secured device that high-value endpoints (such as engineering laptops) might eventually connect to directly.
https://www.cisa.gov/news-events/cybersecurity-advisories/aa... - Water Infra
https://claroty.com/team82/research/evil-plc-attack-using-a-...
Damned right. That would be a special type of malfeasance that should earn criminal punishment, if healthcare equipment worked that way.
One food court had kiosks with Windows and complete access to the Internet. Somebody could download malware and steal credit card data. Every time I used one, I turned it off or left a message on the screen. Eventually they started running it in kiosk mode.
Another was a parking kiosk. It was never hardened. I guess criminals haven't caught on to this yet.
The third was an interactive display for a brand of beer. This one wasn't going to cause any harm, but I liked to leave Notepad open with "Drink water" on it. Eventually they turned it off. That's one way to fix it, I guess.
I don't know the details of how the parking kiosks near me are set up, but I can only assume they're put together really poorly, because once, after mashing buttons in frustration, one started refunding me for tickets that I'd not purchased. You'd have thought "Don't give money to random passers-by" would have been fairly high on the list of requirements, but there we are.
Oof, I feel this one. I tried to get IntelliJ's JRE trust store to understand that there was a new certificate for Zscaler that it had to use; there were two or three different JDKs to choose from, and all of their trust stores were given the new certificate, and it still didn't work and we didn't know why.
It has interesting limitations due to the amateur radio spectrum used, including a total ban on commercial use.
As that is the social contract of the spectrum, you get cheap access to loads of spectrum between 136kHz and 241GHz, but can't make money with it.
Only in the Netherlands and Germany is it really widespread: https://hamnetdb.net/map.cgi . Here in Spain it's not available anywhere near me.
With HF, yes, you can use the various atmospheric layers to reflect depending on band, but in those bands the available bandwidth is extremely low (the entire HF range itself is only 30 MHz, and the amateurs only have a few small slices of that). The only practical digital operations there are Morse, RTTY (basically telex) and some obscure extremely-slow GPS-synced data modes like WSPR and FT8 that are basically for distance bragging rights but don't transmit useful payload.
So in effect, no. In this case line of sight or at least short distances (VHF/UHF) are required.
Also, I don't have space for huge antennas that HF requires as I'm in a small apartment in the middle of a built-up city.
Beyond that there are plenty of even more ridiculous examples of things that are now connected to the internet, like refrigerators, kettles, garage doors etc. (I don't know if many, or any, of these things were affected by the CrowdStrike incident, but if not, it's only a matter of time until the next one.)
As for the claim that non-connected systems are "very, very annoying", my experience as a user is that all security is "very, very annoying". 2FA, mandatory password changing, locked down devices, malware scanners, link sanitisers - some of it is necessary, some of it is bullshit (and I'm not qualified to tell the difference), but all of it is friction.
Of course. But not the Internet.
1. These systems shouldn't allow outbound network flows. That will stop all auto-updates, which you can then manage via internal distribution channels.
2. Even without that, you can disable auto-updates on many enterprise software products - Windows notably, but also Crowdstrike itself. I heard about CS customers disabling auto-update and doing manual rollouts who were saved by this practice.
3. Tacking on to number 2 - gradual rollout of updates on which you've done some smoke testing. Just in case. Again - I heard of CS customers who did a gradual rollout and managed to have only a fraction of their machines impacted. (A sketch of one way to bucket machines into rollout rings is below.)
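To make point 3 concrete, here's a minimal sketch of deterministic rollout rings. The hostname and the ring percentages are made-up placeholders; nothing here is CrowdStrike's actual mechanism:

    import hashlib

    def rollout_bucket(hostname: str) -> int:
        # Deterministically map a hostname to a stable bucket in [0, 100).
        digest = hashlib.sha256(hostname.encode()).digest()
        return int.from_bytes(digest[:4], "big") % 100

    def should_update(hostname: str, rollout_percent: int) -> bool:
        # Take the update only once the rollout reaches this machine's bucket.
        return rollout_bucket(hostname) < rollout_percent

    # Widen the ring only after the previous ring has soaked cleanly:
    for pct in (1, 10, 50, 100):
        print(pct, should_update("pos-terminal-042", pct))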
It may not be the software in question, but proprietary snowflake entitlement management software with a lot of black-box, proprietary voodoo, no disaster recovery capacity, and a design that would have been considered tech debt a decade ago... disgracefully came to life in the year 2021. It did not gracefully recover from clownstrike, to say the least.
Or, in the case of CrowdStrike: I can imagine support starts to get some calls, and at some point you realize that something has gone horribly wrong. An update, maybe not obvious which, is wreaking havoc. How do you stop it? Have you foreseen this scenario, and do you have a simple switch to stop sending updates?
Or, do you cut the internet? Unlike the movies there isn't a single cord to pull, maybe the servers are in a different building or some cloud somewhere. They probably have a CDN, can you pull the files efficiently?
Now maybe by the time they discovered this it was mostly too late, all online systems might already have gotten the latest updates (but even if that is the case, do they know that is the case?).
Not air-gap, temporal gap.
And I don't think that is enough. I agree that it is easier and sufficient for most systems to just be connected over the internet. But health, aviation and critical infrastructure in general should try to be offline as much as possible.

Many of the issues described with being offline stem from having many third-party dependencies (which typically assume internet access). In general, but for critical infrastructure especially, you want as few third-party dependencies as possible. Sure, it's not as easy as saying "we don't want third-party dependencies" and all is well. You'll have to make a lot of sacrifices.

But you also have a lot to gain by dramatically decreasing complexity, and not only from a security standpoint. I really do believe there are many cases where it would be better to use a severely limited tech stack (hardware and software) and use a data-diode-like approach where necessary.
One of the key headaches mentioned when going offline is TLS. I agree and I think the solution is to not use TLS at all. Using a VPN inside the air-gapped network should be slightly better. It's still a huge headache and you have to get this right, but being connected to the internet at all times is also a HUGE headache.
Does a computer that can access your accounting system need to access the internet? Or email?
A user could run two computers, one that’s for internet stuff, and one that does important internal stuff. But that’s a silly idea because it’s costly.
However, we can achieve the same thing with virtualization, where a user’s web browser is running in a container/VM somewhere and if compromised, goes away.
Stuff like this exists throughout society in general. When should a city employee carry a gun? On one end of the spectrum, the SWAT team probably needs guns. On the other end, the guy who put a note on my door that my fence was leaning into the neighbor’s property didn’t have a gun. So the question is, is a a traffic stop closer to the SWAT team or the guy kindly letting me know I’ve violated a city ordinance?
I don’t know why these things don’t get reassessed. Is it that infrastructure is slower to iterate on? Reworking the company’s network infrastructure, or retraining law enforcement departments, is a big, costly undertaking.
I did find it surprising however that so many systems shown on TV run Windows.
Digital signage screens, shopping registers all sorts of stuff that I assumed would be running Linux.
It is surprising to me that systems with functions like a cash register would be doing automatic updates at all.
I agree that it does not make sense to use Windows for this sort of thing.
Or the solution is a PowerPoint or MP4 file running on a TV for signage.
If every office computer is already Windows, IT has management applications like GPO, SCCM/Intune, or RMMs like Datto/Ninjaone to deploy policy and manage Windows computers remotely. It then makes sense to just keep using those, rather than making a whole new system just for the digital signage computers.
Since MS has a kiosk mode officially, they probably assume either choice is good enough.
Yeah, that's weird; at least do it via some on-premise "proxy". Windows has WSUS, and I'd assume that CrowdStrike has something similar. I know that Trend Micro provides, or has provided, an update service allowing customers to roll out patches at their own pace.
Sadly, very few things seem to run correctly without internet access these days. I get the complaint about management and updates for things in people's homes, but if you're an airport, would it be so bad to have critical infrastructure not on the internet? I don't really care if the digital signs run Windows, there are plenty of reasons why you'd choose that, but why run CrowdStrike on those devices? Shouldn't they be read-only anyway?
I’m not saying that Windows is great. I haven’t willingly used it in 15 years. But you can’t keep your head in the sand about the sad state of Linux and anything graphical, especially on esoteric hardware.
POS systems are often effectively Internet-connected, because they need to report stock levels, connect to financial networks, process BNPL applications, etc. It's completely warranted to treat them like 'endpoints', because they are.
> Using a general-purpose desktop designed for corporate executives running Excel and PowerPoint is just the wrong technology choice for such an application.
Agree, which is why most of the time you use Windows Embedded for Point of Service or Windows IoT Enterprise. Which again, is Windows.
Good? No, but that's the reality of things.
I can say it’s not easy to configure but once done it’s very stable and simple.
The thing that drives me nuts is not even that, which is bad enough, but the assumption that the Internet connection is always stable and that it is legitimate to say "wait until some connections are up again", as though there are no such things as power outages, network-level errors, cable tears, physical socket failures and such.
Are these people not writing blogs to be read?
And just to be ahead of it, just because you are able to read it doesn't mean it wouldn't be easier and more comfortable to read in a more suitable font.
That's a subjective opinion.
I vastly prefer monospaced fonts. They're easier to read!
There are some exceptions. Obviously, code is one of these, as code is explicitly differently structured. Dyslexia is another one where monospaced fonts might actually increase readability.
But overall they decrease readability compared to other font types.
... so therefore thinline grey-on-gray text is ideal! Good meeting, let's do lunch.
You can nitpick the linked site, but it is amazingly readable compared to sites that feel compelled to adhere to modern fashions, like having blinking, throbbing nonsense in the field of vision making it impossible to concentrate on the actual text, or making the text too small unless you have exactly the same ultra-retina 8K HD phone the author does, or thinking "contrast" is a city in Turkey.
Either way, both are a better choice compared to a monospaced font.
The whole field is trend-driven, to the point good advice becomes bad and vice-versa on a cycle. For example, voice activation is now trendy, despite being known as a horrible UI and not accessible besides; it struggles with accent, dialect, and speech dysfluency, but it's in fashion, so it must be a good interface, right? Previous gurus, such as Jef Raskin, are ignored, and regressions (flat UI) are held up as progress.
Sure, I agree with you there. However, text and more broadly readability are not purely UI. Readable text should be part of a good UI, but it is a field in itself and which actually has been quite extensively researched. Unfortunately, as you aptly point out, a lot of people ignore this sort of thing in favor of what is trendy or (in the case of this blog) specific aesthetics.
There are a few things that are quite well understood about what makes text readable on a display. These have been reaffirmed by research spanning decades at this point. Yet they are often ignored in favor of other things.
Whitelist all needed IPs for business functionality, enable the whole Internet once every 3 hours for an hour.
Bonus points if you can do it by network segment.
It would be enough to spare half your computers from the CrowdStrike issue, since I believe the update was pulled after an hour.
Will anyone do this? Probably not. But it is worth entertaining as a possibility between fully-on connectivity and fully disconnected.
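A trivial sketch of the duty-cycle predicate such a scheme could key off. The 3-hour/1-hour numbers are from the comment above, and the per-segment offset is just one way to do the "by network segment" bonus; a firewall cron job could poll this and flip the egress rule accordingly:

    import time

    PERIOD = 3 * 3600  # full cycle: 3 hours
    WINDOW = 3600      # internet enabled for 1 hour of each cycle

    def internet_window_open(ts=None, segment_offset=0):
        # True during the 1-hour slice of each 3-hour cycle. segment_offset
        # staggers the window per network segment, so not every segment is
        # online (and exposed) at the same time.
        if ts is None:
            ts = time.time()
        return (ts + segment_offset) % PERIOD < WINDOW

    print(internet_window_open())                     # segment 0
    print(internet_window_open(segment_offset=3600))  # segment shifted by 1h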
I really don't like this mentality. The IP I'm serving some service from might change. DNS is a useful thing.
That depends on the phase of your "every 3 hours for an hour" signal, and the phase of "the update was pulled after an hour.". That's a 33% overlap. Feelin' lucky?
That's non-ironically the problem. Current software culture creates "secure software" with a 200-million-line-of-code attack surface and then acts surprised when it blows up spectacularly. We do this because there is effectively no liability for software vendors or for their customers. What software security vendors sell is regulatory compliance, not security.
[...] With the new Find My Device network, you’ll be able to locate your devices even if they’re offline. [...] Devices in the network use Bluetooth to scan for nearby items.
Full email content:
Find My Device network is coming soon
You can use Find My Device today to locate devices when they’re connected to the internet. With the new Find My Device network, you’ll be able to locate your devices even if they’re offline. You can also find any compatible Fast Pair accessories when they’re disconnected from your device. This includes compatible earbuds and headphones, and trackers that you can attach to your wallet, keys, or bike.
To help you find your items when they’re offline, Find My Device will use the network of over a billion devices in the Android community and store your devices’ recent locations.
How it works
Devices in the network use Bluetooth to scan for nearby items. If other devices detect your items, they’ll securely send the locations where the items were detected to Find My Device. Your Android devices will do the same to help others find their offline items when detected nearby.
Your devices’ locations will be encrypted using the PIN, pattern, or password for your Android devices. They can only be seen by you and those you share your devices with in Find My Device. They will not be visible to Google or used for other purposes.
You’ll get a confirmation email in 3 days when this feature is turned on for your Android devices. Until then, you can opt out of the network through Find My Device on the web. Your choice will apply to all Android devices linked to [email]. After the feature is on, you can manage device participation anytime through Find My Device settings on the device.
Learn more
This is also why almost all news is nonsense to an expert in a given domain. Basically... "It's not that simple."
A long time ago I worked at a broker trader where all communications, including server communications, had to go through Zscaler as a man in the middle.
What had been routine all of a sudden became impossible.
Turns out that git, apt, pip, cabal and ctan all had different ideas about how easy they should make this. After a month of fighting each of them I gave up. I just downloaded everything from their public FTP repos and built from source over a week. I wish good luck to whoever had to maintain it.
I'm looking at you, Node.js and Firefox.
Node at least supports an environment variable (NODE_EXTRA_CA_CERTS) to add certificates to the list of trusted certs, but it's not as simple as an option to use the system store.
"Many computer scientists believe that people who talk about computer autonomy are indulging in a lot of cybernetic hoopla. Most of these skeptics are engineers who work mainly with technical problems in computer hardware and who are preoccupied with the mechanical operations of these machines. Other computer experts seriously doubt that the finer psychic processes of the human mind will ever be brought within the scope of circuitry, but they see autonomy as a prospect and are persuaded that the social impact will be immense.
Up to a point, says Minsky, the impact will be positive. “The machine dehumanized man, but it could rehumanize him.” By automating all routine work and even tedious low-grade thinking, computers could free billions of people to spend most of their time doing pretty much as they d—n please. But such progress could also produce quite different results. “It might happen”, says Herbert Simon, “that the Puritan work ethic would crumble to dust and masses of people would succumb to the diseases of leisure.” An even greater danger may be in man’s increasing and by now irreversible dependency upon the computer
The electronic circuit has already replaced the dynamo at the center of technological civilization. Many US industries and businesses, the telephone and power grids, the airlines and the mail service, the systems for distributing food and, not least, the big government bureaucracies would be instantly disrupted and threatened with complete breakdown if the computers they depend on were disconnected. The disorder in Western Europe and the Soviet Union would be almost as severe. What’s more, our dependency on computers seems certain to increase at a rapid rate. Doctors are already beginning to rely on computer diagnosis and computer-administered postoperative care. Artificial Intelligence experts believe that fiscal planners in both industry and government, caught up in deepening economic complexities, will gradually delegate to computers nearly complete control of the national (and even the global) economy. In the interests of efficiency, cost-cutting and speed of reaction, the Department of Defense may well be forced more and more to surrender human direction of military policies to machines that plan strategy and tactics. In time, say the scientist, diplomats will abdicate judgment to computers that predict, say, Russian policy by analyzing their own simulations of the entire Soviet state and of the personalities—or the computers—in power there. Man, in short, is coming to depend on thinking machines to make decisions that involve his vital interests and even his survival as a species. What guarantee do we base that in making these decisions the machines will always consider our best interests? There is no guarantee unless we provide it, says Minsky, and it will not be easy to provide—after all, man has not been able to guarantee that his own decisions are made in his own best interests. Any supercomputer could be programmed to test important decisions for their value to human beings, but such a computer, being autonomous, could also presumably write a program that countermanded these “ethical” instructions. There need be no question of computer malice here, merely a matter of computer creativity overcoming external restraints."
an open source example: https://blog.openziti.io/no-listening-ports
This was a resource management problem, a process problem.
Meaning: if your processes are invalid, you can also fail in an offline scenario. If you do not treat quality control or tests correctly, you're gonna have a bad time.
Online amplifies failure at least as well as it amplifies success. Offline maintenance is quite unlikely to bluescreen 8 million devices before anyone has time to figure out something's going wrong.
It's basically an admission that the software may be full of vulnerabilities and the only way to protect it is to limit its exposure to the outside world.
The root of the problem is that almost all software is poorly designed and full of unnecessary complexity which leaves room for exploitation. Companies don't have a good model for quality software and don't aim for it as a goal. They just pile on layer upon layer of complexity.
Quality software tends to be minimalistic. The code should be so easy to read that an average hacker could hack it in under an hour if there was an issue with it... But if the code is both simple and there is no vulnerability within it, then you can rest assured that there exist no hackers on the face of the earth who can exploit it in unexpected ways.
The attack surface should be crystal clear.
You don't want to play a game of cat and mouse with hackers because it's only a matter of time before you come across a hacker who can surpass your expectations. Also, it's orders of magnitude more work to create complex secure software than it is to create simple secure software.
The mindset to adopt is that bad code deserves to be hacked. Difficulty involved in pulling off the hack is not a factor. It's a matter of time before hackers can disentangle the complexity.
I never understood this. You never have absolute security, that’s why you must apply the Swiss cheese model. Obscurity is definitely a worthy slice to have. Few people can attack you if you can only be attacked in person.
All security is really just the swiss-cheese model. Some entities just invest in more slices than others to keep more sophisticated/determined attackers out (such as nation states).
What other practical model is there for security than defense in depth? "Just make 100% bulletproof computers with no faults?"
- Systems that store the code in read-only memory. Example: slot machines.
- Systems with backup systems completely different from the main system, implemented by a different group, and thoroughly tested. Example: Airbus aircraft.
- Systems continuously sanity-checked by hard-wired checkers. Example: Traffic lights.
- Systems where the important computational functions are totally stateless and hardware reset to a ground state for each transaction. Example: #5 Crossbar.
Obscurity is one layer, and it does protect against drive by attacks.
Obscurity as the only layer does not work.
Obscurity as an added layer improves security.
The criticism of "security through obscurity" is specifically Kerchoff's Principle, which applies to cryptographic systems. It is not an absolute rule outside of that domain.
There's a reason Stuxnet was an exception. These things are not very common and the only reason we even know about it is because it managed to spread further than its intended target.
I disagree with this; no internet is not obscurity. It's more like encapsulation for the sake of having a controllable interface, via setters and getters only.
If some computer rules something (something as big as an airport or as tiny as a washing machine), how often does it really need an update of something system-related like the kernel? How many MB of code with potential 0-days are you going to expose to the wild for the sake of that auto-update?
An untrusted network (the internet) is a risk. Removing access from that network is one way to mitigate that risk.
Obscurity doesn’t remove a risk, it just reduces its likelihood. An obscurity approach here would be more akin to changing your SSH port from 22 to some random number rather than blocking SSH entirely.
But instead we have protocols where the security boundary represents thousands of pages of specifications, parsing of complex structures in elevated contexts, network requests on behalf of untrusted users, logging without input escaping, and a dozen "unused" extensions added by some company in the 1990s to be backwards compatible with their 5-bit EBCDIC machines.
Just think of it as a very efficient firewall.
Ah yes, security through absolute perfection.
People talk a lot about security but nobody actually values it. You just send out some Uber Eats coupons or free Credit Protection vouchers and keep on doing what you were doing and in a month everyone has forgotten.