When PerSourcePenalties are enabled, sshd(8) will monitor the exit
status of its child pre-auth session processes. Through the exit
status, it can observe situations where the session did not
authenticate as expected. These conditions include when the client
repeatedly attempted authentication unsuccessfully (possibly indicating
an attack against one or more accounts, e.g. password guessing), or
when client behaviour caused sshd to crash (possibly indicating
attempts to exploit sshd).
When such a condition is observed, sshd will record a penalty of some
duration (e.g. 30 seconds) against the client's address.
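For illustration, tuning this in sshd_config might look like the sketch below. The PerSourcePenalties keyword itself is real (new in OpenSSH 9.8), but the sub-option names and durations here are assumptions from memory of the release notes, so verify them against sshd_config(5) before relying on any of this:

```
# Hypothetical sketch; check the exact sub-option syntax in sshd_config(5).
# Penalise a source address when its pre-auth child crashes or repeatedly
# fails authentication, up to a cap.
PerSourcePenalties crash:90 authfail:5 max:600
# Addresses exempt from penalties (e.g. a trusted monitoring host).
PerSourcePenaltyExemptList 192.0.2.10
```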
https://github.com/openssh/openssh-portable/commit/81c1099d2...

It's not really a reversible patch that gives anything away to attackers: it changes the binary architecture in a way that has the side effect of removing the specific vulnerability and also mitigates the whole exploit class, if I understand it correctly. Very clever.
That's a previously-announced feature for dealing with junk connections that also happens to mitigate this vulnerability because it makes it harder to win the race. Discussed previously https://news.ycombinator.com/item?id=40610621
On June 6, 2024, this signal handler race condition was fixed by commit
81c1099 ("Add a facility to sshd(8) to penalise particular problematic
client behaviours"), which moved the async-signal-unsafe code from
sshd's SIGALRM handler to sshd's listener process, where it can be
handled synchronously:
https://github.com/openssh/openssh-portable/commit/81c1099d22b81ebfd20a334ce986c4f753b0db29
Because this fix is part of a large commit (81c1099), on top of an even
larger defense-in-depth commit (03e3de4, "Start the process of splitting
sshd into separate binaries"), it might prove difficult to backport. In
that case, the signal handler race condition itself can be fixed by
removing or commenting out the async-signal-unsafe code from the
sshsigdie() function.
The cleverness here is that this commit is both "a previously-announced feature for dealing with junk connections", and a mitigation for the exploit class against similar but unknown vulnerabilities, and a patch for the specific vulnerability, because it "moved the async-signal-unsafe code from sshd's SIGALRM handler to sshd's listener process, where it can be handled synchronously". It fixes the vulnerability as part of doing something that makes sense on its own, so you wouldn't know it's the patch even looking at it.
- /* Log error and exit. */
- sigdie("Timeout before authentication for %s port %d",
-     ssh_remote_ipaddr(the_active_state),
-     ssh_remote_port(the_active_state));
+ _exit(EXIT_LOGIN_GRACE);
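That replacement is the general rule in miniature: a signal handler should only do async-signal-safe things (here, just _exit()). When a signal really does need non-trivial handling, one standard way to make it synchronous is the self-pipe trick: the handler performs a single write(2), and an ordinary event loop does the real work. A minimal sketch, not OpenSSH's actual code (commit 81c1099 instead reports the condition to the listener via the child's exit status), with illustrative names:

```c
#include <signal.h>
#include <unistd.h>

static int alarm_pipe[2];   /* created with pipe(2) during startup */

/* The handler does only async-signal-safe work: a single write(2). */
static void
grace_alarm_handler(int sig)
{
    char c = 0;
    (void)sig;
    (void)write(alarm_pipe[1], &c, 1);
}

/*
 * A normal poll()/select() loop watches alarm_pipe[0]; when it becomes
 * readable, the timeout is handled synchronously, in a context where
 * syslog() and friends are safe to call.
 */
```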
[1] https://security-tracker.debian.org/tracker/source-package/o...
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2024-6387 (tracking task)
https://bugzilla.redhat.com/show_bug.cgi?id=2294905 (Fedora 39 issue)
EL 9 is also affected, but a fix is not yet released. The tracking task will update as things move along.
Amazon Linux 2023 is affected; Amazon Linux 1 & 2 are not. Status updates will be posted to https://explore.alas.aws.amazon.com/CVE-2024-6387.html
GLSA 202407-09: https://glsa.gentoo.org/glsa/202407-09
Package metadata & log: https://packages.gentoo.org/packages/net-misc/openssh
> Successful exploitation has been demonstrated on 32-bit Linux/glibc systems with ASLR. Under lab conditions, the attack requires on average 6-8 hours of continuous connections up to the maximum the server will accept. Exploitation on 64-bit systems is believed to be possible but has not been demonstrated at this time. It's likely that these attacks will be improved upon.
void
sigdie(const char *fmt,...)
{
#ifdef DO_LOG_SAFE_IN_SIGHAND
va_list args;
va_start(args, fmt);
do_log(SYSLOG_LEVEL_FATAL, fmt, args);
va_end(args);
#endif
_exit(1);
}
to this:

void
sshsigdie(const char *file, const char *func, int line, const char *fmt, ...)
{
va_list args;
va_start(args, fmt);
sshlogv(file, func, line, 0, SYSLOG_LEVEL_FATAL, fmt, args);
va_end(args);
_exit(1);
}
which lacks the #ifdef. What could have prevented this? More eyes on the pull request? It's wild that software nearly the entire world relies on for secure access is maintained by seemingly just two people [2].
[1] https://github.com/openssh/openssh-portable/commit/752250caa...
[2] https://github.com/openssh/openssh-portable/graphs/contribut...
#include <sys/resource.h>
#include <unistd.h>

/* Code here must be async-signal-safe! Locks may be in an indeterminate state. */
void CloseAllFromTheHardWay(int firstfd)
{
    struct rlimit lim;
    getrlimit(RLIMIT_NOFILE, &lim);
    for (int fd = (lim.rlim_cur == RLIM_INFINITY ? 1024 : (int)lim.rlim_cur); fd >= firstfd; --fd)
        close(fd);
}
Although to be honest, getrlimit isn't actually on the list here: https://man7.org/linux/man-pages/man7/signal-safety.7.html

But I hope that removing the comment, or modifying code carrying a comment about async-signal-safety, might have been noticed in review. The code you quoted only has the mention of SAFE_IN_SIGHAND to suggest that this code might need to be async-signal-safe.
It's open source. If you feel you could do a better job, then by all means, go ahead and fork it.
You're not entitled to anything from open source developers. They're allowed to make mistakes, and they're allowed to have as many or as few maintainers/reviewers as they wish.
https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...
> Or do you really think that if someone has such concerns, their only recourse is to start contributing to the project?
Yes, I think one way to not come off as entitled when being critical to volunteers is to also offer volunteer work yourself.
And it's most helpful to provide feedback directly to the developers through their preferred means of communication.
> Or do you think they're not valid concerns?
Irrelevant what I think here, that's kind of the point. That's just my opinion.
> That project of course being one of the most security-sensitive projects one could imagine.
Agreed that the project is important. However, this is irrelevant, too, unless you're bolstering your "valid concerns" argument.
So what level of contribution is the bar here? I mean, what's the commit count? Do I have to be developing core features for years? Does writing docs count? Do I have to volunteer for a particular project before I can in any way criticize it, or is just any open source work okay?
> And it's most helpful to provide feedback directly to the developers through their preferred means of communication.
This is not feedback meant directly for the developers - these are valid questions that were meant to spark a discussion here on HN. Of course, with users like you around, that's difficult.
> However, this is irrelevant, too, unless you're bolstering your "valid concerns" argument.
It is relevant, because it's absurd to think that just any developer can just go and contribute to such a project.
All I offered was a way to not sound entitled. Personally, I certainly hold the opinions of someone that's helping me much higher than the opinion of someone that isn't.
Another approach to avoid sounding entitled could be to post a more thoughtful and comprehensive analysis on HN or a blog, rather than nitpicking a commit and posting broad questions like "what could have prevented this?" and insinuating that the volunteers need to do better.
Finally, if it's true that not "just any developer" can contribute to OpenSSH... well it's open-source. Fork it. Or build your own.
And I must've missed this bit in your original few comments.
> Finally, if it's true that not "just any developer" can contribute to OpenSSH... well it's open-source. Fork it. Or build your own.
What good would that do? Would that enable the forker to voice their complaints about the original OpenSSH that's used by literally everyone else without people like you chiming in?
By the by, is it at all relevant that OpenSSH development is funded by at least 1 non-profit and probably other sources as well? They're not volunteers.
(And even if they were volunteers, users are quite within their rights to voice concerns and criticisms about software in a constructive manner. If open source developers don't want to face that, they can not develop open source software.)
We've been over this. I disagree that there are no insinuations or entitlement in GP. It's okay to disagree.
> What good would that do? Would that enable the forker to voice their complaints about the original OpenSSH that's used by literally everyone else without people like you chiming in?
This can do a lot of good. It is a solution to the problem that you have. If others agree with your critique and approach (which is likely), then they will also appreciate your project. This is how projects like Neovim started, and arguably why Neovim has been as successful as it is.
> By the by, is it at all relevant that OpenSSH development is funded by at least 1 non-profit and probably other sources as well? They're not volunteers.
I was under the impression that it was largely volunteer work, or at least, severely underpaid development which is pretty normal in the open source world. I will take your word on this one, I don't have the time to go look at non-profit financials.
> And even if they were volunteers, users are quite within their rights to voice concerns and criticisms about software in a constructive manner.
100% agree, the keywords being "constructive manner." Higher effort than nitpicking a commit and asking broad questions.
obligatory xkcd https://xkcd.com/2347/
1. Using a proper programming language that doesn't allow you to set up arbitrary functions as signal handlers (since that's obviously unsafe on common libcs...) - e.g. you can't do that in safe Rust, or Java, etc.
2. Using a well-implemented libc that doesn't cause memory corruption when calling async-signal-unsafe functions but only deadlocks (this is very easy to achieve by treating code running in signals as a separate thread for thread-local storage access purposes), and preferably also doesn't deadlock (this requires no global mutexes, or the ability to resume interrupted code holding a mutex)
3. Thinking when changing and accepting code, not like the people who committed and accepted [1] which just arbitrarily removes an #ifdef with no justification
4. Using simple well-engineered software written by good programmers instead of OpenSSH
Point (3) seems like a personal attack on the developers/reviewer, who made human errors. Humans do in fact make mistakes, and the best build toolchain/test suite in the world won’t save you 100% of the time.
Point (4) seems to imply that OpenSSH is not well-engineered, simple, or written by good programmers. While all of that is fairly subjective, it is (I feel) needlessly unkind.
I’d invite you to recommend an alternative remote access technology with an equivalent track record of security and stability in this space — I’m not aware of any.
Except for those of us who live in a world where most of their OS and utilities and libraries were originally written decades before Rust existed, and often even before Java existed. And where "legacy" C code pretty much underpins everything running on the public internet and which you need to connect to.
There's a very real risk that reimplementing every piece of code on a modern internet-connected server in exciting new "safe" languages and libc-type things - by a bunch of "modern" programmers who do not have the learning experience of decades' worth of mistakes - will end up not just with new implementations of old and already-fixed bugs and security problems, but also with new implementations that are incompatible in strange and hard-to-debug ways with every existing piece of software that uses SSH protocols as deployed in the field.
I, for one, am not going to install and put into production version 1.0 of some new OpenSSH replacement written in Rust or Go or Java, which has a non-zero chance of strange edge-case bugs that differ when connecting to SSH on different Linux/BSD/Windows distros or versions, across different CPU architectures, and probably has subtly different bugs when connecting to different cloud hyperscalers.
This is actually an interesting variant of a signal race bug. The vulnerability report says, “OpenBSD is notably not vulnerable, because its SIGALRM handler calls syslog_r(), an async-signal-safer version of syslog() that was invented by OpenBSD in 2001.” So a signal-safety mitigation encouraged OpenBSD developers to put non-trivial code inside signal handlers, which becomes unsafe when ported to other systems. They would have avoided this bug if they had done one of their refactoring sweeps to minimize the amount of code in signal handlers, according to the usual wisdom and common unix code guidelines.
I notice that nowadays signalfd() looks like a much better solution to the signal problem, but I've never tried using it. I think I'll give it a go in my next project.
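A minimal sketch of that approach, assuming Linux (signalfd(2) is Linux-specific): block the signal, then read it as ordinary data from a file descriptor instead of installing a handler:

```c
#include <sys/signalfd.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    sigset_t mask;
    struct signalfd_siginfo si;
    int sfd;

    sigemptyset(&mask);
    sigaddset(&mask, SIGALRM);
    /* Block the signal so it's delivered via the fd, not a handler. */
    sigprocmask(SIG_BLOCK, &mask, NULL);

    sfd = signalfd(-1, &mask, 0);
    if (sfd == -1)
        return 1;

    alarm(2);   /* queue a SIGALRM for ourselves */

    /* Blocking read here; in real code this fd would sit in a poll() loop. */
    if (read(sfd, &si, sizeof si) == sizeof si)
        printf("got signal %u synchronously\n", si.ssi_signo);
    return 0;
}
```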
Update: according to https://man7.org/linux/man-pages/man7/signal-safety.7.html strcmp() actually is async-signal-safe as of POSIX.1-2008 TC2.
On x86 and some other (mostly extinct) architectures that have string instructions, the string functions are usually best implemented using those (you might get a generation where there's a faster way and then microcode catches back up). And specifically (not just?) on x86 there was/is some confusion about who should or would restore some of the flags that control what these do. So you could end up with e.g. a memcpy or some other string instruction being interrupted by a signal handler and then it would continue doing what it did, but in the opposite direction, giving you wrong results or even resulting in buffer overflows (imagine interrupting a 1 MB memcpy that just started and then resuming it in the opposite direction).
Not necessarily. An implementation might choose to e.g. use some kind of cache similar to what the JVM does with interned strings, and then a function like strcmp() might behave badly if it happened to run while that cache was halfway through being rebuilt.
For arbitrary pointers no. But it could special-case e.g. string constants in the source code and/or pointers returned by some intern function (which is also how the JVM does it - for arbitrary strings, even though they're objects, it's always possible that the object has been GCed and another string allocated at the same location).
For example, recently I wanted to call `gettid()` in a signal handler. Which I guessed was just a simple wrapper around the syscall.
However, it seems this can cache the thread ID in thread local storage (can't remember exact details).
I switched to making a syscall instead.
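Roughly like this, as a sketch; the raw syscall bypasses whatever caching the libc wrapper may do, at the cost of portability:

```c
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* Fetch the kernel's thread ID directly: no libc state, no TLS access,
 * so it's reasonable to call from a signal handler. */
static pid_t
raw_gettid(void)
{
    return (pid_t)syscall(SYS_gettid);
}
```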
https://man7.org/linux/man-pages/man7/signal-safety.7.html
If it's on this list I generally trust it is safe.
I guess my point is, that if it's not, even a simple function may appear safe, but could do surprising things.
IIRC, the rule is also that said global variable must have the type "volatile sig_atomic_t".
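For reference, the canonical minimal-handler pattern:

```c
#include <signal.h>

static volatile sig_atomic_t got_signal = 0;

static void
handler(int sig)
{
    (void)sig;
    got_signal = 1;   /* the only write the handler performs */
}

/*
 * The main loop checks and clears the flag outside signal context:
 *   if (got_signal) { got_signal = 0; ... handle it normally ... }
 */
```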
https://github.com/bminor/musl/blob/master/src/misc/syslog.c
Everything there is either on stack or in static variables protected from reentrancy by the lock. The {d,sn,vsn}printf() calls there don't allocate in musl, although they might in glibc. Have I missed anything here?
https://www.freebsd.org/security/advisories/FreeBSD-SA-24:04...
> Finally, if sshd cannot be updated or recompiled, this signal handler race condition can be fixed by simply setting LoginGraceTime to 0 in the configuration file. This makes sshd vulnerable to a denial of service (the exhaustion of all MaxStartups connections), but it makes it safe from the remote code execution presented in this advisory.
Setting 'LoginGraceTime 0' in the sshd_config file seems to mitigate the issue.
> If the value is 0, there is no time limit.
Isn't that worse?
(As the advisory notes, you do then have to deal with the DoS which the timeout setting is intended to avoid, where N clients all connect and then never disconnect, and they aren't timed-out and forcibly disconnected on the server end any more.)
> In our experiments, it takes ~10,000 tries on average to win this race condition; i.e., with 10 connections (MaxStartups) accepted per 600 seconds (LoginGraceTime), it takes ~1 week on average to obtain a remote root shell.
You can go belt-and-suspenders with an external tool that watches for sshd sitting in pre-auth past your real timeout and kills it or drops the TCP connection [1] (which will make the sshd exit in a more orderly fashion)
Edit: 18.04 Bionic is unaffected; its ssh version is 7.6, which is too old.
If you don't have extended support: You're vulnerable to worse, easier to exploit bugs :)
[1] https://security-tracker.debian.org/tracker/CVE-2024-6387
One thing I notice (as an independent person, who isn't doing any of the work!) is that it often feels like, in order to 'win', people are expected to find a full chain which gives them remote access, rather than just finding one issue and getting it fixed / getting paid for it.
It feels to me like finding a single hole should be sufficient -- one memory corruption, one sandbox escape. Maybe at the moment there are just too many little issues, that you need a full end-to-end hack to really convince people to take you seriously, or pay out bounties?
So there is a need to differentiate between "real" security bugs [like this one] and non-security-impacting bugs, and demonstrating how an issue is exploitable is therefore very important.
I don't see the need to demonstrate this going away any time soon, because there will always be no end of non-security-impacting bugs.
Or there is no real consideration of whether that's actually an escalation of context. Like, "Oh, if I can change these postgres configuration parameters, I can cause a problem", or "Oh, if I can change values in this file I can cause huge trouble". Except modifying that file or that config parameter requires root/supervisor access, so there is no escalation because you have full access already anyhow?
I probably wouldn't have to look at documentation too much to get postgres to load arbitrary code from disk if I have supervisor access to the postgres already. Some COPY into some preload plugin, some COPY / ALTER SYSTEM, some query to crash the node, and off we probably go.
But yeah, I'm frustrated that we were forced to route our security@ domain to support to filter out this nonsense. I wouldn't be surprised if we miss some actually important issue unless demonstrated like this, but it costs too much time otherwise.
Hospitals often try to make this argument about insecure MySQL connections inside their network, for example. Then something like Heartbleed happens and, lo and behold, all the "never see an adversary" soft targets get exfiltrated.
I believe this has happened to curl several times recently.
Let me give you a different perspective.
Imagine I make a serialisation/deserialisation library which would be vulnerable if you fed it untrusted data. This is by design, users can serialise and deserialise anything, including lambda functions. My library is only intended for processing data from trusted sources.
To my knowledge, nobody uses my library to process data from untrusted sources. One popular library does use mine to load configuration files, they consider those a trusted data source. And it's not my job to police other people's use of my library anyway.
Is it correct to file a CVE of the highest priority against my project, saying my code has a Remote Code Execution vulnerability?
However, if you (or anybody else) catch a program passing untrusted data to any library that says "trusted data only", that's definitely CVE-worthy in my books, even if you cannot demonstrate the full attack chain. That CVE, though, should be targeted at the program that passes untrusted data to the trusted interface.
That said, if you're looking for a bounty instead of just some publicity as a reward for publishing the vulnerability, you must fulfil the requirements of the bounty, and those typically say that a bounty will be paid for a complete attack chain only.
I guess that's because companies paying bounties are typically interested in real world attacks and are not willing to pay bounties for theoretical vulnerabilities.
I think this is problematic because it causes bounty hunters to keep theoretical vulnerabilities secret and wait for possible future combination of new code that can be used to attack the currently-theoretical vulnerability.
I would argue that it's much better to fix issues while they are still theoretical only. Maybe pay a lesser bounty for theoretical vulnerabilities, and a reduced payment for the full attack chain if it's based on a publicly known theoretical vulnerability. Just make sure that the combination pays at least as well as publishing a full attack chain for a 0day vulnerability. That way there would be an incentive to publish theoretical vulnerabilities immediately for maximum pay, because otherwise somebody else might catch the theoretical part and publish faster than you can.
No need to imagine: PyYAML has exactly that situation. There have been attempts to use safe deserialization by default, including an attempt to release a new major version (rolled back), and it settled on having a required argument specifying which mode/loader to use. See: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=PyYAML
Yes, it is correct to file a CVE of the highest priority against your project, because "only intended for processing data from trusted sources" is a frankly ridiculous policy for a serialization/deserialization library.
If it's your toy project that you never expected anyone to use anyway, you don't care about CVEs. If you want to be taken seriously, you cannot play pass-the-blame and ignore the fact that your policy turns the entire project into a security footgun.
Truly, it's a design decision so ridiculous nobody else has made it. Except Python's pickle, Java's serialization, Ruby's Marshal and PHP's unserialize of course. But other than that, nobody!
If something is supposed to load arbitrary code, not just data, that needs to be super clear at a glance. If it comes across as a data library, but allows takeover, you have a problem. Especially if there isn't a similar data-only function/library.
There are almost always various weaknesses which do not become exploitable until and unless certain conditions are met. This also becomes evident in contests like Pwn2Own where multiple vulnerabilities are often chained to eventually take the device over and remain un-patched for years. Researchers often sit on such weaknesses for a long time to eventually maximize the impact.
Sad but that is how it is.
It should be.
> Maybe at the moment there are just too many little issues...
There are so many.
Minimal patches for those can't/don't want to upgrade: https://marc.info/?l=oss-security&m=171982317624594&w=2
[1] https://www.tarsnap.com/spiped.html
* spiped can be used transparently by just putting a "ProxyCommand" in your ssh_config. This means you can connect to a server just by using "ssh", normally. (As opposed to wireguard, where you need to always be on your VPN, or else connect to your VPN manually before running ssh.)
* As opposed to wireguard, which runs in the kernel, spiped can easily be set up to run as a user, and be fully hardened by using the correct systemd .service configuration [4]
* The protocol is much more lightweight than TLS (used by stunnel), it's just AES, padded to 1024 bytes with a 32 bit checksum. [5]
* The private key is much easier to set up than stunnel's TLS certificate, "dd if=/dev/urandom count=4 bs=1k of=key" and you're good to go.
[1] https://packages.debian.org/bookworm/spiped
[2] https://www.freshports.org/sysutils/spiped/
[3] https://archlinux.org/packages/extra/x86_64/spiped/
[4] https://ruderich.org/simon/notes/systemd-service-hardening
The spiped documentation recommends a key size with a minimum of 256b of entropy. I'm curious why you've chosen such a large key size (4096b) here? Is there anything to suggest 256b is no longer sufficient for the general case?
It doesn't matter if you have more than 256 bits, as your key file gets hashed with SHA256 at the end[1]. It could be 5 GiB and it would be the same. So yes, you're right to mention that more bits don't add more security.
[1] https://github.com/Tarsnap/spiped/blob/2194b2c64de65eed119ab...
They run sshd with the -D option already, logging everything to stdout and stderr, as their systemd already catches this output and sends it to journal for logging.
So I don't see anywhere they would be calling syslog, unless sshd does it on its own.
At most maybe add OPTIONS=-e into /etc/sysconfig/sshd.
1. It affects OpenSSH versions 8.5p1 to 9.7p1 on glibc-based Linux systems.
2. The exploit is not 100% reliable - it requires winning a race condition.
3. On a modern system (Debian 12.5.0 from 2024), the researchers estimate it takes:
   - ~3-4 hours on average to win the race condition
   - ~6-8 hours on average to obtain a remote root shell (due to ASLR)
4. It requires certain conditions:
   - The system must be using glibc (not other libc implementations)
   - 100 simultaneous SSH connections must be allowed (MaxStartups setting)
   - LoginGraceTime must be set to a non-zero value (default is 120 seconds)
5. The researchers demonstrated working exploits on i386 systems. They believe it's likely exploitable on amd64 systems as well, but hadn't completed that work yet.
6. It's been patched in OpenSSH 9.8p1 released in June 2024.
Stupid question, perhaps, but if those two lines inside the sshd_config are commented out with '#', does this mean that grace period and max. sessions are technically unlimited and therefore potentially vulnerable?
Found my own answer: if the values are commented out, it means the default values are being used. If the file hasn't been modified, the defaults are the values you see inside the config file.
In our experiments, it takes ~10,000 tries on average to win this race
condition, so ~3-4 hours with 100 connections (MaxStartups) accepted
per 120 seconds (LoginGraceTime). Ultimately, it takes ~6-8 hours on
average to obtain a remote root shell, because we can only guess the
glibc's address correctly half of the time (because of ASLR).
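Spelling out the advisory's arithmetic as a sanity check (the final doubling is my reading of the 50/50 ASLR guess):

```
10,000 tries / (100 tries per 120 s window) = 100 windows
100 windows x 120 s = 12,000 s, i.e. ~3.3 hours to win the race
x2 for the coin-flip glibc guess: ~6.7 hours, within the quoted 6-8
```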
MaxStartups default is 10.

The exploit tries to interrupt handlers that run when the login grace period times out - so we are already at a point where the authentication workflow has ended without all the credentials having been passed.
Plus, in the "Practice" section, they discuss using user name value as a way to manipulate memory at a certain address, so they want/need to control this value.
Alerting is useless, with the volume of automated exploits attempted.
But people here are going to explain that fail2ban is security theater...
This is all a matter of threat and risk management. If you know what you are doing then fail2ban or portknocking is another layer on your security.
Security theater in my opinion is something else: nonsense password policies, hiding your SSID, whitelisting MACs, ...
More resourceful attackers could automate exploit attempts using a huge botnet, and it'd likely look similar to the background of ssh brute-force bots that we already see 24/7/365.
10:30:60 is mentioned in the man for start:rate:full, so I set mine to that value.
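For reference, that's the default spelled out explicitly in sshd_config; per sshd_config(5), start:rate:full means begin refusing rate% of new unauthenticated connections once start are pending, scaling up to refusing all of them at full:

```
# Refuse 30% of new unauthenticated connections once 10 are pending,
# increasing linearly to 100% refusal at 60 pending connections.
MaxStartups 10:30:60
```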
Thanks for the quote
Mitigate by using fail2ban?
Nice to see that Ubuntu isn't affected at all
In theory, this could be used (much quicker than the mentioned days/weeks) to get local privilege escalation to root, if you already have some type of shell on the system. I would assume that fail2ban doesn't block localhost.
If fail2ban isn't going to blocklist localhost, then it isn't a mitigation for this vulnerability because RCE implies LPE.
But, sure, in that case fail2ban won't mitigate, but that's pretty damn obviously implied. For 99% of people and situations, it will.
It's going to apply to the number of servers where an attacker has low-privileged access (think: www-data) and an unpatched sshd. Attackers don't care if it's an RCE or not: if a public sshd exploit can be used on a system running a Linux version without a public Linux LPE, it will be used. Being local also greatly increases the exploitability.
Then consider the networks where port 22 is blocked from the internet but sshd is running in some internal network (or just locally for some reason).
Right, which is almost none. www-data should be set to noshell 99% of the time.
> or just locally for some reason).
This is all that would be relevant, and this is also very rare.
I get the point.
My point was the example being given is less than 1% of affected cases.
> It’s very common on linux pentests to need to privesc from some lower-privileged foothold
Sure. Been doing pentests for 20+ years :)
> So yes I would expect this turns out to be a useful privesc in practice.
Nah.
I don’t get it then… Do you never end up having to privesc in your pentests on linux systems? No doubt it depends on customer profile but I would guess personally on at least 25% of engagements in Linux environments I have had to find a local path to root.
Of course I do.
I'm not saying privesc isn't useful, I'm saying the cases where you will ssh to localhost to get root are very rare.
Maybe you test different environments or something, but on most corporate networks I test, the Linux machines are dev machines just used for compiling/testing that basically have shared passwords, or they're servers for webapps or something else where normal users (most of whom have a Windows machine) won't have a shell account.
If there's a server where I only have a local account and I'm trying to get root and it's running an ssh server vulnerable to this attack, of course I'd try it. I just don't expect to be in that situation any time soon, if ever.
And you don't actually pentest the software which those users on the windows machine are using on the Linux systems? So you find a Jenkins server which can be used to execute Groovy scripts to execute arbitrary commands, the firewall doesn't allow connections through port 22, and it's just a "well, I got access, nothing more to see!"?
You really love your assumptions, huh?
> it's just a "well, I got access, nothing more to see!"?
I said nothing like that, and besides that, if you were not just focused on arguing for the sake of it, you would see MY point was about the infrequency of the situation you were talking about (and even then your original point seemed to be contrarian in nature more than anything).
Huh? execve(2), of course, lets you execute arbitrary files. No need to spawn a tty at all. https://swisskyrepo.github.io/InternalAllTheThings/cheatshee...
>This is all that would be relevant, and this is also very rare.
Huh? Exploiting an unpatched vulnerability on a server to get access to a user account is.. very rare? That's exactly what lateral movement is about.
For example:
> Huh? Exploiting an unpatched vulnerability on a server to get access to a user account is.. very rare?
The 'this' I refer to is very clearly not what you've decided to map it to here. The 'this' I refer to, if you follow the comment chain, refers to a subset of something you said which was relevant to your point - the rest was not.
It doesn't matter if 99% of the situations you can think of are not problematic. If 1% is feasible and the attackers know about it, it's an attack vector.
No mention on 22.04 yet.
AMD to the rescue - fortunately they decided to leave the take-a-way and prefetch-type-3 vulnerability unpatched, and continue to recommend that the KPTI mitigations be disabled by default due to performance costs. This breaks ASLR on all these systems, so these systems can be exploited in a much shorter time ;)
AMD’s handling of these issues is WONTFIX, despite (contrary to their assertion) the latter even providing actual kernel data leakage at a higher rate than meltdown itself…
(This one they’ve outright pulled down their security bulletin on) https://pcper.com/2020/03/amd-comments-on-take-a-way-vulnera...
(This one remains unpatched in the third variant with prefetch+TLB) https://www.amd.com/en/resources/product-security/bulletin/a...
edit: there is a third now, building on the first one, with unpatched vulnerabilities in all zen1/zen2 as well… so this one is WONTFIX too it seems, like most of the defects TU Graz has turned up.
https://www.tomshardware.com/news/amd-cachewarp-vulnerabilit...
Seriously I don’t know why the community just tolerates these defenses being known-broken on the most popular brand of CPUs within the enthusiast market, while allowing them to knowingly disable the defense that’s already implemented that would prevent this leakage. Is defense-in-depth not a thing anymore?
Nobody in the world would ever tell you to explicitly turn off ASLR on an intel system that is exposed to untrusted attackers… yet that’s exactly the spec AMD continues to recommend and everyone goes along without a peep. It’s literally a kernel option that is already running and tested and hardens you against ASLR leakage.
The “it’s only metadata” is so tired. Metadata is more important than regular data, in many cases. We kill people, convict people, control all our security and access control via metadata. Like yeah it’s just your ASLR layouts leaking, what’s the worst that could happen? And I mean real data goes too in several of these exploits too, but that’s not a big deal either… not like those ssh keys are important, right?
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Reg file data sampling: Not affected
Retbleed: Not affected
Spec rstack overflow: Vulnerable: Safe RET, no microcode
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Srbds: Not affected
Tsx async abort: Not affected
Only regular stuff.

Yet it provides valuable separation between kernel and userspace address ranges.
iirc the predecessor to KPTI was made before these hw flaws were announced as a general enhancement to ASLR.
AMD aside, Spectre V2 isn't even default mitigated for userspace across the board, you must specify spectre_v2=on for userspace to be protected.
https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...
AMD's security bulletin is actually incredibly weaselly and in fact quietly acknowledges KPTI as the reason further mitigation is not necessary, and then goes on to recommend that KPTI remain disabled anyway.
https://www.amd.com/en/resources/product-security/bulletin/a...
> The attacks discussed in the paper do not directly leak data across address space boundaries. As a result, AMD is not recommending any mitigations at this time.
That's literally the entire bulletin, other than naming the author and recommending you follow security best-practices. Two sentences, one of which is "no mitigations required at this time", for an exploit which is described by the author (who is also a named author of the Meltdown paper!) as "worse than Meltdown", in the most popular brand of server processor.
Like it's all very carefully worded to avoid acknowledging the CVE in any way, but to also avoid saying anything that's technically false. If you do not enable KPTI then there is no address space boundary, and leakage from the kernel can occur. And specifically that leakage is page-table layouts - which AMD considers "only metadata" and therefore not important (not real data!).
But it is a building block which amplifies all these other attacks, including Spectre itself. Spectre was tested in the paper itself and - contrary to AMD's statement (one of the actual falsehoods they make despite their weaseling) - does result in actual leakage of kernel data and not just metadata (the author notes that this is a more severe leak than meltdown itself). And leaking metadata is bad enough by itself - like many kinds of metadata, the page-table layouts are probably more interesting (per byte exfiltrated) than the actual data itself!
AMD's interest is in shoving it under the rug as quietly as possible - the solution is flushing the caches every time you enter/leave kernel space, just like with Meltdown. That's what KPTI is/does, you flush caches to isolate the pages. And AMD has leaned much more heavily on large last-level caches than Intel has, so this hurts correspondingly more.
But I don't know why the kernel team is playing along with this. The sibling commenter is right in the sense that this is not something that is being surfaced to users to let them know they are vulnerable, and that the kernel team continues to follow the AMD recommendation of insecure-by-default and letting the issue go quietly under the rug at the expense of their customers' security. This undercuts something that the kernel team has put significant engineering effort into mitigating - not as important as AMD cheating on benchmarks with an insecure configuration I guess.
There has always been a weird sickly affection for AMD in the enthusiast community, and you can see it every time there's an AMD vulnerability. When the AMD vulns really started to flow a couple years ago, there was basically a collective shrug and we just decided to ignore them instead of mitigating. So much for "these vulnerabilities only exist because [the vendor] decided to cut corners in the name of performance!". Like that's explicitly the decision AMD has made with their customers' security. And everyone's fine with it, same weird sickly affection for AMD as ever among the enthusiast community. This is a billion-dollar company cutting corners on their customers' security so they can win benchmarks. It's bad. It shouldn't need to be said, but it does.
I very much feel that - even given that people's interest or concern about these exploits is fading over time - that even today (let alone a couple years ago) Intel certainly would not have received the same level of deference if they just said that a huge, performance-sapping patch was "not really necessary" and that everyone should just run their systems in an insecure configuration so that benchmarks weren't unduly harmed. It's a weird thing people have where they need to cover all the bases before they will acknowledge the slightest fault or problem or misbehavior with this specific corporation. Same as the sibling who disputed all this because Linux said he was secure - yeah, the kernel team doesn't seem to care about that, but as I demonstrated there is still a visible timing thing even on current BIOS/OS combinations.
Same damn thing with Ryzenfall too - despite the skulduggery around Monarch, CTS Labs actually did find a very serious vuln (actually 3-4 very serious exploits that let them break out of the guest, jailbreak the PSP, bypass AMD's UEFI signing, and achieve persistence), and it's funny to look back at the people whining that it doesn't deserve a 9.0 severity or whatever. Shockingly, MITRE doesn't give those out for no reason, and AMD doesn't patch "root access lets you do root things" for no reason either.
https://www.youtube.com/watch?v=QuqefIZrRWc
I get why AMD is doing it. I don't get why the kernel team plays along. It's unintentionally a really good question from the sibling: why isn't the kernel team applying the standards uniformly? Here's A Modest Security Proposal: if we just don't care about this class of exploit anymore, and KASLR isn't going to be a meaningful part of a defense-in-depth, shouldn't it be disabled for everyone at this point? Is that a good idea?
You need to either implement processor-level ASLR protections (and probably those guarantees fade over time!) or do KPTI and flush your shit when you move between address spaces. Or there needs to be an understanding from the kernel team that they need to develop under the page-allocation model that attackers can see your allocation patterns after initial breaches. Like, let's say they breach your PRNG key. Should there be additional compartmentalization after that? Multiple keys at multiple security boundaries / within the stack more generally, to increase penetration time across security boundaries?
Seemingly the expectation is one or the other, though, because ASLR is being treated as a security boundary.
I also very much feel that at this point KPTI is just a generalized good defense in depth. If that's the defense that's going to be deployed after your shit falls through... let's just flush it preemptively, right? That's not the current practice but should it be?
The linux-firmware repo does not provide AMD microcode updates to consumer platforms unlike Intel.
you probably want to do `export WITH_TLB_EVICT=1` before you run make, then run ./kaslr. The power stuff is patched (by removing the RAPL power interface) but there are still timing differences visible on my 5700G, and WITH_TLB_EVICT makes this fairly obvious/consistent:
```csv
452,0xffffffffb8000000,92,82,220
453,0xffffffffb8200000,94,82,835
454,0xffffffffb8400000,110,94,487
455,0xffffffffb8600000,83,75,114
456,0xffffffffb8800000,83,75,131
457,0xffffffffb8a00000,109,92,484
458,0xffffffffb8c00000,92,82,172
459,0xffffffffb8e00000,110,94,499
460,0xffffffffb9000000,92,82,155
```
those timing differences are the presence/nonpresence of kernel pages in the TLB, those are the KASLR pages, they’re slower when the TLB eviction happens because of the extra bookkeeping.
then we have the stack protector canary on the last couple pages of course:
```csv
512,0xffffffffbf800000,91,82,155
513,0xffffffffbfa00000,92,82,147
514,0xffffffffbfc00000,92,82,151
515,0xffffffffbfe00000,91,82,137
516,0xffffffffc0000000,112,94,598
517,0xffffffffc0200000,110,94,544
518,0xffffffffc0400000,110,94,260
519,0xffffffffc0600000,110,94,638
```
edit: the 4 pages at the end of the memory space are very consistent between tests and across reboots, and the higher lookup time goes away if you set the kernel boot option "pti=on" manually at startup; the default, with KPTI left off, is the insecure behavior described in the paper.
log with pti=on kernel option: https://pastebin.com/GK5KfsYd
```csv
513,0xffffffffbfa00000,92,82,147
514,0xffffffffbfc00000,92,82,123
515,0xffffffffbfe00000,92,82,141
516,0xffffffffc0000000,91,82,134
517,0xffffffffc0200000,91,82,140
518,0xffffffffc0400000,91,82,151
519,0xffffffffc0600000,91,82,141
```
environment: ubuntu 22.04.4 live-usb, 5700G, b550i aorus pro ax latest bios
( https://www.openssh.com/txt/release-9.8 )
Darn - here I was hoping Alpine was properly immune, but it sounds more like "nobody's checked if it works on musl" at this point.
1. Find things that are 0.0.0.0 port 22, example, https://gist.github.com/james-ransom/97e1c8596e28b9f759bac79...
2. Force them to the local network, gcloud compute firewall-rules update default-allow-ssh --source-ranges=10.0.0.0/8 --project=$i;
Defensive programming tells us to minimize code in signal handlers, and the safest approach is to avoid using the signal at all when possible :).
Compared to ssh, wireguard configs feel too easy to mess up and risk getting locked out if its the only way of accessing the device.
An RCE in wireguard would be enough -- no need to compromise both.
Stacking these services on top of each other in this way does not necessarily mean that an attacker has to compromise both services in order to compromise a host. The parent poster's flawed reasoning appeared to lead to a false sense of security as a result.
Because an $80 black box wireless KVM from a foreign country is way more secure! (Just kidding, though it is not internet-accesible by default.)
$ rpm -q openssh
openssh-8.7p1-38.0.1.el9.x86_64
The vulnerability resurfaces in versions from 8.5p1 up to, but not including, 9.8p1.
https://blog.qualys.com/vulnerabilities-threat-research/2024...
> The flaw affects RHEL9 as the regression was introduced after the OpenSSH version shipped with RHEL8 was published.
$ ps ax | grep sshd | head -1
1306 ? Ss 0:01 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
As mentioned elsewhere here, is -D sufficient to avoid exploitation, or is -e necessary as well?

$ man sshd | sed -n '/ -[De]/,/^$/p'
-D When this option is specified, sshd will not
detach and does not become a daemon. This
allows easy monitoring of sshd.
-e Write debug logs to standard error instead
of the system log.
RHEL9 is also 64-bit only, and we see from the notice: "we have started to work on an amd64 exploit, which is much harder because of the stronger ASLR."
On top of writing the exploit to target 32-bit environments, this also requires a DSA key that implements multiple calls to free().
There is a section on "Rocky Linux 9" near the end of the linked advisory where unsuccessful exploit attempts are discussed.
https://github.com/openssh/openssh-portable/blob/V_9_8_P1/ss...
sshd.c handles no_daemon (-D) and log_stderr (-e) independently. log_stderr is what is passed to log_init in log.c, which gates the call to the syslog functions. There is a special case to set log_stderr to true if debug_flag (-d) is set, but nothing for no_daemon.
I can't test it right now though so I may be missing something.
openssh-8.7p1-38.0.2.el9.x86_64.rpm
openssh-server-8.7p1-38.0.2.el9.x86_64.rpm
openssh-clients-8.7p1-38.0.2.el9.x86_64.rpm
The changelog addresses the CVE directly. It does not appear that adding the -e directive is necessary with this patch.

$ rpm -q --changelog openssh-server | head -3
* Wed Jun 26 2024 Alex Burmashev <alexander.burmashev@oracle.com> - 8.7p1-38.0.2
- Restore dropped earlier ifdef condition for safe _exit(1) call in sshsigdie() [Orabug: 36783468]
Resolves CVE-2024-6387
https://rockylinux.org/news/2024-07-01-rocky-linux-9-cve-202...
https://archlinux.org/packages/core/x86_64/openssh/
edit: be sure to manually restart sshd after upgrading; my systems fail during key exchange after the package upgrade until the sshd service is restarted:
% ssh -v 192.168.1.254
OpenSSH_9.8p1, OpenSSL 3.3.1 4 Jun 2024
... output elided ...
debug1: Local version string SSH-2.0-OpenSSH_9.8
kex_exchange_identification: read: Connection reset by peer
Connection reset by 192.168.1.254 port 22
> NB. if you're updating via source, please restart sshd after installing, otherwise you run the risk of locking yourself out.
https://github.com/openssh/openssh-portable/commit/03e3de416...
Edit: Already reported at https://gitlab.archlinux.org/archlinux/packaging/packages/op...
> Hidden path communication enables the hiding of specific path segments, i.e. certain path segments are only available for authorized ASes. In the common case, path segments are publicly available to any network entity. They are fetched from the control service and used to construct forwarding paths.
---
- OpenSSH < 4.4p1 is vulnerable to this signal handler race condition, if not backport-patched against CVE-2006-5051, or not patched against CVE-2008-4109, which was an incorrect fix for CVE-2006-5051;
- 4.4p1 <= OpenSSH < 8.5p1 is not vulnerable to this signal handler race condition (because the "#ifdef DO_LOG_SAFE_IN_SIGHAND" that was added to sigdie() by the patch for CVE-2006-5051 transformed this unsafe function into a safe _exit(1) call);
- 8.5p1 <= OpenSSH < 9.8p1 is vulnerable again to this signal handler race condition (because the "#ifdef DO_LOG_SAFE_IN_SIGHAND" was accidentally removed from sigdie()).
edit: maybe I should add an iptables rule to only allow ssh from my IP.
What benefits do you see? I mean, you still expose some binary that implements authentication and authorization using cryptography.
I think that even RBAC scenarios described in the link above should be achievable with OpenSSH, right?
I don't use port-knocking but I really just don't get all those saying: "It's security theater".
We have had not one but two major OpenSSH "near fiascos" (this RCE and the xz lib thing) that were both rendered unusable for attackers by port knocking.
To me port-knocking is not "security theater": it adds one layer of defense. It's defense-in-depth. Not theater.
And the port-knocking sequence doesn't have to be always the same: it can, say, change every 30 seconds, using TOTP style secret sequence generation.
How many exploits rendered cold dead in their tracks by port-knocking shall we need before people stop saying port-knocking is security theater?
Other measures do also help... Like restrictive firewalling rules, which many criticize as "it only helps keep the logs smaller": no, they don't just help keep the logs smaller. I'm whitelisting the IP blocks of the three ISPs anyone could reasonably need to SSH from: now the attacker needs not only the zero-day, he also needs to be on one of those three ISPs' IPs.
The argument that consists in saying: "sshd is unexploitable, so nothing else must be done to protect the server" is...
Dead.
It's not security theater but it's kind of outdated. Single Packet Authentication[0] is a significant improvement.
> How many exploits rendered cold dead in their tracks by port-knocking shall we need before people stop saying port-knocking is security theater?
Port knocking is one layer, but it shouldn't be the only one, or even a heavily relied upon one. Plenty of people might be in a position to see the sequence of ports you knock, for example.
Personally, I think if more people bothered to learn tools like SELinux instead of disabling it due to laziness or fear, that is what would stop most exploits dead. Containers are the middleground everyone attached to instead, though.
Unless you are talking about your own personal use-case, in which case, feel free to follow your deepest wishes
Firewall is a joke, too. Who can manage hundreds and thousands of ever-changing IPs? Nobody. Again: I'm not talking about your personal use-case (yet I enjoy connecting to my server through 4G, wherever I am)
Fail2ban, on the other hand, is nice: every systems that relies on some known secret benefits from an anti-bruteforce mechanism. Also, and this is important: fail2ban is quick to deploy, and not a PITA for users. Good stuff.
If your connection's crap and there's packet loss, some of the sequence may be lost.
Avoiding replay attacks is another whole problem - you want the sequence to change based on a shared secret and time or something similar (eg: TOTP to agree the sequence).
Then you have to consider things like NAT…
I've been on networks that allow nothing except the standard TCP & UDP ports for HTTP(S), SSH and DNS (and even then DNS was restricted to their local name server).
Port knocking renders SSH unusable: I'm not going to tell my users "do this magic network incantation before running ssh". They want to open a terminal and simply run ssh.
See the A in the CIA triad, as well as U in the Parkerian hexad.
In a world where tailscale etc. have made quality VPNs trivial to implement, why would I bother with port knocking?
Source? Wireguard can do 1GB/s on decade old processors[1]. Even openvpn can do 258 Mb/s, which realistically can saturate the average home internet connection. Also, if we're talking about SSH connections, why does throughput matter? Why do you need 1 gigabit of bandwidth to transfer a few keystrokes a second?
> Also, if we're talking about SSH connections, why does throughput matter?
scp, among other things, runs over ssh.
Ironically scp/sftp caused me more bandwidth headaches than wireguard/openvpn. I frequently experienced cases where scp/sftp would get 10% or even less of the transfer speed compared to a plain http(s) connection. Maybe it was due to packet loss, buffer size, or qos/throttling, but I wasn't able to figure out a definitive solution.
It limits the amount of data that's "in the cable" (which needs to be more if the cable is long).
> The default SSH window size was 64 - 128 KB, which worked well for interactive sessions, but was severely limiting for bulk transfer in high bandwidth-delay product situations.
> OpenSSH later increased the default SSH window size to 2 MB in 2007.
2 MB is still incredibly little.
It means that on a 100 ms connection, you cannot exceed 160 Mbit/s, even if your machines have 10 Gbit/s.
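That cap is just the bandwidth-delay product at work: sustained throughput cannot exceed window size divided by round-trip time. Checking the figures above:

```
2 MB / 0.1 s = 20 MB/s ≈ 160 Mbit/s
To fill 10 Gbit/s at 100 ms RTT you would need a window of
1.25 GB/s x 0.1 s = 125 MB.
```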
OpenSSH is one of the very few programs with garbage throughput on TCP. This is also what makes rsync slow.
The people from [1] patched that.
You can support that work here: https://github.com/rapier1/hpn-ssh
In my opinion, this should really be fixed in OpenSSH upstream. I do not understand why it doesn't just use normal automatic TCP window size scaling, like all other TCP programs.
All the big megacorps and almost every other tech company in existence uses SSH, yet nobody seems to care that it's artificially 100x slower than necessary.
[1]: http://www.allanjude.com/bsd/AsiaBSDCon2017_-_SSH_Performanc...
It's in the name; Secure Shell, vs. Virtual Private Network. One of them has to deal with users, authentication, shells, chroots. The other mostly deals with the network stack and encryption.
I used port knocking for a while many years ago, but it was just too fiddly and flaky. I would run the port knocking program, and see the port not open or close.
If I were to use a similar solution today (for whatever reason), I'd probably go for web knocking.
In my case, I didn't see it as a security measure, but just as a way to cut the crap out of sshd logs. Log monitoring and banning does a reasonable job of reducing the crap.
OpenBSD is notably not vulnerable, because its
SIGALRM handler calls syslog_r(), an async-signal-safer version of
syslog() that was invented by OpenBSD in 2001.
Saving the day once again.

Just this morning was the first time I read the words MT-Safe, AS-Safe, AC-Safe. But I did not know there were “safer” functions as well.
Is there also a “safest” syslog?
You can still have comparisons between the rich too, so Bezos is richer than Gates and he's also the richest if you're just considering the pair. But add Musk to the mix, and he's no longer the richest.
I guess that last example looks like you have two attributes - rich as some objective "has a lot of money", and comparatively rich (richer, richest). For safe, it's kind of similar, except that as soon as you say one thing is safer than another, you are implicitly acknowledging that there are areas where the thing isn't safe; and if you admit that, you can't also call it safe without contradicting yourself.
If you add a single grain of salt to a glass of that water, it's no longer pure. Drinking it you probably wouldn't notice, and some people might colloquially call it "pure", but we know it isn't because we added some salt to it.
If you add a teaspoon of salt to a different glass of pure water, it's also no longer pure, and now most people would probably notice the salt and recognise it's not pure.
If you add a tablespoon of salt to a different glass of pure water, it's definitely not pure and you probably wouldn't want to drink it either.
You could say the teaspoon of salt glass is purer than the tablespoon of salt glass, the grain of salt glass is purer than both of them and so the purest of the three. And yet, we know that it isn't pure water, because we added something else to it.
So pure > purest > purer > less pure. Also note that I was required to use "less pure" for the last one, because all of them except pure are "impure" or "not pure", even though those were what I originally thought of writing.
In general you're right. For safety it's just that 'safest' implies some sort of practicality: the best - most safe - from a set of options. But the safest option isn't necessarily strictly safe.
(Say your dog's stuck on a roof on a windy day, you decide the safest option is scaffolding (safer than a ladder or free climbing), but it's not safe, you just insist on rescuing your dog.)
36C3 - A systematic evaluation of OpenBSD's mitigations
Sure, see any of the previous exploits for sshd, or any other software shipped in the OpenBSD default install.
> I keep asking this as a long-time OpenBSD user who is genuinely interested in seeing it done, but so far everyone who has said "it's flawed" also reserved themselves the convenience of not having to prove their point in a practical sense.
The point is they have very little in the way of containing attackers and restricting what they can do. Until pledge and unveil, almost all their focus was on eliminating bugs, which, hey, great, but let's have a little more in case you miss a bug and someone breaks in, eh?
An insecure dockerized webserver protected with SELinux is safer than Apache on a default OpenBSD install.
Would you like to point to one that successfully utilizes a weakness in OpenBSD itself, which is the topic and implied statement of the video, rather than a weakness in some application running under the superuser?
Just to underline, I'm not interested in discussing the hows and whys of containing arbitrary applications where one or more portions are running under euid 0. I'm interested in seeing OpenBSD successfully attacked by an unprivileged process/user.
If OpenBSD users installed it through the OpenBSD repositories and are running it, will they be affected? Yes? Then it counts against the system itself.
It's the way most distros handle security vulnerabilities, though. Without looking, I'm certain Ubuntu has a security advisory for that vulnerability.
So I agree it might not be fair on the face of it, or if doing a technical analysis or something, but if you want to compare OpenBSD's security to that of Linux distros by vulnerability count (and so many who don't know better do), then vulnerabilities should be measured in the same way across both systems.
I'm sorry, what? What kind of nonsense distinction is this?
Are you very disingenuously trying to claim that only kernel exploits count as attacks against OpenBSD?
Why the hell wouldn't a webserver zero-day count? If an OS that claims to be security focused can't constrain a misbehaving web server running as root then it's sure as hell not any type of secure OS.
> I'm interested in seeing OpenBSD successfully attacked by an unprivileged process/user.
You realize that OpenBSD does very little to protect against privilege escalation if there is any LPE vuln on the system, right? Surely you're not just advocating for OpenBSD based on their own marketing? If you want to limit the goalposts to kernel vulns or LPEs that already require an account, you're free to do so, but that's rather silly and not remotely indicative of real-world security needs.
If it's a security-focused OS, it should provide ways to limit the damage an attacker can do. OpenBSD had very, very little in that regard, and still does, although things are slightly better now and they have a few toys.
And hey, fun fact, if you apply the same OpenBSD methodology and config of having a barebones install, you'll suddenly find at least dozens of other operating systems with equivalent or better track records.
Plan 9 has had fewer vulnerabilities than OpenBSD and has had more thought put into its security architecture[0], so by your metric it's the more secure OS, yeah?
> Are you trying to very disingenuously try and claim only kernel exploits count as attacks against OpenBSD?
Not at all. I clearly underlined that I'm not looking for cases fitting that specific scenario. The only moving of goalposts is entirely on your behalf, by very disingenuously misrepresenting my question in a poor attempt to make your answer, or whatever point, fit. And on top of that, the tasteless pretending to be baffled...
The thing is, we're trying to talk about the security of OpenBSD compared to its competition.
But you're trying to avoid letting anyone do that by saying only an attack against something in the default install you can do with a user account counts, which is absolutely ridiculous.
I'm not moving the goalposts nor am I pretending in any sense. Your approach just doesn't make sense, measure or indicate anything useful or relevant about the security of OpenBSD. I stated so and explained why.
But hey, keep believing whatever you want buddy.
> "But you're trying to avoid letting anyone do that by saying only an attack against something in the default install you can do with a user account counts, which is absolutely ridiculous."
I don't know who "we" are. The question I asked another poster, where you decided to butt in, regarded escalation from an unprivileged position and nothing else.
Nobody but yourself said anything along the lines of "only attacks against things in the default install 'count'", nor drew up comparisons against "the competition". You clearly have some larger axe to grind, but you're doing it in a discourse playing out only in your head, without reading what others actually wrote.
I don't see as much of it in your recent history, which is good, but it's not good to let yourself get sucked into this sort of tit-for-tat spat.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
We are the people having this discussion. That should be obvious. It's kind of funny you accused me of pretending to be baffled, lol. The irony.
You certainly had no issue discussing this topic with me until I called out your claims/methodology as nonsense.
> The question I asked another poster, where you decided to butt in,
Welcome to the Internet!
> regarded escalation from an unprivileged position and nothing else.
Yes. And I pointed out why this is an absolutely nonsense approach. You realize getting root on OpenBSD is significantly easier than on several other setups or Linux distros you've probably never heard of, though, right?
So, what is it? Afraid to be wrong? You bought too much into the OpenBSD marketing, so now it's a sunk cost for your ego?
> Nobody but yourself said anything along the lines of "only attacks against things in the default install 'count'", nor drew drew up comparisons against "the competition".
This is exactly what you imply when you want to limit attacks to LPEs that require a user account, lol.
> You clearly have some larger axe to grind, but you're doing it in a discourse playing out only in your head, without reading what others actually wrote.
No axe to grind. Just calling out bad claims and reasoning.
Even now, you've successfully got us discussing semantics and nonsense instead of you actually addressing the bs claims you made. Stellar job.
By the time a commenter gets to violating the site guidelines as egregiously as this, it's almost always the case that they should have stopped posting a lot sooner.
Code standards are very strict in OpenBSD and security is always a primary thought...
For starters.
And if you want to simply go by vulnerability counts, as though that meant something, let's throw in MenuetOS and TempleOS.
Oh, seL4 is without any doubt useful; it wouldn't be as popular and coveted if it wasn't. But I think the word you're actually reaching for is widespread.
However, you seem to have taken my examples literally and missed my point, which is trying to judge the security of an OS by its vulnerabilities is a terrible, terrible approach.
> but it does include everything out of the box to be a web server
Sure, and so do plenty of minimal linux distros, and if you use the same metrics and config as OpenBSD then they'll have a similar security track record.
And honestly, Linux with one of the RBAC solutions puts OpenBSD's security to shame.
Do yourself a favor and watch the CCC talk someone else linked in the thread.
There is a laptop running OpenIndiana (illumos) on my desk. I mean useful, though, through the lens of my use cases (read: if it can't run a web browser or server, I don't generally find it useful). I've only really heard of seL4 being popular in embedded contexts (mostly cars?), not general-purpose computers.
> However, you seem to have taken my examples literally and missed my point, which is trying to judge the security of an OS by its vulnerabilities is a terrible, terrible approach.
No, I think your examples were excellent for illustrating the differences in systems; you can get a more secure system by severely limiting how much it can do (seL4 is a good choice for embedded systems, but in itself currently useless as a server OS), or a more useful system that has more attack surface, whereas OpenBSD offers a weirdly good ratio of high utility to low security exposure. And yes, of course I judge security in terms of realized exploits; theory and design are fine, but at some point the rubber has to hit the road.
> Sure, and so do plenty of minimal linux distros, and if you use the same metrics and config as OpenBSD then they'll have a similar security track record.
Well no, that's the point - they'll be better than "fat" distros, but they absolutely will not match OpenBSD. See, for example, this specific sshd vuln, which will affect any GNU/Linux distro and not OpenBSD, because OpenBSD's libc goes out of its way to solve this problem and glibc didn't.
> Do yourself a favor and watch the CCC talk someone else linked in the thread.
I don't really do youtube - is it the one that handwaves at allegedly bad design without ever actually showing a single exploit? Because I've gotten really tired of people loudly proclaiming that this thing is so easy to exploit but they just don't have time to actually do it just now but trust them it's definitely easy and a real thing that they could do even though somehow it never seems to actually happen.
I mean, OpenBSD does security mitigation sealioning, so nobody really wants to engage with their stupider ideas
Better to stick to standard definitions in the future so you won't have to explain your personal definitions later on.
> No, I think your examples were excellent for illustrating the differences in systems; you can get a more secure system by severely limiting how much it can do
So you not only missed the point but decided to take away an entirely different message. Interesting.
Yes, limiting attack surface is a basic security principle. The examples I gave were not to demonstrate this basic principle, but to show that trying to gauge security by amount of vulnerabilities is foolish.
> seL4 is a good choice for embedded systems, but in itself currently useless as a server OS
Plan 9, then. Or any of the numerous other OS projects that have fewer vulns than OpenBSD and can meet your arbitrary definition of 'useful'. The point is that trying to measure security by vuln disclosures is a terrible, terrible method, and only something someone with no clue about security would use.
> but OpenBSD is a weirdly good ratio of high utility for low security exposure.
OpenBSD is just niche, that's it. Creating OpenSSH brought a lot of good marketing, but if you really look at the OS from a security perspective and look at features, it's lacking.
> Well no, that's the point - they'll be better than "fat" distros, but they absolutely will not match OpenBSD.
They absolutely will be better than OpenBSD, because they have capabilities to limit what an attacker can do in the event they get access, as opposed to putting all the eggs in the 'find all the bugs before they get exploited' basket. OpenBSD isn't anything special when it comes to security. That, really, is the point. Anything otherwise is marketing, or people who have fallen for marketing, IMO.
> I don't really do youtube
There's a lot of good content only on that platform. Surely you can use yt-dlp or freetube or something.
> is it the one that handwaves at allegedly bad design without ever actually showing a single exploit?
That summary isn't remotely accurate, so I'd have to say no.
> Because I've gotten really tired of people loudly proclaiming that this thing is so easy to exploit but they just don't have time to actually do it just now but trust them it's definitely easy and a real thing that they could do even though somehow it never seems to actually happen.
They have remote holes listed on their homepage. Both those cases led to remote root, and this supposedly secure OS had nothing to offer, while most Linux distros did. Let's make this simple: Linux allows you to contain a remote root exploit with tools like RBAC and MAC extensions. OpenBSD offers nothing. In the event both systems have the same vulnerability (of which this titular instance is not an example) allowing remote root, Linux will be the safer system if set up correctly.
But honestly, I've gotten really tired of OpenBSD stans regurgitating that it's supposedly secure and thinking that being able to point to a lack of vulnerabilities in a barebones default install is some kind of proof of that.
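To make the RBAC/MAC point above concrete, here's a hypothetical SELinux type-enforcement fragment (the type names come from the stock reference policy) confining a web server's domain regardless of uid. Everything httpd_t may touch is enumerated, and since no rule grants it access to shadow_t, even a root-level compromise of the server can't read /etc/shadow:

    # Illustrative policy fragment, not a complete module.
    allow httpd_t httpd_sys_content_t:file { open read getattr };
    allow httpd_t http_port_t:tcp_socket name_bind;
    # No rule mentions shadow_t, so that access is denied by default.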
I kid, but really you probably shouldn't on production. You should be exporting your logs and everything else. Hosts or VMs should be bootstrapped from golden images with everything set up as needed.
It is okay to start that way and figure out your internals, but that isn't for production. Production is a locked-down, closed environment.
Reposted comment from another Hacker News post.
Might actually make for a fun side project to build an SSH server using that library and see how well it performs.
The dangerous interaction between signals and other functions is outside of what Rust can help with.
Rust has neither an effect system nor a similar facility to flag which code is not async-signal-safe. A Rust implementation could just as easily call something unsafe to use in a signal handler.
Rust has many useful guarantees, and is a significant improvement over C in most cases, but let's be precise about what Rust can and can't do.
My understanding is that a sound implementation of signal handling in Rust would require the signal handler to be Send, requiring that it only has access to shared data that is Sync (safe to share between threads). I guess thread-safe does not necessarily imply signal-safe, though.
And of course you could still call a signal-unsafe C function, but that requires an unsafe block, explicitly acknowledging that Rust's guarantees do not apply.
A safe-Rust, thread-safe, Send+Sync function is allowed to call `format!()`, or `Box::new`, or drop a `Vec`, all of which can directly cause the exact same vulnerability as the one in sshd.
There is nothing in Rust that can say "you can't drop a Vec", and there are no tools that will let you find out whether any function you call may do it. Rust can't even statically prove that panics won't happen. Rust's panic implementation performs heap allocations, so any Rust construct that can panic is super unsafe in signal handlers.
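For what it's worth, the pattern that actually is safe, in C and Rust alike, is the one the OpenSSH fix converges on: the handler does nothing but set a flag (or write a byte to a pipe), and all real work happens synchronously outside the handler. A minimal C sketch of the flag version:

    #include <signal.h>
    #include <unistd.h>

    /* The only thing the handler may touch: a flag whose single
     * store is async-signal-safe by definition. */
    static volatile sig_atomic_t timed_out = 0;

    static void
    alarm_handler(int sig)
    {
        timed_out = 1;
    }

    int
    main(void)
    {
        signal(SIGALRM, alarm_handler);
        alarm(5);

        while (!timed_out)
            pause();    /* real code: sigsuspend(2) or a self-pipe,
                         * to avoid the lost-wakeup race */

        /* Synchronous context: logging, allocation, and cleanup
         * are all fine now, because we're outside the handler. */
        write(2, "timeout\n", 8);
        return 0;
    }

Rust's signal-hook crate wraps the same idiom (its flag::register() ties a signal to an AtomicBool), but as noted above, nothing in the type system forces you to use it.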
Of course, you are still trusting that the implementation is sound.
While this is a common misconception, I'm already tired of enthusiastic "safu language fans" trying to explain what bounds checks mean to me, so apologies for being mean.
But no, in 2024 you should focus on temporal memory safety, whose violations are much harder to eliminate compared to "just add boundz CHK everYwHErE!!!@".
Not an alternative and won't be for the foreseeable future.
[0]: https://github.com/francoismichel/ssh3?tab=readme-ov-file#-s...