Interestingly, the RCE fix was "smuggled" in public almost a month ago.

    When PerSourcePenalties are enabled, sshd(8) will monitor the exit
    status of its child pre-auth session processes. Through the exit
    status, it can observe situations where the session did not
    authenticate as expected. These conditions include when the client
    repeatedly attempted authentication unsuccessfully (possibly indicating
    an attack against one or more accounts, e.g. password guessing), or
    when client behaviour caused sshd to crash (possibly indicating
    attempts to exploit sshd).

    When such a condition is observed, sshd will record a penalty of some
    duration (e.g. 30 seconds) against the client's address.
https://github.com/openssh/openssh-portable/commit/81c1099d2...

It's not really a reversible patch that gives anything away to attackers: it changes the architecture of the binary in a way that has the side effect of removing the specific vulnerability and also mitigating the whole exploit class, if I understand it correctly. Very clever.

  • fanf2 · 6 days ago
That's not the RCE fix, this is the RCE fix https://news.ycombinator.com/item?id=40843865

That's a previously-announced feature for dealing with junk connections that also happens to mitigate this vulnerability because it makes it harder to win the race. Discussed previously https://news.ycombinator.com/item?id=40610621

The ones you link are the "minimal patches for those who can't/don't want to upgrade". The commit I am linking to is taken straight from the advisory.

    On June 6, 2024, this signal handler race condition was fixed by commit
    81c1099 ("Add a facility to sshd(8) to penalise particular problematic
    client behaviours"), which moved the async-signal-unsafe code from
    sshd's SIGALRM handler to sshd's listener process, where it can be
    handled synchronously:

      https://github.com/openssh/openssh-portable/commit/81c1099d22b81ebfd20a334ce986c4f753b0db29

    Because this fix is part of a large commit (81c1099), on top of an even
    larger defense-in-depth commit (03e3de4, "Start the process of splitting
    sshd into separate binaries"), it might prove difficult to backport. In
    that case, the signal handler race condition itself can be fixed by
    removing or commenting out the async-signal-unsafe code from the
    sshsigdie() function
The cleverness here is that this commit is at once "a previously-announced feature for dealing with junk connections", a mitigation for the exploit class against similar but unknown vulnerabilities, and a patch for the specific vulnerability, because it "moved the async-signal-unsafe code from sshd's SIGALRM handler to sshd's listener process, where it can be handled synchronously".

The cleverness is that it fixes the vulnerability as part of doing something that makes sense on its own, so you wouldn't know it's the patch even by looking at it.

No, it's a fix. It completely removes the signal race as well as introducing a mitigation for similar future bugs
These lines from the diff linked above are the fix:

    - /* Log error and exit. */
    - sigdie("Timeout before authentication for %s port %d",
    -     ssh_remote_ipaddr(the_active_state),
    -     ssh_remote_port(the_active_state));
    + _exit(EXIT_LOGIN_GRACE);
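To make the distinction concrete, here is a rough, hypothetical sketch (not OpenSSH's actual code) of the two handler shapes being discussed: the unsafe shape does real work such as logging inside the handler, which can re-enter malloc/free state in the interrupted program, while the safe shape only calls the async-signal-safe _exit() and leaves the logging to the listener process, which learns what happened from the exit status. The EXIT_LOGIN_GRACE value below is an assumption chosen for illustration, not OpenSSH's definition.

    #include <signal.h>
    #include <syslog.h>
    #include <unistd.h>

    /* Unsafe shape: syslog() may allocate and take locks, so if the alarm
     * interrupts malloc()/free() in the main program, heap state can be
     * corrupted in an attacker-influenced way. */
    static void grace_alarm_unsafe(int sig)
    {
        (void)sig;
        syslog(LOG_ERR, "Timeout before authentication");  /* not async-signal-safe */
        _exit(1);
    }

    /* Safe shape (the direction of the fix): only async-signal-safe calls. */
    #define EXIT_LOGIN_GRACE 3  /* illustrative value only */
    static void grace_alarm_safe(int sig)
    {
        (void)sig;
        _exit(EXIT_LOGIN_GRACE);
    }

    int main(void)
    {
        signal(SIGALRM, grace_alarm_safe);  /* swap in grace_alarm_unsafe to compare */
        alarm(1);                           /* stand-in for LoginGraceTime */
        pause();
        return 0;
    }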
  • loeg · 6 days ago
Has this fix been pushed to / pulled by distributions yet?
It's fixed in Debian 12[1]. Debian 11 and earlier's SSH version was not vulnerable.

[1] https://security-tracker.debian.org/tracker/source-package/o...

  • loeg · 5 days ago
And https://bugzilla.redhat.com/show_bug.cgi?id=2294904 (Fedora 40 issue)

EL 9 is also affected, but the fix has not yet been released. The tracking task will update as things move along.

  • loeg · 5 days ago
Fix pushed in openssh-9.3p1-11.fc39 and (in progress) openssh-9.6p1-1.fc40.4.
Ubuntu's also got patches out for 22.04 LTS, 23.10, and 24.04 LTS. See https://ubuntu.com/security/notices/USN-6859-1.

Amazon Linux 2023 is affected; Amazon Linux 1 & 2 are not. Status updates will be posted to https://explore.alas.aws.amazon.com/CVE-2024-6387.html

Gentoo: update to "net-misc/openssh-9.7_p1-r6" available since ~Mon 1.Jul.2024.

GLSA 202407-09: https://glsa.gentoo.org/glsa/202407-09

Package metadata & log: https://packages.gentoo.org/packages/net-misc/openssh

SUSE has the fixes under testing. I assume you could install them directly from OBS. I have not tried because I have no exposed system. https://www.suse.com/security/cve/CVE-2024-6387.html
Interesting that this comment has remained the topmost one for 2 days despite being incorrect and being corrected in the message right below it. I wonder if people are only reading the first message in the thread and upvoting and then leaving with the wrong impression.
It appears you’ve not read past the topmost reply to the topmost comment, and left with the wrong impression.
So it seems!
One interesting comment in the OpenSSH release notes

> Successful exploitation has been demonstrated on 32-bit Linux/glibc systems with ASLR. Under lab conditions, the attack requires on average 6-8 hours of continuous connections up to the maximum the server will accept. Exploitation on 64-bit systems is believed to be possible but has not been demonstrated at this time. It's likely that these attacks will be improved upon.

https://www.openssh.com/releasenotes.html

  • 6 days ago
From the diff introducing the bug [1], the issue according to the analysis is that the function was refactored from this:

  void
  sigdie(const char *fmt,...)
  {
  #ifdef DO_LOG_SAFE_IN_SIGHAND
   va_list args;
  
   va_start(args, fmt);
   do_log(SYSLOG_LEVEL_FATAL, fmt, args);
   va_end(args);
  #endif
   _exit(1);
  }
to this:

  void
  sshsigdie(const char *file, const char *func, int line, const char *fmt, ...)
  {
   va_list args;
  
   va_start(args, fmt);
   sshlogv(file, func, line, 0, SYSLOG_LEVEL_FATAL, fmt, args);
   va_end(args);
   _exit(1);
  }
which lacks the #ifdef.
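For reference, the advisory's "minimal patch" essentially amounts to putting that logging back behind the guard (or deleting it), so the handler path only ever reaches _exit(). A rough sketch reusing the macro from the original code above, not the exact upstream diff:

  void
  sshsigdie(const char *file, const char *func, int line, const char *fmt, ...)
  {
  #ifdef DO_LOG_SAFE_IN_SIGHAND
   va_list args;

   va_start(args, fmt);
   sshlogv(file, func, line, 0, SYSLOG_LEVEL_FATAL, fmt, args);
   va_end(args);
  #endif
   _exit(1);
  }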

What could have prevented this? More eyes on the pull request? It's wild that software nearly the entire world relies on for secure access is maintained by seemingly just two people [2].

[1] https://github.com/openssh/openssh-portable/commit/752250caa...

[2] https://github.com/openssh/openssh-portable/graphs/contribut...

It's always easy with hindsight to tell how to prevent something. In this case, a comment might have helped explain why the #ifdef was needed, e.g.

  #include <sys/resource.h>
  #include <unistd.h>

  // Code here must be async-signal-safe! Locks may be in an indeterminate state.
  void CloseAllFromTheHardWay(int firstfd)
  {
    struct rlimit lim;
    getrlimit(RLIMIT_NOFILE, &lim);

    // Close everything from the soft fd limit down to firstfd.
    for (int fd = (lim.rlim_cur == RLIM_INFINITY ? 1024 : lim.rlim_cur); fd >= firstfd; --fd)
      close(fd);
  }
Although to be honest, getrlimit isn't actually on the list here: https://man7.org/linux/man-pages/man7/signal-safety.7.html

But I would hope that removing such a comment, or modifying code carrying a comment about async-signal-safety, would have been noticed in review. The code you quoted only has the mention of SAFE_IN_SIGHAND to suggest that this code might need to be async-signal-safe.

  • loeg · 5 days ago
The ifdef name was a big clue! "SIGHAND" is short for signal handler. Sure, there is an implicit connection here from "signal handler" to "all code must be async signal safe," but that association is pretty well known to most openssh authors and code reviewers. Oh well, mistakes happen.
  • pja · 5 days ago
OpenBSD refactored their system to use async-signal-safe re-entrant syslog functions, so it’s possible that the author of this code simply assumed that it was safe to make this change, forgetting (or completely unaware) that other platforms (which the OpenBSD ssh devs don’t actually claim to support) were still using async-signal-unsafe functions.
> What could have prevented this? More eyes on the pull request? It's wild that software nearly the entire world relies on for secure access is maintained by seemingly just two people [2].

It's open source. If you feel you could do a better job, then by all means, go ahead and fork it.

You're not entitled to anything from open source developers. They're allowed to make mistakes, and they're allowed to have as many or as few maintainers/reviewers as they wish.

https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...

Strangely aggressive and unproductive response. GP has valid points. They are not acting entitled in any way.
I disagree. An obvious interpretation of what is quoted is entitlement.
How would you say someone should express such concerns without coming off as entitled? Or do you think they're not valid concerns? Or do you really think that if someone has such concerns, their only recourse is to start contributing to the project? That project of course being one of the most security-sensitive projects one could imagine.
> How would you say someone should express such concerns without coming off as entitled?

> Or do you really think that if someone has such concerns, their only recourse is to start contributing to the project?

Yes, I think one way to not come off as entitled when being critical to volunteers is to also offer volunteer work yourself.

And it's most helpful to provide feedback directly to the developers through their preferred means of communication.

> Or do you think they're not valid concerns?

Irrelevant what I think here, that's kind of the point. That's just my opinion.

> That project of course being one of the most security-sensitive projects one could imagine.

Agreed that the project is important. However, this is irrelevant, too, unless you're bolstering your "valid concerns" argument.

> Yes, I think one way to not come off as entitled when being critical to volunteers is to also offer volunteer work yourself.

So what level of contribution is the bar here? I mean, what's the commit count? Do I have to be developing core features for years? Does writing docs count? Do I have to volunteer for a particular project before I can in any way criticize it, or is just any open source work okay?

> And it's most helpful to provide feedback directly to the developers through their preferred means of communication.

This is not feedback meant directly for the developer - it's valid questions that were meant to spark a discussion here on HN. Of course, with users like you around, that's difficult.

> However, this is irrelevant, too, unless you're bolstering your "valid concerns" argument.

It is relevant, because it's absurd to think that just any developer can just go and contribute to such a project.

I'm not trying to come off as combative, it seems that you are.

All I offered was a way to not sound entitled. Personally, I certainly hold the opinions of someone that's helping me much higher than the opinion of someone that isn't.

Another approach to avoid sounding entitled could be to post a more thoughtful and comprehensive analysis on HN or a blog, rather than nitpicking a commit and posting broad questions like "what could have prevented this?" and insinuating that the volunteers need to do better.

Finally, if it's true that not "just any developer" can contribute to OpenSSH... well it's open-source. Fork it. Or build your own.

> All I offered was a way to not sound entitled

And I must've missed this bit in your original few comments.

You come off as combative when you dismiss valid, innocent questions as being "entitled" or attributing insinuations to GP that simply aren't there.

> Finally, if it's true that not "just any developer" can contribute to OpenSSH... well it's open-source. Fork it. Or build your own.

What good would that do? Would that enable the forker to voice their complaints about the original OpenSSH that's used by literally everyone else without people like you chiming in?

By the by, is it at all relevant that OpenSSH development is funded by at least 1 non-profit and probably other sources as well? They're not volunteers.

(And even if they were volunteers, users are quite within their rights to voice concerns and criticisms about software in a constructive manner. If open source developers don't want to face that, they can not develop open source software.)

> You come off as combative when you dismiss valid, innocent questions as being "entitled" or attributing insinuations to GP that simply aren't there.

We've been over this. I disagree that there aren't insinuations or entitlement in the GP. It's okay to disagree.

> What good would that do? Would that enable the forker to voice their complaints about the original OpenSSH that's used by literally everyone else without people like you chiming in?

This can do a lot of good. This is a solution to the problem that you have. If others agree with your critique and approach (which is likely), then they also will appreciate your project. This is how projects like Neovim started, and arguably why Neovim has been as successful as it is.

> By the by, is it at all relevant that OpenSSH development is funded by at least 1 non-profit and probably other sources as well? They're not volunteers.

I was under the impression that it was largely volunteer work, or at least, severely underpaid development which is pretty normal in the open source world. I will take your word on this one, I don't have the time to go look at non-profit financials.

> And even if they were volunteers, users are quite within their rights to voice concerns and criticisms about software in a constructive manner.

100% agree, the keywords being "constructive manner." Higher effort than nitpicking a commit and asking broad questions.

For what it's worth, I think you're probably doing more harm than good to the open source movement. Reconsider your approach, or focus it on the users who are actually acting entitled.
> It's wild that software nearly the entire world relies on for secure access is maintained by seemingly just two people

obligatory xkcd https://xkcd.com/2347/

  • devit · 5 days ago
One of these:

1. Using a proper programming language that doesn't allow you to set up arbitrary functions as signal handlers (since that's obviously unsafe on common libcs...) - e.g. you can't do that in safe Rust, or Java, etc.

2. Using a well-implemented libc that doesn't cause memory corruption when calling async-signal-unsafe functions but only deadlocks (this is very easy to achieve by treating code running in signals as a separate thread for thread-local storage access purposes), and preferably also doesn't deadlock (this requires no global mutexes, or the ability to resume interrupted code holding a mutex)

3. Thinking when changing and accepting code, not like the people who committed and accepted [1] which just arbitrarily removes an #ifdef with no justification

4. Using simple well-engineered software written by good programmers instead of OpenSSH

This comes across as quite a scathing critique of an open source tool that has provided an extremely high standard of security and reliability over decades, despite being built on technologies that don’t offer the guardrails outlined in points (1) and (2).

Point (3) seems like a personal attack on the developers/reviewer, who made human errors. Humans do in fact make mistakes, and the best build toolchain/test suite in the world won’t save you 100% of the time.

Point (4) seems to imply that OpenSSH is not well-engineered, simple, or written by good programmers. While all of that is fairly subjective, it is (I feel) needlessly unkind.

I’d invite you to recommend an alternative remote access technology with an equivalent track record of security and stability in this space — I’m not aware of any.

This is all great thinking.

Except for those of us who live in a world where most of their OS and utilities and libraries were originally written decades before Rust existed, and often even before Java existed. And where "legacy" C code pretty much underpins everything running on the public internet and which you need to connect to.

There's a very real risk that reimplementing every piece of code on a modern internet connected server in exciting new "safe" languages and libc-type things - by a bunch of "modern" programmers who do not have the learning experience of decades worth of mistakes - will end up with not just new implementation of old and fixed bugs and security problems, but also with new implementations that are incompatible in strange and hard to debug ways with every existing piece of software that uses SSH protocols as they are deployed in the field.

I, for one, am not going to install and put into production version 1.0 of some new OpenSSH replacement written in Rust or Go or Java, which has a non-zero chance of strange edge-case bugs that differ when connecting to SSH on different Linux/BSD/Windows distros or versions, across different CPU architectures, and probably has subtly different bugs when connecting to different cloud hyperscalers.

  • fanf2 · 6 days ago
It’s also worth reading the release notes https://www.openssh.com/releasenotes.html

This is actually an interesting variant of a signal race bug. The vulnerability report says, “OpenBSD is notably not vulnerable, because its SIGALRM handler calls syslog_r(), an async-signal-safer version of syslog() that was invented by OpenBSD in 2001.” So a signal-safety mitigation encouraged OpenBSD developers to put non-trivial code inside signal handlers, which becomes unsafe when ported to other systems. They would have avoided this bug if they had done one of their refactoring sweeps to minimize the amount of code in signal handlers, according to the usual wisdom and common unix code guidelines.

Theo de Raadt made an, I think, cogent observation about this bug and how to prevent similar ones: no signal handler should call any function that isn't a signal-safe syscall. The rationale is that, over time, it's way too easy for any transitive call (where it's not always clear that it can be reached in signal context) to pick up some call that isn't async-signal-safe.
I'm kind of surprised at anyone advocating calling any syscall in a handler other than signal() to re-arm the handler. It's been a long time since I looked at example code, but back in the mid-90s, everything I saw (and so what informed my habits) just set a flag, re-registered the signal if it was something like SIGUSR1, and then picked up the flag on the next iteration of the main loop. Maybe that's also because I think of a signal like an interrupt: something you want to get done as soon as possible so as not to cause any stalls to the main program.

I notice that nowadays signalfd() looks like a much better solution to the signal problem, but I've never tried using it. I think I'll give it a go in my next project.
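If it helps, a minimal sketch of the signalfd() approach (Linux-specific; the key point is that the signal is blocked and then read from a file descriptor in the main loop, so nothing runs in async signal context):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/signalfd.h>
    #include <unistd.h>

    int main(void)
    {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGALRM);
        sigprocmask(SIG_BLOCK, &mask, NULL);  /* stop normal async delivery */

        int sfd = signalfd(-1, &mask, 0);     /* receive the signal via read() */
        if (sfd == -1)
            return 1;

        alarm(1);

        struct signalfd_siginfo si;           /* read() blocks until SIGALRM is pending */
        if (read(sfd, &si, sizeof(si)) == sizeof(si))
            printf("handled signal %u synchronously in the main loop\n", si.ssi_signo);

        close(sfd);
        return 0;
    }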

In practice when I tried it, I wasn't sold on signalfd's benefits over the 90s style self-pipe, which is reliably portable too. Either way, being able to handle signals in a poll loop is much nicer than trying to do any real work in an async context.
This isn't the case for OpenSSH, but because a lot of environments (essentially all managed runtimes) actually do this transparently for you when you register a signal “handler”, it might be that fewer people are aware that actual signal handlers require a ton of care. On the other hand, "you can't even call strcmp in a signal handler or you'll randomly corrupt program state" used to be a favorite among practicing C lawyers.
Why can't you call strcmp? I think a general practice of "only call functions that are explicitly blessed as async-signal-safe" is a good idea, which means not calling strcmp as it hasn't been blessed, but surely it doesn't touch any global (or per-thread) state so how can it corrupt program state?

Update: according to https://man7.org/linux/man-pages/man7/signal-safety.7.html strcmp() actually is async-signal-safe as of POSIX.1-2008 TC2.

That's the point. They weren't added until TC2 in 2016.
Right, it wasn't promised to be safe until then. That doesn't mean it was definitively unsafe before, just that you couldn't rely on it being safe. My question is how would a function like strcmp() actually end up being unsafe in practice given the trivial nature of what it does.
> but surely it doesn't touch any global (or per-thread) state so how can it corrupt program state?

On x86 and some other (mostly extinct) architectures that have string instructions, the string functions are usually best implemented using those (you might get a generation where there's a faster way and then microcode catches back up). And specifically (not just?) on x86 there was/is some confusion about who should or would restore some of the flags that control what these do. So you could end up with e.g. a memcpy or some other string instruction being interrupted by a signal handler and then it would continue doing what it did, but in the opposite direction, giving you wrong results or even resulting in buffer overflows (imagine interrupting a 1 MB memcpy that just started and then resuming it in the opposite direction).

Makes no sense to me. The OS restores all the registers, including flags, after leaving the signal handler. Besides, your scenario is not related to the handler _itself_ calling memcpy; it is about interrupting the main code. And that never ever destroys flags.
  • lmm · 5 days ago
> surely it doesn't touch any global (or per-thread) state

Not necessarily. An implementation might choose to e.g. use some kind of cache similar to what the JVM does with interned strings, and then a function like strcmp() might behave badly if it happened to run while that cache was halfway through being rebuilt.

A function like strcmp() cannot assume that if it sees the same pointer multiple times that this pointer contains the same data, so there's no opportunity for doing any sort of caching of results. The JVM has a lot more flexibility here in that it's working with objects, not raw pointers to arbitrary memory.
  • lmm · 4 days ago
> A function like strcmp() cannot assume that if it sees the same pointer multiple times that this pointer contains the same data

For arbitrary pointers no. But it could special-case e.g. string constants in the source code and/or pointers returned by some intern function (which is also how the JVM does it - for arbitrary strings, even though they're objects, it's always possible that the object has been GCed and another string allocated at the same location).

Contrived example, never seen in reality.
  • jcul · 5 days ago
You could be surprised.

For example, recently I wanted to call `gettid()` in a signal handler. Which I guessed was just a simple wrapper around the syscall.

However, it seems this can cache the thread ID in thread local storage (can't remember exact details).

I switched to making a syscall instead.

Well, this is possible for sure, but not for primitive functions, such as strcmp. It just does not happen in practice.
  • jcul · 3 days ago
For functions like strcmp, I think they must be signal safe, to be POSIX compliant.

https://man7.org/linux/man-pages/man7/signal-safety.7.html

If it's on this list I generally trust it is safe.

I guess my point is that if it's not, even a simple function may appear safe but could do surprising things.

  • lmm · 4 days ago
The JVM does it in reality, I can't see why a C runtime wouldn't.
In theory it may, but no one does.
  • fanf2 · 6 days ago
Exactly, yes :-) Signal handlers have so many hazards it's vital to keep them as simple as possible.
  • rwmj · 6 days ago
A rule I try to follow: either set a global variable or write to a self pipe (using the write syscall), and handle the signal in the main loop.
> either set a global variable

IIRC, the rule is also that said global variable must have the type "volatile sig_atomic_t".
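A minimal sketch of that rule in practice (the handler only sets the flag; anything async-signal-unsafe, like logging, waits for the main loop):

    #include <signal.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_alarm = 0;

    static void on_alarm(int sig)
    {
        (void)sig;
        got_alarm = 1;              /* set the flag and nothing else */
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = on_alarm;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        alarm(1);
        for (;;) {
            if (got_alarm) {
                got_alarm = 0;
                /* Back in ordinary context: syslog(), malloc(), etc. are fine here. */
                printf("grace timer expired\n");
                break;
            }
            /* A real server would poll()/select() here rather than sleep. */
            struct timespec ts = { 0, 100 * 1000 * 1000 };  /* 100 ms */
            nanosleep(&ts, NULL);
        }
        return 0;
    }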

  • lokar · 5 days ago
You should read the SIGSEGV handler at Google, it’s great, doing all kinds of things. Of course it’s going to crash at some point anyway….
I'm not overly familiar with the language and tooling ecosystem, but how trivial is this to detect with static analysis?
Quite easy.
So it's very likely that some young sysadmin or intern who will have to patch for this vuln was not even born when OpenBSD implemented the solution.
n>=1, one of our juniors is indeed younger than the OpenBSD fix and dealing with this bug.
Once I'd finished upgrading my openssh instances (which are linked against musl not glibc) I thought it'd be interesting to have a poke at musl's syslog(3) and see if it allocates too and so is easily exploitable in the same way. But as far as I can see, it doesn't:

https://github.com/bminor/musl/blob/master/src/misc/syslog.c

Everything there is either on stack or in static variables protected from reentrancy by the lock. The {d,sn,vsn}printf() calls there don't allocate in musl, although they might in glibc. Have I missed anything here?

If you are right about the allocations, then I think the worst it can do is deadlock, since the locks aren't recursive. A deadlock in SIGALRM could still lead to a DoS, since that might prevent it from cleaning up connections.
Heretical opinion: signal handler activations should count as separate threads for the purposes of recursive locking.
How would that be done without introducing a deadlock?
You’d get a deadlock, absolutely. But I’m fine with that: if the thread wants to access some state protected by a mutex, then while holding it (effectively) spawns a signal handler activation and waits for it to complete, and the signal handler tries to accept some state protected by the same mutex, then the program has just deadlocked (mutex → signal handler → mutex) and deserves to hang (or die, as this is a very simple situation as far as deadlock detection goes). That’s in any case better than corrupted state.
Yes, true: if the alarm signal arrives right in the middle of another syslog() call, this should deadlock. (Safer that it not deadlocking and accessing the static state in the signal handler of course!)
Patch out for FreeBSD. Not clear if affected (it is only known to be exploitable with glibc, which we don't use) but best to be safe.

https://www.freebsd.org/security/advisories/FreeBSD-SA-24:04...

  • rfmoz · 6 days ago
From the report:

> Finally, if sshd cannot be updated or recompiled, this signal handler race condition can be fixed by simply setting LoginGraceTime to 0 in the configuration file. This makes sshd vulnerable to a denial of service (the exhaustion of all MaxStartups connections), but it makes it safe from the remote code execution presented in this advisory.

Setting 'LoginGraceTime 0' in sshd_config file seems to mitigate the issue.
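If you apply this stopgap, a minimal sketch of the change (standard config path assumed; restart sshd afterwards, and `sshd -T | grep -i logingracetime` run as root should then report 0):

    # /etc/ssh/sshd_config
    # Workaround from the advisory: disable the login-grace timer entirely,
    # trading the RCE for a possible MaxStartups-exhaustion DoS.
    LoginGraceTime 0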

Hang on, https://www.man7.org/linux/man-pages/man5/sshd_config.5.html says

> If the value is 0, there is no time limit.

Isn't that worse?

  • pm215 · 6 days ago
The bug is a race condition which is triggered by code which runs when the timeout expires and the SIGALRM handler is run. If there is no time limit, then the SIGALRM handler will never run, and the race doesn't happen.

(As the advisory notes, you do then have to deal with the DoS which the timeout setting is intended to avoid, where N clients all connect and then never disconnect, and they aren't timed-out and forcibly disconnected on the server end any more.)

Thanks for the explanation; I'd skimmed a little too fast and assumed that this was the more traditional "how many attempts can we squeeze in each connection" rather than something at the end. I guess this makes the hardening advice about lowering that time limit kind of unfortunate.
It sounds like the race condition occurs at the end of the default 600 seconds. Set no limit, and there is no end.

> In our experiments, it takes ~10,000 tries on average to win this race condition; i.e., with 10 connections (MaxStartups) accepted per 600 seconds (LoginGraceTime), it takes ~1 week on average to obtain a remote root shell.

If you can turn on TCP keepalive for the server connections, you would still have a timeout, even if it's typically 2 hours. Then if someone wants to keep connections open and run you out of sockets and processes, they've got to keep sockets open on their end (but they might run a less expensive userspace tcp)

You can belt and suspenders with an external tool that watches for sshd in pre-auth for your real timeout and kills it or drops the tcp connection [1] (which will make the sshd exit in a more orderly fashion)

[1] https://man.freebsd.org/cgi/man.cgi?query=tcpdrop

Perhaps a more realistic workaround would be simply to make the grace time long enough (or, inversely, max connections low enough) that a successful attack would take far too long to be worth attempting.
would cold-restarting sshd every hour also make this unlikely / harder to exploit?
  • pxx · 4 days ago
not noticeably with the default config? you need to win a race that happens roughly every 120s

I guess you could restart every minute...

  • 0x0 · 6 days ago
Patch out for Debian 12; Debian 11 not affected.

https://security-tracker.debian.org/tracker/CVE-2024-6387

Looks like Focal (20.04) isn't on an affected version. Jammy (22.04) looks like it is.
My procrastination pays off ...
As does theirs ;)
What about, uh, 18.04?

Edit: 18.04 Bionic is unaffected, the ssh version is 7.6 which is too old.

If you have extended support: Just update (if it's not so old that it's not even affected in the first place)

If you don't have extended support: You're vulnerable to worse, easier to exploit bugs :)

I have a single 18.04 machine that is stuck on that, because it has 18.04 i386, and as 20.04 doesn't support i386 anymore, apparently there is not a simple upgrade path to 20.04 64-bit (without doing extensive surgery). Very annoying...
I can confirm this 18.04 machine still gets some important updates like kernel upgrades and patched versions of Apache.
  • urza · 5 days ago
On 22.04 apt update && upgrade doesn't help.. yet?
  • 6 days ago
Just ran an apt update and upgrade on my Debian 12 server. OpenSSH packages were the only ones upgraded.
  • hgs3 · 6 days ago
Yes, the Debian 12 fix is out. You can verify you're patched by running 'ssh -V' and checking that you see 'deb12u3'. If you see 'deb12u2' then you're vulnerable [1].

[1] https://security-tracker.debian.org/tracker/CVE-2024-6387

Can confirm, Pi OS bullseye also has the updated openssh.
This is a really good find.

One thing I notice (as an independent person, who isn't doing any of the work!) is that it often feels like, in order to 'win', people are expected to find a full chain which gives them remote access, rather than just finding one issue and getting it fixed / getting paid for it.

It feels to me like finding a single hole should be sufficient -- one memory corruption, one sandbox escape. Maybe at the moment there are just too many little issues, so you need a full end-to-end hack to really convince people to take you seriously, or pay out bounties?

  • rlpb · 6 days ago
There are many wannabe security researchers who find issues that are definitely not exploitable, and then demand CVE numbers and other forms of recognition or even a bounty. For example, there might be an app that crashes when accepting malformed trusted input, but the nature of the app is that it's never intended to and realistically never will be exposed to an adversary. In most people's eyes, these are simply bugs, not security bugs, and while are nice to fix, aren't on the same level. It's not very difficult to find one of these!

So there is a need to differentiate between "real" security bugs [like this one] and non-security-impacting bugs, and demonstrating how an issue is exploitable is therefore very important.

I don't see the need to demonstrate this going away any time soon, because there will always be no end of non-security-impacting bugs.

Agreed, I've seen all kinds of insane stuff, like "setting this public field of a java class to a garbage value will cause a null pointer exception"
  • tetha · 6 days ago
So many "Security Researchers" are just throwing ZAP at websites and dumping the result into the security@ mail, because there might be minor security improvements by setting yet another obscure browser security header for cases that might not even be applicable.

Or there is no real consideration if that's actually an escalation of context. Like, "Oh if I can change these postgres configuration parameters, I can cause a problem", or "Oh if I can change values in this file I can cause huge trouble". Except, modifying that file or that config parameter requires root/supervisor access, so there is no escalation because you have full access already anyhow?

I probably wouldn't have to look at documentation too much to get postgres to load arbitrary code from disk if I have supervisor access to the postgres already. Some COPY into some preload plugin, some COPY / ALTER SYSTEM, some query to crash the node, and off we probably go.

But yeah, I'm frustrated that we were forced to route our security@ domain to support to filter out this nonsense. I wouldn't be surprised if we miss some actually important issue unless demonstrated like this, but it costs too much time otherwise.

This shows kind of a naive view of security. Many soft targets are believed to be safe from adversaries that absolutely are not and are used for escalation chains or access to data.

Hospitals often try to make this argument about insecure MySQL connections inside their network, for example. Then something like Heartbleed happens and lo and behold, all the "never see an adversary" soft targets are exfiltrated.

> There are many wannabe security researchers who find issues that are definitely not exploitable, and then demand CVE numbers and other forms of recognition or even a bounty

I believe this has happened to curl several times recently.

It happens constantly to any startup with a security@ email address.
> Maybe at the moment there are just too many little issues, that you need a full end-to-end hack to really convince people to take you seriously, or pay out bounties?

Let me give you a different perspective.

Imagine I make a serialisation/deserialisation library which would be vulnerable if you fed it untrusted data. This is by design, users can serialise and deserialise anything, including lambda functions. My library is only intended for processing data from trusted sources.

To my knowledge, nobody uses my library to process data from untrusted sources. One popular library does use mine to load configuration files, they consider those a trusted data source. And it's not my job to police other people's use of my library anyway.

Is it correct to file a CVE of the highest priority against my project, saying my code has a Remote Code Execution vulnerability?

I think that if the documented interface of your library is "trusted data only", then one shouldn't even file a bug report against your library if somebody passes it untrusted data.

However, if you (or anybody else) catch a program passing untrusted data to any library that says "trusted data only", that's definitely CVE worthy in my books even if you cannot demonstrate full attack chain. However, that CVE should be targeted at the program that passes untrusted data to trusted interface.

That said, if you're looking for a bounty instead of just some publicity in reward for publishing the vulnerability, you must fulfil the requirements of the bounty, and those typically say that a bounty will be paid for a complete attack chain only.

I guess that's because companies paying bounties are typically interested in real world attacks and are not willing to pay bounties for theoretical vulnerabilities.

I think this is problematic because it causes bounty hunters to keep theoretical vulnerabilities secret and wait for possible future combination of new code that can be used to attack the currently-theoretical vulnerability.

I would argue that it's much better to fix issues while they are still theoretical only. Maybe pay a lesser bounty for theoretical vulnerabilities and a reduced payment for the full attack chain if it's based on a publicly known theoretical vulnerability. Just make sure that the combination pays at least as well as publishing a full attack chain for a 0day vulnerability. That way there would be an incentive to publish theoretical vulnerabilities immediately for maximum pay, because otherwise somebody else might catch the theoretical part and publish faster than you can.

> Imagine I make a serialisation/deserialisation library which would be vulnerable if you fed it untrusted data

No need to imagine, PyYAML has that situation. There have been attempts to use the safe deserialization by default, with an attempt to release a new major version (rolled back), and it settled on having a required argument specifying which mode / loader to use. See: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=PyYAML

That sounds... familiar. Are you perchance the maintainer of SnakeYAML?

Yes, it is correct to file a CVE of the highest priority against your project, because "only intended for processing data from trusted sources" is a frankly ridiculous policy for a serialization/deserialization library.

If it's your toy project that you never expected anyone to use anyway, you don't care about CVEs. If you want to be taken seriously, you cannot play pass-the-blame and ignore the fact that your policy turns the entire project into a security footgun.

> "only intended for processing data from trusted sources" is a frankly ridiculous policy for a serialization/deserialization library.

Truly, it's a design decision so ridiculous nobody else has made it. Except Python's pickle, Java's serialization, Ruby's Marshal and PHP's unserialize of course. But other than that, nobody!

Yes, all of those languages made that bad decision back in the 90s and came to regret it. What's ridiculous is refusing to learn from that.
And Lua's bytecode loader, recently discussed here: https://news.ycombinator.com/item?id=40830005
I know "code is data", but it's a couple orders of magnitude more reasonable to have unsafe bytecode than to have unsafe data deserialization.

If something is supposed to load arbitrary code, not just data, that needs to be super clear at a glance. If it comes across as a data library, but allows takeover, you have a problem. Especially if there isn't a similar data-only function/library.

Having been on the reporting side, "an exploitable vulnerability" and "security weakness which could eventually result in an exploitable vulnerability" are two very different things. Bounties always get paid for the first category. Reports falling in the second category might even cause reputation/signal damage for a lack of proof of concept/exploitability.

There are almost always various weaknesses which do not become exploitable until and unless certain conditions are met. This also becomes evident in contests like Pwn2Own where multiple vulnerabilities are often chained to eventually take the device over and remain un-patched for years. Researchers often sit on such weaknesses for a long time to eventually maximize the impact.

Sad but that is how it is.

As the security maxim goes: POC || GTFO
Buyers pay for outcomes. Vendors do pay for individual links in the chain.
> It feels to me like finding a single hole should be sufficient -- one memory corruption, one sandbox escape.

It should be.

> Maybe at the moment there are just too many little issues...

There are so many.

OpenSSH release notes: https://www.openssh.com/txt/release-9.8

Minimal patches for those who can't/don't want to upgrade: https://marc.info/?l=oss-security&m=171982317624594&w=2

> Exploitation on 64-bit systems is believed to be possible but has not been demonstrated at this time.
I'm confident that someone will make a workable exploit against 64-bit systems.
Context here: djmdjm is Daniel Miller, an OpenSSH/OpenBSD developer.
  • l9i · 6 days ago
Damien Miller
Exploits only ever get better. Today's possible is next month's done.
  • gruez · 6 days ago
What's the advantage of this relatively obscure tool compared to something standard like wireguard or stunnel?
* The tool is not obscure, it's packaged in most distributions.[1][2][3] It was written and is maintained by Colin Percival, aka "the Tarsnap guy" or "the guy who invented scrypt". He is the security officer for FreeBSD.

* spiped can be used transparently by just putting a "ProxyCommand" in your ssh_config (see the sketch after the footnotes). This means you can connect to a server just by using "ssh", normally. (as opposed to wireguard where you need to always be on your VPN, or otherwise connect to your VPN manually before running ssh)

* As opposed to wireguard which runs in the kernel, spiped can easily be set up to run as a user, and be fully hardened by using the correct systemd .service configuration [4]

* The protocol is much more lightweight than TLS (used by stunnel), it's just AES, padded to 1024 bytes with a 32 bit checksum. [5]

* The private key is much easier to set up than stunnel's TLS certificate, "dd if=/dev/urandom count=4 bs=1k of=key" and you're good to go.

[1] https://packages.debian.org/bookworm/spiped

[2] https://www.freshports.org/sysutils/spiped/

[3] https://archlinux.org/packages/extra/x86_64/spiped/

[4] https://ruderich.org/simon/notes/systemd-service-hardening

[5] https://github.com/Tarsnap/spiped/blob/master/DESIGN.md
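A sketch of the ProxyCommand setup from the second bullet (the port 8022, key path, and host alias are illustrative assumptions, not spiped defaults):

    # ~/.ssh/config
    Host myserver
        HostName server.example.com
        # spipe (spiped's client) wraps the TCP connection and forwards it to the
        # spiped daemon assumed to be listening on the server's port 8022, which
        # decrypts it and hands it to the local sshd.
        ProxyCommand spipe -t %h:8022 -k ~/.ssh/spiped.key

With that in place, a plain "ssh myserver" goes through the encrypted pipe.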

Correction: I was the security officer for FreeBSD... about a dozen years ago. I'm now the release engineering lead.
Wireguard can also run in userspace (e.g. boringtun[0], wireguard-go[1], Tailscale).

[0] https://github.com/cloudflare/boringtun

[1] https://git.zx2c4.com/wireguard-go/about/

  • sisk · 6 days ago
> The private key is much easier to set up than stunnel's TLS certificate, "dd if=/dev/urandom count=4 bs=1k of=key" and you're good to go.

The spiped documentation recommends a key with a minimum of 256 bits of entropy. I'm curious why you've chosen such a large key (4096 bytes) here? Is there anything to suggest 256 bits is no longer sufficient for the general case?

Force of habit. No particular reason, "4kiB feels like a nice number", cargo culting. Choose one :) .

It doesn't matter if you have more than 256 bits, as your key file gets hashed with SHA256 at the end[1]. It could be 5GiB it would be the same. So yes, you're right to mention that more bits don't add more security.

[1] https://github.com/Tarsnap/spiped/blob/2194b2c64de65eed119ab...

> Finally, if sshd cannot be updated or recompiled, this signal handler race condition can be fixed by simply setting LoginGraceTime to 0 in the configuration file. This makes sshd vulnerable to a denial of service (the exhaustion of all MaxStartups connections), but it makes it safe from the remote code execution presented in this advisory.
Correct me if I'm wrong but it seems like sshd on RHEL-based systems is safe because they never call syslog.

They run sshd with the -D option already, logging everything to stdout and stderr, as their systemd already catches this output and sends it to journal for logging.

So I don't see anywhere they would be calling syslog, unless sshd does it on its own.

At most maybe add OPTIONS=-e into /etc/sysconfig/sshd.

Same question. Don't all systemd-based distros use stdin/out/err for logging and avoid calling syslog?
  • ttul · 6 days ago
TLDR: this vulnerability does appear to allow an attacker to potentially gain remote root access on vulnerable Linux systems running OpenSSH, with some important caveats:

1. It affects OpenSSH versions 8.5p1 to 9.7p1 on glibc-based Linux systems.

2. The exploit is not 100% reliable - it requires winning a race condition.

3. On a modern system (Debian 12.5.0 from 2024), the researchers estimate it takes:
   - ~3-4 hours on average to win the race condition
   - ~6-8 hours on average to obtain a remote root shell (due to ASLR)

4. It requires certain conditions:
   - The system must be using glibc (not other libc implementations)
   - 100 simultaneous SSH connections must be allowed (MaxStartups setting)
   - LoginGraceTime must be set to a non-zero value (default is 120 seconds)

5. The researchers demonstrated working exploits on i386 systems. They believe it's likely exploitable on amd64 systems as well, but hadn't completed that work yet.

6. It's been patched in OpenSSH 9.8p1 released in June 2024.

Why is it that the ASLR only adds 1 bit of randomness (doubling the time it takes to win the attack)?
> 4. It requires certain conditions: - The system must be using glibc (not other libc implementations) - 100 simultaneous SSH connections must be allowed (MaxStartups setting) - LoginGraceTime must be set to a non-zero value (default is 120 seconds)

Stupid question, perhaps, but if those two lines inside the sshd_config are commented out with '#', does this mean that grace period and max. sessions are technically unlimited and therefore potentially vulnerable?

Found my own answer: If the values are commented out, it means that the default values are being used. If the file hasn't been modified the default values are those you see inside the config file.

OpenSSH 9.8p1 was released July 1, 2024 according to https://www.openssh.com/releasenotes.html#9.8p1
I’m not sure how many Linux users would know if they’re using glibc or another variation. Is there a list?
If you don't know, you're likely running glibc. Distros that use musl do so intentionally (alpine, etc.)

    In our experiments, it takes ~10,000 tries on average to win this race
    condition, so ~3-4 hours with 100 connections (MaxStartups) accepted
    per 120 seconds (LoginGraceTime). Ultimately, it takes ~6-8 hours on
    average to obtain a remote root shell, because we can only guess the
    glibc's address correctly half of the time (because of ASLR).
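Spelling out the arithmetic behind those figures:

    10,000 tries / 100 connections per window   = 100 windows
    100 windows x 120 s (LoginGraceTime)        = 12,000 s ≈ 3.3 hours
    x 2 for the ~50% ASLR guess of glibc's base ≈ 6-7 hours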
MaxStartups default is 10
Question regarding this from a non-guru:
- Is it correct that this only works for user root if login with password/key for root is allowed?
- Is it correct that this only works if the attacker knows a login name valid for ssh?
I believe knowing an existing user name or using a host-dependent value does not matter.

The exploit tries to interrupt handlers that are being run due to the login grace period timing out - so we are already at a point where the authentication workflow has ended without all the credentials having been passed.

Plus, in the "Practice" section, they discuss using user name value as a way to manipulate memory at a certain address, so they want/need to control this value.

The config option called MaxStartups accepts a tuple to set 3 associated variables in the code. It wasn't clear to me which value people were referring to.
Even if it means the attack takes 10x as long, it doesn't seem to be limited by bandwidth, only time. Might not take long before the bots appear that try to automatically exploit this on scale.
  • ale42 · 6 days ago
Such a number of connections should trigger all possible logging & IDS systems anyway, right?
It should trigger fail2ban, that's for sure.

Alerting is useless, with the volume of automated exploits attempted.

> It should trigger fail2ban, that's for sure.

But people here are going to explain that fail2ban is security theater...

I am one of the people who see fail2ban as a nuisance for the average administrator. Average means that they know things on average and sooner or later fail2ban will block unexpectedly. Usually when you are away canoeing in the wilderness.

This is all a matter of threat and risk management. If you know what you are doing then fail2ban or portknocking is another layer on your security.

Security theater in my opinion is something else: nonsense password policies, hiding your SSID, whitelisting MACs, ...

Can you link to any comment in this thread of someone actually claiming that?
It's a doorstop, not a fix. Useful nonetheless.
If you have a public facing Internet server, you're probably already running something like blocklistd or fail2ban. They reduce abuse, but they don't do anything to avoid an issue like this except from naive attackers.

More resourceful attackers could automate attempted exploit using a huge botnet, and it'd likely look similar to the background of ssh brute force bots that we already see 24/7/365.

If you don’t value your time, sure? There’s thousands of systems trying to log into publicly accessible SSH servers all the time.
Yeah slow bruteforces are running all over the net all the time. This means there's no reason not to throw this attack into the mix.
I doubt most servers use any such thing.
The default for MaxStartups is 10:30:100

10:30:60 is mentioned in the man page for start:rate:full, so I set mine to that value.
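For reference, a sketch of the directive with the start:rate:full semantics spelled out (using the 10:30:60 value mentioned above):

    # /etc/ssh/sshd_config
    # start:rate:full -- once 10 unauthenticated connections are pending, refuse
    # new ones with 30% probability, rising linearly to 100% at 60 pending.
    # The shipped default is 10:30:100.
    MaxStartups 10:30:60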

Thanks for the quote

I stopped exposing SSH to the internet years ago. Now I connect over WireGuard, and then run SSH through that when I need to remotely admin something.
I guess you could also spin up a OpenBSD server running SSH and use that as a jump host.
> In our experiments, it takes ~10,000 tries on average to win this race condition, so ~3-4 hours with 100 connections (MaxStartups) accepted per 120 seconds (LoginGraceTime). Ultimately, it takes ~6-8 hours on average to obtain a remote root shell, because we can only guess the glibc's address correctly half of the time (because of ASLR).

Mitigate by using fail2ban?

Nice to see that Ubuntu isn't affected at all

  • mmsc · 6 days ago
>Mitigate by using fail2ban?

In theory, this could be used (much quicker than the mentioned days/weeks) to get local privilege escalation to root, if you already have some type of shell on the system already. I would assume that fail2ban doesn't block localhost.

How is local privilege escalation relevant here? Fail2ban should be able to block the RCE
  • mmsc · 6 days ago
How is it not?

If fail2ban isn't going to blocklist localhost, then it isn't a mitigation for this vulnerability because RCE implies LPE.

People are generally not trying to get root via an SSH RCE over localhost. That's going to be a pretty small sample of people that applies to.

But, sure, in that case fail2ban won't mitigate, but that's pretty damn obviously implied. For 99% of people and situations, it will.

  • mmsc · 6 days ago
>People are generally not trying to get root via an SSH RCE over localhost. That's going to be a pretty small sample of people that applies to

It's going to apply to however many servers an attacker has low-privileged access on (think: www-data) together with an unpatched sshd. Attackers don't care if it's an RCE or not: if a public sshd exploit can be used on a system with a Linux version without a public Linux LPE, it will be used. Being local also greatly increases the exploitability.

Then consider the networks where port 22 is blocked from the internet but sshd is running in some internal network (or just locally for some reason).

> It's going to apply to the amount of servers that an attacker has low-privileged access (think: www-data) and an unpatched sshd.

Right, which is almost none. www-data should be set to noshell 99% of the time.

> or just locally for some reason).

This is all that would be relevant, and this is also very rare.

Think “illegitimate” access to www-data. It’s very common on linux pentests to need to privesc from some lower-privileged foothold (like a command injection in an httpd cgi script). Most linux servers run openssh. So yes I would expect this turns out to be a useful privesc in practice.
> Think “illegitimate” access to www-data.

I get the point.

My point was the example being given is less than 1% of affected cases.

> It’s very common on linux pentests to need to privesc from some lower-privileged foothold

Sure. Been doing pentests for 20+ years :)

> So yes I would expect this turns out to be a useful privesc in practice.

Nah.

> Nah

I don’t get it then… Do you never end up having to privesc in your pentests on linux systems? No doubt it depends on customer profile but I would guess personally on at least 25% of engagements in Linux environments I have had to find a local path to root.

> Do you never end up having to privesc in your pentests on linux systems?

Of course I do.

I'm not saying privesc isn't useful, I'm saying the cases where you will ssh to localhost to get root are very rare.

Maybe you test different environments or something, but on most corporate networks I test, the Linux machines are dev machines just used for compiling/testing and basically have shared passwords, or they're servers for webapps or something else where normal users, most of whom have a Windows machine, won't have a shell account.

If there's a server where I only have a local account and I'm trying to get root and it's running an ssh server vulnerable to this attack, of course I'd try it. I just don't expect to be in that situation any time soon, if ever.

  • mmsc · 6 days ago
>I test the linux machines are dev machines just used for compiling/testing and basically have shared passwords, or they're servers for webapps or something else where normal users most who have a windows machine won't have a shell account.

And you don't actually pentest the software which those users on the windows machine are using on the Linux systems? So you find a Jenkins server which can be used to execute Groovy scripts to execute arbitrary commands, the firewall doesn't allow connections through port 22, and it's just a "well, I got access, nothing more to see!"?

> And you don't actually pentest the software which those users on the windows machine are using on the Linux systems?

You really love your assumptions, huh?

> it's just a "well, I got access, nothing more to see!"?

I said nothing like that, and besides that, if you were not just focused on arguing for the sake of it, you would see MY point was about the infrequency of the situation you were talking about (and even then your original point seemed to be contrarian in nature more than anything).

  • mmsc · 6 days ago
>www-data should be set to noshell 99% of the time.

Huh? execve(2), of course, lets you execute arbitrary files. No need to spawn a tty at all. https://swisskyrepo.github.io/InternalAllTheThings/cheatshee...

>This is all that would be relevant, and this is also very rare.

Huh? Exploiting an unpatched vulnerability on a server to get access to a user account is.. very rare? That's exactly what lateral movement is about.

Instead of taking the time to reply 'huh' multiple times, you should make sure you read what you're replying to.

For example:

> Huh? Exploiting an unpatched vulnerability on a server to get access to a user account is.. very rare?

The 'this' I refer to is very clearly not what you've decided to map it to here. The 'this' I refer to, if you follow the comment chain, refers to a subset of something you said which was relevant to your point - the rest was not.

You could have also said "99% of people don't let their login timeout and hit the SIGALRM"... People don't usually use an SSH RCE because there usually isn't an SSH RCE. If there is, why wouldn't they?

It doesn't matter if 99% of the situations you can think of are not problematic. If 1% is feasible and the attackers know about it, it's an attack vector.

  • sgt · 6 days ago
Confirmed - fail2ban doesn't block localhost.
Where do you see that Ubuntu isn't affected?
>Side note: we discovered that Ubuntu 24.04 does not re-randomize the ASLR of its sshd children (it is randomized only once, at boot time); we tracked this down to the patch below, which turns off sshd's rexec_flag. This is generally a bad idea, but in the particular case of this signal handler race condition, it prevents sshd from being exploitable: the syslog() inside the SIGALRM handler does not call any of the malloc functions, because it is never the very first call to syslog().

No mention on 22.04 yet.

Ubuntu has pushed an updated openssh.
Ubuntu isn't affected _by this exploit_
as opposed to the other exploits not being discussed.
For servers you have control over, as an emergency bandaid, sure. Assumes you are not on an embedded system though like a router.
I didn't consider embedded, probably the biggest target for this.
fail2ban just means an attacker would need to use many source IPs, not hard.
> Ultimately, it takes ~6-8 hours on average to obtain a remote root shell, because we can only guess the glibc's address correctly half of the time (because of ASLR).

AMD to the rescue - fortunately they decided to leave the Take A Way and prefetch-type-3 vulnerabilities unpatched, and continue to recommend that the KPTI mitigations be disabled by default due to performance costs. This breaks ASLR on all these systems, so these systems can be exploited in a much shorter time ;)

AMD’s handling of these issues is WONTFIX, despite (contrary to their assertion) the latter even providing actual kernel data leakage at a higher rate than meltdown itself…

(This one they’ve outright pulled down their security bulletin on) https://pcper.com/2020/03/amd-comments-on-take-a-way-vulnera...

(This one remains unpatched in the third variant with prefetch+TLB) https://www.amd.com/en/resources/product-security/bulletin/a...

edit: there is a third now, building on the first one, with an unpatched vulnerability in all Zen 1/Zen 2 as well… so this one is WONTFIX too it seems, like most of the defects TU Graz has turned up.

https://www.tomshardware.com/news/amd-cachewarp-vulnerabilit...

Seriously I don’t know why the community just tolerates these defenses being known-broken on the most popular brand of CPUs within the enthusiast market, while allowing them to knowingly disable the defense that’s already implemented that would prevent this leakage. Is defense-in-depth not a thing anymore?

Nobody in the world would ever tell you to explicitly turn off ASLR on an intel system that is exposed to untrusted attackers… yet that’s exactly the spec AMD continues to recommend and everyone goes along without a peep. It’s literally a kernel option that is already running and tested and hardens you against ASLR leakage.

The “it’s only metadata” argument is so tired. Metadata is more important than regular data, in many cases. We kill people, convict people, control all our security and access control via metadata. Like yeah, it’s just your ASLR layout leaking, what’s the worst that could happen? And real data leaks too in several of these exploits, but that’s not a big deal either… not like those ssh keys are important, right?

What are you talking about? My early-2022 Ryzen 5625U shows:

  Vulnerabilities:          
    Gather data sampling:   Not affected
    Itlb multihit:          Not affected
    L1tf:                   Not affected
    Mds:                    Not affected
    Meltdown:               Not affected
    Mmio stale data:        Not affected
    Reg file data sampling: Not affected
    Retbleed:               Not affected
    Spec rstack overflow:   Vulnerable: Safe RET, no microcode
    Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
    Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
    Spectre v2:             Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
    Srbds:                  Not affected
    Tsx async abort:        Not affected
Only regular stuff
The issue here is that KPTI won't be enabled by default on Linux on AMD CPUs.

Yet it provides valuable separation between kernel and userspace address ranges.

IIRC the predecessor to KPTI was created as a general enhancement to KASLR, before these hardware flaws were announced.

AMD aside, Spectre V2 isn't even default mitigated for userspace across the board, you must specify spectre_v2=on for userspace to be protected.

https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...

> KPTI won't be default enabled on Linux on AMD CPUs is the issue here. Yet it provides valuable separation between kernel and userspace address ranges.

AMD's security bulletin is actually incredibly weaselly and in fact quietly acknowledges KPTI as the reason further mitigation is not necessary, and then goes on to recommend that KPTI remain disabled anyway.

https://www.amd.com/en/resources/product-security/bulletin/a...

> The attacks discussed in the paper do not directly leak data across address space boundaries. As a result, AMD is not recommending any mitigations at this time.

That's literally the entire bulletin, other than naming the author and recommending you follow security best-practices. Two sentences, one of which is "no mitigations required at this time", for an exploit which is described by the author (who is also a named author of the Meltdown paper!) as "worse than Meltdown", in the most popular brand of server processor.

Like it's all very carefully worded to avoid acknowledging the CVE in any way, but to also avoid saying anything that's technically false. If you do not enable KPTI then there is no address space boundary, and leakage from the kernel can occur. And specifically that leakage is page-table layouts - which AMD considers "only metadata" and therefore not important (not real data!).

But it is a building block which amplifies all these other attacks, including Spectre itself. Spectre was tested in the paper itself and - contrary to AMD's statement (one of the actual falsehoods they make despite their weaseling) - does result in actual leakage of kernel data and not just metadata (the author notes that this is a more severe leak than Meltdown itself). And leaking metadata is bad enough by itself - like many kinds of metadata, the page-table layouts are probably more interesting (per byte exfiltrated) than the actual data itself!

AMD's interest is in shoving it under the rug as quietly as possible - the solution is flushing the caches every time you enter/leave kernel space, just like with Meltdown. That's what KPTI is/does, you flush caches to isolate the pages. And AMD has leaned much more heavily on large last-level caches than Intel has, so this hurts correspondingly more.

But I don't know why the kernel team is playing along with this. The sibling commenter is right in the sense that this is not something that is being surfaced to users to let them know they are vulnerable, and that the kernel team continues to follow the AMD recommendation of insecure-by-default and letting the issue go quietly under the rug at the expense of their customers' security. This undercuts something that the kernel team has put significant engineering effort into mitigating - not as important as AMD cheating on benchmarks with an insecure configuration I guess.

There has always been a weird sickly affection for AMD in the enthusiast community, and you can see it every time there's an AMD vulnerability. When the AMD vulns really started to flow a couple years ago, there was basically a collective shrug and we just decided to ignore them instead of mitigating. So much for "these vulnerabilities only exist because [the vendor] decided to cut corners in the name of performance!". Like that's explicitly the decision AMD has made with their customers' security. And everyone's fine with it, same weird sickly affection for AMD as ever among the enthusiast community. This is a billion-dollar company cutting corners on their customers' security so they can win benchmarks. It's bad. It shouldn't need to be said, but it does.

I very much feel that - even given that people's interest or concern about these exploits is fading over time - that even today (let alone a couple years ago) Intel certainly would not have received the same level of deference if they just said that a huge, performance-sapping patch was "not really necessary" and that everyone should just run their systems in an insecure configuration so that benchmarks weren't unduly harmed. It's a weird thing people have where they need to cover all the bases before they will acknowledge the slightest fault or problem or misbehavior with this specific corporation. Same as the sibling who disputed all this because Linux said he was secure - yeah, the kernel team doesn't seem to care about that, but as I demonstrated there is still a visible timing thing even on current BIOS/OS combinations.

Same damn thing with Ryzenfall too - despite the skulduggery around Monarch, CTS Labs actually did find a very serious vuln (actually 3-4 very serious exploits that let them break out of a guest, jailbreak the PSP, bypass AMD's UEFI signing, and achieve persistence), and it's funny to look back at the people whining that it doesn't deserve a 9.0 severity or whatever. Shockingly, MITRE doesn't give those out for no reason, and AMD doesn't patch "root access lets you do root things" for no reason either.

https://www.youtube.com/watch?v=QuqefIZrRWc

I get why AMD is doing it. I don't get why the kernel team plays along. It's unintentionally a really good question from the sibling: why isn't the kernel team applying the standards uniformly? Here's A Modest Security Proposal: if we just don't care about this class of exploit anymore, and KASLR isn't going to be a meaningful part of a defense-in-depth, shouldn't it be disabled for everyone at this point? Is that a good idea?

But is this not the same thing on Intel CPUs? I believe "new" Intel CPUs are also unaffected by Meltdown, and so KPTI will be disabled there by default.
If there are no errata leading to (FG)KASLR violations, then there's no problem disabling KPTI as a general security boundary. The thing I am saying is that vendors do not agree that processors provide ASLR timing-attack protection as a defined security boundary in all situations.

You need to either implement processor-level ASLR protections (and probably those guarantees fade over time!) or do KPTI/flush your shit when you move between address spaces. Or there needs to be an understanding from the kernel team that they need to develop under the page-allocation model that attackers can see your allocation patterns after initial breaches. Like, let's say they breach your PRNG key. Should there be additional compartmentalization after that? Multiple keys at multiple security boundaries / within the stack more generally, to increase penetration time across security boundaries?

seemingly the expectation is one or the other though, because ASLR security is being treated as a security boundary.

I also very much feel that at this point KPTI is just a generalized good defense in depth. If that's the defense that's going to be deployed after your shit falls through... let's just flush it preemptively, right? That's not the current practice but should it be?

Also if you don't have a bios update available for that newer microcode, give my real-ucode package a try: https://github.com/divestedcg/real-ucode

The linux-firmware repo does not provide AMD microcode updates to consumer platforms unlike Intel.

these are the tests you need to run: https://github.com/amdprefetch/amd-prefetch-attacks/blob/mas...

you probably want to do `export WITH_TLB_EVICT=1` before you run make, then run ./kaslr. The power stuff is patched (by removing the RAPL power interface) but there are still timing differences visible on my 5700G, and WITH_TLB_EVICT makes this fairly obvious/consistent:

https://pastebin.com/1n0QbHTH

  452,0xffffffffb8000000,92,82,220
  453,0xffffffffb8200000,94,82,835
  454,0xffffffffb8400000,110,94,487
  455,0xffffffffb8600000,83,75,114
  456,0xffffffffb8800000,83,75,131
  457,0xffffffffb8a00000,109,92,484
  458,0xffffffffb8c00000,92,82,172
  459,0xffffffffb8e00000,110,94,499
  460,0xffffffffb9000000,92,82,155

Those timing differences are the presence/absence of kernel pages in the TLB; those are the KASLR pages, and they're slower when the TLB eviction happens because of the extra bookkeeping.
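
For anyone curious what the measurement boils down to, here's a rough sketch of the timing primitive (my own illustration, not the amd-prefetch-attacks code; it assumes x86-64 with gcc/clang intrinsics, and the real tool adds eviction and much more careful statistics):

    /* Rough sketch of the prefetch-timing primitive: time a prefetch of a
     * candidate kernel address. On systems without KPTI, mapped KASLR slots
     * show different latencies than unmapped ones. Illustrative only. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    static uint64_t timed_prefetch(uint64_t addr)
    {
        unsigned aux;
        _mm_mfence();
        uint64_t start = __rdtscp(&aux);
        _mm_prefetch((const char *)(uintptr_t)addr, _MM_HINT_T0); /* prefetch never faults */
        _mm_mfence();
        return __rdtscp(&aux) - start;
    }

    int main(void)
    {
        /* Walk candidate 2 MiB KASLR slots, keep the minimum of many samples. */
        for (uint64_t a = 0xffffffffb8000000ULL; a < 0xffffffffc0800000ULL; a += 0x200000) {
            uint64_t best = UINT64_MAX;
            for (int i = 0; i < 256; i++) {
                uint64_t t = timed_prefetch(a);
                if (t < best)
                    best = t;
            }
            printf("0x%016" PRIx64 ",%" PRIu64 "\n", a, best);
        }
        return 0;
    }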

then we have the stack protector canary on the last couple pages of course:

  512,0xffffffffbf800000,91,82,155
  513,0xffffffffbfa00000,92,82,147
  514,0xffffffffbfc00000,92,82,151
  515,0xffffffffbfe00000,91,82,137
  516,0xffffffffc0000000,112,94,598
  517,0xffffffffc0200000,110,94,544
  518,0xffffffffc0400000,110,94,260
  519,0xffffffffc0600000,110,94,638

edit: the 4 pages at the end of the memory space are very consistent between tests and across reboots, and the higher lookup time goes away if you set the kernel boot option "pti=on" manually at startup; the default behavior (KPTI off, timing differences visible) is the insecure behavior described in the paper.

log with pti=on kernel option: https://pastebin.com/GK5KfsYd

  513,0xffffffffbfa00000,92,82,147
  514,0xffffffffbfc00000,92,82,123
  515,0xffffffffbfe00000,92,82,141
  516,0xffffffffc0000000,91,82,134
  517,0xffffffffc0200000,91,82,140
  518,0xffffffffc0400000,91,82,151
  519,0xffffffffc0600000,91,82,141

environment: ubuntu 22.04.4 live-usb, 5700G, b550i aorus pro ax latest bios

> Exploitation on non-glibc systems is conceivable but has not been examined.

( https://www.openssh.com/txt/release-9.8 )

Darn - here I was hoping Alpine was properly immune, but it sounds more like "nobody's checked if it works on musl" at this point.

> OpenSSH sshd on musl-based systems is not vulnerable to RCE via CVE-2024-6387 (regreSSHion).

https://fosstodon.org/@musl/112711796005712271

If you are on GCP and don't have time to patch, GCP recommends turning off your port 22 for now. https://cloud.google.com/compute/docs/security-bulletins

1. Find things that are 0.0.0.0 port 22, example, https://gist.github.com/james-ransom/97e1c8596e28b9f759bac79...

2. Force them to the local network, gcloud compute firewall-rules update default-allow-ssh --source-ranges=10.0.0.0/8 --project=$i;

Out of curiosity, does Windows have anything as disruptive as signals? I assume it is also not vulnerable, because the SSH server there does not use glibc.
Yes, Windows has signals. No, Windows doesn’t have glibc. But don’t worry, msvcrt has its own vulnerabilities.
As someone who does unspeakable, but safe, things in signal handlers, I can confirm that it is easy to stray off the path of async-signal-safety.
I agree, and I'm surprised the OpenSSH developers did not remove the use of SIGALRM and replace it with a select/poll timeout and an explicitly managed future-event list. That would likely be more portable, and safe by default against this class of bug, which has bitten ssh code more than once now...

Defensive programming tells us to minimize code in signal handlers, and the safest approach is to avoid using the signal at all when possible :).
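
A minimal sketch of that idea (my own illustration, not the OpenSSH implementation; the function name, fd, and constants are assumptions): enforce the login-grace timeout from the ordinary event loop with poll(), so the logging and exit happen in normal context instead of inside a SIGALRM handler.

    /* Minimal sketch, not the OpenSSH implementation: enforce a login-grace
     * timeout with poll() instead of alarm()/SIGALRM, so the timeout path
     * (logging, exit) runs in ordinary context and nothing async-signal-unsafe
     * ever runs inside a signal handler. */
    #include <poll.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define LOGIN_GRACE_SECONDS 120

    static int ms_until(time_t deadline)
    {
        long left = (long)(deadline - time(NULL)) * 1000;
        return left > 0 ? (int)left : 0;
    }

    void serve_preauth(int client_fd)
    {
        time_t deadline = time(NULL) + LOGIN_GRACE_SECONDS;
        struct pollfd pfd = { .fd = client_fd, .events = POLLIN };

        for (;;) {
            if (time(NULL) >= deadline) {
                /* Timeout: free to call any logging function here, then exit. */
                fprintf(stderr, "Timeout before authentication\n");
                _exit(1);
            }
            int r = poll(&pfd, 1, ms_until(deadline));
            if (r < 0)
                continue;              /* EINTR etc.: loop and recheck the deadline */
            if (r > 0) {
                /* ... read and process pre-auth protocol data from client_fd ... */
            }
        }
    }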

After the xz backdoor a few months ago, I decided to turn off SSH everywhere I don't need it, either by disabling it or uninstalling it entirely. While SSH is quite secure, it's too lucrative a target, so it will always pose a risk.
I now only bind services to wireguard interfaces. The bet is that a compromise in both the service and wireguard at the same time is unlikely (and I have relatively high confidence in wireguard.)
I'm confident in making ssh changes while logged in via ssh.

Compared to ssh, wireguard configs feel too easy to mess up and risk getting locked out if it's the only way of accessing the device.

  • pja
  • ·
  • 5 days ago
  • ·
  • [ - ]
Use tailscale instead? It’s user friendly enough that you’re unlikely to mess it up.
> The bet is that a compromise in both the service and wireguard at the same time is unlikely

An RCE in wireguard would be enough -- no need to compromise both.

I was going to say something like this, but in practice wireguard is very very tiny. It doesn't have pluggable authentication, or passwords, or user transitions, or forked subprocesses, or systemd integrations. Using it or another simple secure transport in front of SSH is probably a good idea.
I don't disagree with you. However, my point was that the parent poster's reasoning was flawed.

Stacking these services on top of each other in this way does not necessarily mean that an attacker has to compromise both services in order to compromise a host. The parent poster's flawed reasoning appeared to lead to a false sense of security as a result.

Yes for sure. An RCE in the first is sufficient, or an auth bypass in the first and some other vulnerability in the second.
  • rwmj
  • ·
  • 6 days ago
  • ·
  • [ - ]
So .. how do you handle remote logins?
"everywhere I don't need it" likely implies computers he or she only accesses directly on the console.
What do you use in place of it?
https://www.aurga.com/ ?

Because an $80 black box wireless KVM from a foreign country is way more secure! (Just kidding, though it is not internet-accessible by default.)

A keyboard and a display, aka "the console". For example when using one's laptop or sitting at their stationary PC.
  • zshrc
  • ·
  • 6 days ago
  • ·
  • [ - ]
Doesn’t affect RHEL7 or RHEL8.
Or RHEL9.

  $ rpm -q openssh
  openssh-8.7p1-38.0.1.el9.x86_64
Versions from 4.4p1 up to, but not including, 8.5p1 are not vulnerable.

The vulnerability resurfaces in versions from 8.5p1 up to, but not including, 9.8p1

https://blog.qualys.com/vulnerabilities-threat-research/2024...

> Statement

> The flaw affects RHEL9 as the regression was introduced after the OpenSSH version shipped with RHEL8 was published.

However, we see the -D option on the listening parent:

  $ ps ax | grep sshd | head -1
     1306 ?        Ss     0:01 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
As mentioned elsewhere here, is -D sufficient to avoid exploitation, or is -e necessary as well?

  $ man sshd | sed -n '/ -[De]/,/^$/p'
     -D      When this option is specified, sshd will not
             detach and does not become a daemon.  This
             allows easy monitoring of sshd.

     -e      Write debug logs to standard error instead
             of the system log.
RHEL9 is also 64-bit only, and we see from the notice:

"we have started to work on an amd64 exploit, which is much harder because of the stronger ASLR."

On top of writing the exploit to target 32-bit environments, this also requires a DSA key that implements multiple calls to free().

There is a section on "Rocky Linux 9" near the end of the linked advisory where unsuccessful exploit attempts are discussed.

>As mentioned elsewhere here, is -D sufficient to avoid exploitation, or is -e necessary as well?

https://github.com/openssh/openssh-portable/blob/V_9_8_P1/ss...

sshd.c handles no_daemon (-D) and log_stderr (-e) independently. log_stderr is what is given to log_init in log.c that gates the call to syslog functions. There is a special case to set log_stderr to true if debug_flag (-d) is set, but nothing for no_daemon.

I can't test it right now though so I may be missing something.

I'm on Oracle Linux, and they appear to have already issued a patch for this problem:

  openssh-8.7p1-38.0.2.el9.x86_64.rpm
  openssh-server-8.7p1-38.0.2.el9.x86_64.rpm
  openssh-clients-8.7p1-38.0.2.el9.x86_64.rpm
The changelog addresses the CVE directly. It does not appear that adding the -e directive is necessary with this patch.

  $ rpm -q --changelog openssh-server | head -3
  * Wed Jun 26 2024 Alex Burmashev <alexander.burmashev@oracle.com> - 8.7p1-38.0.2
  - Restore dropped earlier ifdef condition for safe _exit(1) call in sshsigdie() [Orabug: 36783468]
    Resolves CVE-2024-6387
So in other words, -De is not a workaround. -Dde might be but it will cause more log output than is wanted.
-De is a workaround. -D is not.
Speaking of Rocky 9, they suggest getting the new version from the SIG/Security repository:

https://rockylinux.org/news/2024-07-01-rocky-linux-9-cve-202...

How refreshing to read a pure txt on the phone. It displays text better than a dozen websites.
I haven't seen an increase of ssh traffic yet, but the alert only went out a couple hours ago... hopefully distros will ship the patches quickly.
This is the sort of bug which pre-announcement coordination is designed for. Anyone who doesn't have patches ready was either forgotten (I've seen a few instances of "I thought you were going to tell them!") or isn't on the ball.
Gentoo announced it at the same time as qualys, but they're currently trying to backport and bump users to a patched version. https://bugs.gentoo.org/935271
Gentoo has pushed the patched version now.
  • booi
  • ·
  • 6 days ago
  • ·
  • [ - ]
i would assume all the distros have patches ready to go awaiting the embargo lift.
Patch out for Arch Linux

https://archlinux.org/packages/core/x86_64/openssh/

edit: be sure to manually restart sshd after upgrading; my systems fail during key exchange after the package upgrade until the sshd service is restarted:

  % ssh -v 192.168.1.254
  OpenSSH_9.8p1, OpenSSL 3.3.1 4 Jun 2024
  ... output elided ...
  debug1: Local version string SSH-2.0-OpenSSH_9.8
  kex_exchange_identification: read: Connection reset by peer
  Connection reset by 192.168.1.254 port 22

Same here. It's caused by the sshd daemon being split into multiple binaries. In fact, the commit which introduced the change mentions this explicitly:

> NB. if you're updating via source, please restart sshd after installing, otherwise you run the risk of locking yourself out.

https://github.com/openssh/openssh-portable/commit/03e3de416...

Edit: Already reported at https://gitlab.archlinux.org/archlinux/packaging/packages/op...

For my own setup, I'm looking into Path Aware Networking (PAN) architectures like SCION to avoid exposing paths to my sshd, without having to set up a VPN or port knocking.

https://scion-architecture.net

  • pgraf
  • ·
  • 6 days ago
  • ·
  • [ - ]
Genuinely curious, how would you block an attacker from getting to your SSH port without knowing the path you will connect from (which is the case for remote access) at configuration time? I don‘t see how Path-Aware Networking would replace a VPN solution
The SCION Book goes over a lot of potential solutions that are possible because of the architecture, but my favorite is hidden paths. https://scion.docs.anapaya.net/en/latest/hidden-paths.html

> Hidden path communication enables the hiding of specific path segments, i.e. certain path segments are only available for authorized ASes. In the common case, path segments are publicly available to any network entity. They are fetched from the control service and used to construct forwarding paths.

They note that OpenBSD is not vulnerable. Is macOS also (probably?) safe then?
TLDR: these are the safe versions 4.4p1 <= OpenSSH < 8.5p1 AND >= 9.8p1

---

- OpenSSH < 4.4p1 is vulnerable to this signal handler race condition, if not backport-patched against CVE-2006-5051, or not patched against CVE-2008-4109, which was an incorrect fix for CVE-2006-5051;

- 4.4p1 <= OpenSSH < 8.5p1 is not vulnerable to this signal handler race condition (because the "#ifdef DO_LOG_SAFE_IN_SIGHAND" that was added to sigdie() by the patch for CVE-2006-5051 transformed this unsafe function into a safe _exit(1) call);

- 8.5p1 <= OpenSSH < 9.8p1 is vulnerable again to this signal handler race condition (because the "#ifdef DO_LOG_SAFE_IN_SIGHAND" was accidentally removed from sigdie()).

Anyone else here just totally crap bricks when they see news like this? Like, I wake up and instantly think all my servers are going to be owned and freak out. Though it's usually never that bad, sometimes it is.

edit: maybe I should add an iptables rule to only allow ssh from my IP.

In some setups I decided to have a jumphost via HAProxy SSL as described here: https://www.haproxy.com/blog/route-ssh-connections-with-hapr... so no ssh is directly exposed at all.
So this is effectively like ProxyJump, just with the jump node exposed over SSL and backed by HAProxy binary instead of OpenSSH?

What benefits do you see? I mean, you still expose some binary that implements authentication and authorization using cryptography.

I think that even RBAC scenarios described in the link above should be achievable with OpenSSH, right?

It's not about RBAC at all. The goal is to not expose the ssh socket to the Internet! The ssh TCP stream is encapsulated in HTTPS, and only after successful certificate auth by HAProxy.
Right, but now you are exposing the HAProxy socket to the Internet. Why is that better?
Quoting some ska tune in an SSH vulnerability report really caught me off guard, but I loved it.
I have an Ubuntu 22.10 system with ssh using socket activation. Does this bug still have an impact? I've read that Ubuntu 24.04 is safe because of socket activation. Can any expert here comment?
There's a purported PoC exploit that delivers shellcode available on GitHub, but I saw someone comment the link here, and then their comment disappeared on the next refresh.
  • rrix2
  • ·
  • 5 days ago
  • ·
  • [ - ]
So what are the odds we ever see a viable x86_64 exploit?
Was this abused in the wild?
How do I install OpenSSH 9.8p1 on Ubuntu 22.04.4 LTS?
  • ·
  • 6 days ago
  • ·
  • [ - ]
And who was notoriously not exploitable? The ones hiding sshd behind port knocks. And fail2ban: would work too. And a restrictive firewall: would help too.

I don't use port-knocking but I really just don't get all those saying: "It's security theater".

We had not one but two major OpenSSH "near fiascos" (this RCE and the xz lib thing) that were both rendered unusable for attackers by using port knocking.

To me port-knocking is not "security theater": it adds one layer of defense. It's defense-in-depth. Not theater.

And the port-knocking sequence doesn't have to be always the same: it can, say, change every 30 seconds, using TOTP style secret sequence generation.
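
For what it's worth, here's a toy sketch of that idea (my own illustration, not fwknop and not a real TOTP; a real design would use an HMAC as in RFC 6238): derive the knock ports from a shared secret plus a 30-second time step, so the sequence rotates automatically.

    /* Toy sketch only: derive a 3-port knock sequence from a shared secret and
     * a 30-second time step, so the sequence changes over time. FNV-1a is used
     * here purely for brevity; a real implementation would use HMAC-SHA256. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    static uint64_t fnv1a(const void *data, size_t len, uint64_t h)
    {
        const unsigned char *p = data;
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 1099511628211ULL;            /* FNV-1a 64-bit prime */
        }
        return h;
    }

    int main(void)
    {
        const char secret[] = "correct horse battery staple";  /* shared secret (example) */
        uint64_t step = (uint64_t)time(NULL) / 30;             /* 30-second time step */

        uint64_t h = fnv1a(secret, strlen(secret), 14695981039346656037ULL);
        h = fnv1a(&step, sizeof step, h);

        for (int i = 0; i < 3; i++) {
            unsigned port = 20000 + (unsigned)((h >> (16 * i)) % 40000);
            printf("knock %d: port %u\n", i + 1, port);
        }
        return 0;
    }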

How many exploits rendered cold dead in their tracks by port-knocking shall we need before people stop saying port-knocking is security theater?

Other measures do also help... Like restrictive firewalling rules, which many criticize as "it only helps keep the logs smaller": no, they don't just help keep the logs smaller. I'm whitelisting the IP blocks of the three ISPs anyone could reasonably need to SSH from: now the attacker not only needs the zero-day, but also needs to know to come from one of those three ISPs' IP ranges.

The argument that consists in saying: "sshd is unexploitable, so nothing else must be done to protect the server" is...

Dead.

> I don't use port-knocking but I really just don't get all those saying: "It's security theater".

It's not security theater but it's kind of outdated. Single Packet Authentication[0] is a significant improvement.

> How many exploits rendered cold dead in their tracks by port-knocking shall we need before people stop saying port-knocking is security theater?

Port knocking is one layer, but it shouldn't be the only one, or even a heavily relied upon one. Plenty of people might be in a position to see the sequence of ports you knock, for example.

Personally, I think if more people bothered to learn tools like SELinux instead of disabling it due to laziness or fear, that is what would stop most exploits dead. Containers are the middleground everyone attached to instead, though.

[0] https://www.cipherdyne.org/fwknop/docs/SPA.html

Port-knocking is a PITA in theory and even worse in the real world: people have neither the time nor the will to perform wild invocations before getting the job done.

Unless you are talking about your own personal use-case, in which case, feel free to follow your deepest wishes

Firewalls are a joke, too. Who can manage hundreds or thousands of ever-changing IPs? Nobody. Again: I'm not talking about your personal use-case (yet I enjoy connecting to my server through 4G, wherever I am).

Fail2ban, on the other hand, is nice: every system that relies on some known secret benefits from an anti-bruteforce mechanism. Also, and this is important: fail2ban is quick to deploy, and not a PITA for users. Good stuff.

I am interested in any tools for managing IP allow lists on Azure and AWS. It seems like there should be something fairly polished, perhaps with an enrollment flow for self-management and a few reports/warnings...
I’ve written a few toy port knocking implementations, and doing it right is hard.

If your connection's crap and there's packet loss, some of the sequence may be lost.

Avoiding replay attacks is another whole problem - you want the sequence to change based on a shared secret and time or something similar (eg: TOTP to agree the sequence).

Then you have to consider things like NAT…

Also, if you are trying to connect to a resource from a restrictive network your knocking sequence might be blocked by filters at the client end.

I've been on networks that allow nothing except the standard TCP & UDP ports for HTTP(S), SSH and DNS (and even then DNS was restricted to their local name server).

> […] rendered unusable for attackers by using port knocking.

Port knocking renders SSH unusable: I'm not going to tell my users "do this magic network incantation before running ssh". They want to open a terminal and simply run ssh.

See the A in the CIA triad, as well as U in the Parkerian hexad.

What benefits does port knocking give over and above a simple VPN? They're both additional layers of authentication, except a VPN seems much more rigorous and brings potentially other benefits.

In a world where tailscale etc. have made quality VPNs trivial to implement, why would I bother with port knocking?

Port-knocking is way simpler and relies on extremely basic network primitives. As such the attack surface is considerably smaller than OpenSSH or OpenVPN and their authentication mechanisms.
VPNs drop your bandwidth speeds by 50% on average. And if tailscale has to use a relay server, instead of a direct connection, bandwidth will drop by 70-80%.
  • gruez
  • ·
  • 6 days ago
  • ·
  • [ - ]
>VPNs drop your bandwidth speeds by 50% on average

Source? Wireguard can do ~1 Gbit/s on decade-old processors[1]. Even OpenVPN can do 258 Mbit/s, which realistically can saturate the average home internet connection. Also, if we're talking about SSH connections, why does throughput matter? Why do you need a gigabit of bandwidth to transfer a few keystrokes a second?

[1] https://www.wireguard.com/performance/

We ran iperf on our multi-cloud and on prem network.

> Also, if we're talking about SSH connections, why does throughput matter?

scp, among other things, runs over ssh.

  • gruez
  • ·
  • 6 days ago
  • ·
  • [ - ]
>scp, among other things, runs over ssh.

Ironically scp/sftp caused me more bandwidth headaches than wireguard/openvpn. I frequently experienced cases where scp/sftp would get 10% or even less of the transfer speed compared to a plain http(s) connection. Maybe it was due to packet loss, buffer size, or qos/throttling, but I wasn't able to figure out a definitive solution.

  • nh2
  • ·
  • 5 days ago
  • ·
  • [ - ]
In almost all cases, the reason is OpenSSH's silly limitation of buffer sizes [1].

It limits the amount of data that's "in the cable" (which needs to be more if the cable is long).

> The default SSH window size was 64 - 128 KB, which worked well for interactive sessions, but was severely limiting for bulk transfer in high bandwidth-delay product situations.

> OpenSSH later increased the default SSH window size to 2 MB in 2007.

2 MB is still incredibly little.

It means that on a 100 ms connection, you can not exceed 160 Mbit/s, even if your machines have 10 Gbit/s.
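
Quick sanity check on that cap (a back-of-the-envelope sketch: throughput can never exceed window size divided by round-trip time):

    /* Bandwidth-delay product check: with a 2 MB window and a 100 ms RTT,
     * throughput is capped at window / RTT, regardless of link speed. */
    #include <stdio.h>

    int main(void)
    {
        double window_bytes = 2.0 * 1024 * 1024;   /* SSH channel window */
        double rtt_seconds  = 0.100;               /* 100 ms round trip */
        double mbit_per_s   = window_bytes / rtt_seconds * 8.0 / 1e6;
        printf("cap: %.0f Mbit/s\n", mbit_per_s);  /* ~168 Mbit/s (about 160 if 2 MB = 2e6 bytes) */
        return 0;
    }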

OpenSSH is one of the very few TCP programs that have garbage throughput on TCP. This is also what makes rsync slow.

The people from [1] patched that.

You can support that work here: https://github.com/rapier1/hpn-ssh

In my opinion, this should really be fixed in OpenSSH upstream. I do not understand why it doesn't just use normal automatic TCP window size scaling, like all other TCP programs.

All the big megacorps and almost every other tech company in existence uses SSH, yet nobody seems to care that it's artificially 100x slower than necessary.

[1]: http://www.allanjude.com/bsd/AsiaBSDCon2017_-_SSH_Performanc...

Are you assuming that VPNs are more secure than ssh?
Yes, inherently yes. Because they have far fewer features than SSH.

It's in the name; Secure Shell, vs. Virtual Private Network. One of them has to deal with users, authentication, shells, chroots. The other mostly deals with the network stack and encryption.

No?
Port knocking is a ludicrous security measure compared to the combination of:

* configuring sshd to only listen over a Wireguard tunnel under your control (or letting something like Tailscale set up the tunnel for you)

* switching to ssh certificate authn instead of passwords or keys
Does Wireguard work in such a way that there is no trace of its existence to an unauthorized contacting entity?

I used port knocking for a while many years ago, but it was just too fiddly and flaky. I would run the port knocking program, and the port would sometimes fail to open, or fail to close afterwards.

If I were to use a similar solution today (for whatever reason), I'd probably go for web knocking.

In my case, I didn't see it as a security measure, but just as a way to cut the crap out of sshd logs. Log monitoring and banning does a reasonable job of reducing the crap.

That's one of the original design requirements for wireguard. Unless a packet is signed with the correct key, it won't respond at all.
  • cies
  • ·
  • 6 days ago
  • ·
  • [ - ]
I use port knocking to keep my ssh logs clean. I don't think it adds security (I even brag about using it in public). It allows me to read ssh's logs without having to remove all the script-kiddie login attempt spam.
  • boxed
  • ·
  • 6 days ago
  • ·
  • [ - ]
Saying you use it publicly doesn't defeat the security it gives though. Unless you publicly say the port knocking sequence. Which would be crazy.
  • cies
  • ·
  • 6 days ago
  • ·
  • [ - ]
I meant to say: I don't use it for security, I use it for convenience.
  • _joel
  • ·
  • 6 days ago
  • ·
  • [ - ]
Those not notoriously exploitable were those using gated ssh access only via known IPs or connecting via tailnets/vpn.
  • abofh
  • ·
  • 6 days ago
  • ·
  • [ - ]
Port knocking works if you have to ssh to your servers, but there are many solutions that obviate even that and leave you with no open ports, yet a fully manageable server. I'm guilty of using SSM in AWS, but the principle applies - the cattle phone home, you only have to call pets.
  • ·
  • 6 days ago
  • ·
  • [ - ]
Sure, if you can't afford a more robust access control to your SSH server and for some reason need to make it publicly available then port knocking etc. can be a deterring feature that reduces the attack rate.
Camouflage is after all one of nature's most common defenses. Always be quick with patching, though.
  • ugjka
  • ·
  • 6 days ago
  • ·
  • [ - ]
I put all my jank behind wireguard
  • ·
  • 6 days ago
  • ·
  • [ - ]
  • nj5rq
  • ·
  • 6 days ago
  • ·
  • [ - ]

    OpenBSD is notably not vulnerable, because its
    SIGALRM handler calls syslog_r(), an async-signal-safer version of
    syslog() that was invented by OpenBSD in 2001.
Saving the day once again.
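
For reference, a rough sketch of what that looks like (assuming OpenBSD's syslog_r(3) API; this is an illustration, not sshd's actual handler): the logging state lives in a caller-provided struct, so the call avoids the global state that makes plain syslog() async-signal-unsafe.

    /* Illustrative only (OpenBSD-specific API, see syslog_r(3)): the caller
     * supplies the syslog state on its own stack, so calling it from a signal
     * handler avoids the shared/global state used by plain syslog(). */
    #include <syslog.h>
    #include <unistd.h>

    static void sigalrm_handler(int sig)
    {
        struct syslog_data sdata = SYSLOG_DATA_INIT;   /* per-call state, on the stack */

        syslog_r(LOG_INFO, &sdata, "Timeout before authentication (signal %d)", sig);
        _exit(1);
    }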
  • fmbb
  • ·
  • 6 days ago
  • ·
  • [ - ]
“async-signal-safer”

Just this morning was the first time I read the words MT-Safe, AS-Safe, AC-Safe. But I did not know there were “safer” functions as well.

Is there also a “safest” syslog?

For a word like 'safe', at least in CS, I would assume that the 'safe' one actually is the safest; 'safer' is, ehh, not safe, but an improvement on the unsafe one. It's safer.
Similarly, 'safest' in normal English means not completely safe, but more safe than the other options. So safe > safest > safer > safe-ish > unsafe.
Wait, that seems backwards to me as a native English speaker. The superlative version feels more safe. Safest > Safe > (…)
One example to help think about this. Say you have 3 friends. Your friend Bob has a net worth of $1 - he is the least rich. Your friend Alex has a net worth $10 - he is richer. Another friend Ben has a net worth of $100 - he is the richest. Richest here is comparative against all 3 of them, but none of them are actually rich. Bill Gates is rich. Bezos is rich. Musk is rich. Someone with a net worth of $100 isn't.

You can still have comparisons between the rich too, so Bezos is richer than Gates and he's also the richest if you're just considering the pair. But add Musk to the mix, and he's no longer the richest.

I guess that last example looks like you have two attributes - rich as some objective "has a lot of money" and comparatively rich (richer, richest). For safe, it's kind of similar, except that as soon as you are saying one thing is safer than the other, then you are implicitly acknowledging that there are areas where the thing isn't safe, and if you're admitting that you can't also call it safe without contradicting yourself.

A better example is "pure water". By its definition, that's just H2O molecules floating around with nothing else.

If you add a single grain of salt to a glass of that water, it's no longer pure. Drinking it you probably wouldn't notice, and some people might colloquially call it "pure", but we know it isn't because we added some salt to it.

If you add a teaspoon of salt to a different glass of pure water, it's also no longer pure, and now most people would probably notice the salt and recognise it's not pure.

If you add a tablespoon of salt to a different glass of pure water, it's definitely not pure and you probably wouldn't want to drink it either.

You could say the teaspoon of salt glass is purer than the tablespoon of salt glass, the grain of salt glass is purer than both of them and so the purest of the three. And yet, we know that it isn't pure water, because we added something else to it.

So pure > purest > purer > less pure. Also note that I was required to use "less pure" for the last one, because all of them except pure are "impure" or "not pure", even though those were what I originally thought of writing.

It's a bit ambiguous and depends on context, which is why I said 'at least in CS', since for whatever the particular topic is 'safe' and 'unsafe' is likely to have a fairly strict meaning.

In general you're right. For safety it's just that 'safest' implies some sort of practicality: the best - most safe - from a set of options. But the safest option isn't necessarily strictly safe.

(Say your dog's stuck on a roof on a windy day, you decide the safest option is scaffolding (safer than a ladder or free climbing), but it's not safe, you just insist on rescuing your dog.)

Nah. If something is 'safe', it's safe, period. If something is safest is, it's only the best of the available options and not necessarily 'safe'.
  • nj5rq
  • ·
  • 6 days ago
  • ·
  • [ - ]
I would assume he refers to "safe" being absolutely safe, while "safest" refers to the safest of the existing alternatives?
  • fmbb
  • ·
  • 6 days ago
  • ·
  • [ - ]
I would assume the same. Hence my question.
Theo and team way ahead of their time like always.
  • pjmlp
  • ·
  • 6 days ago
  • ·
  • [ - ]
Not always,

36C3 - A systematic evaluation of OpenBSD's mitigations

https://www.youtube.com/watch?v=3E9ga-CylWQ

Wouldn't a good systematic evaluation need (or at least benefit from) a few actual working exploits/PoCs? I keep asking this as a long-time OpenBSD user who is genuinely interested in seeing it done, but so far everyone who has said "it's flawed" also reserved themselves the convenience of not having to prove their point in a practical sense.
> Wouldn't a good systematic evaluation need (or at least benefit from) a few actual working exploits/PoCs?

Sure, see any of the previous exploits for sshd, or any other software shipped in the OpenBSD default install.

> I keep asking this as a long-time OpenBSD user who is genuinely interested in seeing it done, but so far everyone who has said "it's flawed" also reserved themselves the convenience of not having to prove their point in a practical sense.

The point is they have very little in the way of containing attackers and restricting what they can do. Until pledge and unveil, almost all their focus was on eliminating bugs, which, hey, great, but let's have a little more in case you miss a bug and someone breaks in, eh?

An insecure dockerized webserver protected with SELinux is safer than Apache on a default OpenBSD install.

> Sure, see any of the previous exploits for sshd, or any other software shipped in the OpenBSD default install.

Would you like to point to one that successfully utilizes a weakness in OpenBSD itself, which is the topic and implied statement of the video, rather than a weakness in some application running under the superuser?

Just to underline, I'm not interested in discussing the hows and whys of containing arbitrary applications where one or more portions are running under euid 0. I'm interested in seeing OpenBSD successfully attacked by an unprivileged process/user.

Now to be fair, sshd on OpenBSD is part of OpenBSD rather than an add-on application and I think it would be fair to count exploits in it against the OS, if it had vulnerabilities there.
Any vulns in any package in OpenBSD's package repositories that they audited should count as a vuln against OpenBSD itself.

If OpenBSD users installed it through OpenBSD repositories and are running it will they be affected? Yes? Then it counts against the system itself.

I'm not sure that's fair; was log4j a vulnerability in Ubuntu itself? How about libwebp ( https://news.ycombinator.com/item?id=37657746 )?
> I'm not sure that's fair;

It's the way most distros handled security vulnerabilities, though. Without looking, I'm certain Ubuntu has a security advisory for that vulnerability.

So I agree it might not be fair on the face of it or if doing a technical analysis or something, but if you want to compare OpenBSD security to other Linux distros by vulnerability count, (and so many who don't know better do), then vulnerabilities should be measured in the same way across both systems.

> Would you like to point to one that successfully utilizes a weakness in OpenBSD itself, which is the topic and implied statement of the video, rather than a weakness in some application running under the superuser?

I'm sorry, what? What kind of nonsense distinction is this?

Are you trying to very disingenuously try and claim only kernel exploits count as attacks against OpenBSD?

Why the hell wouldn't a webserver zero-day count? If an OS that claims to be security focused can't constrain a misbehaving web server running as root then it's sure as hell not any type of secure OS.

> I'm interested in seeing OpenBSD successfully attacked by an unprivileged process/user.

You realize there is very little that OpenBSD does to protect against LPE if there is any LPE vuln on their system, right? Surely you're not just advocating for OpenBSD based on their own marketing? If you want to limit the goalposts to kernel vulns or LPE's that already require an account you're free to do so, but that's rather silly and not remotely indicative of real world security needs.

If it's a security focused OS, it should provide ways to limit the damage an attacker can do. OpenBSD had very very little in that regard and still does, although things are slightly better now and they have a few toys.

And hey, fun fact, if you apply the same OpenBSD methodology and config of having a barebones install, you'll suddenly find at least dozens of other operating systems with equivalent or better track records.

Plan 9 has had fewer vulnerabilities than OpenBSD and has had more thought put into its security architecture[0], so by your metric it's the more secure OS, yeah?

[0] http://9p.io/sys/doc/auth.html

> I'm sorry, what? What kind of nonsense distinction is this?

> Are you trying to very disingenuously try and claim only kernel exploits count as attacks against OpenBSD?

Not at all. I clearly underlined that I'm not looking for cases fitting that specific scenario. The only moving of goalposts is entirely on your behalf, by very disingenuously misrepresenting my question in a poor attempt to make your answer or whatever point fit. And on top of that, the tasteless pretending to be baffled...

> Not at all. I clearly underlined that I'm not looking for cases fitting that specific scenario

The thing is, we're trying to talk about the security of OpenBSD compared to its competition.

But you're trying to avoid letting anyone do that by saying only an attack against something in the default install you can do with a user account counts, which is absolutely ridiculous.

I'm not moving the goalposts nor am I pretending in any sense. Your approach just doesn't make sense, measure or indicate anything useful or relevant about the security of OpenBSD. I stated so and explained why.

But hey, keep believing whatever you want buddy.

> "The thing is, we're trying to talk about the security of OpenBSD compared to its competition."

> "But you're trying to avoid letting anyone do that by saying only an attack against something in the default install you can do with a user account counts, which is absolutely ridiculous."

I don't know who "we" are. The question I asked another poster, where you decided to butt in, regarded escalation from an unprivileged position and nothing else.

Nobody but yourself said anything along the lines of "only attacks against things in the default install 'count'", nor drew up comparisons against "the competition". You clearly have some larger axe to grind, but you're doing it in a discourse playing out only in your head, without reading what others actually wrote.

  • dang
  • ·
  • 4 days ago
  • ·
  • [ - ]
Please don't perpetuate flamewar comments on HN. It's not what this site is for, and destroys what it is for.

I don't see as much of it in your recent history, which is good, but it's not good to let yourself get sucked into this sort of tit-for-tat spat.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

> I don't know who "we" are.

We are the people having this discussion. That should be obvious. It's kind of funny you accused me of pretending to be baffled, lol. The irony.

You certainly had no issue discussing this top with me until I called out your claims/methodology as nonsense.

> The question I asked another poster, where you decided to butt in,

Welcome to the Internet!

> regarded escalation from an unprivileged position and nothing else.

Yes. And I pointed out why this is an absolutely nonsense approach. You realize getting root on OpenBSD is significantly easier than several other setups or Linux distro's you've probably never heard of, though, right?

So, what is it? Afraid to be wrong? You brought too much into the OpenBSD marketing, so now it's a sunk cost for your ego?

> Nobody but yourself said anything along the lines of "only attacks against things in the default install 'count'", nor drew drew up comparisons against "the competition".

This is exactly what you imply when you want to limit attacks to LPE's that require a user account, lol.

> You clearly have some larger axe to grind, but you're doing it in a discourse playing out only in your head, without reading what others actually wrote.

No axe to grind. Just calling out bad claims and reasoning.

Even now, you've successfully got us discussing semantics and nonsense instead of you actually addressing the bs claims you made. Stellar job.

  • dang
  • ·
  • 4 days ago
  • ·
  • [ - ]
You've been posting huge numbers of flamewar comments lately. That's not what HN is for, and destroys what it is for. If you keep this up, we're going to have to ban you. I don't want to do that, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules from now on, we'd appreciate it.

By the time a commenter gets to violating the site guidelines as egregiously as this, it's almost always the case that they should have stopped posting a lot sooner.

Not sure if you reflect at all over your own texts. You come off as projecting, incoherent and cognitively dissonant.
  • dang
  • ·
  • 4 days ago
  • ·
  • [ - ]
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html

Let's say I show up on this message board and say my house is more secure than a bank vault, because I have a special laser in my attic that vaporizes attackers if they come in my house. Would you believe me? Would you bother to even visit my house to prove me wrong? I mean, I can claim all I want that nobody has robbed my house, but at some point there is actually nothing of value here that means nobody has tried and nobody actually wants to try.
  • zshrc
  • ·
  • 6 days ago
  • ·
  • [ - ]
Not always, but they make it their goal to be.

Code standards are very strict in OpenBSD and security is always a primary thought...

  • ·
  • 6 days ago
  • ·
  • [ - ]
Now, how many remote exploits do we have in OpenBSD?
No more than before this; OpenBSD is not vulnerable to this exploit due to a different syslog() implementation.
  • ZiiS
  • ·
  • 6 days ago
  • ·
  • [ - ]
Worth being explicit here. The OpenBSD syslog is not just 'different' enough that it was luckily unaffected. It was intentionally designed to avoid this situation more than 20 years ago.
Also there's no publicly known exploit for this one yet even for Linux. The advisory says Qualys put exploit development on hold to coordinate the fix.
Two in living memory? If you know something with a better track record do speak up.
SEL4 and derivatives.

For starters.

And if you want to simply go by vulnerability counts, as though that meant something, let's throw in MenuetOS and TempleOS.

Okay, let's say: if you know something useful with a better record. TempleOS doesn't have networking, so while it's genuinely cool it's not useful to most people. MenuetOS does have networking but poor software compatibility. I would actually love to see a seL4 distro, but AFAIK it's pretty much only ever used as a hypervisor with a "real" (normal) OS under it, often (usually?) Linux-based. We can certainly consider to what degree OpenBSD is useful with just the base system, but it does include everything out of the box to be a web server with zero extra software added, including sshd in its native environment.
> Okay, let's say if you know something useful with a better record.

Oh, SEL4 is without any doubt useful, it wouldn't be as popular and coveted if it wasn't, but I think you are trying to say widespread.

However, you seem to have taken my examples literally and missed my point, which is trying to judge the security of an OS by its vulnerabilities is a terrible, terrible approach.

> but it does include everything out of the box to be a web server

Sure, and so do plenty of minimal linux distros, and if you use the same metrics and config as OpenBSD then they'll have a similar security track record.

And honestly, Linux with one of the RBAC solutions puts OpenBSD's security to shame.

Do yourself a favor and watch the CCC talk someone else linked in the thread.

> Oh, SEL4 is without any doubt useful, it wouldn't be as popular and coveted if it wasn't, but I think you are trying to say widespread.

There is a laptop running OpenIndiana illumos on my desk. I mean useful, though through the lens of my usecases (read: if it can't run a web browser or server, I don't generally find it useful). I've only really heard of seL4 being popular in embedded contexts (mostly cars?), not general-purpose computers.

> However, you seem to have taken my examples literally and missed my point, which is trying to judge the security of an OS by its vulnerabilities is a terrible, terrible approach.

No, I think your examples were excellent for illustrating the differences in systems; you can get a more secure system by severely limiting how much it can do (seL4 is a good choice for embedded systems, but in itself currently useless as a server OS), or a more useful system that has more attack surface, but OpenBSD is a weirdly good ratio of high utility for low security exposure. And yes of course I judge security in terms of realized exploits; theory and design is fine, but at some point the rubber has to hit the road.

> Sure, and so do plenty of minimal linux distros, and if you use the same metrics and config as OpenBSD then they'll have a similar security track record.

Well no, that's the point - they'll be better than "fat" distros, but they absolutely will not match OpenBSD. See, for example, this specific sshd vuln, which will affect any GNU/Linux distro and not OpenBSD, because OpenBSD's libc goes out of its way to solve this problem and glibc didn't.

> Do yourself a favor and watch the CCC talk someone else linked in the thread.

I don't really do youtube - is it the one that handwaves at allegedly bad design without ever actually showing a single exploit? Because I've gotten really tired of people loudly proclaiming that this thing is so easy to exploit but they just don't have time to actually do it just now but trust them it's definitely easy and a real thing that they could do even though somehow it never seems to actually happen.

I’m an OpenBSD fanboi, and the review of mitigations, their origins, efficacy, and history is well worth the time to watch, or just review the slides. It’s not about some claim of vulns.
> is it the one that handwaves at allegedly bad design without ever actually showing a single exploit? Because I've gotten really tired of people loudly proclaiming that this thing is so easy to exploit but they just don't have time to actually do it just now but trust them it's definitely easy and a real thing that they could do even though somehow it never seems to actually happen

I mean, OpenBSD does security mitigation sealioning, so nobody really wants to engage with their stupider ideas

> I mean useful, though through the lens of my usecases

Better to stick to standard definitions in the future so you won't have to explain your personal definitions later on.

> No, I think your examples were excellent for illustrating the differences in systems; you can get a more secure system by severely limiting how much it can do

So you not only missed the point but decided to take away an entirely different message. Interesting.

Yes, limiting attack surface is a basic security principle. The examples I gave were not to demonstrate this basic principle, but to show that trying to gauge security by amount of vulnerabilities is foolish.

> seL4 is a good choice for embedded systems, but in itself currently useless as a server OS

Plan 9 then. Or any of other numerous OS projects that have less vulns than OpenBSD and can meet your arbitrary definition of 'useful'. The point is that trying to measure security by vuln disclosures is a terrible, terrible method and only something someone with no clue about security would use.

> but OpenBSD is a weirdly good ratio of high utility for low security exposure.

OpenBSD is just niche, that's it. Creating OpenSSH brought a lot of good marketing, but if you really look at the OS from a security perspective and look at features, it's lacking.

> Well no, that's the point - they'll be better than "fat" distros, but they absolutely will not match OpenBSD.

They absolutely will be better than OpenBSD, because they have capabilities to limit what an attacker can do in the event they get access, as opposed to putting all the eggs in the 'find all the bugs before they get exploited' basket. OpenBSD isn't anything special when it comes to security. That, really, is the point. Anything otherwise is marketing or people who have fell for marketing IMO.

> I don't really do youtube

There's a lot of good content only on that platform. Surely you can use yt-dlp or freetube or something.

> is it the one that handwaves at allegedly bad design without ever actually showing a single exploit?

That summary isn't remotely accurate, so I'd have to say no.

> Because I've gotten really tired of people loudly proclaiming that this thing is so easy to exploit but they just don't have time to actually do it just now but trust them it's definitely easy and a real thing that they could do even though somehow it never seems to actually happen.

They have remote holes listed on their homepage. Both those cases led to remote root and this supposedly secure OS had nothing to offer, while most Linux distros did. Let's make this simple. Linux allows you to contain a remote root exploit with tools like RBAC and MAC extensions. OpenBSD offers nothing. In the event both systems have the same vulnerability (of which this titular instance is not an example of) allowing remote root, Linux will be the safer system if set up correctly.

But honestly, I've gotten really tired of OpenBSD stans regurgitating that it's supposedly secure and thinking that being able to point to a lack of vulnerabilities in a barebones default install is some kind of proof of that.

You're not being serious if you are suggesting Plan 9 as a more secure OS than OpenBSD.
I was making a point that, per the other poster's methodology of evaluating security by vulnerability count, Plan 9 would win, and Plan 9 also meets the post's arbitrary definition of 'useful', in that it can run a webserver, database, and other common software.
People still use SSH these days?

I kid, but really you probably shouldn't in production. You should be exporting your logs and everything else, and the host or VM should be bootstrapped from golden images with everything baked in as needed.

It is okay to start that way while you figure out your internals, but that isn't for production. Production is a locked-down, closed environment.

Reposted comment of mine from another Hacker News post.

Yes, I too have production systems running no software at all.
  • sneak
  • ·
  • 6 days ago
  • ·
  • [ - ]
Why are we all still running an ssh server written in an unsafe language in 2024?
Because nobody has written an sshd in a memory safe language with the same track record of safety as OpenSSH. I personally wouldn't trust a new sshd for a few years at least.
There’s a Rust library that implements most of the protocol, but I’ve not found a “drop in replacement” using said library yet.

Might actually make for a fun side project to build an SSH server using that library and see how well it performs.

Does Rust have some invulnerability to race conditions?
It does guarantee freedom from data races. However, that guarantee applies only to data types and code written in Rust.

The dangerous interaction between signals and other functions is outside of what Rust can help with.
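
To illustrate the first half of that, here is a minimal sketch (the name `failed_logins` and the thread count are made up for the example): safe Rust won't even compile the aliasing that a data race needs, so shared mutable state has to be expressed through Sync types like atomics or mutexes.

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        // A plain `let mut failed_logins = 0u64;` mutated from several threads
        // would be rejected at compile time; the aliasing a data race requires
        // cannot be expressed in safe Rust. Shared mutation has to go through
        // a Sync type such as an atomic or a Mutex.
        let failed_logins = Arc::new(AtomicU64::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&failed_logins);
                thread::spawn(move || {
                    for _ in 0..1000 {
                        counter.fetch_add(1, Ordering::Relaxed);
                    }
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }
        assert_eq!(failed_logins.load(Ordering::Relaxed), 4000);
    }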

There are several crates available which implement the dangerous parts of signal handling safely for you.
There are, but safety of their implementation is not checked by the language.

Rust doesn't have an effect system nor a similar facility to flag what code is not signal-handler-safe. A Rust implementation could just as likely call something incompatible.

Rust has many useful guarantees, and is a significant improvement over C in most cases, but let's be precise about what Rust can and can't do.

> Rust doesn't have an effect system nor a similar facility to flag what code is not signal-handler-safe.

My understanding is that a sound implementation of signal handling in Rust will require the signal handler to be Send, requiring it only has access to shared data that is Sync (safe to share between threads). I guess thread-safe does not necessarily imply signal-safe, though.

And of course you could still call a signal-unsafe C function, but that requires an unsafe block, explicitly acknowledging that Rust's guarantees do not apply.

Signal handlers are not threads. Rust doesn't have anything that expresses the special extremely restrictive requirements of signal handlers.

A safe-Rust thread-safe Send+Sync function is allowed to call `format!()`, or `Box::new`, or drop a `Vec`, all of which will directly cause the exact same vulnerability as in SSH.

There is nothing in Rust that can say "you can't drop a Vec", and there are no tools that will let you find out whether any function you call may do it. Rust can't even statically prove that panics won't happen. Rust's panic implementation performs heap allocations, so any Rust construct that can panic is super unsafe in signal handlers.
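
To make that gap concrete, here is a hedged sketch using the real `signal-hook` crate (0.3-series `low_level::register` API; the handler body is invented for the example). The closure satisfies the required `Fn() + Send + Sync + 'static` bound and is entirely safe Rust, yet `format!` heap-allocates inside a signal handler, the same class of hazard as sshd's SIGALRM handler, and nothing in the type system flags it:

    use signal_hook::consts::SIGALRM;

    fn main() -> Result<(), std::io::Error> {
        // `register` is marked `unsafe` precisely because the caller must promise
        // that the closure is async-signal-safe; the compiler cannot check that
        // promise.
        unsafe {
            signal_hook::low_level::register(SIGALRM, || {
                // Compiles without complaint, but allocates on the heap from
                // inside a signal handler.
                let msg = format!("grace timer fired in pid {}", std::process::id());
                drop(msg);
            })?;
        }
        // ... the rest of the program would arm the timer with alarm()/setitimer().
        Ok(())
    }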

The crates that I have looked at work by installing their own minimal signal handler which then puts a message into a channel, or otherwise delivers the message that the signal was fired to your code in a safe way.

Of course, you are still trusting that the implementation is sound.
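
For comparison, a hedged sketch of the pattern those crates expose, again via `signal-hook` (`flag::register` is the real API; the variable names, exit path, and polling loop are illustrative, a real server would hook into its event loop instead). The handler the crate installs only stores into an atomic, which is async-signal-safe; the program reacts synchronously in ordinary code, conceptually the same move as the sshd fix of doing the work outside the handler:

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::time::Duration;

    use signal_hook::consts::SIGALRM;

    fn main() -> Result<(), std::io::Error> {
        let grace_expired = Arc::new(AtomicBool::new(false));

        // The handler installed here does nothing but store `true` into the
        // atomic flag, which is async-signal-safe.
        signal_hook::flag::register(SIGALRM, Arc::clone(&grace_expired))?;

        loop {
            if grace_expired.load(Ordering::Relaxed) {
                // Handled synchronously, outside any signal handler: safe to log,
                // allocate, tear down connections, etc.
                eprintln!("login grace time expired, closing connection");
                std::process::exit(1);
            }
            std::thread::sleep(Duration::from_millis(50));
        }
    }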

  • px43
  • ·
  • 6 days ago
  • ·
  • [ - ]
This bug seems to be exploitable due to a memory corruption triggered by the race condition; it's the memory corruption that Rust would protect against.
  • rfoo
  • ·
  • 4 days ago
  • ·
  • [ - ]
No. You can't protect against arbitrary "memory corruption" without also covering race conditions.

While this is a common misconception, I'm already tired of enthusiastic "safu language fans" trying to explain what bounds checks mean to me, so apologies for being mean.

But no, in 2024 you should focus on temporal memory safety, whose bugs are much harder to eliminate than "just add boundz CHK everYwHErE!!!@".

What is the better working alternative?
> However, we cannot yet endorse its appropriateness for production systems without further peer review. [0]

Not an alternative and won't be for the foreseeable future.

[0]: https://github.com/francoismichel/ssh3?tab=readme-ov-file#-s...

You are right for now. This is the last piece (an SSH server) that needs to be solved for Go/Rust to prevail over the legacy systems.
  • ·
  • 6 days ago
  • ·
  • [ - ]