While all of these are very useful, you'll find that many are already enabled by default in a lot of distributions' builds of the gcc compiler. Sometimes they're baked into the compiler itself through a patch or configure flag, and sometimes they're added through CFLAGS variables during the compilation of distribution packages. I can only really speak for Gentoo, but here's a non-exhaustive list:

* -fPIE is enabled with --enable-default-pie in GCC's ./configure script

* -fstack-protector-strong is enabled with --enable-default-ssp in GCC's ./configure script

* -Wl,-z,relro is enabled with --enable-relro in Binutils' ./configure script

* -Wp,-D_FORTIFY_SOURCE=2, -fstack-clash-protection, -Wl,-z,now and -fcf-protection=full are enabled by default through patches to GCC in Gentoo.

* -Wl,--as-needed is enabled through the default LDFLAGS

For reference, here are the default compiler flags for a few other distributions. Note that these don't include GCC patches:

* Arch Linux: https://gitlab.archlinux.org/archlinux/packaging/packages/pa...

* Alpine Linux: https://gitlab.alpinelinux.org/alpine/abuild/-/blob/master/d...

* Debian: It's a tiny bit more obscure, but running `dpkg-buildflags` on a fresh container returns the following: CFLAGS=-g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/<myuser>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection

jpfr · 1 day ago:
Most of these are implicit with -fhardened.

https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.h...

Finally.
From https://news.ycombinator.com/item?id=38505448 :

> There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).

Is there a good reference for comparing these compile-time build flags and their defaults with Make, CMake, Ninja Build, and other build systems, on each platform and architecture?

From https://news.ycombinator.com/item?id=41306658 :

> From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :

>> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.

But those are per-arch performance flags, not security flags.

Further reading:

Fabien Sanglard on driving compilers: https://fabiensanglard.net/dc/

GNU binutils with their own take on how to process static archives (libfoo.a) https://sourceware.org/bugzilla/show_bug.cgi?id=32006

Linkers: Mold: https://news.ycombinator.com/item?id=26233244

Wild: https://news.ycombinator.com/item?id=42814683

List of FOSS C linkers:

* GNU ld

* GNU gold

* LLVM lld

* mold (by the LLVM lld author)

* wild

EDIT: typesetting

The 5-part series by Fabien Sanglard is really good. Thanks for sharing!

If you are taking notes, add `-fzero-init-padding-bits=all` to the list. Without this flag, GCC 15 onwards will not zero-initialize a full union if you wrote a pre-C23-style `={0}` and the largest member is not the first one. `-ftrivial-auto-var-init` cannot help in this case. https://godbolt.org/z/7zKccfnea
I have always been stuck with C99; what is the "post-C23" way that will zero-initialize a full union?

Or am I misunderstanding this?

`={}` in place of `={0}` is the new option in C23.
Sane defaults should be table stakes for toolchains but C++ has "history".

All significant C++ code-bases and projects I've worked on have had 10s of lines (if not screens) of compiler and linker options - a maintenance nightmare particularly with stuff related to optimization. This stuff is so brittle, who knows when (with which release of the compiler or linker) a particular combination of optimization flags were actually beneficial? How do you regression test this stuff? So everyone is afraid to touch this stuff.

Other compiled languages have similar issues but none to the extent of C++ that I've experienced.

> Sane defaults should be table stakes for toolchains but C++ has "history".

Yes, it has. By "history" you actually mean "production software that is expected to not break just because someone upgrades a compiler". Yes, C++ does have a lot of that.

> All significant C++ code-bases and projects I've worked on have had 10s of lines (if not screens) of compiler and linker options - a maintenance nightmare particularly with stuff related to optimization.

No, not really. That is definitely not the norm, at all. I can tell you as a matter of fact that release builds of some production software that's even a household name is built with only a couple of basic custom compiler flags, such as specifying the exact version of the target language.

Moreover, if your project uses a build system such as CMake and your team is able to spend 5 minutes reading an onboarding guide onto modern CMake, you do not even need or care to set compiler flags. You set a few high-level target properties and you never look at it ever again.

> Yes, it has. By "history" you actually mean "production software that is expected to not break just because someone upgrades a compiler". Yes, C++ does have a lot of that.

I disagree. Disproportionately often in my career, random C and C++ code bases failed to build because some new warning was introduced. And this is precisely because compiler options are so bad that a lot of projects do -Wall, -Wextra and -Werror.

Also, the way undefined behavior is exploited means that you don't really know if your software that worked fine 10 years ago will actually work fine today, unless you have exhaustive tests.

> I disagree. Disproportionately often in my career, random C and C++ code bases failed to build because some new warning was introduced. And this is precisely because compiler options are so bad that a lot of projects do -Wall, -Wextra and -Werror.

There is nothing to disagree with. It is a statement of fact that there is production software that is not expected to break just because someone upgrades a compiler. This is not up for debate. Setting flags like -Werror is not even relevant, because that is an explicit choice of development teams, and one which is strongly discouraged beyond local builds.

> Also, the way undefined behavior is exploited means that you don't really know if your software that worked fine 10 years ago will actually work fine today, unless you have exhaustive tests.

No, not really. There are only two scenarios with UB: either you unwittingly used UB and thus you introduced an error, or you purposely used a feature provided by your specific choice of compiler+OS+hardware that leverages UB.

The latter involves a ton of due diligence and pinning your particular platform, particularly compiler version.

So either you don't know what you're doing, or you are very well aware and very specific about what you're doing.

nly · 1 day ago:
I've rarely seen more than a handful of compiler options, even on very large codebases.

If anything, there are tonnes of options people should be using more of.

The problem with all these hardening options though is they noticeably reduce performance

> The problem with all these hardening options though is they noticeably reduce performance

Yep. What I would really like is 2 lists, one for debug/checked mode and one for release.

It's because the UB must be continuously exploited by compilers for that extra 1% perf gain.

I've been eyeing Zig recently. It makes a lot of choices straightforward yet explicit, e.g. you choose between four optimisation strategies: debug, safety, size, perf. Individual programs/libraries can have a default or force one (for the whole program or a compilation unit), but it's customary to delegate that choice to the person actually building from source.

Even simpler story with Go. It's been designed by people who favour correctness over performance, and most compiler flags (like -race, -asan, -clobberdead) exist to help debug problems.

I've been observing a lot of people complain about declining software quality; yearly update treadmills delivering unwanted features and creating two bugs for each one fixed. Simplicity and correctness still seem to be a niche thing; I salute everyone who actually cares.

> It's because the UB must be continuously exploited by compilers for that extra 1% perf gain.

Your framing of a compiler exploiting UB in programs to gain performance has an undeserved negative connotation. The fact is, programs are mathematical structures/arguments, and if any single step in the program code or execution is wrong, no matter how small, it can render the whole program invalid. Drawing from math analogies where one wrong step leads to an absurd conclusion:

* https://en.wikipedia.org/wiki/All_horses_are_the_same_color

* https://en.wikipedia.org/wiki/Principle_of_explosion

* https://proofwiki.org/wiki/False_Statement_implies_Every_Sta...

* https://en.wikipedia.org/wiki/Mathematical_fallacy#Division_...

Back to programming, hopefully this example will not be controversial: If a program contains at least one write to an arbitrary address (e.g. `*(char*)0x123 = 0x456;`), the overall behavior will be unpredictable and effectively meaningless. In this case, I would fully agree with a compiler deleting, reordering, and manipulating code as a result of that particular UB.

You could argue that C shouldn't have been designed so that reading out of bounds is UB. Instead, it should read some arbitrary value without crashing or cleanly segfault at that instruction, with absolutely no effects on any surrounding code.

You could argue that C/C++ shouldn't have made it UB to dereference a null pointer for reading, but I fully agree that dereferencing a null pointer for a method call or writing a field must be UB.

Another analogy in programming is, let's forget about UB. Let's say you're writing a hash table in Java (in the normal safe subset without using JNI or Unsafe). If you get even one statement wrong in the data structure implementation, there still might be arbitrarily large consequences like dropping values when you shouldn't, miscounting how many values exist, duplicating values when you shouldn't, having an incorrect state that causes subtle failures far in the future, etc. The consequences are not as severe and pervasive as UB at the language level, but it will still result in corrupt data and/or unpredictable behavior for the user of that library code, which can in turn have arbitrarily large consequences. I guess the only difference compared to C/C++ UB is that for C/C++, there is more "spooky action at a distance", where some piece of UB can have very non-local consequences. But even incorrect code in safe Java can produce large consequences, maybe just not as large on average.

I am not against compilers "exploiting" UB for performance gain. But these are the ways forward that I believe in, for any programming language in general:

* In the language specification, reduce the number of cases/places that are undefined. Not only does it reduce the chances of bad things happening, but it also makes the rules easier to remember for humans, thus making it easier to avoid triggering these cases.

* Adding to that point, favor compile-time errors over run-time UB. For example, reading from an uninitialized local variable is a compile error in Java but UB in C. Rust's whole shtick about lifetimes and borrowing is one huge transformation of run-time problems into compile-time problems.

* Overwhelmingly favor safety by default. For example, array accesses should be bounds-checked using the convenient operator like `array[index]`, whereas the unsafe unchecked version should be something obnoxious and ugly like `unsafe { array.get_unchecked(index) }`. Make the safe way easy and make the unsafe way hard - the exact opposite of C/C++.

* Provide good (and preferably complete) sanitizer tools to check that UB isn't triggered at run time. C/C++ did not have these for the first few decades of their lives, and you were flying blind when triggering UB.

> Your framing of a compiler exploiting UB in programs to gain performance has an undeserved negative connotation. The fact is, programs are mathematical structures/arguments, and if any single step in the program code or execution is wrong, no matter how small, it can render the whole program invalid.

You're failing to understand the problem domain, and consequently you're oblivious to how UB is actually a solution to problems.

There are two sides to UB: the one associated with erroneous programs, because clueless developers unwittingly do things that the standard explicitly states lead to unknown and unpredictable behavior, and the one which leads to valid programs, because developers knowingly adopted an implementation that specifies exactly what behavior to expect from doing things the standard leaves as UB.

Somehow, those who mindlessly criticize UB only parrot the simplistic take on UB, the "nasal demons" blurb. They don't even stop to think about what undefined behavior is and why a programming language specification would purposely leave specific behavior undefined instead of unspecified or even implementation-defined. They do not understand what they are discussing and don't invest a moment in trying to understand why things are the way they are, and what problems are solved by them. They just parrot clichés.

Perhaps I'm spoiled by ever so slightly higher-level languages, but it seems your entire point is that if a program is ever so slightly incorrect, the programmer (and/or the end user) should suffer all of the consequences.

From where I stand, compilers are tools to aid the programmer. We invented them, because we found out that it was more productive than writing machine code by hand[1]. If an off-by-one error or a null pointer dereference[2] in a trivial program can invoke time travel several frames up the call stack[3], it isn't just missing the entire point of having a compiler - it can drive people insane.

[1]: https://en.wikipedia.org/wiki/Grace_Hopper#UNIVAC

[2]: https://en.wikipedia.org/wiki/Tony_Hoare#Research_and_career

[3]: https://devblogs.microsoft.com/oldnewthing/20140627-00/?p=63...

As far as I can tell, no popular language created in the past 30 years (including those with official specs and multiple implementations) makes heavy use of UB.

From the ubc.pdf paper linked in this thread:

    int d[16];

    int SATD(void)
    {
        int satd = 0, dd, k;
        for (dd = d[k = 0]; k < 16; dd = d[++k]) {
            satd += (dd < 0 ? -dd : dd);
        }
        return satd;
    }
This was “optimized” by a pre-release of gcc-4.8 into the following infinite loop: SATD: .L2: jmp .L2

(simply because the compiler assumes k is a valid index into an array of known size 16, hence at most 15, so k<16 is always true)

I mean, that's just sort of nuts. How do you loop over an array in a UB-free manner, then? The paper referred to this situation being remediated:

"The GCC maintainers subsequently disabled this optimization for the case occuring in SPEC"

I try to keep up with the UB thing, while for current code I just use -O0 because it's fast enough and apparently allows me to keep an array index in bounds. Reading about this leaves me thinking that some of this UB criticism might not be so mindless.

tyg13 · 2 hours ago:
Leaving aside the fact that that code reads the array out of bounds (which is not a trivial security issue), it's a ridiculously obtuse way to write that code. Loop conditions should almost always be expressed in terms of their induction variable. A much cleaner and safer version is

    int d[16];

    int SATD(void)
    {
        int satd = 0, k;
        for (k = 0; k < 16; ++k)
            satd += d[k] < 0 ? -d[k] : d[k];
        return satd;
    }
Reference: https://c9x.me/compile/bib/ubc.pdf#page=4

Both the parent comment and the referenced paper fail to mention the out-of-bounds access of d[16]. At best, the paper says:

> The compiler assumed that no out-of-bounds access to d would happen, and from that derived that k is at most 15 after the access

Here is my analysis. By unrolling the loop and tracing the statements and values, we get:

    k = 0;  dd = d[k];
    k is 0;  k < 16 is true;  loop body;  ++k;  k is 1;  dd = d[k];
    k is 1;  k < 16 is true;  loop body;  ++k;  k is 2;  dd = d[k];
    k is 2;  k < 16 is true;  loop body;  ++k;  k is 3;  dd = d[k];
    k is 3;  k < 16 is true;  loop body;  ++k;  k is 4;  dd = d[k];
    k is 4;  k < 16 is true;  loop body;  ++k;  k is 5;  dd = d[k];
    k is 5;  k < 16 is true;  loop body;  ++k;  k is 6;  dd = d[k];
    k is 6;  k < 16 is true;  loop body;  ++k;  k is 7;  dd = d[k];
    k is 7;  k < 16 is true;  loop body;  ++k;  k is 8;  dd = d[k];
    k is 8;  k < 16 is true;  loop body;  ++k;  k is 9;  dd = d[k];
    k is 9;  k < 16 is true;  loop body;  ++k;  k is 10;  dd = d[k];
    k is 10;  k < 16 is true;  loop body;  ++k;  k is 11;  dd = d[k];
    k is 11;  k < 16 is true;  loop body;  ++k;  k is 12;  dd = d[k];
    k is 12;  k < 16 is true;  loop body;  ++k;  k is 13;  dd = d[k];
    k is 13;  k < 16 is true;  loop body;  ++k;  k is 14;  dd = d[k];
    k is 14;  k < 16 is true;  loop body;  ++k;  k is 15;  dd = d[k];
    k is 15;  k < 16 is true;  loop body;  ++k;  k is 16;  dd = d[k];  OUT OF BOUNDS!
As long as we enter the loop, the loop must eventually execute undefined behavior. Furthermore, every instance of testing `k < 16` is true before we hit UB. Therefore it can be simplified to true without loss of functionality, because after we hit UB, we are allowed to do absolutely anything. In my ancestor post where I said that any mistake, no matter how small, can have unbounded consequences, I fully mean it and believe it.

Please stop blaming the compiler. The problem is buggy code. Either fix the code, or fix the language specification so that wild reads either return an arbitrary value or crash cleanly at that instruction.

Note that we cannot change the spec to give definite behavior to writing out of bounds, because it is always possible to overwrite something critical like a return address or an instruction, and then it is literally undefined behavior and anything can happen.

> I mean thats just sort of nuts, how do you loop over an array then in an UB free manner?

The code is significantly transformed, but the nasty behavior can be prevented by designing code that does not read out of bounds! The trick is that `k < 16` must be checked before any attempt to read/write `d[k]`, and the access skipped when the test is false. Which 99.99% of programmers get right, especially by writing the loop in the standard way and not in the obtuse way demonstrated in the referenced code. The obvious and correct implementation is:

    for (int k = 0; k < 16; k++) {
        int dd = d[k];
        satd += dd < 0 ? -dd : dd;
    }
The fact that the SPEC code chose to load `d[k]` before checking that `k` is still in bounds is an overly clever, counterproductive "jumping the gun" tactic. Putting assignment statements into indexing expressions is also needless obfuscation (which I untangled in the unrolled analysis).
duped · 1 day ago:
I mean, if you emit compiler commands from any build system, they're going to be completely illegible due to the number of -L, -l, -I, -i, -D flags, which are mostly generated by things like pkg-config and your build configuration.

There aren't many optimization flags that people get fine-grained with; the exception is floating point, because -ffast-math alone is extremely inadvisable.

It goes even further.

Technically, compilers can choose to make undefined behavior implementation-defined behavior instead. But they don't.

That's kind of also how C++ std::span wound up without bounds checks in practice. And my_arr.at(i) just isn't really being used by anybody.

Seems very user-hostile to me.

-ffast-math and -Ofast are inadvisable on principle:

Tl;dr: python gevent messes up your x87 float registers (yes.)

https://moyix.blogspot.com/2022/09/someones-been-messing-wit...

duped · 1 day ago:
I disagree with "on principle." There are flaws in the design of IEEE 754 and omitting strict adherence for the purposes of performance is fine, if not required for some applications.

For example, recursive filters (even the humble averaging filter) will suffer untold pain without enabling DAZ/FTZ mode.

fwiw, the linked issue has been remedied in recent compilers and isn't a Python problem, it's a GCC problem. Even so, if your algorithm requires subnormal numbers, for the love of numeric stability, guard your scopes and set the MXCSR register accordingly!

A big problem with -ffast-math is that it causes isnan and isinf to be completely, silently broken (gcc and clang).

Like, "oh you want faster FP operations? well then surely you have no need to ever be able to detect infinite or NaN values.."

> well then surely you have no need to ever be able to detect infinite or NaN values

Well yeah, maybe I actually don't.

In practice, "some applications" seems to include almost all of NumPy and Python. Good call.

Like with the Java sin() fixes: if you don't care about the results being correct, why not constant-fold an arbitrary number? Way faster at run time.

duped · 1 day ago:
All numerical methods define "correct" to be within a range or to some precision. There are very few algorithms that require FTZ mode to be "correct" - the linked article and the article it links don't even have an example (there are good examples of where, say, -ffinite-math-only is super dangerous, because infs/NaNs are way more common than arithmetic on subnormal numbers).

And yea, the fact that crt1.o being linked into shared libraries fucking up the precision of some computations depending on library dependencies (and the order they're loaded!) was bad.. but it lingered in the entire Linux ecosystem for over a decade. So how bad was it, if it took that long to notice?

If you have a numerical algorithm that requires subnormal arithmetic to converge: a) don't, that's super shaky; b) set/unset MXCSR at the top/bottom of your function and ensure you never unwind the stack without resetting it. It's preserved across context switches, so you're not going to get blown away by the OS scheduler.

This isn't practical numerical methods in C 101 but it's at least 201. In practice you don't trust floats for bit exact math. Use different types for that.

IEEE 754 defaults are for people who don't get deeply into numerical analysis and Cauchy sequences. Like, ostensibly, most FOSS maintainers. Or most people who write software in general.

There are people who do. HPC and the demoscene have numerous examples. Most of the people I met there are capable of reading GCC's manual and picking the optimizations they actually need. And they know how to debug this stuff.

If it's not obvious who gcc's defaults should cater to, then redefine human-friendly until it becomes obvious.

a_e_k · 1 day ago:
I find that building and testing my code with -Ofast and -ffast-math from the beginning helps to avoid a lot of the issues with them. Any new code that breaks with them on probably wasn't particularly stable anyway and should be rethought.
"What kind of math does the compiler usually do without this funsafemath flag? Sad, dangerous math?"
There are things like floating-point exceptions (IEEE 754) and subnormal numbers (close to zero, with less precision than the small approximation error "machine epsilon"). The idea is to degrade gracefully. These additional features require additional transistors and processing, which raises latency.

If you really know (and want to know) what you are doing, turning this stuff off may help. Some people even advocate brute-forcing all 2^32 single floats in your test cases, because it is kind of feasible to do so: https://news.ycombinator.com/item?id=34726919

vkaku · 1 day ago:
It is hard that people have to remember these options on a per-compiler basis. I'd prefer people use easy-to-remember flags like `-O2` over the word soup mentioned here.

Compiler writers should revisit their option matrices and come up with easy defaults today.

Disclaimer: I used to work on the GCC code for option handling back in the day. Options like -O2 map to a whole bunch of individual options, and people only needed to remember adding -O2, which corresponded to different values in every era and yet subjectively meant: decently optimized code.

> I'd rather prefer people use easy to remember flags

Like -fhardened?

vkaku · 17 hours ago:
Sure.

-f options are technically machine-independent.

-m should be used when a feature is implemented as a machine-dependent option.

So if you are telling me that all these security features are implemented without requiring per-machine support, then it makes sense.

The interactions between different optimization passes may have surprising consequences.

Endless loops without side effects are technically undefined behavior and can be dropped entirely, leaving nothing but the function's entry label in the assembly, which then falls straight through into the next function's code.

All because of UB.

Huge headache. Try debugging that.

And interaction loops in games are sometimes exactly that: endlessly waiting for input.

> The keyword $ORIGIN in rpath is expanded by the dynamic loader to the path of the directory where the object is found, which may be set by an attacker (e.g., via hard links) to a directory with a malicious dependency. On Linux, the fs.protected_hardlinks sysctl can help prevent this attack.

This has nothing to do with hard links; the same applies to symlinks. On Linux the status quo is that the dynamic loader finds the library via a symlink; the convention is `libfoo.so.x -> libfoo.so.a.b.c`, where `x` is the ABI version and `a.b.c` the full version.

But if `libfoo.so.x -> /absolute/path/libfoo.so.a.b.c` and it has `$ORIGIN/libbar.so.y` in DT_NEEDED, those are resolved relative to the dir of the symlink, not to realpath of the symlink.

That makes sense, because it would be a lot of startup overhead to lstat every path component of every library that uses $ORIGIN.

I don't see the point of including this gotcha in a security overview to be honest.

> Our threat model is that all software developers make mistakes, and sometimes those mistakes lead to vulnerabilities

That’s not a threat model. What are the attackers going to do if there are vulnerabilities in your executable? Is it connected to a web server?

Does it have access to privileged resources?

They're using it in the sense of "the scope of this document covers this scenario," so the answers to all of your questions are out of scope.
Nice, thank you! Saved this. Mastering GCC compiler options feels harder than mastering C++ UB.
Succinct.
Last week a build broke because there was a space after the `-Wl,` in a `-Wl,some-linker-option`. The warning messages can be very challenging to decipher.

Most importantly: are the warnings show-stoppers? Not part of my pay grade.

There is a pragma to ignore specific warnings: `#pragma GCC diagnostic ignored "-Wsome-warning"`, which is useful when dealing with several versions of the GCC compiler.

Yes, it happens.

> Most importantly: Are the warnings show-stoppers? Not in part of my pay grade.

The best places (code quality wise) I've ever worked were the strictest on compiler warnings. Turn on all warnings, turn on extra warnings, treat warnings as errors, and forbid disabling warnings via #pragma. The absolute worst was the one where compiling the software using the compiler's default warning level produced a deluge of 40,000 warnings, and the culture was to disable warnings when they became annoying (vs. you know, fixing them).

My philosophy: Compilers don't issue warnings for fun. Every one of them is a potential problem and they are almost always worth fixing.

I also adhere to this in my personal hobby projects, too. It can be challenging when integrating with third party libraries, where the library maintainer doesn't care as much. I once submitted a patch to an open source project I won't name here, which fixed a bunch of warnings that seem to be only present in macOS builds (XCode's defaults tend to be quite strict). The response was not to merge it because "I don't regularly do macOS builds, and besides, they're just warnings." Alright, bro, sorry I tried to help.

If my C++ project is a simple utility supposed to take some files, crunch numbers, and spit out results, is there still the possibility it can be used for nefarious purposes?
It doesn't matter what the tool does, what matters is 1) whether it is ever exposed to untrusted input, 2) what permissions it has.

If you don't ever expose something to untrusted input, then you're probably fine. But be VERY careful, because you should defensively consider anything downloaded off the internet to be untrusted input.

As for permissions, if you run a tool inside of a sandbox inside of a virtual machine on an airgapped computer inside a Faraday cage six stories underground, then you're probably fine.

duped · 1 day ago:
Read/write access to a filesystem is a pretty large surface area for attack, so yes.
How does it get its input files? Where does it run? What's the output used for?
It depends on what exactly your program does and, equally important, where it is deployed and used. Security is a matter of degree based on context, i.e. there are levels of security. It is not an all-or-nothing proposition.

If your program is going to be used for some non-critical work internally, you don't have to bother much about attack surfaces/vectors etc. Just use some standard "healthy" compiler options and you are good.

If you would like to know more on this subject, i recommend reading the classic The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities by Mark Dowd et al.

Related: Rob Pike on programming style, especially his note on include files: http://doc.cat-v.org/bell_labs/pikestyle

See also: SQLite's amalgamation. Others (iirc Philippe Gaultier) have called this a unity build: https://sqlite.org/amalgamation.html

Rob Pike on systems software research: http://doc.cat-v.org/bell_labs/utah2000/utah2000.html

EDIT: typo

His opinions on include files have fallen out of favor because compiling is faster now and the convention adds needless work. Are there organizations that still do this? All the style guides I've seen do not.
csb6 · 1 day ago:
I believe clang and gcc avoid reading in and re-processing include files that are already included, so his advice is unnecessary and creates a lot of maintenance burden, especially for C++ where a lot more code is in header files. It may still be useful for old compilers, though.
They recognize include guards and skip any further inclusions for those cases. There are scenarios where you may want multiple inclusion and you can still have that.
If your filesystem and disks are fast enough, then maybe Rob's assumptions don't apply.
I still adhere to this for personal hobby projects, more out of a sense of craftsmanship than anything practical at this point.
klysm · 1 day ago:
It would be really nice if we had a versioning scheme that enabled developers to get secure by default and opt into performance tradeoffs
Wolfi OS (by Chainguard) is one of the few distros that decided to adopt the OpenSSF compiler options:

https://github.com/wolfi-dev/os/blob/main/openssf-compiler-o...

No, he does not. He skipped most warnings.