26 years of FreeBSD and counting...

IIRC, around '99 I got sick of Mandrake and Red Hat RPM dependency hell and found a FreeBSD 3 CD in a Walnut Creek book. Ports and BSD packages were a revelation, to say nothing of the documentation, which still sets it apart from the haphazard Linux world.

The comment about using a good SERVER mobo like Supermicro is on point --- I managed many Supermicro FreeBSD colo rack servers for almost 15 years and those boards worked well with it.

Currently I run FreeBSD on several home machines including old mac minis repurposed as media machines throughout the house.

They run Kodi + Linux Brave, and with that I can stream anything, like live sports.

Also OpenBSD for one firewall and PFSense (FreeBSD) for another.

> The comment about using a good SERVER mobo like Supermicro is on point --- I managed many Supermicro FreeBSD colo rack servers for almost 15 years and those boards worked well with it.

I completely agree.

Supermicro mobos with server-grade components, combined with aggressive cooling fans/heat sinks, running FreeBSD in an AAA data center resulted in two prod servers having uptimes of 3000+ days. This included dozens of app/jail/port updates (pretty much everything other than the kernel).

Back when I was a sysadmin (roughly 2007-2010), the preference of a colleague (RIP AJG...) who ran a lot of things before my time at the org was FreeBSD, and I quickly understood why. We ran Postgres on 6.x as the database for a large Jira instance, while Jira itself ran on Linux, IIRC because I went with JRockit, which ran circles around any other JVM at the time. Those Postgres boxes had many years of uptime, locked away in a small colo facility; they never failed and outlived the org, which got merged and chopped up. FreeBSD was just so snappy, and it just kept going. At the same time I ran ZFS on FreeBSD as our main file store for NFS and whatnot: snapshots, send/recv replication and all.

And it was all indeed on Supermicro server hardware.

And in parallel, while our routing kit was mostly Cisco, I put a transparent bridging firewall in front of the network running pfSense 1.2 or 1.3. It was one of those embedded boxes running a VIA C3/Nehemiah, which had the VIA PadLock crypto engine that pfSense supported. Its AES-256 performance blew away our Xeons and the crypto accelerator cards in our midrange Cisco ISRs - cards costing more than that C3 box. It had a failsafe Ethernet passthrough for when power went down, and it ran FreeBSD. I've been using pfSense ever since, commercialisation / Netgate aside, force of habit.

And although for some things I lean towards OpenBSD today, FreeBSD delivers, and it has for nearly 20 years for me. And, as they say, it should for you, too.

> uptimes of 3000+ days

Oof, that sounds scary. I’ve come to view high uptime as dangerous… it’s a sign you haven’t rebooted the thing enough to know what even happens on reboot (will everything come back up? Is the system currently relying on a process that only happens to be running because someone started it manually? Etc)

Servers need to be rebooted regularly in order to know that rebooting won’t break things, IMO.

Depends on how they are built. There are many embedded/real-time systems that expect this sort of reliability too, of course.

I worked on systems that were allowed 8 hours of downtime per year -- but otherwise would have run forever unless a nuclear bomb went off or there was a power loss... Tandem. You could pull out CPUs while they were running.

So if we are talking about garbage Windows servers, sure. It's just a question of what is accepted by the customers/users.

Yep. I once did some contracting work for a place that had servers with 1200+ day uptimes. People were afraid to reboot anything. There was also tons of turnover.
I still remember AJG vividly to this day. He also once told me he was a FreeBSD contributor.

My journey with FreeBSD began with version 4.5 or 4.6, running in VMware on Windows and using XDMCP for the desktop. It was super fast and ran at almost native speed. I tried Red Hat 9, and it was slow as a snail by comparison. For me, the choice was obvious. Later on I was running FreeBSD on my ThinkPad, and I still remember the days of coding on it using my professor's linear/non-linear optimisation library, sorting out wlan driver and firmware to use the library wifi, and compiling Mozilla on my way home while the laptop was in my backpack. My personal record: I never messed up a single FreeBSD install, even when I was completely drunk.

Even later, I needed to monitor the CPU and memory usage of our performance/latency critical code. The POSIX API worked out of the box on FreeBSD and Solaris exactly as documented. Linux? Nope. I had to resort to parsing /proc myself, and what a mess it was. The structure was inconsistent, and even within the same kernel minor version the behaviour could change. Sometimes a process's CPU time included all its threads, and sometimes it didn't.
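The comment doesn't name the exact API, but getrusage(2) is one POSIX call that behaves this way, and Python's resource module wraps it; a minimal sketch:

```python
import resource

def cpu_seconds() -> float:
    """User + system CPU time of the calling process via POSIX getrusage(2)."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_utime + ru.ru_stime

# Burn a little CPU, then measure.
sum(i * i for i in range(100_000))
print(f"CPU used so far: {cpu_seconds():.4f}s")
```

Even here, the kind of wrinkle the comment complains about shows up: the `ru_maxrss` field from the same call is reported in kilobytes on Linux and FreeBSD but in bytes on macOS, so field semantics still need checking per platform.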

To this day, I still tell people that FreeBSD (and the other BSDs) feels like a proper operating system, and GNU/Linux feels like a toy.

All hail the mighty Wombats!

The "completely drunk" comment made me chuckle, too familiar... poor choices, but good times!

This is more about OpenBSD, but worth mentioning that nicm of tmux fame also worked with us in the same little office, in a strange little town.

AJG also made some contributions to Postgres, and wrote a beautiful, full-featured web editor for BIND DNS records which, sadly, faded along with him; his domain, tcpd.net, has since expired and been taken over.

Lovely stuff. The industry would be so much better off if the family of BSDs had more attention and use.

I run some EVE Online services for friends. They have manual install steps for those of us not using containers. It took me half a day to get the stack going on FreeBSD, and that was mostly me making typos and mistakes. So pleased I was able to dodge the “docker compose up” trap.

As one of the guys who develops an EVE Online service: while you were able to get by with manual install steps that perhaps change with the OS, for a decent number of people it is the first time they do anything on the CLI on a unixoid system. Docker reduces the support workload in our help channels drastically because it is easier to get going.
I can sympathize. It makes sense.

But...

As a veteran admin I am tired of reading through Dockerfiles to guess how to do a native setup. You can never suss out the intent from those files - only make haphazard guesses.

It smells too much like "the code is the documentation".

I am fine that the manual install steps are hidden deep in the dungeons away from the casual users.

But please do not replace Posix compliance with Docker compliance.

Look at Immich for an unfortunate example. They have some nice high-level architecture documentation, but the "whys" of the Dockerfile are nowhere to be found. That makes it harder to contribute, as it caters to the Docker crowd only and leaves a lot of guesswork for the Posix crowd.

Veteran sysadmin of 30 years... UNIX sysadmin and developer...

I've used docker+compose for my dev projects for about the past 12 years. Very tough to beat the speed of development with multi-tier applications.

To me, Dockerfiles seem like the perfect amount of DSL while still being flexible, because you can literally run any command as a RUN line and produce anything you want for a layer. Dockerfiles seem to get it right. Maybe the 'anything' seems like a mis-feature, but if you use it well it's a game changer.
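To illustrate that flexibility, here is a minimal hypothetical Dockerfile (the base image and packages are made up for the example); each instruction produces one cached layer, and RUN can execute any shell command:

```dockerfile
# Minimal illustrative Dockerfile; every instruction below adds one layer.
FROM debian:stable-slim

# RUN executes an arbitrary shell command; whatever it writes to the
# filesystem becomes the layer's content.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*

# COPY also creates a layer, from files in the build context.
COPY app.sh /usr/local/bin/app.sh

CMD ["/usr/local/bin/app.sh"]
```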

Dockerfiles are also an excellent way to distribute FOSS to people who, unlike you or I, cannot really manage a system, install software, etc. without eventually making a mess or getting lost (i.e. jr developers?).

Are there supply chain risks? Sure -- like many package systems. I build my important images from scratch all the time just to mitigate this. There's also Podman with Containerfiles if you want something more FOSS-friendly but less polished.

All that said, I generally containerize production workloads, but not with Docker. If a dev project is ready for primetime, I now port it to Kubernetes. It used to be BSD jails.

Can you explain why "Docker compose" is a trap?
For my two cents, it discourages standardization.

If you run bare-metal, and instructions to build a project say "you need to install libfoo-dev, libbar-dev, libbaz-dev", you're still sourcing it from your known supply chain, with its known lifecycles and processes. If there's a CVE in libbaz, you'll likely get the patch and news from the same mailing lists you got your kernel and Apache updates from.

Conversely, if you pull in a ready-made Docker container, it might be running an entire Alpine or Ubuntu distribution atop your preferred Debian or FreeBSD. Any process you had to keep those packages up to date and monitor vulnerabilities now has to be extended to cover additional distributions.

You said it better at first: Standardization.

Posix is the standard.

Docker is a tool on top of that layer. Absolutely nothing wrong with it!

But you need to document towards the lower layers. What libraries are used and how they're interconnected.

Posix gives you that common ground.

I will never ask people not to supply Dockerfiles. But to me it feels the same as if a project just released an apt package and nothing else.

The manual steps need to be documented. Not for regular users but for those porting to other systems.

I do not like black boxes.

Why I moved away from Docker for self-hosted stuff was the lack of documentation and very complicated Dockerfiles with assorted shell scripts and service configs. Sometimes it feels like reading autoconf-generated files. I much prefer to learn whatever packaging method the OS uses and build the thing myself.
Something like Harbor easily integrates to serve as both a pull-through cache and a CVE scanner. You can actually block pulls based on CVE type or CVSS rating.

You /should/ be scanning your containers just like you /should/ be scanning the rest of your platform surface.

I wonder how it would work with the new-ish Podman/OCI container support?
You've put that command in quotation marks in three comments on this topic. I don't think it's as prevalent as you're making out.
It really is amazing how much success Linux has achieved given its relatively haphazard nature.

FreeBSD always has been, and always will be, my favorite OS.

It is so much more coherent and considered, as the post author points out. It is cohesive; whole.

> It really is amazing how much success Linux has achieved given its relatively haphazard nature.

That haphazard nature is probably part of the reason for its success, since it allowed many alternative ways of doing things to be experimented with in parallel.

That was my impression from diving into The Design & Implementation of the FreeBSD Operating System. I really need to devote time to running it long term.
Really great book. Among other things, I think it's the best explanation of ZFS I've seen in print.
Linux has turned haphazardry into a strength. This is impressive.

I prefer FreeBSD.

I like the haphazardry but I think systemd veered too far into dadaism.
THIS. As bad as launchctl on Macs. A solution looking for a problem, so it causes more problems -- like IPv6.
> Solution looking for a problem

Two clear problems with the init system (https://en.wikipedia.org/wiki/Init) are

- it doesn’t handle parallel startup of services (sysadmins can tweak their init scripts to speed up booting, but init doesn’t provide any assistance)

- it does not work in a world where devices get attached to and detached from computers all the time (think of USB and Bluetooth devices, WiFi networks).

The second problem was solved evolutionarily in init systems by having multiple daemons doing, basically, the same thing: listening for device attachments/detachments, and handling them. Unifying that in a single daemon, IMO, is a good thing. If you accept that, making that single daemon the init process makes sense too, as it gives you a solution for the first problem.
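For readers who haven't seen the declarative approach, a unit file can be sketched like this (the service name and binary path are hypothetical). Because ordering is declared rather than scripted, systemd starts anything with no ordering relationship to this unit concurrently:

```ini
# /etc/systemd/system/myapp.service  (hypothetical unit, for illustration)
[Unit]
Description=Example service with declared, not scripted, ordering
# Start only after the network is up; unrelated units start in parallel.
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```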

Yes, ”a solution”. We need a thing. Systemd is a thing. Therefore, we need systemd.
Not to get into a flame war, but 99% of my issues with systemd are that they didn't just replace init, but also NTP, DHCP, logging (this one is arguably necessary, but they made it complicated, especially if you want to send logs to a centralized remote location or use another utility to view logs), etc. It broke the fundamental historical concept of Unix: do one thing very well.

To make things worse, the opinionated nature of systemd's founder (Lennart Poettering) has meant many a sysadmin has had to fight with it in real-world usage (e.g. systemd-timesyncd's SNTP client not handling drift very well, or systemd-networkd not handling real-world DHCP fields). His responses ("Don't use a computer with a clock that drifts", or "we're not supporting a non-standard field that the majority of DHCP servers use") just don't jibe with the real world. The result was going to be ugly. It's not surprising that most distros ended up bundling chrony, etc.

You can't be serious thinking that IPv4 doesn't have problems
Of course not.

But IPv6 is not the solution to IPv4's issues at all.

IPv6 is something completely different, justified post-facto with EMOTIONAL arguments, i.e. "You are stealing the last IPv4 address from the children!"

- Dual stack -- unnecessary and bloated
- Performance -- 4x worse or more
- No NAT or private networks -- not in the same sense. People love to hate on NAT, but I do not want my toaster on the internet with a unique hardware serial number.
- Hardware tracking built into the protocol -- the mitigations offered are BS.
- Addresses are a cognitive block
- Forces people to use DNS (centralized), which acts as a censorship choke point.

All we needed was an extra pre space to set WHICH address space - ie. '0' is the old internet in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc

I actually wrote a code project that implements this network as an overlay -- but it's not ready to share yet. Works though.

If I were to imagine myself in the room deciding on the IPv6 requirements, I expect the key one was 'track every person and every device everywhere all the time', because if you are just trying to expand the address space then IPv6 is way, way overkill -- it's overkill even for future-proofing the next 1000 years of all that privacy invading.

> All we needed was an extra pre space to set WHICH address space - ie. '0' is the old internet in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc

That is what we have in IPv6. What you write sounds good/easy on paper, but when you look at how networks are really implemented you realize it is impossible to do that. Network packets have to obey the laws of bits and bytes, and there isn't any place to put that extra 0 in IPv4: no matter what, you have to create a new IPv6. They did write a standard for how to send IPv4 addresses in IPv6, but anyone who doesn't have IPv6 themselves can't use it, and so we must dual-stack until everyone transitions.

Actually there is a place to put it... I didn't want to get into this but since you asked:

My prototype/thought experiment is called IPv40, a 40-bit extension to IPv4.

IPv40 addresses are carried over legacy networks using the IPv4 Options field (Type 35).

Legacy routers ignore Option 35 and route based on the 32-bit destination (effectively forcing traffic to "Space 0"). IPv40-aware routers parse Option 35 to switch universes.
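Purely as a sketch of this thought experiment (the option layout below is my guess at an encoding, not a published spec), packing a 40-bit "space + IPv4" destination into an IPv4 option might look like:

```python
OPT_IPV40 = 35  # hypothetical IPv4 option type from the experiment above

def encode_ipv40_option(space: int, ipv4: str) -> bytes:
    """Encode a 40-bit 'space.a.b.c.d' address as an IPv4 option.

    Assumed layout: option type, option length, 1-byte space (universe),
    then the familiar 4-byte IPv4 address.
    """
    octets = bytes(int(o) for o in ipv4.split("."))
    data = bytes([space]) + octets
    # Option length counts the type and length bytes themselves.
    return bytes([OPT_IPV40, 2 + len(data)]) + data

print(encode_ipv40_option(7, "192.0.2.1").hex())
```

A legacy router that skips unknown options would still forward on the normal 32-bit destination in the header, which is the claimed backwards-compatibility property.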

This works right now but as a software overlay not in hardware.

Just my programming/thought experiment which was pretty fun.

When solutions are pushed top-down like IPv6, my spider sense tingles -- what problem is it solving? The answer is NOT 'to address the address space limitations of IPv4'; that is the marketing, and if you challenge it you will be met with ad hominem attacks and emotional manipulation.

So either the new octet is in the least-significant place in an IPv40 address, in which case it does a terrible job of alleviating the IP shortage (everyone who already has IP blocks just gets 256x as many),

Or, it’s in the most-significant place, meaning every new ipv40 IP is in a block that will be a black hole to any old routers, or they just forward it to the (wrong) address that you get from dropping the first octet.

Not to mention it’s still not software-compatible (it doesn’t fit in 32 bits, all system calls would have to change, etc.)

That all seems significantly worse than IPv6 which already works just fine today.

You didn't save anything as everyone needs to know the new extension before anyone can use it.

Hardware is important - fast routers can't do work in the CPU (and it was even worse in the mid 90's when this started), they need special hardware assistance.

All good points guys -- but my point was to see what is possible. And it was. And it was fun! Of course I know it will perform poorly and it's not hardware.
So you have to update every router to actually route the "non-legacy" addresses correctly. How is this different from IPv6?
That is the easy part - most of the core routers have supported IPv6 for decades - IIRC many are IPv6-only on the backbone. The hard part is that if there is even one client that doesn't have the update, you can't use the new non-legacy addresses, as it can't talk to you.

Just like today: it is likely that most clients would support your new address, but ISPs won't route it for you.

I almost completely agree with you, but IPv6 isn't going anywhere - it's our only real alternative. Any other new standard would take decades to implement even if a new standard is agreed on. Core routers would need to be replaced with new devices with ASICs to do hardware routing, etc. It's just far too late.

I still shake my head at IPv6's committee-driven development, though. My god, the original RFCs had IPsec support as mandatory, and the auto-configuration had no support for added fields (DNS servers, etc.). It's like the committee was made up only of network engineers. The whole SLAAC vs DHCPv6 drama was painful to see play out.

That being said, most modern IPv6 implementations no longer derive the link-local portion from the hardware MAC addresses (and even then, many modern devices such as phones randomize their hardware addresses for wifi/bluetooth to prevent tracking). So the privacy portions aren't as much of a concern anymore. Javascript fingerprinting is far more of an issue there.

> still shake my head at IPV6's committee driven development, though. My god, the original RFCs had IPSEC support as mandatory and the auto-configuration had no support for added fields (DNS servers, etc). It's like the committee was only made up of network engineers. The whole SLAAC vs DHCP6 drama was painful to see play out.

So true.

> That being said, most modern IPv6 implementations no longer derive the link-local portion from the hardware MAC addresses (and even then, many modern devices such as phones randomize their hardware addresses for wifi/bluetooth to prevent tracking). So the privacy portions aren't as much of a concern anymore. Javascript fingerprinting is far more of an issue there

JS Fingerprinting is a huge issue.

Honestly, if IPv6 were just for the internet of things I'd ignore it. Since it's pushed onto every machine and you are essentially forced to use it -- with no direct benefit to the end user -- I have a big problem with it.

So it's not strictly needed for YOU, but it solves some problems that are not problems for YOU, and it also happens to expand the address space. I do not think the 'fixes' to IPv6 do enough to address my privacy concerns, particularly with a well-resourced adversary. It seems like they just raised the bar a little. Why even bother? Tell me why I must use it without resorting to 'you will be unable to access IPv6-hosted services!' or 'think of the children!?' -- both emotional manipulations.

Browser/JS fingerprinting applies to IPv4, too. And your entire IPv4 home network is likely NAT'd behind an ISP-provided DHCP address that rarely changes, so it would be easy to track your household across sites. Do you feel this is a privacy concern, and if not, why not?
> Tell me why I must use it without resorting to 'you will be unable to access IPv6 hosted services!' or 'think of the children!?' -- both emotional manipulations.

You probably don't see it directly, but IPv4 IP addresses are getting expensive - AWS recently started to charge for their use. Cloud providers are sucking them up. If you're in the developed world, you may not see it, but many ISPs, especially in Asia and Africa, are relying on multiple levels of NAT to serve customers - you often literally can't connect to home if you need or want to. It also breaks some protocols in ways you can't get around depending on how said ISPs deal with NAT (eg you pretty much can't use IPSEC VPNs and some other protocols when you're getting NAT'd 2+ times; BitTorrent had issues in this environment, too). Because ISPs doing NAT requires state-tracking, this can cause performance issues in some cases. Some ISPs also use this as an excuse to force you to use their DNS infra that they can then sell onwards (though this can now be mitigated by DNS over HTTPS).

There are some benefits here, though. CGNAT means my phone isn't exposed directly to the big bad internet and I won't be bankrupted by a DDOS attack, but there are other, better ways to deal with that.

Again, I do get where you're coming from. But we do need to move on from IPv4; IPv6 is the only real alternative, warts and all.

C'mon, that's just rude to Dada.
Linux is haphazard because it's really only the kernel. The analog of "FreeBSD" would be a Linux distro like Red Hat or Debian. In fact, systemd's real goal was to get rid of Linux's haphazard nature... but it's, ahh, really divisive, as everyone knows.

I trace early Linux's initial success back to the license. It's the first real decision you have to make once you start putting your code out there.

Yes and no. There were also the intellectual-property shenanigans around the 4.3BSD-derived code (the USL lawsuit), and then the really rough FreeBSD 5 series, with its initial experiments with M:N threading in the kernel and troubles with SMP.
Just another instance of Worse is Better?
It seems FreeBSD is becoming more talked about in enthusiast communities simply because Linux is a lot more mainstream now and there’s a joy in contrarianism rather than any real changes with either of the two operating systems.
Dismissing the FreeBSD community as contrarians feels uncharitable. I can think of at least a few other contributing factors for the increase in popularity of late:

1) Linux's popularity has enlarged the pool of users interested in Unix-like operating systems. Some proportion of users familiar with Unix genuinely like FreeBSD and the unique features it offers.

2) The rise of docker and the implosion of VMWare has driven an increase of interest in FreeBSD Jails and the Bhyve hypervisor.

3) Running a homelab is a popular hobby. ZFS is popular for RAID, and pf is popular for networking.

4) Podman being brought to FreeBSD: (https://freebsdfoundation.org/blog/oci-containers-on-freebsd...).

5) Dell, AMD, Framework, and the FreeBSD foundation committing $750,000 to making FreeBSD easier to use last year: (https://freebsdfoundation.org/blog/why-laptop-support-why-no...).

6) Apple announcing that they're bringing the Swift language to FreeBSD this year.

My interest has been piqued of late. I've been a Linux enthusiast since the late 90's. I don't think it's a sense of contrarianism that motivates my interest anew.

As I've aged, what I've come to value most in software stacks is composability. I do not know if [Free]BSD restores that, but Linux feels like it has grown more complicated and less composable. I'm using this term loosely, but I'm mostly thinking of how one reasons about the way the system works. I want to work in a world where each tool on the OS's bench has a single straightforward man page, not swiss army knives where the authors/maintainers just kept throwing more "it can do this too" in to attract community.

I can’t speak for whole communities but my interest in FreeBSD has been renewed over the past couple of years. It has been a very solid OS for a long time and the tight integration between the kernel and core userland has meant that it is sometimes more performant than some popular Linux distros. But its UX has not always been amazing. Seems like lately they have really improved that. Plus ZFS and root on ZFS in particular is very nice.

I would actually be interested in running it in some production environments but it seems like that is pitted against the common deploy scenarios that involve Docker and while there is work on bringing runc to FreeBSD it is alpha stage at best currently.

Still, if you just want an ssh server, a file server, a mail server, it is a great OS with sane defaults and a predictable upgrade schedule.

Docker did work. AFAIK the APIs are there. Someone needs to grab the bull by the horns.

Jails and bhyve VMs are excellent -- but I use Docker every day, and if I could use BSD as my Docker host I would.

Good thing my Docker servers are all built with Terraform, so I do not have to touch them.

Specifically I was talking about this not being ready for prime time: https://github.com/samuelkarp/runj
You can use Podman on FreeBSD, but the CNI providers need a bit more time to cover all the amazing/crazy things the FreeBSD network stack can do.
For me it's all the changes in Linux. Every time I upgrade, they change stuff that worked fine for me. Another issue is many distros pushing their "invented here" stuff, like Canonical and Red Hat. And the huge amount of corporate influence over Linux.

FreeBSD is largely free of those. And it leaves all the agency with the operator, rather than the distro forcing stuff down (except Arch, but I don't like the community there).

Disagree. Linux has been gradually changing with the push towards systemd, snap, flatpak etc.. Today's FreeBSD resembles the Linux of 10 or 20 years ago a lot more than today's Linux does.
> Today's FreeBSD resembles the Linux of 10 or 20 years ago a lot more than today's Linux does.

I'm not sure that that's the win that you think it is. Linux 10 to 20 years ago was pretty terrible, at least on desktops.

Everyone hates on systemd, but honestly I really think the complaints are extremely overblown. I've been using systemd-based distros since around ~2012, and while I've had many issues with Linux in that time, I can't really say that any of them were caused by systemd. systemd is easy to use, journalctl is nice for looking at logs, and honestly most of the complaints I see about it boil down to "well, what if...", what-ifs that simply haven't happened yet.

FreeBSD is cool, but when I run it I do sometimes kind of miss systemd, simply because systemd is easy. I know there was some interest in launchd in the FreeBSD world but I don't know how far that actually got or if it got any traction, but I really wish it would.

It is a bit of a bummer if you spend quite a bit of time tracking down a very weird DNS bug and it turns out to be systemd-resolved.

And I don't want to get into all of the time spent getting systemd unit files correct. There is a very active community suggesting things you can add, which then of course break your release for users in unexpected ways. An enormous waste of time.

The OpenSSH/XZ exploit from a year or so ago was actually a systemd exploit[1], fun fact.

Looking back on the time I spent in systemd land, I don't miss it at all. My system always felt really opaque, because the mountain of understanding systemd seemed insurmountable. I had to remember so much, all the different levers required to drive the million things systemd orchestrated... and for very little effect. I really prefer transparency in my system; I don't want abstraction layers that I have no purpose for. I don't take it as a coincidence at all that since I moved away from systemd distributions, my system has become quite a bit more reliable. When I got my Steam Deck, the first systemd setup I've used in years, one of the first things I noticed is that the jank I used to experience has shown its face once again. It might not be directly tied to poetteringware, but it's very possible that this is a simple 2nd- or 3rd-order effect of having a more complex system.

[1] - https://www.fortinet.com/resources/articles/xz-utils-vulnera...

Any sufficiently large codebase that runs an operating system will have security exploits eventually, so finding an example of this really doesn’t change anything. I am sure FreeBSD has had security issues in the past.

I am hardly a super genius and I really didn’t find systemd very hard at all to do most of the stuff I wanted. Everyone complains about it being complicated but an idiot like me has been able to figure out how to make my own services and timers and set the order of boot priorities and all that fun stuff. I really think people are exaggerating about the difficulty of it.

I think you misunderstand the issue being raised, hence your confusion. The "difficulty" isn't the individual facets of the system, but piercing the opaqueness of the entire picture without wholly specializing into it. On the very basis of using a configuration DSL loaded with strange quirks, the init system part of systemd alone is already asking to take up more space in your head than an init system reasonably should. Having to memorize a completely different set of string expansion behaviors, for example, and all the edge-cases that introduces at the boundary of shell scripts. One small example, and only of the tiny slice that is the init part of systemd. We can talk all day about the problems with resolved, udevd, logind, and so on.

None of these issues are "difficult" and perhaps that is why you think people are "exaggerating" and engaging in bad faith. I would challenge you on this and suggest you haven't seriously interrogated the idea that the standpoint against systemd has a firm basis in reality. Have you ever asked the question "Why?" and sought to produce an answer that frames the position in a reasonable light? Until you find that foundation, you won't understand the position.

I like BSD especially because it lacks systemd :)
> I'm not sure that that's the win that you think it is. Linux 10 to 20 years ago was pretty terrible, at least on desktops.

For all its usability issues, Linux 10 to 20 years ago had advantages that, for a certain kind of user, were worth the cost. Frankly Linux on the desktop today is the worst of all worlds - it doesn't have the ease-of-use or compatibility of Windows or OSX, but it doesn't have the control and consistency/reliability of BSD either.

I believe this is pretty unfair. Today's Linux on the desktop is pretty straightforward for any normal user, given that there are no lines anymore between local and remote software. Windows shoves ads down your throat, and macOS makes you pay a premium for hardware, which normal people now spend on phones, not laptops or desktop computers.
I just tried installing Zoom on my Ubuntu desktop, and the options seem to be:

- find Zoom in the package manager (can't)

- find zoom-client in the package manager (can, but it appears to be authored by some person and not Zoom Inc)

- go to the Zoom website and download a .deb and then run a command

This is fine for me, but let's not pretend that a regular user wanting to install something as basic as Zoom is going to have an easy time of it.
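For what it's worth, the "download a .deb and run a command" route boils down to something like this (the filename is illustrative; use whatever Zoom's download page actually serves you):

```sh
# apt can install a local .deb directly and resolve its dependencies;
# the leading ./ tells apt it's a file path, not a package name.
sudo apt install ./zoom_amd64.deb

# The older two-step route, for reference:
#   sudo dpkg -i zoom_amd64.deb && sudo apt -f install
```

Easy enough for us, but the point stands: a regular user doesn't know any of this.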

Most Ubuntu-based distros let you just double-click the .deb file to install it. I don’t see how that’s appreciably different from Windows.
If you look at the website[0] you might see the difference.

[0] https://support.zoom.com/hc/en/article?id=zm_kb&sysparm_arti...

I think it’s a joy of having a system built by a small community for fun and not debates between large corporate interests.
FreeBSD users definitely seem to have taken over the mantle of OS evangelicals from Linux users.

I tried using FreeBSD for two different projects (a NAS and a router) and it turned out to be unsuitable for both; in each case, switching to Linux solved the problem. Even after Linux had solved my problems, the FreeBSD faithful seemed to think that using FreeBSD was itself supposed to be the goal, not solving the task at hand.

If you work with computers the whole day it would make sense the computers you keep as a hobby have some degree of difference.
16 hours ago
I feel like you may never have used it. Would that be true?
Well said! I used to administer both FreeBSD and Linux (Debian) servers at the same time. I found them different, but couldn't say either was better or worse.
That is the vibe I get from this post. Very "I am different" energy.
There was always some truth to that, and there are worse reasons to find joy in actual competition. How do you discover the truth about differences in quality without fuel for curiosity?
The desire to be different is strong with some people.
rixed · 14 hours ago
A well needed counterbalance when so much in tech is just a popularity contest.
That's fine. The thing is: I am different with Linux too. So I don't quite understand that FreeBSD focus.

Of the BSDs, I think only OpenBSD has a really unique selling point, with its focus on security. People ask "why pick FreeBSD rather than Linux" and most will not find compelling arguments in favour of FreeBSD there.

First of all, FreeBSD has plenty of selling points compared to your typical Linux distro:

Small, well integrated base system, with excellent documentation. Jails, ZFS, pf, bhyve, and DTrace are very well integrated with each other. That differs from Linux where, sure, there's Docker, btrfs, iptables, bpftrace and several different hypervisors to choose from, but they all come from different sources and so they don't play together as neatly.

The ports tree is very nice for when you need to build things with custom options.

The system is simple and easy to understand if you're a seasoned unix-like user. Linux distros keep changing, and I don't have the time to keep up. I have more than two decades of experience daily driving Linux at this point, and about three years total daily driving FreeBSD. And yet, the last time a distro install shat itself (Pop!_OS), I had no idea how to fix it, due to the Rube Goldberg machine of systemd, dbus, polkit, Wayland AND X, etc. that sits underneath the easy-to-use GUI (which was not working). On boot I was dropped into a root shell with some confusing systemd error message. The boot log was full of crazy messages from daemons I hadn't even heard of before. I was completely lost. On modern Linux distros, my significant experience is effectively useless. On FreeBSD, it remains useful.

Second, when it comes to OpenBSD, I don't actually agree that security is its main selling point. For me, the main selling point of OpenBSD is as a batteries included server/router OS, again extremely well documented in manpages, and it has all the basic network daemons installed, you just enable them. They have very simple configuration files where often all you need is a single digit number of lines, and the config files have their own manpages explaining everything. For use cases like "I just want an HTTP server to serve some static content", "I just want a router with dhcpd and a firewall", etc, OpenBSD is golden.
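As a taste of what "a single digit number of lines" means in practice, here is roughly what a minimal OpenBSD httpd.conf for static content looks like (hostname and path invented; httpd.conf(5) documents every directive):

```
# /etc/httpd.conf -- serve one static site
server "example.org" {
	listen on * port 80
	root "/htdocs/example.org"	# relative to the chroot, /var/www
}
```

Then `rcctl enable httpd && rcctl start httpd` and you're serving pages, with no third-party packages involved.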

OpenBSD's philosophy of simple config files and secure defaults is among its best features.
Out-of-the-box ZFS is a big selling point for me. Jails are just lovely. The rc system is very easy to reason about. I've had systems that were only stable on FreeBSD and would crash under Windows or various Linuxes.
ZFS is amazing and while there are many would be clones there is only one ZFS.

I used (and pushed) it everywhere I could, and first encountered it on Solaris before FreeBSD. I even had it on my Mac workstation almost 18 years ago (unsupported). As an aside, I will never forgive that asshole Larry Ellison for killing OpenSolaris. NEVER.

Systemd is the worst PoS ever written. RCs are effective and elegant. Systemd is reason enough to avoid Linux, but I still hold my nose and use it because I have to.

Sorry, can you be specific about what's terrible about systemd? I really would like to know why it's the "worst piece of shit ever written".
It is unnecessarily complex to begin with. On top of that, the maintainers are historically not the most open to criticism and aggressively push adoption. So much so that GNOME, for example, now has very strong dependencies on systemd, which makes it very difficult to run GNOME on non-systemd systems unless you want to throw a bunch of patches at it. That hard coupling alone is something I wouldn't want to rely on, ever.
THIS. Also what problem does it solve that RC scripts can't accomplish? They are much more readable and less complex. What is the benefit of all that added complexity? Even more to the point the business case for it in a professional setting? I've been wondering that for a long time.
Barging in as a Linux guy interested to learn more about the BSDs, so please bear with me.

Something I love with systemd is how I can get very useful stats about a running process, e.g. uptime, cumulated disk & network IOs, current & peak mem usage, etc.

Also the process management (e.g. restart rules & dependency chain) is pretty nice as well.

Is that doable with RC (or other BSD-specific tooling) as well?

It's up to you to check in your init script whether you need another service to start before yours.

In terms of uptime or IO and the like, those metrics are already available, be that via SNMP or other means. Say you start nginx under systemd: which network and disk usage does it report? Just the main process, or all its forks? Same problem in RC.

But that is part of the point. Why in the ever-loving existence should an INIT system provide stats like disk usage? That is NOT what an init system is for.

If you need memory usage or IO usage or uptime, there are so many other tools already integrated into the system that the init system doesn't need to bother.

Init systems should only care about starting, stopping and restarting services. Period. The moment they do more than that, they failed at their core job.
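To show how little an init script needs to be, here is a complete FreeBSD rc.d script for a hypothetical daemon (the name and binary path are invented; rc.subr(8) and the Handbook's rc.d article cover the details):

```sh
#!/bin/sh
# PROVIDE: mydaemon
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="mydaemon"
rcvar="mydaemon_enable"
command="/usr/local/sbin/mydaemon"	# hypothetical daemon binary
pidfile="/var/run/${name}.pid"

load_rc_config $name
run_rc_command "$1"
```

Drop it in /usr/local/etc/rc.d/, set `mydaemon_enable="YES"` in /etc/rc.conf, and `service mydaemon start|stop|restart|status` all work. The REQUIRE line is the whole dependency story.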

This might come across stronger than intended, but it still holds true.

BSDs are about "keep it simple, keep it single purpose", to a degree I can happily live with. What you get in return is outstanding documentation, and every component is easily understandable. A prime example is OpenBSD/FreeBSD pf: the firewall config is just easy to grok, easy to read up on, and does 99.999% of what you'll ever need out of a firewall.
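For a flavor of how grokkable pf is, a minimal ruleset reads almost like English (interface name and ports are assumptions for the example; see pf.conf(5)):

```
# Minimal pf.conf sketch: default-deny inbound, allow ssh and http.
# em0 is an assumed external interface name.
ext_if = "em0"

set skip on lo
block in all
pass out all
pass in on $ext_if proto tcp to port { 22 80 }
```

Rules are stateful by default on modern pf, so the `pass out` line covers reply traffic without further ceremony.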

shakow · 33 minutes ago
> which network and disk usage does it report? Just the main process or all its forks? Same problem in RC.

Well, the main process and its whole hierarchy, that's what you would expect of an init system monitoring its services, right? And what's nice with systemd is that I can get that from a simple `systemctl status my-service` – of course I could deploy a whole observability stack, but better if I can avoid it.

But there's no need to be defensive: if RC can do that, nice; if it can't, then well, too bad.

> there are so many other tools already integrated into the system that the init system doesn't need to bother.

That's what I'd love to hear about, what are the equivalent in the BSDs world.

Spin up a VM, may that be locally or a cloud VM, throw an OpenBSD or a FreeBSD. If you are into mail servers, static http etc then OpenBSD might be your jam. Or try FreeBSD and Jails. Jails are absolutely fantastic.

Ditch the LLMs (not insinuating that you use them, but just in case), try to use the Handbooks and the man pages.

If you ever feel the need that you have so many interdependent services that you need something more complex than RC, then you might have an actual architectural problem to be honest.
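If you do try jails, the declarative config is a good place to start. A sketch of /etc/jail.conf for one jail (name, path, and address are all invented; jail.conf(5) has the full list of parameters):

```
# /etc/jail.conf sketch -- one jail named "web"
web {
	path = "/usr/local/jails/web";
	host.hostname = "web.example.org";
	ip4.addr = 192.0.2.10;
	exec.start = "/bin/sh /etc/rc";
	exec.stop = "/bin/sh /etc/rc.shutdown";
	mount.devfs;
}
```

With `jail_enable="YES"` in rc.conf, `service jail start web` brings it up like any other service.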

> If you ever feel the need that you have so many interdependent services that you need something more complex than RC, then you might have an actual architectural problem to be honest.

Bang on.

It autodiscovers the dependency chain or shit like that? If you got 500+ services that need to be orchestrated you honestly have a very different problem.

I love the simplicity of RC scripts. Easy to understand, easy to debug, it just fucking works.

Simplicity is king, because it's understandable. A behemoth like systemd feels like it requires a PhD.

Systemd also runs 100% against the Unix/Linux philosophy of composability and single purpose.

If you need to make sure that the network stack starts after the usb stack and that starts after the pcie stack and that starts after the … then systemd is considerably easier than SysV init.

You’re handwaving away something that is pretty important. You can say that having 500 services is its own problem but it’s also a reality, even on desktop operating systems.

THIS ---- Systemd also runs 100% against the Unix/Linux philosophy of composability and single purpose.
Addendum to my other reply: it comes down to the "not invented here" problem which always invites weirdly complex solutions to problems that don't exist.

Linux is "just" the kernel and every distro invites new solutions to perceived core problems whereas the BSDs have a whole base system that comes from one source, reducing the chance of a systemd popping up there. Both approaches have their ups and downs.

Because "it's one program doing too many things and that goes against the unix philosophy"

In reality systemd is 69 different binaries (only one of which runs as pid 1), all developed under the same project, designed to work together.

I don’t see how they can be considered “one program doing too many things” when it’s 69 different binaries. Yes, they’re under the same project, but the same can be said about FreeBSD itself.

They’re designed to work together but as far as I am aware there’s no reason you couldn’t replace individual binaries with different ones, though admittedly I have never done that.

YES -- Because "it's one program doing too many things and that goes against the unix philosophy"
I can't speak for others, but I find it poorly documented, and it only rarely improves on the systems it replaced, while invalidating decades of high-quality documentation that you can easily find on the internet. It's possible the transition will pay off one day with, say, a usable graphical interface for system configuration that might compete with that of macOS, but as of yet, no such thing has materialized.

This is especially true compared to how beautifully and consistently the BSDs tend to document their init and configuration systems. Or macOS, again: launchd is still way easier to use and far more of a "fire and forget" system, without adding complicated interfaces for unrelated stuff like network interfaces and logging. But that has always been true as well.

But you are aware there are distros without systemd, right?
Yes. Over the years I've used just about every Linux distro from Slackware and RH up to Nix + Arch, as well as programming on Solaris, IRIX, SCO, OS/400 and even Tandem (look it up, it's pretty obscure!). But I mostly just use FreeBSD now.

There's pretty much nothing I can't do on FreeBSD that I would get with one Linux or another. I'm not much of a gamer, so maybe that factors in.

18 hours ago
Debian and Ubuntu support it out of the box. DKMS works for other distros. It's the same core ZFS code BSD uses. Hope this helps educate you.
Eh? ZFS does double caching and doesn't have a way to consolidate extents except by rebuilding the FS.
I once upgraded a FreeBSD system from 8 to 12 with a single command. I don’t recall having to reboot, though I might have needed to.

Can you give that a shot for me on Linux? Could you spin up an Ubuntu 14 VM and do a full system update to 24.04 without problems? Let me know how it goes.

I once needed help with a userland utility and the Handbook answered the question directly. More impressive was the conversation I had with a kernel developer who also maintains the userland tools, not because they chose to, but because the architecture dictates that the whole system is maintained as a whole.

Can you say the same for Linux? You literally cannot. Only Arch and Red Hat (if you can get past the paywall) have anything that comes close to the FreeBSD Handbook.

FreeBSD has a lot going for it. It just sits there and works forever. Linux can do the same, if you maintain it. You barely need to maintain a FreeBSD system outside of updating packages.

Most people who use containers a lot won’t find a home in FreeBSD, and that’s fine. I hope containers never come to the BSD family. Most public images are gross and massive security concerns.

But then, most people who use FreeBSD know you don’t need containers to run multiple software stacks on the same OS, regardless of needing multiple runtimes or library versions. This is a lost art because today you just go “docker compose up” and walk away because everything is taken care of for you… right? Guys? Everything is secure now, right?

> I once upgraded a FreeBSD system from 8 to 12 with a single command.

The command you most likely used is freebsd-update[0]. There are other ways to update FreeBSD versions, but this is a well documented and commonly used one.

> I don’t recall having to reboot — might have needed to.

Updating across major versions requires a reboot. Nothing wrong with that; just clarifying.
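For the curious, the documented freebsd-update sequence for a major-version jump looks roughly like this (a 13.x-to-14.x hop shown; an 8-to-12 jump like the parent's would be done one major release at a time, and the exact release number is whatever you're targeting):

```sh
# Run as root. Patch the current release first, then stage the upgrade.
freebsd-update fetch install
freebsd-update -r 14.3-RELEASE upgrade
freebsd-update install    # installs the new kernel
shutdown -r now           # reboot into the new kernel
freebsd-update install    # run again after reboot for the new userland
pkg upgrade -f            # reinstall packages against the new ABI
```

A final `freebsd-update install` may be requested to clean out old shared libraries once packages are rebuilt.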

> Most people who use containers a lot won’t find a home in FreeBSD, and that’s fine. I hope containers never come to the BSD family.

Strictly speaking, Linux containers are not needed in FreeBSD as jails provide similar functionality (better IMHO, but I am very biased towards FreeBSD). My preferred way to manage jails is with ezjail[1] FWIW.

> But then, most people who use FreeBSD know you don’t need containers to run multiple software stacks on the same OS, regardless of needing multiple runtimes or library versions.

I completely agree!

0 - https://docs.freebsd.org/en/books/handbook/cutting-edge/

1 - https://erdgeist.org/arts/software/ezjail/

I haven't tried it, but I hear podman runs on FreeBSD :D
I think that’s true yeah.
It has a decent manual for a start.
Linux is not fucking mainstream

If anything is mainstream, it’s BSD, because OS X is BSD.

Linux is the most-used kernel on consumer devices (Android) and servers by a very wide margin.
OS X is XNU. BSD code is in the kernel and BSD tooling is in the userland, but the kernel isn't BSD in license or architecture.
BSDs taught me how to Unix in a way that I just wasn't able to manage with Linux before. This was during the early Red Hat 5.x days, and I found so many pain points with the RPMs and odd file-hierarchy inconsistencies between packages. I tried to set up a firewall for my office network, struggled with iptables (or was it ipchains back then?), and found the documentation confusing.

I tried OpenBSD to set up a firewall system and fell in love. Everything just made more sense and felt more cohesive. The pf rules syntax was so much easier to work with and more flexible. I loved the ports system and the emphasis on code correctness and security. The man pages were a revelation! I could find everything I needed at the command line.

I tried all the BSDs, and each has its own strengths and weaknesses. FreeBSD had the most ports and seemed to also have good hardware support, NetBSD had the most platform support, DragonFly BSD was focused on parallel computing, etc. They all borrow and learn from each other.

BSDs are great and I heartily recommend people give them a whirl. This article in The Register is also worth a read:

https://www.theregister.com/2024/10/08/switching_from_linux_...

As much as I love FreeBSD, the release schedule is a real challenge in production: each point release is only supported for about three months. Since every release includes all ports and packages, you end up having to recertify your main application constantly.

Compare this to RedHat: yes, a paid subscription is expensive, but RedHat backports security fixes into the original code, so open source package updates don’t break your application, and critical CVEs are still addressed.

Microsoft, for all its faults, provides remarkable stability by supporting backward compatibility to a sometimes ridiculous extent.

Is FreeBSD amazing, stable, and an I/O workhorse? Absolutely: just ask Netflix. But is it a good choice for general-purpose, application-focused (as opposed to infrastructure-focused) large deployments? Hm, no.

> each point release is only supported for about three months

Where are you getting 3 months from? It's usually 9 months and occasionally 12 months.

Also, major versions are supported for 4 years and unless you're messing with kernel APIs nothing should break. (Testing is always good! But going from 14.3 to 14.4 is not a matter of needing lots of extra development work.)

I stand corrected, the official current release plan is "...while each individual point release is only supported FOR THREE MONTHS AFTER THE NEXT POINT RELEASE".

https://www.freebsd.org/security/#:~:text=on%20production%20...

Recent point releases:

14.3 (June 10, 2025)

14.2 (December 3, 2024)

14.1 (June 4, 2024)

14.0 (November 20, 2023)

13.4 (September 17, 2024)

>> Also, major versions are supported for 4 years and unless you're messing with kernel APIs nothing should break.

Well, things may not break but your system may be open to published vulnerabilities like these:

https://bsdsec.net/articles/freebsd-security-advisory-freebs...

For keeping up to date with vulnerability fixes for packages/ports (which are far more frequent) the "easy" path is to use the last FreeBSD point release.

tete · 11 hours ago
Yes, so what you do is run `freebsd-update fetch` then `freebsd-update install`, or if you're switching minor versions, `freebsd-update upgrade -r MAJOR.MINOR` and then the same. Minor release upgrades are not the breaking kind. ABI etc. will stay intact. There aren't expected breakages; it's just that stuff will have new features, and you might have some really specific use case where, say, a shell command's version output is checked and breaks things when it changes.

I think that's a big misunderstanding coming from other systems. Minor system updates are the kind of updates that a lot of other systems would pull in silently, while FreeBSD's major releases are a lot more like OpenBSD's releases (where minor and major version numbers don't make a difference).

Minor in FreeBSD means that stuff isn't supposed to break. It's a lot more like "Patch Level". I always want to mention Windows here for comparison, but keep thinking about how much Windows Updates break things and did so for a long time (Service Packs, etc.).

Maybe going about it from the other side makes more sense: FreeBSD got a lot of shit for not changing various default configurations for compatibility reasons - even across major versions. These are default configurations, so things where the diff is a config file change. I think they are improving this, but they really do care about their compatibility, simply because the use case of FreeBSD is in that area.

This is in contrast to e.g. OpenBSD, where not so few people run -current, simply because it's stable enough and they want to use the latest stuff. They only support the last release (so essentially release + 6 months), but again, even there things do not usually break beyond having to recompile something. They all have their ports/packages collections and want stuff to run, and OpenBSD is used much more in an "eating your own dogfood" style, which you can see in there being an OpenBSD gaming community, while that OS doesn't "even" support Wine.

The just-released FreeBSD 15, for example, is supported as a major release until the end of 2029 [1]. How much more LTS support do you want?

The minor point releases are supported for close to a year. And that is only talking about the base system; packages and ports you can also easily support yourself with poudriere and others.
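A sketch of the poudriere route, for anyone who hasn't seen it (the jail name and package list are arbitrary choices for the example):

```sh
# Build your own package set pinned to a release of your choosing.
poudriere jail -c -j rel143 -v 14.3-RELEASE   # create a build jail
poudriere ports -c                            # fetch a ports tree

# List the origins you care about, then build them all in the jail.
echo "www/nginx" > /usr/local/etc/poudriere.d/pkglist
poudriere bulk -j rel143 -f /usr/local/etc/poudriere.d/pkglist
```

The resulting repository can be served to your fleet over HTTP, so package versions only change when you decide to rebuild.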

As for backwards compatibility: FreeBSD has a stable, backwards-compatible ABI. That is why you can run an 11.0 jail on a 15.0 host, with zero problems.

The other way around is what doesn't work: you can't run a 15.0 jail on an 11.0 host, for example. But backwards compatibility is definitely there.

[1]: https://www.freebsd.org/releases/15.0R/schedule/

> As much as I love FreeBSD, the release schedule is a real challenge in production: each point release is only supported for about three months. Since every release includes all ports and packages, you end up having to recertify your main application constantly.

How much support do you plan on getting? The old releases don't really turn into pumpkins. Yes, every two or three major releases, they end up with a minor release that adds something to libc where binary packages from X.2 won't run on X.1 or X.0. But this isn't usually a huge deal for servers if you follow this plan:

Use FreeBSD as your stable base, but build your own binaries for your main service / language runtimes. If you build once and distribute binaries, keep your build machine / build root on the oldest minor revision you have in your fleet. When you install a new system, use an OS version that's in support and install any FreeBSD built binary packages then.

You do have to be prepared to review updates to confirm if they need you to take action (many to most won't if you are careful about what is enabled), backport fixes, build packages yourself, or upgrade in a hurry when necessary, but you don't often actually need to.

I don't think this strategy works for a desktop deployment; there's too many moving pieces. But it works well for a server. Most of my FreeBSD servers for work got installed and never needed an OS upgrade until they were replaced by better hardware. I did have an upgrade process, and I did use it sometimes: there were a couple kernel bugs that needed fixes, and sometimes new kernels would have much better performance so it was foolish to leave things as-is. And a couple bugs in the packages we installed; usually those didn't need an OS upgrade too, but sometimes it was easier to upgrade the handful of old servers rather than fight everything; choosing battles is important.

Or you can go like Netflix and just run as close to -CURRENT as you can.

>> Or you can go like Netflix and just run as close to -CURRENT as you can.

The point is that for any system that has a publicly facing (internet) part you will have to keep up to date with known vulnerabilities as published in CVEs. Not doing so makes you a prime target to security breaches.

The FreeBSD maintainers do modify FreeBSD to address the latest known vulnerabilities.... but you will have to accept the new release every 3 months.

Additionally, those releases do not only contain FreeBSD changes but also changes to all the third-party open source packages that are part of the distribution. Every package is maintained by different individuals or groups, and they often make changes to the way their software works; often these are "breaking" changes, i.e. you will have to update your application code to stay compatible.

> Additionally, those releases do not only contain FreeBSD changes but also changes to all third party open source packages that are part of the distribution

No, they don't. Only major releases do, and those come every two years or so. And the old one stays supported until the release after that; there are always two major releases in support, so you have about four years.

> The point is that for any system that has a publicly facing (internet) part you will have to keep up to date with known vulnerabilities as published in CVEs. Not doing so makes you a prime target to security breaches.

Sure, you have to be aware of them, but for something like this [1], if you don't use SO_REUSEPORT_LB, you don't have to take any further action.

The defect is likely in other FreeBSD releases that are no longer supported, but still, if you don't use SO_REUSEPORT_LB, you don't have to update.

If you do use the feature, then for unsupported releases, you could backport the fix, or update to a supported version. And you might mitigate by disabling the feature temporarily, depending on how much of a hit not using it is for your use case. Like I said, you have to be prepared for that.

You can also do partial updates, like take a new kernel, without touching the userland; or take the kernel and userland without taking any package/ports updates.

Some security advisories cover base userland or ports/packages... we can go through an example one of those and see what decision criteria would be for those, too.

[1] https://www.freebsd.org/security/advisories/FreeBSD-SA-25:09...

tete · 11 hours ago
> As much as I love FreeBSD, the release schedule is a real challenge in production: each point release is only supported for about three months.

I think point releases "don't count". A point release means you run freebsd-update, restart, and are done.

And major releases tend to be trivial too: you run freebsd-update, follow the instructions it prints, then do `pkg upgrade`.

Been doing that for production database clusters (Postgres) for hundreds of thousands of users for over a decade now and even longer in other settings.

Sure you do your planning and testing, but you better do that for your production DB. ;)

These are thousands-of-queries-a-second setups, including a smaller portion of longer queries (GIS using PostGIS).

That said: Backwards compatibility is something that is frequently misunderstood in FreeBSD. Eg. the FreeBSD kernel has those COMPAT_$MAJORVERSION in there by default for compatibility. So you usually end up being fine where it matters.

But also keep in mind that you usually have a really really long time to move between major releases - the time between a new major release and the last minor release losing support.

And to come back to the Postgres setup: I can do this without doing the OS and the DB (+PostGIS) upgrade at once, because I have my build server building exactly the same package versions for both OS releases. No weird "I upgrade the kernel, the OS, the compiler and everything at once". I actually did move from FreeBSD 13 to 14 and PG from 14 to 18 (again with PostGIS, which tends to make this really messy on many systems) without any issues whatsoever. Just using pg_upgrade and keeping the old version's packages in a temporary directory.

This is just one anecdote, but it's a real life production setup with many paying customers.

I also have experience with RedHat, but for RedHat the long term support always ends up being "I hope I don't work here anymore when we eventually do have to upgrade".

But keep in mind we're talking about years for something that on FreeBSD tends to be really trivial, compared to RedHat, which, while supporting old stuff for a very long time, involves a lot more moving parts, because the applications you run are much more tied to your release.

On FreeBSD on your old major release you totally can run eg the latest Postgres, or nginx, or python or node.js or...

lmm · 16 hours ago
Comparing FreeBSD with paid RedHat is a bit of a tilted comparison. The vast majority of Linux deployments do not use paid RedHat and do not get that kind of extreme backporting of security fixes.
Gud · 15 hours ago
Still, that option exists, and doesn’t for FreeBSD.

FWIW I switched from Debian to FreeBSD 25 years ago as my main OS.

lmm · 13 hours ago
> Still, that option exists

Yes and no. If you get yourself into a position where you have servers deployed on version x.y of whatever Linux distribution you went with and now can't or won't upgrade from that, the vast majority of the time you're going to be exactly as stuck as if you were on FreeBSD. If you wanted to benefit from paid RedHat backports you had to decide to deploy your application to LTS RedHat on day 1, and the vast majority of people don't.

crest · 19 hours ago
What you measured is just the overlap between minor releases of the same major release. It helps to think of them as service packs, if you want a Microsoft analogy. So each minor release is supported until it has been supplanted for 3 months by a newer one on the same major release line, or the whole major release line goes end of life.
Sure, but the point is that each minor release contains changes in all third party open source packages/ports by taking them to the head version.

Open source packages often include breaking changes, all but guaranteeing that your application will eventually fail. With (a paid version of) Red Hat Linux, Red Hat remediates CVEs by patching the original version of each package.

tete · 11 hours ago
> in all third party open source packages/ports by taking them to the head version.

No it doesn't!

You can totally stick with old versions of packages. You are NOT forced to bump third-party version numbers. And as mentioned elsewhere, I did switch e.g. Postgres versions independently of the OS.

What is being updated is the userland in the OS, not the ports tree per se. According to the release notes of the latest point release, FreeBSD 14.3 [1], the third-party components updated were OpenSSL, xz, the file command, googletest, OpenSSH, less, expat, tzdata, ZFS and spleen. ps was updated, and some sysctl flags to filter for jails were introduced.

These are the kinds of updates you'll get from point releases, not the breaking kind. These go into major releases, which is exactly why the support strategy is "The latest do release + X months and at least that long".

[1] Scroll down a bit: https://www.freebsd.org/releases/14.3R/relnotes/

tw04 · 18 hours ago
So why not use jails?
As someone who was a Linux sysadmin for several years, looking after a large fleet of Red Hat boxes, I can say that the "doesn't break your application" promise is BS. Their patches broke applications several times, resulting in having to hold them back for months until fixes landed.

The only Linux distro that actually lives up to that promise in my experience is Alpine.

> Microsoft, for all its faults, provides remarkable stability by supporting backward compatibility to a sometimes ridiculous extent.

Citation needed. Some long time ago, yes. Not anymore.

"If someone wants hype or the latest shiny thing every month, they have Linux."

This is just such a bizarre view... what do they think Linux really is? Maybe if you are on bleeding-edge Arch as a hobbyist who follows the latest shiny window managers, or something like that. But those of us who run Linux in production do so on stable releases with proven tech that hasn't changed significantly in more than a decade. Or longer, for some things.

The FreeBSD folks need a reality check. They are so out of touch with what Linux really is. It is hard to take these kind of articles seriously.

  • lmm
  • ·
  • 16 hours ago
  • ·
  • [ - ]
> But those of us who run Linux in production do that on stable releases with proven tech that hasn't changed significantly in more than a decade.

Pretty sure the firewall commands have changed at least once in that time, and the device layer, and maybe the init system. I hear the preferred sound system has changed again in the last few years too.

  • keyle
  • ·
  • 21 hours ago
  • ·
  • [ - ]
These rose tinted glasses are pretty strong. I found nothing but pain trying to run the BSDs on recent computers/setups.

There is 'different' as in 'alternative/edgy', and then there is 'different' as in 'won't implement/yagni' which becomes highly subjective.

I’ve been using it in VMs just fine. Used it on my desktop just fine for a year. Used it on laptops just fine.

You might have just hit a bad hardware setup that’s outside the scope of support. It happens.

>Used it on laptops just fine.

Which laptop?

Did you use the battery, touchpad, and the wifi?

I find most BSD users who say they use it on a laptop are just using a laptop-form-factor machine like a thinkpad that is plugged in, with a mouse not the touchpad, and connected via ethernet 99.9% of the time. There's nothing wrong with this, but it bears little resemblance to what I consider "using a laptop".

My experience with OSes including OpenBSD and FreeBSD on laptops has been universally negative. OpenBSD in particular is very slow compared to Linux on the same hardware, to say nothing of the awful touchpad drivers and battery management.

I'm using OpenBSD on several laptops at the moment: a Dell x55, a ThinkPad X230, and a ThinkPad X270. Everything works on all of them: sleep, hibernate, wifi, touchpad, volume and brightness buttons, CPU throttling, etc.

On one of them I use a Creative BT-W2 Bluetooth dongle for audio output, since OpenBSD removed its Bluetooth stack due to security concerns. The latest wifi standards are not supported on these models, which doesn't bother me. It's not the size of your network, it's what you do with it! I don't mind not having the latest flashy hardware - been there, done that.

I have to pay attention when I purchase hardware, and am happy to do so, because OpenBSD aligns much better with my priorities. For me that includes simplicity, security, documentation, and especially stability over time - I don't want to have to rearrange my working configs every two years because of haphazard changes to things like audio, systemd, Wayland, binary blobs, etc.

On OpenBSD right now with a Dell Latitude 7490. Works fine.

The reason I like the BSDs is that they are easily understood. Have you tried to troubleshoot ALSA? Or use libvirt? Linux has a lot of features, but most of them are not really useful to the general computer user. It feels like a B2B SaaS: lots of little things that make you wonder why they were included in the design, or why they're even there in the first place.

For some reason I had a much easier time getting OpenBSD working on one specific laptop (a ThinkPad E585 where I had replaced the stock wifi with an Intel card). A lot of Linux distributions got into weird states where they forgot where the SSD was, and there was a chicken-and-egg problem with the wifi firmware.

OpenBSD at least booted far enough that I could shim the wifi firmware in as needed. I probably picked the wrong Linux distribution to work with, since I've had okay luck with Debian and then Devuan on that machine's replacement (an L13).

  • zie
  • ·
  • 18 hours ago
  • ·
  • [ - ]
Probably because OpenBSD developers use laptops, so they port the OS to laptops all the time.

FreeBSD has a few laptop developers, but most are doing server work. There is a project currently underway to help get more laptops back into support again: https://github.com/FreeBSDFoundation/proj-laptop

Lenovo T480s works great with FreeBSD.
Never had any issues, but I've only ever tried to run it on Supermicro boards.
I've been running it on most of my personal laptops since around version 10. It's a lot like how Linux felt in the late 90s. Depends on your hardware and what you want to do. But it's solid.

If you could handle Linux in the late 90s you can handle it.

So you're saying KDE and GNOME and Xfce and Enlightenment and Openbox, etc., are all desktops that run like the '90s? Current versions of these, and many more, run on FreeBSD.
No, I was talking mostly about perception of hardware support, package management, and a pre-systemd init system.

But which DE you run is entirely up to you. I'm writing this with FreeBSD running fvwm. On my laptops I run dwm.

Also:

> enlightenment

I haven't kept up with recent developments, but this is the most 1990s WM that ever 1990s'd.

I personally have been itching for a NixOS-style BSD or Illumos derivative. My main machine is currently NixOS with root on ZFS, but I would love to be running something where ZFS isn't an afterthought, I could use dtrace, the kernel has first class OS virtualization, and so on. I think that the declarative approach to package management is obviously the future, but I wish there were a non-Linux option.
The way NixOS handles ZFS seems overengineered, while on FreeBSD you don't even need an fstab.
I would run something like that if it existed - illumos zones sound quite appealing, as does more native support for ZFS.
Last time I encountered a mainframe, they religiously rebooted it: a full power cycle every six months. A few years earlier, their power backup had failed in an emergency, and for months afterwards they were trying to figure out what all was running on that thing and how to get it started. Rebooting regularly means people who start a process remember to get it into the startup sequence - or at least only a few months have passed, so odds are they remember how it works.
These days I use it as a home file server because, for my needs, FreeBSD is the best tool for that job.

But back in the early 2000s I got access to a free Unix shell account that included Apache hosting and Perl, and if I'm not misremembering, it was running on FreeBSD and hosted by an ISP in the UK using the domain names portland.co.uk and port5.com.

That was formative for me: I learned all of Unix, Perl, and basic CGI web development on that server. I don't know who specifically was running that server, or whether they have any relation to the current owner of that domain. But if you're out there, thanks! Having access to FreeBSD was a huge help to a random high schooler in the U.S., who wouldn't have been able to afford a paid hosting account back then.

Nothing "against" FreeBSD, but I've never been able to really use it as a desktop OS.

Don't get me wrong: ports is pretty cool and jails are cool, but every time I've tried running FreeBSD on a laptop I end up spending a day chasing problems with drivers or getting things like brightness or volume controls working. Basically, FreeBSD on laptops (as of the last time I tried it about two years ago) feels like Linux on laptops about fifteen years ago. Linux on laptops nowadays generally works out of the box, at least with AMD stuff. I didn't have much issue getting NixOS working on my current laptop, but I am not sure that would be the case with FreeBSD, even still.

That said, FreeBSD on servers is pretty sweet. Very stable, and ports is pretty awesome. I ran FreeBSD on a server for about a year.

Well, take into account the corporate and user interest that Linux sees. FreeBSD is a niche desktop OS; we can't expect everything to work. The easiest way forward is for you and me to start contributing, and things might change for the better.
I don’t disagree with that, but I don’t see why this matters to the end user.
"a thousand-day uptime shouldn’t be folklore"

I reboot a lot. Mostly I want to know that should the system need to reboot for whatever reason, that it will all come back up again. I run a very lightly loaded site and I highly doubt anybody notices the minute (or so) loss of service caused by rebooting.

Pretty sure I don't feel bad about this.

There's a weird fetishization of long uptimes. I suspect some of this dates from the bad old days when Windows would outright crash after 50 days of uptime.

In the modern era, a lightly (or at least stably) loaded system lasting for hundreds or even thousands of days without crashing or needing a reboot should be a baseline unremarkable expectation -- but that implies that you don't need security updates, which means the system needs to not be exposed to the internet.

On the other hand, every time you do a software update you put the system in a weird spot that is potentially subtly different from where it would be on a fresh reboot, unless you restart all of userspace (at which point you might as well just reboot).

And of course FreeBSD hasn't implemented kernel live patching -- but then, that isn't a "long uptime" solution anyway, the point of live patching is to keep the system running safely until your next maintenance window.

> unless you restart all of userspace (at which point you might as well just reboot).

I can't speak for FreeBSD, but on my OpenBSD system hosting ssh, smtp, http, dns, and chat (prosody) services, restarting userspace is nothing to sweat. Not because restarting a particular service is easier than on a Linux server (`rcctl restart foo` vs `systemctl restart foo`), but because there are far fewer background processes and you know what each of them does; the system is simpler and more transparent, inducing less fear about breaking or missing a service. Moreover, init(1) itself is rarely implicated by a patch, and everything else (rc) is non-resident shell scripts, whereas who knows whether you can avoid restarting any of the constellation of systemd's own services, especially given their many library dependencies.

If you're running pet servers rather than cattle, you may want to avoid a reboot if you can. Maybe a capacitor is about to die and you'd rather deal with it at some future inopportune moment rather than extending the present inopportune moment.

> There's a weird fetishization of long uptimes. I suspect some of this dates from the bad old days when Windows would outright crash after 50 days of uptime.

My recollection is that, usually, it crashed more often than that. The 50 days thing was IIRC only the time for it to be guaranteed to crash (due to some counter overflowing).

> In the modern era, a lightly (or at least stably) loaded system lasting for hundreds or even thousands of days without crashing or needing a reboot should be a baseline unremarkable expectation -- but that implies that you don't need security updates, which means the system needs to not be exposed to the internet.

Or that the part of the system which needs the security updates not be exposed to the Internet. Other than the TCP/IP stack, most of the kernel is not directly accessible from outside the system.

> On the other hand, every time you do a software update you put the system in a weird spot that is potentially subtly different from where it would be on a fresh reboot, unless you restart all of userspace (at which point you might as well just reboot).

You don't need a software update for that. Normal use of the system is enough to make it gradually diverge from its "clean" after-boot state. For instance, if you empty /tmp on boot, any temporary file is already a subtle difference from how it would be on a fresh reboot.

Personally, I consider having to reboot due to a security fix, or even a stability fix, to be a failure. It means that, while the system didn't fail (crash or be compromised), it was vulnerable to failure (crashing or being compromised). We should aim to do better than that.

There is a lot of OT, safety, and security infrastructure that must run on premises in large orgs and requires four to five nines of availability. Much of the underlying network, storage, and compute infrastructure for these OT and SS solutions runs proprietary OSs based on a BSD. BSD OSs are chosen specifically for their performance, security, and stability. These solutions will often run for years without a reboot. If a patch is required to resolve a defect or vulnerability, it generally does not require a kernel reboot, and even so these solutions usually have HA/clustering capabilities to allow for NDU (non-disruptive upgrades) and zero downtime of the IT infra solution.
It's from a bygone era - an era when you'd lose hours of work if you didn't go File -> Save (or Ctrl-S, if you were obsessive). If you rebooted, you lost all of the work and configuration you hadn't saved to disk. Computers were scarce back in those days; there was one in the house, in the den, for the family. These days I've got a dozen of them and everything autosaves. So that's where it comes from.
  • ssl-3
  • ·
  • 15 hours ago
  • ·
  • [ - ]
Home computers seem more scarce to me today than they did ~25 years ago.

Sure: people have smart TVs and tablets and stuff, which variously count as computing devices. And we've broadly reached saturation on pocket-supercomputer adoption.

But while it was once common to walk into a store and find a wide array of computer-oriented furniture for sale, or visit a home and see a PC-like device semi-permanently set up in the den, it seems to be something that almost never happens anymore.

So, sure: Still-usable computers are cheap today. You've got computers wherever you want them, and so do I. But most people? They just use their phone these days.

(The point? Man, I don't have a point sometimes. Sometimes, it's just lamentations.)

Regularly (every 3 years or so) had 1000+ days of uptime with FreeBSD rack servers on Supermicro mobos.

I built the servers myself and then shipped to colo half way around the world.

I got over 1400 once and then I needed to add a new disk. They ran for almost 13 years with some disk replacements, CPU upgrades, and memory additions

Genuine questions:

Do you ever apply kernel patches? I also run FreeBSD and reboot for any kernel patches and never can get my uptimes to 1,000 days before that.

Do you just run versions that don't get security patches? Security support EOL dates generally means I need to upgrade before 1,000 days too. For example the current stable release gets security patches only from June 10, 2025 to June 30, 2026 giving just over 360 days of active support.

I get that FreeBSD is stable and racks up days of uptime, and I could easily do the same if I didn't bother upgrading, etc. I just can't see how that's done without putting your machine at risk. Perhaps only for airgapped machines?

> Do you ever apply kernel patches?

For my personal machines, I just run GENERIC kernels, and that includes a lot, so I need to do a lot of updates. I also reboot every time I update the OS (even when it's an update that doesn't touch the kernel) so that I'm sure reboots will be fine... but I did set up my firewalls with carp and pfsync, so I can reboot my firewall machines one at a time with minimal disruption.
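
A carp + pfsync pair like that boils down to a few rc.conf lines on each box; the interface names, addresses, and password below are hypothetical, and the peer uses the same vhid with a different real address:

```sh
# /etc/rc.conf fragment (sketch). em0 carries traffic; em1 is a
# dedicated link for pfsync state synchronization between the pair.
# (carp(4) may also need carp_load="YES" in /boot/loader.conf.)
ifconfig_em0="inet 192.0.2.2/24"
ifconfig_em0_alias0="inet vhid 1 pass changeme alias 192.0.2.1/32"
ifconfig_em1="inet 10.0.0.2/24"
pf_enable="YES"
pfsync_enable="YES"
pfsync_syncdev="em1"
```

Clients point at the shared CARP address (192.0.2.1 here); whichever box holds master answers, and pfsync keeps the firewall state tables mirrored so a failover doesn't drop established connections.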

For work machines, I use a crafted kernel config that only includes stuff we use, although so far I've usually had one config for all boxes, because it's simpler. If there's a security update for part of the kernel that we don't use, we don't need to update. Security update in a driver we don't have, no update; security update in tcp, probably update. Some security updates include mitigation steps to consider instead of or until updating... Sometimes those are reasonable too. Sometimes you do need to upgrade the whole fleet.
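
A trimmed kernel config like that is usually just GENERIC minus the subsystems you don't ship; the config name and the particular removals here are hypothetical examples:

```
# /usr/src/sys/amd64/conf/MYSRV (sketch)
include GENERIC
ident   MYSRV

# Drop hardware the fleet doesn't have, so its security
# advisories don't force an update/reboot cycle.
nodevice  sound
nodevice  snd_hda
```

Build and install it with the usual `make buildkernel KERNCONF=MYSRV` / `make installkernel KERNCONF=MYSRV` from /usr/src.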

When there's an update that's useful but also has an effective mitigation, I would mitigate on all machines, run the update and reboot test on one or a few machines, and then typically install on the rest of the machines and if they reboot at some point, great, they don't need the mitigation anymore. If they are retired with 1000 days of uptime and a pending kernel update, that's fine too.

I would not update a machine just because support for the minor release it was on timed out. Only if there was an actual need. Including a security issue found in a later release that probably affects the older one or at least can't be ruled out. Yes, there's a risk of unpublished security issues in unsupported releases; but a supported release also has a risk of unpublished security issues too.

Fair question! But why am I -1 on this?

Risk is relative, and not just about THAT kind of security. I worked at an AV vendor, and the internal joke was that our security threat lists were the scoreboard for bad actors. But if you asked our salespeople, you were an irresponsible hack if you didn't keep up to date. They never account for the person who really can do it himself - that is not their customer.

Yes, I used to build custom FreeBSD kernels a lot. I applied security patches manually on a few occasions, and I put in many workarounds from reading the security mailing list, etc. Yes, I went well past EOLs a few times for sure.

Always behind a firewall, workloads always in a Jail.
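
Keeping each workload in a jail looks roughly like this in /etc/jail.conf; the jail name, path, and address are hypothetical:

```
# /etc/jail.conf (sketch)
www {
    path = "/usr/local/jails/www";
    host.hostname = "www.example.org";
    ip4.addr = "192.0.2.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

With `jail_enable="YES"` in rc.conf, `service jail start www` brings it up, and a compromise of the service is contained to the jail's filesystem and address.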

IIRC the release cycles used to be longer and it was less of an issue ~10 years ago. Can anyone confirm?

Most of my downtimes started with a power issue in the datacenter or a need for a hardware upgrade.

Some of the largest orgs have large amounts of IT infrastructure for OT and ISS that is not connected to the Internet. This infra is air-gapped, or often on a completely separate physical LAN that is not accessible without passing through multiple physical security controls.
One of my four colocated servers is at something like 1,200+ days of uptime, running on 12-BETA.

It's tightly firewalled to my VPN, and SSH is restricted to certificates. There are no services on the host that would allow users to upload or inject anything; the most you could achieve is a DDoS.

I run bhyve in a jail, so any legacy service that managed to jump out of the bhyve stack would go straight to jail.

I run FreeBSD for a NAS and a couple of Linux VMs under bhyve. Could I have just installed Ubuntu and been done with it? Probably. I did make some mistakes, like setting a very low swap partition, forgetting to switch my RAID controller to IT mode (which made me rebuild my raidz1 pool), and having to change my bhyve guests to UEFI so the internet would work better. I made sure the jail I built for Plex worked fine. It's been fun. At this point I should probably rebuild the whole damn thing, but I know it will run just fine as is.
For years and years to come. You’ll never need to update that box, frankly.
I've been running home servers in one form or another for some time: Windows NT4 briefly, FreeBSD, Win2k Server for a while, then Linux for quite some time. I tried FreeNAS but learned to hate the web UI churn and the outdated, scattered docs, though I liked ZFS. I went back to Linux using Mint (yeah, weird choice...) and had issues with Mint, and my hardware was crap. I wanted ZFS, so I decided to go back to my roots and run FreeBSD again.

I realized the right way to start is with GOOD hardware. So I went on eBay (I know, I know...) and found a nice Supermicro uATX server board and a 65 watt quad core Xeon in the 1151(?) socket then bought a fresh set of Kingston 16 GB ECC DIMMs, and 4x 8TB enterprise CMR SATA disks.

I read the docs and a few how-tos from blogs and personal sites. In a day I had everything set up: accounts, a RAID-Z data store, and Samba and NFS exporting that data store. It was so damn easy. It's been running solid ever since. It's so reliable it's boring. I have to remind myself it's even there so I can run updates and check that the thing isn't full of dust or snakes or whatever.

I hope that someday that https://github.com/nixos-bsd/nixbsd will be upstreamed into NixOS, and it will allow much easier switching between Linux and FreeBSD.
What is love? Baby don't reboot me, baby don't reboot me :) Truly a rock-solid OS. I use it for my personal DNS servers and have never had any issues; it just runs and runs and runs with minimal bloat. I think it's way more stable than Linux, but it obviously needs some work to be a competitive desktop. I've recently settled on the philosophy of Omarchy as my desktop, with FreeBSD on servers unless there's some reason it doesn't fit my use case.
FBSD user since the mid 90s. It (and BSDi) often outperformed "big UNIX" vendors, and was far ahead in terms of bang per buck... if only good server grade rack-mount x86 hardware had been available at the time we'd have made more use of it. Now I have returned to it for server workloads. Good documentation, consistent release schedule, reliability even under extreme load.
I got over 20 years of sane, reliable and consistent computing from the FreeBSD Project, thank you.
I'm really enjoying the uptick in interest in FreeBSD I'm starting to see as well as the really exciting 15.0 release.
To me the biggest selling point of FreeBSD is its delayed rolling-release package policy: every three months you get a bulk of new packages, with security updates in between. I wish Debian were like that.
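
That quarterly cadence is just the default package repository branch, and the choice lives in one small config override (a sketch; the override path is the standard pkg convention, and `latest` is the alternative branch):

```
# /usr/local/etc/pkg/repos/FreeBSD.conf
# Point pkg at the quarterly branch (new packages every three months,
# security fixes in between); swap "quarterly" for "latest" to follow
# the rolling package set instead.
FreeBSD: {
  url: "pkg+https://pkg.FreeBSD.org/${ABI}/quarterly"
}
```
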
  • js-j
  • ·
  • 13 hours ago
  • ·
  • [ - ]
I like freebsd a lot, too. In production we use debian (without systemd!), and here's the uptime of one of our servers:

$ uname -a
Linux deb2 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux

$ uptime
 08:50:41 up 2512 days, 17:15, 1 user, load average: 18.70, 20.46, 21.43

So the last security patch was over 7 years ago?
> let administrators see longevity as a feature, not a gamble

I get what you’re going for. But…

Please god no. Immutable images, servers are cattle not pets.

I prefer pets. I also give my servers silly names. I think the cattle-not-pets crowd are partly to blame for ruining the fun of computing.
I felt the same way in 1999.
And do you feel the same way now?
Yes, but we have separated for a while on a few different occasions. FreeBSD chased me away for few years around version 7. I always came back though.
I so wish that FreeBSD were GPL. I know this won't be a popular opinion, but I believe that the success Linux has had is because of copyleft, and the *BSDs are riding on the coattails of that.

But I don't like Linux. I use it daily, but I don't like it. I wish FreeBSD held the position Linux does in the market today. That would be heaven.

> but I believe that success Linux has had is because of copyleft

No, the success Linux has had is because it ran on the machines people had at home, and was very easy to try out.

An instructive example would be my own path into Linux: I started with DJGPP, but got annoyed because it couldn't multi-task (if you started a compilation within an IDE like Emacs, you had to wait until it finished before you could interact with the IDE again). So I wanted a real Unix, or something close enough to it.

The best option I found was Slackware. Back then, it could install directly into the MS-DOS partition (within the C:\LINUX directory, through the magic of the UMSDOS filesystem), and boot directly from MS-DOS (through the LOADLIN bootloader). That is: like DJGPP, it could be treated like a normal MS-DOS program (with the only caveat being that you had to reboot to get back to MS-DOS). No need to dedicate a partition to it. No need to take over the MBR or bootloader. It even worked when the disk used Ontrack Disk Manager (for those too young to have heard of it, older BIOS didn't understand large disks, so newer HDDs came bundled with software like that to workaround the BIOS limitations; Linux transparently understood the special partition scheme used by Ontrack).

It worked with all the hardware I had, and worked better than MS-DOS; after a while, I noticed I was spending all my time booted into Linux, and only then I dedicated a whole partition to it (and later, the whole disk). Of course, since by then I had already gotten used to Linux, I stayed in the Linux world.

What I've read later (somewhere in a couple of HN comments) was that, beyond not having all these cool tricks (UMSDOS, LOADLIN, support for Ontrack partitions), FreeBSD was also picky with its hardware choices. I'm not sure that the hardware I had would have been fully supported, and even if it were, I'd have to dedicate a whole disk (or, at least, a whole partition) to it, and it would also take over the boot process (in a way which probably would be incompatible with Ontrack).

> FreeBSD was also picky with its hardware choices. I'm not sure that the hardware I had would have been fully supported

Copy / paste of my comment from last year about FreeBSD

I installed Linux in fall 1994. I looked at Free/NetBSD but when I went on some of the Usenet BSD forums they basically insulted me saying that my brand new $3,500 PC wasn't good enough.

The main thing was this IDE interface that had a bug. Linux got a workaround within days or weeks.

https://en.wikipedia.org/wiki/CMD640

The BSD people told me that I should buy a SCSI card, SCSI hard drive, SCSI CD-ROM. I was a sophomore in college and I saved every penny to spend $2K on that PC and my parents paid the rest. I didn't have any money for that.

The sound card was another issue.

I remember software based "WinModems" but Linux had drivers for some of these. Same for software based "Win Printers"

When I finally did graduate and had money for SCSI stuff I tried FreeBSD around 1998 and it just seemed like another Unix. I used Solaris, HP-UX, AIX, Ultrix, IRIX. FreeBSD was perfectly fine but it didn't do anything I needed that Linux didn't already do.

I don’t disagree with what you say. But why did Linux work on all that hardware? I assert that if you trace that line of thinking to its conclusion, the answer is the GPL.

Many people and organizations adapted BSD to run on their hardware, but they had no obligation to upstream those drivers. Linux mandated upstreaming (if you wanted to distribute drivers to users).

GPL does not mandate upstreaming your drivers.
It mandates making source available for upstreaming, if you are distributing.
That's actually true: if they wanted to distribute a Linux-compatible driver, they had to make the source available for anyone to upstream into the Linux kernel.

Probably the GPL was indeed a factor that made device makers and hackers create open-source drivers for Linux. I am not convinced that it was a major one.

I'd say modern hardware, like the Xe iGPUs on 11th-gen Intel and up, got driver attention quickly. Some things like Realtek 2.5Gb NICs took a little while to integrate, but I think Realtek offered kernel modules. I remember NIC compatibility was sparse when I started playing with it around 1999-2000. What trips me up is command flags on GNU vs FreeBSD utils - ask me about the time I DoSed the colo from the jump machine using the wrong packet-interval argument.
>>I believe that success Linux has had is because of copyleft, and *BSD are riding on the coat tails of that.

Apparently many here are unaware of the history of what stalled FreeBSD: a long lawsuit involving AT&T. You need to read up on that. Copyleft had nothing to do with it.

What would FreeBSD as GPL give you? You could fork it and release FreeGPL with that license tomorrow. (Minus ZFS, but that's in contrib)

Some users of FreeBSD prefer more freedoms than the GPL offers, and the project must not put off those contributors by providing fewer.

Places I've worked have contributed changes to FreeBSD and Linux, mostly for the same reason ... regardless of any necessity from distributing code under license, it's nicer to keep your fork close to upstream and sending your changes upstream helps keep things close.

IANAL, but you can't actually just relicense code, even if it's under a BSD-like license. What you can do is release that code in binary form without providing the source code.
  • ·
  • 16 hours ago
  • ·
  • [ - ]
Right, you can add GPL code on top, but the base is still BSD.
  • zie
  • ·
  • 18 hours ago
  • ·
  • [ - ]
I don't understand this thinking. The GPL is more restrictive than the FreeBSD license. You have more freedoms with the FreeBSD license than you do with the GPL (of any version).

> I wish FreeBSD held the position Linux does in the market today. That would be heaven.

Well The BSD's were embattled with a lawsuit from AT&T at the time Linux came around, so it got a late start as it were, even if it's a lot older.

  • pyvpx
  • ·
  • 21 hours ago
  • ·
  • [ - ]
It’s a nice belief for some but wholly divorced from historical facts and circumstances
> I wish FreeBSD held the position Linux does in the market today. That would be heaven.

I don't. That would break everything I love about it. If it was as big as Linux there would be a lot of corpo suits influence, constant changes, constant drive to make it 'mainstream' etc. All the things I hate about Linux.

GCC vs LLVM. It isn’t the license.
I don't know about that... LLVM didn't exist until 2003. The BSDs and Linux had both existed for a long time before that, and Linux already had much more momentum at that point.
BSD was mired in the uncertainty of a lawsuit over some of its code at the time Linux was getting started, and the FUD around that handed Linux the head start that BSD had enjoyed up until that point, so you can't infer much about the reasons for Linux's early success over BSD through that fog. If Linux had been dealing with the same problem instead, BSD would almost certainly be in Linux's place right now.
  • jm4
  • ·
  • 17 hours ago
  • ·
  • [ - ]
Linux was dealing with SCO just a few years later. There was also a period where Microsoft was out to destroy Linux.
  • Gud
  • ·
  • 15 hours ago
  • ·
  • [ - ]
The difference is that Linux was well supported by the corporate world when the SCO/Microsoft lawsuit took place
I think that's less because of the license and more because people found patching gcc to be a big pain.
To be fair, GCC's design was motivated by the same thing as the license. They intentionally didn't modularize GCC so that it couldn't be used by non-free code.

> Anything that makes it easier to use GCC back ends without GCC front ends--or simply brings GCC a big step closer to a form that would make such usage easy--would endanger our leverage for causing new front ends to be free.

https://gcc.gnu.org/legacy-ml/gcc/2000-01/msg00572.html

Correct, it’s not the license.
Linux is OK. It’s a mess compared to BSD, but it’s OK. It’s the lazy man’s solution. It’s mainly for people who only want to “docker compose up” and walk away. The art of the OS has been lost. People think the OS is something to be abstracted away as much as possible and it’s evil and hard to secure. Shame.
I'd offer in counterargument that Linux is for getting things done, whereas BSD seems to be largely for people who view the OS itself as the hobby.

I have zero interest in tinkering with my operating system. I mostly want it to just get out of my way, which Linux does well 95% of the time.

I need to tinker less because there are no distro maintainers constantly changing stuff.

It did take a while to set up, but then it just runs. I don't view my OS as a hobby, but I do want full control over it and to be able to understand how it works. I don't want to have to trust a commercial party to act in my best interests, because they don't. The current mess that is Windows (full of ads and useless AI crap, mandatory telemetry, forced updates, constant upselling of cloud services, etc.) is a good example. FreeBSD doesn't do any of those things.

Most Linuxes don't either, but there's still a lot of corporate influence. I feel like Linux is becoming a plaything of big tech. You only need to see how many corporate suits are on the board of the Linux Foundation, and how many patches are pushed by corporate employees as part of their job. I don't want them to have that much influence over my OS. I don't believe in a win-win when it comes to corporate involvement in open source.

FreeBSD has a little bit of that (Netgate's completely botched WireGuard port is an example), but lessons were learned.

>no distro maintainers that constantly change stuff.

This is one of those things that non-Linux people think but isn't really true. I can think of two episodes in the last decade (systemd and Wayland) that constituted controversial changes, but frankly there are people who make "not using systemd" their entire identity and it's just so much cringe.

Even on a rolling release bleeding edge distro like Fedora things really don't change that much at all.

>I don't view my OS as a hobby, but I do want to have full control over it and to be able to understand how it works.

FreeBSD doesn't afford you any more or less control over how the system works than Linux.

That used to be the argument for Windows over Linux.

FreeBSD has always required far less tweaking or maintenance than Linux, though.

I have seen this particular idea come up a lot lately.

Personally I think Intel's early investment in Linux had a lot to do with it. They also sold a compiler and marketed to labs and such, which bought chips. So Linux compatibility meant a lot to decision makers.

AMD, the underdog, went more in on Linux compat than NVIDIA, which may have been a business decision.

I dunno, maybe the GPL effect was more a market share thing with developers than a copyleft thing.

Nota Bene: I do love copyleft and license all my own projects AGPL

The Linux vs BSD fight happened before NVIDIA vs AMD was a thing.
I feel the same, because it seems that the only desktop-ready OS under GPL today is GNU/Linux, and it feels too bloated nowadays (not to mention that Linux is effectively stuck under GPLv2). Something like FreeBSD feels much lighter and better while still being desktop-ready. Looks like the guys from Hyperbola think the same, and that's why they are doing HyperbolaBSD. Btw there's some progress in GNU Hurd, but they are still far from being desktop-ready.
I had to laugh at the progress in gnu hurd. I've been hearing that one since the 90s
They now provide at least somehow working x86_64 images. It’s of course funny for a project started in the 90s to get x86_64 support only in the 2020s, but it’s still progress in relative terms.
There needs to be a new rule in technical discussion communities that outlaws bland comments that just spew "too bloated" and "feels much lighter". It is completely useless fluff description text.
No, it’s just you having some strange prejudices about these words (probably driven by blind faith in some overhyped technologies), so go better overregulate your preferred echo chamber.
a-dub · 22 hours ago
freebsd didn't have the hardware support base that linux did, and suffered a huge delay in rearchitecture when x86 SMP hardware became widely available (only one cpu could be in the kernel at a time; that "giant lock", freebsd's equivalent of linux's BKL, was a major impediment in the early 00s). freebsd had better resource scheduling at the time and a beloved networking stack, but linux caught up with cgroups etc. i think linux was also just a trendy vanguard of sorts as the world learned of open source software by and of the internet.
It'll never happen. You can't distribute ZFS under GPL
Comes in binary form from Debian and Ubuntu. I can add it to any other distribution via DKMS. Same core ZFS code as BSD uses.
Honest question - if it comes in binary form, how can you know it's the same core ZFS code BSD uses?
Same as anything else installed as a binary package - you trust the people packaging/providing the binary. If you don't, build it yourself. The source is publicly available.
pabs3 · 18 hours ago
Or you build it yourself and verify you got the same checksum.

https://reproducible-builds.org/
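The build-and-compare step is easy to sketch in shell. The file names below are hypothetical stand-ins, and a real reproducible-builds check also requires pinning the exact toolchain and build environment so the rebuild comes out bit-for-bit identical:

```shell
# Toy sketch: check that a locally rebuilt artifact matches the shipped one.
# 'shipped.bin' and 'rebuilt.bin' are hypothetical stand-ins for a real module.
printf 'identical module bytes' > shipped.bin
printf 'identical module bytes' > rebuilt.bin

shipped_sum=$(sha256sum shipped.bin | cut -d' ' -f1)
rebuilt_sum=$(sha256sum rebuilt.bin | cut -d' ' -f1)

if [ "$shipped_sum" = "$rebuilt_sum" ]; then
    echo "checksums match: the binary corresponds to the public source"
else
    echo "checksums differ: investigate before trusting the binary"
fi
```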

pabs3 · 18 hours ago
Debian only distributes it in DKMS form, not binary form.
Agreed. I'd say it goes much deeper than that in the case of FreeBSD though. It's just an ideological thing that can't be changed.
Can you please elaborate on the story of FreeBSD vs Linux?
Love the Sun E10k reference !!

- ex Sun

The E10k reminds me of Bryan Cantrill's story about the motivation for dtrace. Sun engineers were working day and night, trying to debug what seemed to be a Solaris kernel networking bug, on a benchmarking cluster of E10k machines. I won't spoil the end, but it's great:

https://www.youtube.com/watch?v=wTVfAMRj-7E&t=2640s

Yeah if Sun had continued to exist, what wonderful things would we have now!

But Oracle destroys everything it touches.

I would still use it but NVIDIA abandoned it, so I can't. I wish they still had some support for modern CUDA.
Other than a few niche areas like Netflix and PlayStation, or even macOS, which some often like to put forth as examples of how great it is…

yeah.

The BSD/Illumos OSs are used quite frequently as the base OS for high-end commercial/enterprise network, SAN, NAS etc. solutions. They are chosen for their performance, stability and HA features.
Or that niche app called Whatsapp
f1shy · 10 hours ago
Also Reddit started as "some BSD boxes". The problem is, when a project scales up, at some point you will need "commodity" sysadmins, so it is easier to just go for Linux.

Also, as the project gets bigger, at some point somebody will come up with the idea to move to Linux.

Gud · 15 hours ago
If I recall correctly, they’ve moved to Linux
More of a Net/Open guy myself, but the Qotom network appliance I mentioned a few posts back runs FreeBSD. I use it as a wifi bridge to provide backhaul for my office's wired LAN over the house wifi. There are gadgets you can buy for this, but I like my solution running stock FreeBSD + some configuration.
As a small user I find it hard to find a use case where I'd want a BSD for some reason. I even installed GhostBSD in a VM to try it, but it seemed very similar to Linux, so I didn't understand: what's the upside?
ZFS and jails are two things FreeBSD does very well
ZFS on Linux and BSD share the same code now. Hope this helps.
Sure, but ZFS is much better integrated into FreeBSD. It supports ZFS on root with boot environments out of the box.

And when running a Samba server, it's helpful that FreeBSD supports NFSv4 ACLs when sitting between ZFS and SMB clients; on Linux, Samba has to hack around the lack of NFSv4 ACL support by stashing them in xattrs.

You can arguably get even better ZFS and SMB integration with an Illumos distribution, but for me FreeBSD hits the sweet spot between being nice to use and having the programs I need in its package library.

But on Linux you need to load external modules. Before upgrading or changing kernels you need to check if ZFS supports the new version. Especially bad in rolling distros.
>you need to check

This can be automated by whatever is updating your kernel.
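A toy sketch of the kind of check such automation performs. The version numbers here are invented for illustration; in real OpenZFS the supported kernel range is declared in the META file of the source tree (Linux-Minimum/Linux-Maximum):

```shell
# Hypothetical versions, for illustration only.
target_kernel="6.8"       # kernel the package manager wants to install
zfs_max_kernel="6.9"      # highest kernel the installed ZFS release supports

# sort -V orders version strings numerically; if the max-supported version
# sorts last, the target kernel is within the supported range.
newest=$(printf '%s\n%s\n' "$target_kernel" "$zfs_max_kernel" | sort -V | tail -n 1)
if [ "$newest" = "$zfs_max_kernel" ]; then
    echo "ZFS supports $target_kernel: safe to proceed"
else
    echo "ZFS does not yet support $target_kernel: hold the kernel upgrade"
fi
```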

Linux has btrfs and multiple containerization and security sandboxing options. ZFS and jails aren't Linux differentiators.
IME the integration between FreeBSD and ZFS just works better than Btrfs on Linux distros, and I've read far too many reports about data loss with Btrfs to trust it.

But I definitely believe that everything you can do on FreeBSD, you can also do on Linux. For me it's the complete package though that comes with FreeBSD, and everything being documented in the man pages and the handbook.

and pf.
ggm · 21 hours ago
A small thing, but the mechanistic approach to bundling packages into bigger meta state, is (in my personal opinion) better than the somewhat ad-hoc approach to both writing and including things in an apt/dpkg.

If the product is python, that's what it is. There is no python-additional-headers or python-dev or bundle-which-happens-to-be-python-but-how-would-you-know.

There is python, and there are meta-ports which explicitly 'call' the python port.

The most notable example being X11. Its sub-parts are all very rational. fonts are fonts. libs are libs. drm is drm. drivers are drivers.

(Yes, there is the port/pkg confusion. That's a bit annoying.)

You don't have to reinstall with every software upgrade. Reliability and long term uptime are the norm.
loeg · 20 hours ago
These statements could equally describe Linux, macOS, or even Windows.
loeg · 20 hours ago
1990s nostalgia.
> If someone wants hype or the latest shiny thing every month, they have Linux.

Just. Run. Debian.

FreeBSD has a very different package update policy from Debian: essentially a delayed rolling release. The only other system like that I can think of is openSUSE Slowroll.
And then install Postgres
A love letter to the last operating system that isn’t trying to gaslight you. FreeBSD really is the anti-hype choice: no mascot-as-a-service, no quarterly identity crisis, just a system that quietly works until the heat death of the universe.
Speaking of better vendor support, why doesn’t it support Apple Silicon yet? Obviously, Asahi has led the way on this and their m1n1 boot loader can be used out of the box. But OpenBSD has supported Apple Silicon for three years now.
a96 · 1 hour ago
FreeBSD has always been the non-portable one.
The why is simply: because nobody wants it enough to build it. Otherwise it would exist.
Why does it have to? Why does everything have to support everything? Why can't a project have a focus on servers and make that its "thing"?

Also it’s OSS — contribute that support if you’re so passionate about it.

> everything

Firstly, FreeBSD already supports x86 Mac Minis. Servers? M-series Minis and Studios are very good servers. Lastly, FreeBSD has an Apple Silicon port which has stalled.

https://wiki.freebsd.org/AppleSilicon

I'll ignore your last point.

The original, unedited version of the grandparent was bemoaning the lack of vendor support behind FreeBSD so the parent's comment made a lot more sense in-context.
Yeah, sorry for removing that part. Changed my mind just minutes after posting, because I really like FreeBSD and my critique sounded a bit too harsh.
Sigh. Yes. It’s the boring choice and therefore the better choice a lot of the time. Not all of the time, but most of the time.

Impatience and lost skills is why it’s not a mainstream player.

I know this is the noob perspective, but they should try (yes, I'm already aware of GhostBSD) to make getting into the desktop a little bit easier; it can be very hard to bootstrap anything and learn if you're new to it.
After you install the base system, install your favorite desktop by doing "pkg install <your_favorite_desktop>" and it's done.

What's so difficult?

"Before FreeBSD can render a graphical environment, it needs a kernel module to drive the graphics processor. Graphics drivers are a fast-moving, cross-platform target, which is why this is developed and distributed separately from the FreeBSD base system."

"To enable the driver, add the module to /etc/rc.conf file, by executing the following command: ..."

https://docs.freebsd.org/en/books/handbook/x11/

I get that this isn't brain surgery. But come on
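For what it's worth, the handbook steps quoted above boil down to a few commands. A hedged sketch (drm-kmod and i915kms are the usual choices for Intel graphics; AMD GPUs typically use amdgpu instead, and the desktop and display-manager package names may differ):

```shell
# Graphics driver, shipped separately from the base system
pkg install drm-kmod

# Load the KMS module at boot; sysrc edits /etc/rc.conf for you
sysrc kld_list+=i915kms

# Desktop environment plus a display manager, then enable it
pkg install kde sddm
sysrc sddm_enable=YES
```

This is FreeBSD-specific configuration and won't run elsewhere; it's a sketch of the handbook's flow, not a complete recipe.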

And, again, "pkg install <your_favorite_desktop>" done. Quit pulling blurbs out of thin air when you don't know how it works.
The truth is that FreeBSD doesn't want casual users, though.

The Linux (Ubuntu, etc) install experience leads to a usable desktop. Heck, the installer disc boots to a usable desktop.

Also no unsophisticated users even know the name of their favorite DE. Or what a DE is.

Requiring a text login and a shell command, even one as simple as "pkg install KDE" is a big ask for a casual user these days. Also, that command line will probably fail. :)

I write these things as a very big fan of FreeBSD! I think not catering to casual users keeps FreeBSD in a better technical place overall, but Linux is obviously much more popular. This carries risks too.

Actually pkg install kde is exactly what you should do. Just not in capitals.

But in FreeBSD 15 it will be part of the installer. However even an installer is too much to ask of today's mainstream users. I don't want freebsd to become mainstream though especially because what mainstream users want (everything decided on by a vendor) is completely contrary to what FreeBSD stands for and what I want.

Casual users become experienced users become contributors

I'm not saying Make Everything Easy. If there's real reasons not to have easy x11 onboarding, if FreeBSD really is intended to be an OS for experts (and I get that it may well be, for a variety of historical reasons), then fine

Linking directly to documentation is thin air?
That's where Linux fails too IMO. Both GNOME and KDE really suck in my opinion. Or perhaps suck is too strong a word. I find both to be hugely problematic.

That does not mean they do not work, mind you - GNOME succeeds in dumbing things down so that even 60-year-old grandmas could use it (until they misclick and are presented with 20 windows all put side by side). And KDE gives a lot of flexibility in tweaking it how you may want it (if we ignore Nate's donation widget). But it is still waaaaaaay too complicated and convoluted to use.

I am better off just describing my system in .yml files and then having ruby autogenerate any configuration value than struggling through annoying widgets to find some semi-random semi-new setting (or no such setting existing, as is the case in GNOME).

I'd wish we could liberate these DEs from upstream developers and their dictatorship. I mean, we can, e.g. patch out the code that shouldn't exist (like Nate's Robin Hood widget), but I mean on a global basis. We as users should be in full control of EVERYTHING - every widget, everything these widgets do, and everything they don't do right now but should. Like in evince, I hate that I can't have tabs. That annoys me. I am aware that libpapers changes this, but boy ... just try to discuss this with GNOMEy devs. That's just a waste of time. I want to decide on everything here - upstream must not be able to cripple my system or influence it in any way I don't approve of.

I love KDE. Especially because it gives me agency. I'm not stuck with the choices the developers made, like with GNOME, which is super opinionated. And I find things easy to find and configure.

It's probably not for a grandma but I don't care. It doesn't have to be. For me the more software is suitable for the mainstream, the less suitable it is to me.

Not sure what you mean by donation widget, I use KDE on FreeBSD as daily driver (and on the latest version) and I've never seen it. I donate monthly to KDE but it doesn't have any way of knowing that.

I do not like KDE, I'd rather use GNOME, but KDE is massively better at fractional scaling on 4k and hidpi monitors.
Hmm... I am sure your YAML config files would not please your grandma either. Anyway, if you don't like change, there are other DEs apart from those two that are more fitting. Try XFCE or MATE; they will look and behave the same years after setting them up.
> Culture matters too. One reason I stepped away from Linux was the noise, the debates that drowned out the joy of building.

No clue what he is babbling about. LFS/BLFS is active. FreeBSD doesn't have that. I am sorry, but Linux is the better tinker-toy. I understand this upsets the BSD folks, but it is simply how it is. Granted, systemd and the corporatification took a huge toll on the Linux ecosystem, and even now, with parts of it in ruins (KDE devs recently decreed that xorg will die and that they will aid in killing it off by forcing everyone onto wayland), it is still much more active as a tinker-toy. That's simply how it is.

I recall many years ago NetBSD on the mailing list pointed out that Linux now runs on more toasters than NetBSD. This is simply the power of tinkerification.

> Please keep FreeBSD the kind of place where thoughtful engineering is welcome without ego battles

K - for the three or four users worldwide.

> There’s also the practical side: keep the doors open with hardware vendors like Dell and HPE, so FreeBSD remains a first-class citizen.

Except that Linux supports more hardware. I am sorry FreeBSD people - there is reality. We can't offset and ignore it.

> My hope is simple: that you stay different. Not in the way that shouts for attention, but in the way that earns trust.

TempleOS also exists.

I think it is much more different than any of the BSDs.

> If someone wants hype or the latest shiny thing every month, they have Linux.

Right - and you don't have to go that route either. Imagine: there is choice on Linux. I can run Linux without systemd - there is no problem with that. I don't need GNOME or KDE devs begging for donations and killing xorg either. (Admittedly GTK and Qt seem to be the only really surviving oldschool desktop GUI toolkits, and GTK is really unusable nowadays.)

> the way the best of Unix always did, they should know they can find it here.

Yeah ok ... 500 out of 500 supercomputers running Linux ...

> And maybe, one day, someone will walk past a rack of servers, hear the steady, unhurried rhythm of a FreeBSD system still running

I used FreeBSD for a while until a certain event made me go back to Linux - my computer was shut off when I returned home. When I left, it was still turned on. It ran FreeBSD. This is of course anecdotal, but I never had that problem with Linux.

I think FreeBSD folks need to realise that Linux did some things better.

Gud · 15 hours ago
I use FreeBSD and Linux. I am not married to any operating system.

For some reason, every time FreeBSD is put in a positive light there are always a lot of Linux(-only) users who have to put FreeBSD down.

I don’t understand why. It would be interesting to understand this psychological phenomenon.

The same for me.

For 30 years I have been using both FreeBSD and Linux permanently, because they both have strengths and weaknesses.

I am using Linux on laptops and desktops, where I may need support for some hardware devices not supported by FreeBSD or I need software compatibility with certain applications that are not easily ported to FreeBSD.

I also use Linux on some computational servers where I need compatibility with software not available on FreeBSD, e.g. NVIDIA CUDA. (While CUDA is not available for FreeBSD, NVIDIA GPUs are still the right choice for FreeBSD computers when needing a graphic display, because NVIDIA provides drivers for FreeBSD, while AMD does not.)

I use FreeBSD on various servers with networking or storage functions, where I value most to have the highest reliability and the simplest administration.

I believe it's an effort to promote Linux, to get more market share.

Linux users aren't interested in seeing potential adopters go for BSD instead of joining their ranks.

On every forum, every discussion, there is at least one guy saying he runs that game on Linux, or that other OS is somehow inferior to Linux, or this problem would never happen on Linux...

It's all about marketing, if you will...

I think it is just trolls and fanboys who want to troll really.

There is fanboyism on every OS, pretty much like soccer fans or religious zealots.

> Yeah ok ... 500 out of 500 supercomputers running Linux ...

So what? Big whoop.