IIRC, around '99 I got sick of Mandrake and Red Hat RPM dependency hell and found a FreeBSD 3 CD in a Walnut Creek book. Ports and BSD packages were a revelation, to say nothing of the documentation, which still sets it apart from the haphazard Linux landscape.
The comment about using a good SERVER mobo like Supermicro is on point --- I managed many Supermicro FreeBSD colo rack servers for almost 15 years, and those boards worked well with it.
Currently I run FreeBSD on several home machines including old mac minis repurposed as media machines throughout the house.
They run Kodi plus the Linux build of Brave, and with that I can stream just about anything, including live sports.
Also OpenBSD for one firewall and PFSense (FreeBSD) for another.
I completely agree.
Supermicro mobos with server-grade components, combined with aggressive cooling fans/heat sinks, running FreeBSD in an AAA data center resulted in two prod servers having uptimes of over 3000 days. This included dozens of app/jails/ports updates (pretty much everything other than the kernel).
And it was all indeed on Supermicro server hardware.
And in parallel, while our routing kit was mostly Cisco, I put a transparent bridging firewall in front of the network running pfSense 1.2 or 1.3. It was one of those embedded boxes running a Via C3/Nehemiah, that had the Via Padlock crypto engine that pfSense supported. Its AES256 performance blew away our Xeons and crypto accelerator cards in our midrange Cisco ISRs - cards costing more than that C3 box. It had a failsafe Ethernet passthrough for when power went down and it ran FreeBSD. I've been using pfSense ever since, commercialisation / Netgate aside, force of habit.
And although for some things I lean towards OpenBSD today, FreeBSD delivers, and it has for nearly 20 years for me. And, as they say, it should for you, too.
Oof, that sounds scary. I’ve come to view high uptime as dangerous… it’s a sign you haven’t rebooted the thing enough to know what even happens on reboot (will everything come back up? Is the system currently relying on a process that only happens to be running because someone started it manually? Etc)
Servers need to be rebooted regularly in order to know that rebooting won’t break things, IMO.
I worked on systems that were allowed 8 hours of downtime per year -- but otherwise would have run forever unless a nuclear bomb went off or there was a power loss... Tandem. You could pull out CPUs while running.
So if we are talking about garbage windows servers sure. It's just a question of what is accepted by the customers/users.
My journey with FreeBSD began with version 4.5 or 4.6, running in VMware on Windows and using XDMCP for the desktop. It was super fast and ran at almost native speed. I tried Red Hat 9, and it was slow as a snail by comparison. For me, the choice was obvious. Later on I was running FreeBSD on my ThinkPad, and I still remember the days of coding on it using my professor's linear/non-linear optimisation library, sorting out wlan driver and firmware to use the library wifi, and compiling Mozilla on my way home while the laptop was in my backpack. My personal record: I never messed up a single FreeBSD install, even when I was completely drunk.
Even later, I needed to monitor the CPU and memory usage of our performance/latency critical code. The POSIX API worked out of the box on FreeBSD and Solaris exactly as documented. Linux? Nope. I had to resort to parsing /proc myself, and what a mess it was. The structure was inconsistent, and even within the same kernel minor version the behaviour could change. Sometimes a process's CPU time included all its threads, and sometimes it didn't.
To this day, I still tell people that FreeBSD (and the other BSDs) feels like a proper operating system, and GNU/Linux feels like a toy.
The "completely drunk" comment made me chuckle, too familiar... poor choices, but good times!
This is more about OpenBSD, but worth mentioning that nicm of tmux fame also worked with us in the same little office, in a strange little town.
AJG also made some contributions to Postgres, and wrote a beautiful, full-featured web editor for BIND DNS records, which, sadly, faded along with him and was eventually lost to time along with his domain, tcpd.net, that has since expired and was taken over.
I run some EVE Online services for friends. They have manual install steps for those of us not using containers. It took me half a day to get the stack going on FBSD, and that was mostly me making typos and mistakes. So pleased I was able to dodge the "docker compose up" trap.
But...
As a veteran admin, I am tired of reading through Dockerfiles to guess how to do a native setup. You can never suss out the intent from those files -- only make haphazard guesses.
It smells too much like "the code is the documentation".
I am fine that the manual install steps are hidden deep in the dungeons away from the casual users.
But please do not replace Posix compliance with Docker compliance.
Look at Immich for an unfortunate example. They have some nice high-level architecture documentation, but the "whys" of the Dockerfile are nowhere to be found. That makes it harder to contribute, as it caters to the Docker crowd only and leaves a lot of guesswork for the Posix crowd.
I've been using docker+compose for my dev projects for about the past 12 years. Very tough to beat the speed of development with multi-tier applications.
To me, Dockerfiles seem like the perfect amount of DSL, but still flexible, because you can literally run any command as a RUN line and produce anything you want for a layer. Dockerfiles seem to get it right. Maybe the 'anything' seems like a mis-feature, but if you use it well it's a game changer.
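A minimal example of that "perfect amount of DSL" (the base image, file names, and commands here are illustrative, not from any particular project):

```dockerfile
# each instruction below produces one image layer
FROM alpine:3.20
# RUN can be any shell command at all
RUN apk add --no-cache python3
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```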
Dockerfiles are also an excellent way to distribute FOSS to people who, unlike you or I, cannot really manage a system, install software, etc. without eventually making a mess or getting lost (i.e. jr developers?).
Are there supply chain risks? Sure -- like many package systems. I build my important images from scratch all the time just to mitigate this. There's also Podman with Containerfiles if you want something more FOSS-friendly but less polished.
All that said, I generally containerize production workloads, but not with Docker. If a dev project is ready for primetime, I port it to Kubernetes. It used to be BSD jails.
If you run bare-metal, and instructions to build a project say "you need to install libfoo-dev, libbar-dev, libbaz-dev", you're still sourcing it from your known supply chain, with its known lifecycles and processes. If there's a CVE in libbaz, you'll likely get the patch and news from the same mailing lists you got your kernel and Apache updates from.
Conversely, if you pull in a ready-made Docker container, it might be running an entire Alpine or Ubuntu distribution atop your preferred Debian or FreeBSD. Any process you had to keep those packages up to date and monitor vulnerabilities now has to be extended to cover additional distributions.
Posix is the standard.
Docker is a tool on top of that layer. Absolutely nothing wrong with it!
But you need to document towards the lower layers. What libraries are used and how they're interconnected.
Posix gives you that common ground.
I will never ask people not to supply Dockerfiles. But to me it feels the same as if a project released only an apt package and nothing else.
The manual steps need to be documented. Not for regular users but for those porting to other systems.
I do not like black boxes.
You /should/ be scanning your containers just like you /should/ be scanning the rest of your platform surface.
FreeBSD always has been, and always will be, my favorite OS.
It is so much more coherent and considered, as the post author points out. It is cohesive; whole.
That haphazard nature is probably part of the reason for its success, since it allowed many alternative ways of doing things to be experimented with in parallel.
I prefer FreeBSD.
Two clear problems with the init system (https://en.wikipedia.org/wiki/Init) are
- it doesn’t handle parallel startup of services (sysadmins can tweak their init scripts to speed up booting, but init doesn’t provide any assistance)
- it does not work in a world where devices get attached to and detached from computers all the time (think of USB and Bluetooth devices, WiFi networks).
The second problem was evolutionary solved in init systems by having multiple daemons doing, basically, the same thing: listen for device attachments/detachments, and handling them. Unifying that in a single daemon, IMO, is a good thing. If you accept that, making that single daemon the init process makes sense, too, as it will give you a solution for the first problem.
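For concreteness, here is roughly what that buys you in systemd's case: dependencies are declared in unit files, and anything not ordered against each other starts in parallel (the daemon and unit names below are made up for illustration):

```ini
[Unit]
Description=Example daemon
# ordering + dependency: wait for the network, run in parallel with everything else
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/exampled
Restart=on-failure

[Install]
WantedBy=multi-user.target
```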
To make things worse, the opinionated nature of systemd's founder (Lennart Poettering) has meant many a sysadmin has had to fight with it in real-world usage (eg systemd-timesyncd's SNTP client not handling drift very well or systemd-networkd not handling real world DHCP fields). His responses "Don't use a computer with a clock that drifts" or "we're not supporting a non-standard field that the majority of DHCP servers use" just don't jive in the real world. The result was going to be ugly. It's not surprising that most distros ended up bundling chrony, etc.
But IPv6 is not the solution to Ipv4's issues at all.
IPv6 is something completely different, justified post-facto with EMOTIONAL arguments, i.e. "You are stealing the last IPv4 address from the children!"
- Dual stack -- unnecessary and bloated
- Performance = 4x worse or more
- No NAT or private networks -- not in the same sense. People love to hate on NAT, but I do not want my toaster on the internet with a unique hardware serial number.
- Hardware tracking built into the protocol -- the mitigations offered are BS.
- Addresses are a cognitive block
- Forces people to use DNS (central), which acts as a censorship choke point.
All we needed was an extra prefix to select WHICH address space -- i.e. '0' is the old internet in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc.
I actually wrote a code project that implements this network as an overlay -- but it's not ready to share yet. Works though.
If I were to imagine myself in the room deciding on the IPv6 requirements, I expect the key one was 'track every person and every device everywhere all the time', because if you are just trying to expand the address space then IPv6 is way way way overkill -- it's overkill even for future proofing the next 1000 years of all that privacy invading.
That is what we have in ipv6. What you write sounds good/easy on paper, but when you look at how networks are really implemented you realize it is impossible to do that. Network packets have to obey the laws of bits and bytes, and there isn't any place to put that extra digit in ipv4: no matter what, you end up creating a new protocol, i.e. ipv6. They did write a standard for how to send ipv4 addresses in ipv6, but anyone who doesn't have ipv6 themselves can't use that, and so we must dual stack until everyone transitions.
My prototype/thought experiment is called IPv40 a 40bit extension to IPv4.
IPv40 addresses are carried over Legacy networks using the IPv4 Options Field (Type 35)
Legacy routers ignore Option 35 and route based on the 32-bit destination (effectively forcing traffic to "Space 0"). IPv40-aware routers parse Option 35 to switch Universes.
This works right now but as a software overlay not in hardware.
Just my programming/thought experiment which was pretty fun.
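As a sketch of the thought experiment above (to be clear: option type 35 and this layout are the poster's invention, not any real standard), the extra octet could be carried in the IPv4 options field something like this:

```python
import struct

def pack_ipv40_option(space: int) -> bytes:
    """Encode the hypothetical 'IPv40' extension octet as IPv4 option 35.

    Layout (invented for illustration): type=35, length=3, one data octet,
    padded with zero bytes to a 32-bit boundary, since the IPv4 header
    length field counts 4-byte words.
    """
    if not 0 <= space <= 255:
        raise ValueError("space octet must fit in one byte")
    opt = struct.pack("!BBB", 35, 3, space)
    pad = b"\x00" * (-len(opt) % 4)  # pad options to a 4-byte multiple
    return opt + pad
```

Routers that skip unknown options would then fall back to routing on the plain 32-bit destination -- the "Space 0" behaviour described above.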
When solutions are pushed top down like IPv6, my spider sense tingles -- what problem is it solving? The answer is NOT 'to address the address space limitations of IPv4'; that is the marketing, and if you challenge it you will be met with ad hominem attacks and emotional manipulations.
Or, it’s in the most-significant place, meaning every new ipv40 IP is in a block that will be a black hole to any old routers, or they just forward it to the (wrong) address that you get from dropping the first octet.
Not to mention it’s still not software-compatible (it doesn’t fit in 32 bits, all system calls would have to change, etc.)
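The 32-bit point is concrete: every IPv4 API hands around exactly four bytes, so a fifth octet has nowhere to go. A quick illustration (not from the thread):

```python
import socket
import struct

# inet_aton always yields exactly 4 bytes -- the wire format of an IPv4 address
addr = socket.inet_aton("192.0.2.1")
assert len(addr) == 4

# a 40-bit value simply does not fit in the u32 used by sockaddr_in and friends
try:
    struct.pack("!I", 1 << 39)
except struct.error:
    print("a 40-bit address does not fit in 32 bits")
```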
That all seems significantly worse than IPv6 which already works just fine today.
Hardware is important - fast routers can't do work in the CPU (and it was even worse in the mid 90's when this started), they need special hardware assistance.
Just like today, it is likely that most clients will support your new address, but ISPs won't route them for you.
I still shake my head at IPV6's committee driven development, though. My god, the original RFCs had IPSEC support as mandatory and the auto-configuration had no support for added fields (DNS servers, etc). It's like the committee was only made up of network engineers. The whole SLAAC vs DHCP6 drama was painful to see play out.
That being said, most modern IPv6 implementations no longer derive the link-local portion from the hardware MAC addresses (and even then, many modern devices such as phones randomize their hardware addresses for wifi/bluetooth to prevent tracking). So the privacy portions aren't as much of a concern anymore. Javascript fingerprinting is far more of an issue there.
So true.
> That being said, most modern IPv6 implementations no longer derive the link-local portion from the hardware MAC addresses (and even then, many modern devices such as phones randomize their hardware addresses for wifi/bluetooth to prevent tracking). So the privacy portions aren't as much of a concern anymore. Javascript fingerprinting is far more of an issue there
JS Fingerprinting is a huge issue.
Honestly if IPv6 was just for the internet of things I'd ignore it. Since it's pushed on to every machine and you are essentially forced to use it -- with no direct benefit to the end user -- I have a big problem with it.
So it's not strictly needed for YOU, but it solves some problems that are not a problem for YOU, and it also happens to expand the address space. I do not think the 'fixes' to IPv6 do enough to address my privacy concerns, particularly with a well-resourced adversary. Seems like they just raised the bar a little. Why even bother? Tell me why I must use it without resorting to 'you will be unable to access IPv6 hosted services!' or 'think of the children!?' -- both emotional manipulations.
You probably don't see it directly, but IPv4 IP addresses are getting expensive - AWS recently started to charge for their use. Cloud providers are sucking them up. If you're in the developed world, you may not see it, but many ISPs, especially in Asia and Africa, are relying on multiple levels of NAT to serve customers - you often literally can't connect to home if you need or want to. It also breaks some protocols in ways you can't get around depending on how said ISPs deal with NAT (eg you pretty much can't use IPSEC VPNs and some other protocols when you're getting NAT'd 2+ times; BitTorrent had issues in this environment, too). Because ISPs doing NAT requires state-tracking, this can cause performance issues in some cases. Some ISPs also use this as an excuse to force you to use their DNS infra that they can then sell onwards (though this can now be mitigated by DNS over HTTPS).
There are some benefits here, though. CGNAT means my phone isn't exposed directly to the big bad internet and I won't be bankrupted by a DDOS attack, but there are other, better ways to deal with that.
Again, I do get where you're coming from. But we do need to move on from IPv4; IPv6 is the only real alternative, warts and all.
I chalk up early Linux's initial success to the license. It's the first real decision you have to make once you start putting your code out there.
1) Linux's popularity has enlarged the pool of users interested in Unix-like operating systems. Some proportion of users familiar with Unix genuinely like FreeBSD and the unique features it offers.
2) The rise of docker and the implosion of VMWare has driven an increase of interest in FreeBSD Jails and the Bhyve hypervisor.
3) Running a homelab is a popular hobby. ZFS is popular for RAID, and pf is popular for networking.
4) Podman being brought to FreeBSD: (https://freebsdfoundation.org/blog/oci-containers-on-freebsd...).
5) Dell, AMD, Framework, and the FreeBSD foundation committing $750,000 to making FreeBSD easier to use last year: (https://freebsdfoundation.org/blog/why-laptop-support-why-no...).
6) Apple announcing that they're bringing the Swift language to FreeBSD this year.
As I've aged, what I've come to value most in software stacks is composability. I do not know if [Free]BSD restores that, but Linux feels like it has grown more complicated and less composable. I'm using this term loosely, but I'm mostly thinking of how one reasons about the way the system works. I want to work in a world where each tool on the OS's bench has a single straightforward man page, not swiss army knives where the authors/maintainers just kept throwing more "it can do this too" in to attract community.
I would actually be interested in running it in some production environments but it seems like that is pitted against the common deploy scenarios that involve Docker and while there is work on bringing runc to FreeBSD it is alpha stage at best currently.
Still, if you just want an ssh server, a file server, a mail server, it is a great OS with sane defaults and a predictable upgrade schedule.
Jails and BHYVE vms are excellent -- but I use Docker every day and if I could use BSD as my docker host I would.
Good thing my docker servers are all built with terraform, so I do not have to touch them.
FreeBSD is largely free of those. And it leaves all the agency to the operator, rather than the distro forcing stuff down (except arch, but I don't like the community there)
I'm not sure that that's the win that you think it is. Linux 10 to 20 years ago was pretty terrible, at least on desktops.
Everyone hates on systemd, but honestly I really think that the complaints are extremely overblown. I've been using systemd based distros since around ~2012, and while I've had many issues with Linux in that time, I can't really say that any of them were caused by systemd. systemd is easy to use, journalctl is nice for looking at logs, and honestly most of the complaints I see about it boil down to "well what if...", what-if's that simply hasn't happened yet.
FreeBSD is cool, but when I run it I do sometimes kind of miss systemd, simply because systemd is easy. I know there was some interest in launchd in the FreeBSD world but I don't know how far that actually got or if it got any traction, but I really wish it would.
And I don't want to go into all of the time spent getting systemd unit files correct. There is a very active community suggesting things you can add, which then of course breaks your release for users in unexpected ways. An enormous waste of time.
Looking back on the time I spent in systemd land, I don't miss it at all. My system always felt really opaque, because the mountain of understanding systemd seemed insurmountable. I had to remember so much, all the different levers required to drive the million things systemd orchestrated... and for very little effect. I really prefer transparency in my system, I don't want abstraction layers that I have no purpose for. I don't take it as a coincidence at all that since I moved away from systemd distributions, my system has become quite a bit more reliable. When I got my Steamdeck, the first systemd setup I've used in years, one of the first things I noticed is that the jank I used to experience has showed its face once again. It might not be directly tied to poetteringware, but it's very possible that this is a simple 2nd or 3rd order effect from having a more complex system.
I am hardly a super genius and I really didn’t find systemd very hard at all to do most of the stuff I wanted. Everyone complains about it being complicated but an idiot like me has been able to figure out how to make my own services and timers and set the order of boot priorities and all that fun stuff. I really think people are exaggerating about the difficulty of it.
None of these issues are "difficult" and perhaps that is why you think people are "exaggerating" and engaging in bad faith. I would challenge you on this and suggest you haven't seriously interrogated the idea that the standpoint against systemd has a firm basis in reality. Have you ever asked the question "Why?" and sought to produce an answer that frames the position in a reasonable light? Until you find that foundation, you won't understand the position.
For all its usability issues, Linux 10 to 20 years ago had advantages that, for a certain kind of user, were worth the cost. Frankly Linux on the desktop today is the worst of all worlds - it doesn't have the ease-of-use or compatibility of Windows or OSX, but it doesn't have the control and consistency/reliability of BSD either.
- find Zoom in the package manager (can't)
- find zoom-client in the package manager (can, but it appears to be authored by some person and not Zoom Inc)
- go to the Zoom website and download a .deb and then run a command
This is fine for me, but let's not pretend that a regular user wanting to install something as basic as Zoom is going to have an easy time of it.
[0] https://support.zoom.com/hc/en/article?id=zm_kb&sysparm_arti...
I tried using FreeBSD for two different projects (NAS and router) and it turned out to be unsuitable for both, for each one switching to Linux solved the problem. Despite having solved my problems, the FreeBSD faithful seemed to think that using FreeBSD in itself was supposed to be the goal, not to solve the task at hand.
From the BSDs, I think only OpenBSD has a really unique selling point with its focus on security. People ask "why pick FreeBSD rather than Linux" and most will not find compelling arguments in favour of FreeBSD there.
Small, well integrated base system, with excellent documentation. Jails, ZFS, pf, bhyve, and DTrace are very well integrated with each other, which differs from Linux, where sure, there's docker, btrfs, iptables, bpftrace, and several different hypervisors to choose from, but they all come from different sources and so they don't play together as neatly.
The ports tree is very nice for when you need to build things with custom options.
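The usual dance, for anyone who hasn't tried it (the port chosen here is just an example):

```sh
# pick build options in a curses menu, then build and install
cd /usr/ports/www/nginx
make config
make install clean
```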
The system is simple and easy to understand if you're a seasoned unix-like user. Linux distros keep changing, and I don't have the time to keep up. I have more than two decades of experience daily driving Linux at this point, and about three years total daily driving FreeBSD. And yet, the last time I had a distro install shit itself (Pop!_OS), I had no idea how to fix it, due to the Rube Goldberg machine of systemd, dbus, polkit, Wayland AND X, etc. etc. that sits underneath the easy-to-use GUI (which was not working). On boot I was dropped into a root shell with some confusing systemd error message. The boot log was full of crazy messages from daemons I hadn't even heard of before. I was completely lost. On modern Linux distros, my significant experience is effectively useless. On FreeBSD, it remains useful.
Second, when it comes to OpenBSD, I don't actually agree that security is its main selling point. For me, the main selling point of OpenBSD is as a batteries included server/router OS, again extremely well documented in manpages, and it has all the basic network daemons installed, you just enable them. They have very simple configuration files where often all you need is a single digit number of lines, and the config files have their own manpages explaining everything. For use cases like "I just want an HTTP server to serve some static content", "I just want a router with dhcpd and a firewall", etc, OpenBSD is golden.
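As a taste: a complete static-site configuration for OpenBSD's httpd can genuinely be this small (the domain and path are placeholders; httpd.conf(5) documents every directive):

```
server "example.org" {
	listen on * port 80
	root "/htdocs/example.org"
}
```

Then enable and start it with `rcctl enable httpd` and `rcctl start httpd`.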
I used (and pushed) it everywhere I could and first encountered on Solaris before FBSD. Even had it on my Mac workstation almost 18 years ago (unsupported) -- aside I will never forgive that asshole Larry Ellison for killing OpenSolaris. NEVER.
Systemd is the worst PoS ever written. RCs are effective and elegant. Systemd is reason enough to avoid Linux, but I still hold my nose and use it because I have to.
Something I love with systemd is how I can get very useful stats about a running process, e.g. uptime, cumulated disk & network IOs, current & peak mem usage, etc.
Also the process management (e.g. restart rules & dependency chain) is pretty nice as well.
Is that doable with RC (or other BSD-specific tooling) as well?
In terms of uptime or IO and stuff, those metrics are already available, be that via SNMP or other means. Say you start nginx in systemd: which network and disk usage does it report? Just the main process, or all its forks? Same problem in RC.
But that is part of the point. Why in the ever-loving existence should an INIT system provide stats like disk usage? That is NOT what an init system is for.
If you need memory usage or IO usage or uptime, there are so many other tools already integrated into the system that the init system doesn't need to bother.
Init systems should only care about starting, stopping and restarting services. Period. The moment they do more than that, they failed at their core job.
This might have come across stronger than meant, but it still holds true.
BSDs are about "keep it simple, keep it single purpose" to a "I can live with that degree". What you get though is outstanding documentation and every component is easily understandable. Prime examples are OpenBSD/FreeBSD PF. That firewall config is just easy to grok, easy to read up on and does 99.999% of what you ever need out of a firewall.
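A generic sketch of a small pf.conf (not from the post) to show how readable the syntax is:

```
set skip on lo
block in all                 # default deny inbound
pass out all keep state      # allow and state-track outbound
pass in on egress proto tcp from any to any port { 22 80 443 }
```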
Well, the main process and its whole hierarchy, that's what you would expect of an init system monitoring its services, right? And what's nice with systemd is that I can get that from a simple `systemctl status my-service` – of course I could deploy a whole observability stack, but better if I can avoid it.
But there is no need to be defensive; if RC can do that, nice, and if it can't, then, well, too bad.
> there are so many other tools already integrated into the system that the init system doesn't need to bother.
That's what I'd love to hear about, what are the equivalent in the BSDs world.
Ditch the LLMs (not insinuating that you use them, but just in case), try to use the Handbooks and the man pages.
If you ever feel the need that you have so many interdependent services that you need something more complex than RC, then you might have an actual architectural problem to be honest.
Bang on.
I love the simplicity of RC scripts. Easy to understand, easy to debug, it just fucking works.
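For comparison, here is what an entire FreeBSD rc.d script for a hypothetical daemon can look like (`exampled` is made up; the boilerplate is described in rc.subr(8)):

```sh
#!/bin/sh

# PROVIDE: exampled
# REQUIRE: NETWORKING

. /etc/rc.subr

name=exampled
rcvar=exampled_enable
command="/usr/local/sbin/exampled"

load_rc_config $name
run_rc_command "$1"
```

Enable it with `exampled_enable="YES"` in /etc/rc.conf, and `service exampled start` just works.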
Simplicity is king, because it's understandable. A behemoth like systemd feels like it requires a PhD.
Systemd also runs 100% against the Unix/Linux philosophy of composability and single purpose.
You’re handwaving away something that is pretty important. You can say that having 500 services is its own problem but it’s also a reality, even on desktop operating systems.
Linux is "just" the kernel and every distro invites new solutions to perceived core problems whereas the BSDs have a whole base system that comes from one source, reducing the chance of a systemd popping up there. Both approaches have their ups and downs.
In reality systemd is 69 different binaries (only one of which runs as pid 1), all developed under the same project, designed to work together.
They’re designed to work together but as far as I am aware there’s no reason you couldn’t replace individual binaries with different ones, though admittedly I have never done that.
This is especially true compared to how beautifully well and consistently the BSDs tend to document their init and configuration systems. Or Mac OS, again—launchd is still way easier to use and far more of a "fire and forget" system without adding complicated interfaces for unrelated stuff like network interfaces and logging. But that has always been true as well.
There's pretty much nothing I can't do on FreeBSD that I would get with one Linux or another. Not much of a gamer so maybe that factors in..
Can you give that a shot for me on Linux? Could you spin up an Ubuntu 14.04 VM and do a full system update to 24.04 without problems? Let me know how you go.
I once needed help with a userland utility and the handbook answered the question directly. More impressive was the conversation I had with a kernel developer, who also maintains the userland tools — not because they chose to, but because the architecture dictates that the whole system is maintained as a whole.
Can you say the same for Linux? You literally cannot. Only Arch and RedHat (if you can get past the paywall) have anything that comes close to the FreeBSD Handbook.
FreeBSD has a lot going for it. It just sits there and works forever. Linux can do the same, if you maintain it. You barely need to maintain a FreeBSD system outside of updating packages.
Most people who use containers a lot won’t find a home in FreeBSD, and that’s fine. I hope containers never come to the BSD family. Most public images are gross and massive security concerns.
But then, most people who use FreeBSD know you don’t need containers to run multiple software stacks on the same OS, regardless of needing multiple runtimes or library versions. This is a lost art because today you just go “docker compose up” and walk away because everything is taken care of for you… right? Guys? Everything is secure now, right?
The command you most likely used is freebsd-update[0]. There are other ways to update FreeBSD versions, but this is a well documented and commonly used one.
> I don’t recall having to reboot — might have needed to.
Updating across major versions requires a reboot. Nothing wrong with that, just clarifying is all.
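For reference, the documented workflow looks like this (the release number is only an example):

```sh
# patch the currently installed release
freebsd-update fetch install

# jump to a new release: fetch, install, reboot, then finish the userland
freebsd-update -r 14.3-RELEASE upgrade
freebsd-update install
shutdown -r now
# after the reboot:
freebsd-update install
```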
> Most people who use containers a lot won’t find a home in FreeBSD, and that’s fine. I hope containers never come to the BSD family.
Strictly speaking, Linux containers are not needed in FreeBSD as jails provide similar functionality (better IMHO, but I am very biased towards FreeBSD). My preferred way to manage jails is with ezjail[1] FWIW.
> But then, most people who use FreeBSD know you don’t need containers to run multiple software stacks on the same OS, regardless of needing multiple runtimes or library versions.
I completely agree!
0 - https://docs.freebsd.org/en/books/handbook/cutting-edge/
If anything is mainstream, it’s BSD, because OS X is BSD.
I tried OpenBSD to set up a firewall system and fell in love. Everything just made more sense and felt more cohesive. PF rules syntax was just so much easier to work with and more flexible. I loved the ports system and the emphasis on code correctness and security. The man pages were a revelation! I could find everything I needed in the command line.
I tried all the BSDs, and each have their own strengths and weaknesses. FreeBSD had the most ports and seemed to also have good hardware support, NetBSD had the most platform support, DragonflyBSD was focused on parallel computing, etc. They all borrow and learn from each other.
BSDs are great and I heartily recommend people give them a whirl. This article in The Register is also worth a read:
https://www.theregister.com/2024/10/08/switching_from_linux_...
Compare this to RedHat: yes, a paid subscription is expensive, but RedHat backports security fixes into the original code, so open source package updates don’t break your application, and critical CVEs are still addressed.
Microsoft, for all its faults, provides remarkable stability by supporting backward compatibility to a sometimes ridiculous extent.
Is FreeBSD amazing, stable, and an I/O workhorse? Absolutely: just ask Netflix. But is it a good choice for general-purpose, application-focused (as opposed to infrastructure-focused) large deployments? Hm, no?
Where are you getting 3 months from? It's usually 9 months and occasionally 12 months.
Also, major versions are supported for 4 years and unless you're messing with kernel APIs nothing should break. (Testing is always good! But going from 14.3 to 14.4 is not a matter of needing lots of extra development work.)
https://www.freebsd.org/security/#:~:text=on%20production%20...
Recent point releases:
14.3 (June 10, 2025)
14.2 (December 3, 2024)
14.1 (June 4, 2024)
14.0 (November 20, 2023)
13.4 (September 17, 2024)
>> Also, major versions are supported for 4 years and unless you're messing with kernel APIs nothing should break.
Well, things may not break but your system may be open to published vulnerabilities like these:
https://bsdsec.net/articles/freebsd-security-advisory-freebs...
For keeping up to date with vulnerability fixes for packages/ports (which are far more frequent) the "easy" path is to use the last FreeBSD point release.
I think that's a big misunderstanding coming from other systems. Minor system updates are the kind of updates that a lot of other systems would pull in silently, while FreeBSD's major releases are a lot more like OpenBSD's releases (where minor and major version numbers don't make a difference).
Minor in FreeBSD means that stuff isn't supposed to break. It's a lot more like "Patch Level". I always want to mention Windows here for comparison, but keep thinking about how much Windows Updates break things and did so for a long time (Service Packs, etc.).
Maybe going about it from the other side makes more sense: FreeBSD got a lot of shit for not changing various default configurations for compatibility reasons - even across major versions. These are default configurations, so things where the diff is a config file change. I think they are improving this, but they really do care about their compatibility, simply because the use case of FreeBSD is in that area.
This is in contrast to e.g. OpenBSD, where quite a few people run -current, simply because it's stable enough and they want to use the latest stuff. OpenBSD only supports the last release (so essentially release + 6 months), but even there things do not usually break beyond having to recompile something. All the BSDs maintain their ports/packages collections and want stuff to run, and OpenBSD is used a lot more in an "eating your own dogfood" style - you can see this in the existence of an OpenBSD gaming community, even though that OS doesn't "even" support Wine.
The minor point releases are close to a year in support. And that is only talking base system. Packages and ports you can also easily support yourself with poudriere and others.
As for backwards compatibility: FreeBSD has a stable backwards compatible ABI. That is why you can run a 11.0 jail on a 15.0 host. With zero problems.
Other way around is what doesn't work. You can't run a 15.0 jail on a 11.0 host for example. But backwards compatibility is definitely given.
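As a rough sketch of what that compatibility buys you (the release number, URL layout, jail name, and address here are illustrative, not prescriptive): an older release's userland can be unpacked into a directory and started as a jail on a newer host, with the kernel's COMPAT shims running the old binaries as-is.

```shell
# Fetch the old release's base userland and unpack it into a jail root
fetch https://download.freebsd.org/releases/amd64/amd64/12.4-RELEASE/base.txz
mkdir -p /jails/legacy
tar -xf base.txz -C /jails/legacy

# Start it on the newer host; old binaries run via the kernel's COMPAT support
jail -c name=legacy path=/jails/legacy ip4.addr=192.0.2.10 command=/bin/sh
```

Older releases eventually move to the archive mirrors, so the exact download path may differ.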
How much support do you plan on getting? The old releases don't really turn into pumpkins. Yes, every two or three major releases, they end up with a minor release that adds something to libc where binary packages from X.2 won't run on X.1 or X.0. But this isn't usually a huge deal for servers if you follow this plan:
Use FreeBSD as your stable base, but build your own binaries for your main service / language runtimes. If you build once and distribute binaries, keep your build machine / build root on the oldest minor revision you have in your fleet. When you install a new system, use an OS version that's in support and install any FreeBSD built binary packages then.
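The "build once on the oldest minor revision" part of this plan maps naturally onto poudriere. A sketch, where the jail name, release version, and package-list path are placeholders:

```shell
# One-time setup: a build jail pinned to the oldest release in the fleet
poudriere jail -c -j builder -v 13.2-RELEASE
poudriere ports -c -p default

# Rebuild only the packages your servers actually need
poudriere bulk -j builder -p default -f /usr/local/etc/poudriere.d/pkglist
```

Packages built against the oldest minor will then install cleanly on every newer machine in the fleet.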
You do have to be prepared to review updates to confirm if they need you to take action (many to most won't if you are careful about what is enabled), backport fixes, build packages yourself, or upgrade in a hurry when necessary, but you don't often actually need to.
I don't think this strategy works for a desktop deployment; there's too many moving pieces. But it works well for a server. Most of my FreeBSD servers for work got installed and never needed an OS upgrade until they were replaced by better hardware. I did have an upgrade process, and I did use it sometimes: there were a couple kernel bugs that needed fixes, and sometimes new kernels would have much better performance so it was foolish to leave things as-is. And a couple bugs in the packages we installed; usually those didn't need an OS upgrade too, but sometimes it was easier to upgrade the handful of old servers rather than fight everything; choosing battles is important.
Or you can go like Netflix and just run as close to -CURRENT as you can.
The point is that for any system that has a publicly facing (internet) part you will have to keep up to date with known vulnerabilities as published in CVEs. Not doing so makes you a prime target to security breaches.
The FreeBSD maintainers do modify FreeBSD to address the latest known vulnerabilities.... but you will have to accept the new release every 3 months.
Additionally, those releases do not only contain FreeBSD changes but also changes to all the third-party open source packages that are part of the distribution. Every package is maintained by different individuals or groups, and often they make changes that alter how their software works. Often these are "breaking" changes, i.e. you have to update your application code for it to stay compatible.
No they don't. Only major releases do, and those come once every 2 years or so. And the old one stays supported until the release after that, so there are always two major releases in support. That gives you about 4 years.
Sure, you have to be aware of them, but for something like this [1], if you don't use SO_REUSEPORT_LB, you don't have to take any further action.
The defect is likely in other FreeBSD releases that are no longer supported, but still, if you don't use SO_REUSEPORT_LB, you don't have to update.
If you do use the feature, then for unsupported releases, you could backport the fix, or update to a supported version. And you might mitigate by disabling the feature temporarily, depending on how much of a hit not using it is for your use case. Like I said, you have to be prepared for that.
You can also do partial updates, like take a new kernel, without touching the userland; or take the kernel and userland without taking any package/ports updates.
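For the kernel-without-userland case, building from source is the straightforward route. A sketch, assuming /usr/src is checked out at the revision you want:

```shell
cd /usr/src
make -j"$(sysctl -n hw.ncpu)" buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
shutdown -r now   # userland and packages stay untouched
```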
Some security advisories cover base userland or ports/packages... we can go through an example one of those and see what decision criteria would be for those, too.
[1] https://www.freebsd.org/security/advisories/FreeBSD-SA-25:09...
I think point releases "don't count". Point releases means you run freebsd-update, restart and are done.
And major releases tend to be trivial too. You run freebsd-update, follow the instructions it prints, then run `pkg upgrade`.
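For the record, the routine being described is roughly this (the release number is an example):

```shell
# Point release / security patches within a release:
freebsd-update fetch install
shutdown -r now

# Minor or major release upgrade:
freebsd-update -r 14.3-RELEASE upgrade
freebsd-update install    # stages the new kernel
shutdown -r now
freebsd-update install    # installs the new userland after reboot
pkg upgrade               # refresh packages against the new release
```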
Been doing that for production database clusters (Postgres) for hundreds of thousands of users for over a decade now and even longer in other settings.
Sure you do your planning and testing, but you better do that for your production DB. ;)
This is a thousands-of-queries-per-second setup, including a smaller portion of longer queries (GIS using PostGIS).
That said: Backwards compatibility is something that is frequently misunderstood in FreeBSD. Eg. the FreeBSD kernel has those COMPAT_$MAJORVERSION in there by default for compatibility. So you usually end up being fine where it matters.
But also keep in mind that you usually have a really really long time to move between major releases - the time between a new major release and the last minor release losing support.
And to come back to the Postgres setup: I could do this without doing the OS and the DB (+PostGIS) upgrades at once, because my build server builds exactly the same package versions for both OS releases. No weird "I upgrade the kernel, the OS, the compiler and everything at once". I actually moved from FreeBSD 13 to 14 and PG from 14 to 18 - again with PostGIS, which tends to make this really messy on many systems - without any issues whatsoever. Just using pg_upgrade and keeping the old version's packages in a temporary directory.
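The pg_upgrade step itself is a single invocation once both sets of binaries are on disk. A sketch with made-up paths (the old binaries parked in a temporary directory, the new ones installed via pkg):

```shell
pg_upgrade \
  -b /tmp/pg14/bin           -B /usr/local/bin \
  -d /var/db/postgres/data14 -D /var/db/postgres/data18 \
  -U postgres
```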
This is just one anecdote, but it's a real life production setup with many paying customers.
I also have experience with RedHat, but for RedHat the long term support always ends up being "I hope I don't work here anymore when we eventually do have to upgrade".
But keep in mind we talk about years for something that on FreeBSD tends to be really trivial compared to RedHat which while supporting old stuff for very long does mean a lot of moving parts, because the applications you run are a lot more tied to your release.
On FreeBSD on your old major release you totally can run eg the latest Postgres, or nginx, or python or node.js or...
FWIW I switched from Debian to FreeBSD 25 years ago as my main OS.
Yes and no. If you get yourself into a position where you have servers deployed on version x.y of whatever Linux distribution you went with and now can't or won't upgrade from that, the vast majority of the time you're going to be exactly as stuck as if you were on FreeBSD. If you wanted to benefit from paid RedHat backports you had to decide to deploy your application to LTS RedHat on day 1, and the vast majority of people don't.
Open source packages often include breaking changes, all but guaranteeing your application to fail. With (a paid version of) RedHat Linux, RedHat modifies the open source packages to remediate CVEs by modifying the original version.
No it doesn't!
You can totally stick with the old version of packages. You are NOT forced to switch third-party version numbers. And as mentioned elsewhere, I did switch e.g. Postgres versions independently of the OS.
What is being updated is the userland in the OS not in ports per se. According to the Release Notes of the latest FreeBSD release 14.3[1], OpenSSL, XZ, the file command, googletest, OpenSSH, less, expat, tzdata, zfs and spleen have been updated when it comes to third party applications as well. ps has been updated and some sysctl flags to filter for jails have been introduced.
These are the kinds of updates you'll get from point releases, not the breaking kind. Breaking changes go into major releases, which is exactly why the support strategy is "the latest release + X months, and at least that long".
[1] Scroll down a bit: https://www.freebsd.org/releases/14.3R/relnotes/
The only Linux distro that actually lives up to that promise in my experience is Alpine.
Citation needed. Some long time ago, yes. Not anymore.
This is just such a bizarre view ... what do they think Linux really is? Maybe if you are on bleeding edge Arch as a hobbyist who follows the latest shiny windows managers or something like that. But those of us who run Linux in production do that on stable releases with proven tech that hasn't changed significantly in more than a decade. Or longer for some things.
The FreeBSD folks need a reality check. They are so out of touch with what Linux really is. It is hard to take these kinds of articles seriously.
Pretty sure the firewall commands have changed at least once in that time, and the device layer and maybe the init system. I hear the preferred sound system is changing again in the last few years too.
There is 'different' as in 'alternative/edgy', and then there is 'different' as in 'won't implement/yagni' which becomes highly subjective.
You might have just hit a bad hardware setup that’s outside the scope of support. It happens.
Which laptop?
Did you use the battery, touchpad, and the wifi?
I find most BSD users who say they use it on a laptop are just using a laptop-form-factor machine like a thinkpad that is plugged in, with a mouse not the touchpad, and connected via ethernet 99.9% of the time. There's nothing wrong with this, but it bears little resemblance to what I consider "using a laptop".
My experience with distros including Open- and FreeBSD on laptops has been universally negative. OpenBSD in particular is very slow compared to Linux on the same hardware, to say nothing of awful touchpad drivers and battery management.
On one of them I use a creative bt-w2 bluetooth dongle for audio output, openbsd removed software bluetooth support due to security concerns. The latest wifi standards are not supported on these models, which doesn't bother me. It's not the size of your network, it's what you do with it! I don't mind not having the latest flashy hardware - been there, done that.
I have to pay attention when I purchase hardware, and am happy to do so, because openbsd aligns much better with my priorities. For me that includes simplicity, security, documentation and especially stability through time - I don't want to have to rearrange my working configs every two years cuz of haphazard changes to things like audio, systemd, wayland, binary blobs, etc.
The reason I like the BSDs is that they are easily understood. Have you tried to troubleshoot ALSA? Or use libvirt? Linux has a lot of features, but most of them are not really useful to the general computer user. It feels like B2B SaaS: lots of little stuff that makes you wonder why it's included in the design, or why it's even there in the first place.
OpenBSD at least booted far enough that I could shim the Wifi firmware in as needed. I probably picked the wrong Linux distribution to work with, since I've had okay luck with Debian and then Devuan on that machine's replacement (a L13)
FreeBSD has a few laptop developers, but most are doing server work. There is a project currently underway to help get more laptops back into support again: https://github.com/FreeBSDFoundation/proj-laptop
If you could handle Linux in the late 90s you can handle it.
But which DE you run is entirely up to you. I'm writing this with FreeBSD running fvwm. On my laptops I run dwm.
Also:
> enlightenment
I haven't kept up with recent developments, but this is the most 1990s WM that ever 1990s'd.
But back in the early 2000s I got access to a free Unix shell account that included Apache hosting and Perl, and if I'm not misremembering, it was running on FreeBSD and hosted by an ISP in the UK using the domain names portland.co.uk and port5.com.
That was formative for me: I learned all of Unix, Perl, and basic CGI web development on that server. I don't know who specifically was running that server, or whether they have any relation to the current owner of that domain. But if you're out there, thanks! Having access to FreeBSD was a huge help to a random high schooler in the U.S., who wouldn't have been able to afford a paid hosting account back then.
Don't get me wrong: ports is pretty cool and jails are cool, but every time I've tried running FreeBSD on a laptop I end up spending a day chasing problems with drivers or getting things like brightness or volume controls working. Basically, FreeBSD on laptops (as of the last time I tried it about two years ago) feels like Linux on laptops about fifteen years ago. Linux on laptops nowadays generally works out of the box, at least with AMD stuff. I didn't have much issue getting NixOS working on my current laptop, but I am not sure that would be the case with FreeBSD, even still.
That said, FreeBSD on servers is pretty sweet. Very stable, and ports is pretty awesome. I ran FreeBSD on a server for about a year.
I reboot a lot. Mostly I want to know that should the system need to reboot for whatever reason, that it will all come back up again. I run a very lightly loaded site and I highly doubt anybody notices the minute (or so) loss of service caused by rebooting.
Pretty sure I don't feel bad about this.
In the modern era, a lightly (or at least stably) loaded system lasting for hundreds or even thousands of days without crashing or needing a reboot should be a baseline unremarkable expectation -- but that implies that you don't need security updates, which means the system needs to not be exposed to the internet.
On the other hand, every time you do a software update you put the system in a weird spot that is potentially subtly different from where it would be on a fresh reboot, unless you restart all of userspace (at which point you might as well just reboot).
And of course FreeBSD hasn't implemented kernel live patching -- but then, that isn't a "long uptime" solution anyway, the point of live patching is to keep the system running safely until your next maintenance window.
I can't speak for FreeBSD, but on my OpenBSD system hosting ssh, smtp, http, dns, and chat (prosody) services, restarting userspace is nothing to sweat. Not because restarting a particular service is easier than on a Linux server (`rcctl restart foo` vs `systemctl restart foo`), but because there are far fewer background processes and you know what each of them does; the system is simpler and more transparent, inducing less fear about breaking or missing a service. Moreover, init(1) itself is rarely implicated by a patch, and everything else (rc) is non-resident shell scripts, whereas who knows whether you can avoid restarting any of the constellation of systemd's own services, especially given their many library dependencies.
If you're running pet servers rather than cattle, you may want to avoid a reboot if you can. Maybe a capacitor is about to die and you'd rather deal with it at some future inopportune moment rather than extending the present inopportune moment.
My recollection is that, usually, it crashed more often than that. The 50 days thing was IIRC only the time for it to be guaranteed to crash (due to some counter overflowing).
> In the modern era, a lightly (or at least stably) loaded system lasting for hundreds or even thousands of days without crashing or needing a reboot should be a baseline unremarkable expectation -- but that implies that you don't need security updates, which means the system needs to not be exposed to the internet.
Or that the part of the system which needs the security updates not be exposed to the Internet. Other than the TCP/IP stack, most of the kernel is not directly accessible from outside the system.
> On the other hand, every time you do a software update you put the system in a weird spot that is potentially subtly different from where it would be on a fresh reboot, unless you restart all of userspace (at which point you might as well just reboot).
You don't need a software update for that. Normal use of the system is enough to make it gradually diverge from its "clean" after-boot state. For instance, if you empty /tmp on boot, any temporary file is already a subtle difference from how it would be on a fresh reboot.
Personally, I consider having to reboot due to a security fix, or even a stability fix, to be a failure. It means that, while the system didn't fail (crash or be compromised), it was vulnerable to failure (crashing or being compromised). We should aim to do better than that.
Sure: People have smart TVs and tablets and stuff, which variously count as computing devices. And we've broadly reached saturation on pocket supercomputers adoption.
But while it was once common to walk into a store and find a wide array of computer-oriented furniture for sale, or visit a home and see a PC-like device semi-permanently set up in the den, it seems to be something that almost never happens anymore.
So, sure: Still-usable computers are cheap today. You've got computers wherever you want them, and so do I. But most people? They just use their phone these days.
(The point? Man, I don't have a point sometimes. Sometimes, it's just lamentations.)
I built the servers myself and then shipped to colo half way around the world.
I got over 1400 once and then I needed to add a new disk. They ran for almost 13 years with some disk replacements, CPU upgrades, and memory additions.
Do you ever apply kernel patches? I also run FreeBSD and reboot for any kernel patches and never can get my uptimes to 1,000 days before that.
Do you just run versions that don't get security patches? Security support EOL dates generally means I need to upgrade before 1,000 days too. For example the current stable release gets security patches only from June 10, 2025 to June 30, 2026 giving just over 360 days of active support.
I get FreeBSD is stable and get days of uptime, and I could easily do the same if I didn't bother upgrading etc, it's just that I can't see how that's done without putting your machine at risk. Perhaps only for airgapped machines?
For my personal machines, I just run GENERIC kernels and that includes a lot, so I need to do a lot of updates. I also reboot every time I update the OS (even when it's an update that doesn't touch the kernel) so that I'm sure reboots will be fine... but I did setup my firewalls with carp and pfsync so I can reboot my firewall machines one at a time with minimal disruption.
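The carp + pfsync setup mentioned above is just a few rc.conf lines. A sketch with example interfaces, addresses, and password; the backup box would use the same config with a higher advskew:

```shell
# /etc/rc.conf (primary firewall)
ifconfig_em0="inet 192.0.2.2/24"
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass mysecret alias 192.0.2.1/32"

# Sync pf state tables to the peer over a dedicated link
cloned_interfaces="pfsync0"
ifconfig_pfsync0="syncdev em1 up"
```

With the states synced, established connections survive a failover when one box reboots.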
For work machines, I use a crafted kernel config that only includes stuff we use, although so far I've usually had one config for all boxes, because it's simpler. If there's a security update for part of the kernel that we don't use, we don't need to update. Security update in a driver we don't have, no update; security update in tcp, probably update. Some security updates include mitigation steps to consider instead of or until updating... Sometimes those are reasonable too. Sometimes you do need to upgrade the whole fleet.
When there's an update that's useful but also has an effective mitigation, I would mitigate on all machines, run the update and reboot test on one or a few machines, and then typically install on the rest of the machines and if they reboot at some point, great, they don't need the mitigation anymore. If they are retired with 1000 days of uptime and a pending kernel update, that's fine too.
I would not update a machine just because support for the minor release it was on timed out. Only if there was an actual need. Including a security issue found in a later release that probably affects the older one or at least can't be ruled out. Yes, there's a risk of unpublished security issues in unsupported releases; but a supported release also has a risk of unpublished security issues too.
Risk is relative and not just about THAT security. I worked at an AV vendor and the joke internally was our security threat lists were the scoreboard for bad actors. But if you asked our salespeople you are an irresponsible hack if you don't keep up to date. They never account for the person who really can do it himself -- that is not their customer.
Yes I used to build custom FreeBSD kernels a lot. I manually made security patches on a few occasions and I put in many work-arounds by reading the security mailing list etc. Yes I went well past EOLs a few times for sure.
Always behind a firewall, workloads always in a Jail.
IIRC the release cycles used to be longer and it was less of an issue ~10 years ago. Can anyone confirm?
Most of my downtimes started with a power issue in the datacenter or a need for a hardware upgrade.
Tightly firewalled to my VPN, and SSH is restricted to certificates. There are no services on the host that would let users upload or inject anything; the most you could achieve is a DDoS.
I run bhyve in a jail, so any legacy service that managed to escape the bhyve stack would land straight in a jail.
I realized the right way to start is with GOOD hardware. So I went on eBay (I know, I know...) and found a nice Supermicro uATX server board and a 65 watt quad core Xeon in the 1151(?) socket then bought a fresh set of Kingston 16 GB ECC DIMMs, and 4x 8TB enterprise CMR SATA disks.
I read the docs and a few how-tos from blogs and personal sites. In a day I had everything set up: accounts, a RAID-Z data store, Samba and NFS exporting the data store. It was so damn easy. It's been running solid ever since. It's so reliable it's boring. I have to remind myself it's even there so I can run updates and check that the thing isn't full of dust or snakes or whatever.
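For flavor, the data-store half of a setup like that boils down to a couple of commands (pool name, device names, and network are examples, not the poster's actual values):

```shell
# Four-disk RAID-Z pool plus a compressed dataset for shares
zpool create tank raidz ada0 ada1 ada2 ada3
zfs create -o compression=lz4 tank/media

# NFS export via the ZFS property; Samba points at the same path in smb4.conf
zfs set sharenfs="-maproot=root -network 192.168.1.0/24" tank/media
```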
$ uname -a
Linux deb2 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux
$ uptime
08:50:41 up 2512 days, 17:15, 1 user, load average: 18.70, 20.46, 21.43
I get what you’re going for. But…
Please god no. Immutable images, servers are cattle not pets.
But I don't like Linux. I use it daily, but I don't like it. I wish FreeBSD held the position Linux does in the market today. That would be heaven.
No, the success Linux has had is because it ran on the machines people had at home, and was very easy to try out.
An instructive example would be my own path into Linux: I started with DJGPP, but got annoyed because it couldn't multi-task (if you started a compilation within an IDE like Emacs, you had to wait until it finished before you could interact with the IDE again). So I wanted a real Unix, or something close enough to it.
The best option I found was Slackware. Back then, it could install directly into the MS-DOS partition (within the C:\LINUX directory, through the magic of the UMSDOS filesystem), and boot directly from MS-DOS (through the LOADLIN bootloader). That is: like DJGPP, it could be treated like a normal MS-DOS program (with the only caveat being that you had to reboot to get back to MS-DOS). No need to dedicate a partition to it. No need to take over the MBR or bootloader. It even worked when the disk used Ontrack Disk Manager (for those too young to have heard of it, older BIOS didn't understand large disks, so newer HDDs came bundled with software like that to workaround the BIOS limitations; Linux transparently understood the special partition scheme used by Ontrack).
It worked with all the hardware I had, and worked better than MS-DOS; after a while, I noticed I was spending all my time booted into Linux, and only then I dedicated a whole partition to it (and later, the whole disk). Of course, since by then I had already gotten used to Linux, I stayed in the Linux world.
What I've read later (somewhere in a couple of HN comments) was that, beyond not having all these cool tricks (UMSDOS, LOADLIN, support for Ontrack partitions), FreeBSD was also picky with its hardware choices. I'm not sure that the hardware I had would have been fully supported, and even if it were, I'd have to dedicate a whole disk (or, at least, a whole partition) to it, and it would also take over the boot process (in a way which probably would be incompatible with Ontrack).
Copy / paste of my comment from last year about FreeBSD
I installed Linux in fall 1994. I looked at Free/NetBSD but when I went on some of the Usenet BSD forums they basically insulted me saying that my brand new $3,500 PC wasn't good enough.
The main thing was this IDE interface that had a bug. Linux got a workaround within days or weeks.
https://en.wikipedia.org/wiki/CMD640
The BSD people told me that I should buy a SCSI card, SCSI hard drive, SCSI CD-ROM. I was a sophomore in college and I saved every penny to spend $2K on that PC and my parents paid the rest. I didn't have any money for that.
The sound card was another issue.
I remember software based "WinModems" but Linux had drivers for some of these. Same for software based "Win Printers"
When I finally did graduate and had money for SCSI stuff I tried FreeBSD around 1998 and it just seemed like another Unix. I used Solaris, HP-UX, AIX, Ultrix, IRIX. FreeBSD was perfectly fine but it didn't do anything I needed that Linux didn't already do.
Many people and organizations adapted BSD to run on their hardware, but they had no obligation to upstream those drivers. Linux mandated upstreaming (if you wanted to distribute drivers to users).
Probably the GPL was indeed a factor that made device makers and hackers create open source drivers for Linux. I am not convinced it was a major one.
Apparently many here are unaware of the history and story as to what stalled FreeBSD in a long lawsuit involving ATT. You need to read up on that. Copyleft had nothing to do with it.
Some users of FreeBSD prefer more freedoms than GPL offers. The contributors must not be put off by providing more freedoms.
Places I've worked have contributed changes to FreeBSD and Linux, mostly for the same reason ... regardless of any necessity from distributing code under license, it's nicer to keep your fork close to upstream and sending your changes upstream helps keep things close.
> I wish FreeBSD held the position Linux does in the market today. That would be heaven.
Well The BSD's were embattled with a lawsuit from AT&T at the time Linux came around, so it got a late start as it were, even if it's a lot older.
I don't. That would break everything I love about it. If it was as big as Linux there would be a lot of corpo suits influence, constant changes, constant drive to make it 'mainstream' etc. All the things I hate about Linux.
> Anything that makes it easier to use GCC back ends without GCC front ends--or simply brings GCC a big step closer to a form that would make such usage easy--would endanger our leverage for causing new front ends to be free.
I have zero interest in tinkering with my operating system. I mostly want it to just get out of my way, which Linux does well 95% of the time.
It did take a while to set it up but then it runs fine. I don't view my OS as a hobby, but I do want to have full control over it and to be able to understand how it works. I don't want to have to trust a commercial party to act in my best interests, because they don't. The current mess that is windows, full of ads and useless ai crap, mandatory telemetry, forced updates, constantly trying to sell their cloud services etc is a good example. FreeBSD doesn't do any of those things.
Most Linuxes don't either but there's still a lot of corpo influence. I feel like it's becoming a playing ball of big tech. You only need to see how many corp suits are on the board of the Linux foundation, how many patches are pushed by corp employees as part of their job etc. I don't want them to have that much influence over my OS. I don't believe in a win-win concerning corporate involvement in open-source.
FreeBSD has a little bit of that (Netgate's completely botched WireGuard port is an example) but lessons were learned.
This is one of those things that non-Linux people think but isn't really true. I can think of two episodes in the last decade (systemd and Wayland) that constituted controversial changes, but frankly there are people who make "not using systemd" their entire identity, and it's just so much cringe.
Even on a rolling release bleeding edge distro like Fedora things really don't change that much at all.
>I don't view my OS as a hobby, but I do want to have full control over it and to be able to understand how it works.
FreeBSD doesn't afford you any more or less control over how the system works than Linux.
FreeBSD has always required far less tweaking or maintenance than Linux, though.
Personally I think Intel's early investment in Linux had a lot to do with it. They also sold a compiler and marketed it to labs and such, which bought chips. So Linux compatibility meant a lot to decision makers.
AMD, the underdog, went more in on Linux compat than NVIDIA. Which may have been a business decision.
I dunno, maybe the GPL effect was more a market share thing with developers than a copyleft thing.
Nota Bene: I do love copyleft and license all my own projects AGPL
- ex Sun
But Oracle destroys everything it touches.
yeah.
Also as the project gets bigger, at some point somebody will come with the idea to move to linux.
And when running a Samba server, it's helpful that FreeBSD supports NFSv4 ACLs when sitting between ZFS and SMB clients; on Linux, Samba has to hack around the lack of NFSv4 ACL support by stashing them in xattrs.
You can arguably get even better ZFS and SMB integration with an Illumos distribution, but for me FreeBSD hits the sweet spot between being nice to use and having the programs I need in its package library.
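On FreeBSD the NFSv4 ACLs live natively on the ZFS dataset, so Samba can map Windows ACLs straight onto them. Inspecting and setting them from the shell looks roughly like this (the user and path are examples):

```shell
# Show the NFSv4 ACL on a dataset path
getfacl /tank/share

# Grant a user modify rights, inherited by new files and directories
setfacl -m u:alice:modify_set:fd:allow /tank/share
```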
This can be automated by whatever is updating your kernel.
But I definitely believe that everything you can do on FreeBSD, you can also do on Linux. For me it's the complete package though that comes with FreeBSD, and everything being documented in the man pages and the handbook.
If the product is Python, that's what it is. There is no python-additional-headers or python-dev or bundle-which-happens-to-be-python-but-how-would-you-know.
There is python, and there are meta-ports which explicitly 'call' the python port.
The most notable example being X11. Its sub-parts are all very rational: fonts are fonts, libs are libs, drm is drm, drivers are drivers.
(Yes, there is the port/pkg confusion. That's a bit annoying.)
Just. Run. Debian.
Also it’s OSS — contribute that support if you’re so passionate about it.
Firstly, FreeBSD already supports x86 Mac Minis. Servers? M-series Minis and Studios are very good servers. Lastly, FreeBSD has an Apple Silicon port which has stalled.
https://wiki.freebsd.org/AppleSilicon
I'll ignore your last point.
Impatience and lost skills is why it’s not a mainstream player.
What's so difficult?
"To enable the driver, add the module to /etc/rc.conf file, by executing the following command: ..."
https://docs.freebsd.org/en/books/handbook/x11/
I get that this isn't brain surgery. But come on
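For what it's worth, the handbook step being quoted boils down to one package and one rc.conf line; a sketch of the Intel case (amdgpu is the analogous module for AMD GPUs):

```
# /etc/rc.conf — assumes an Intel GPU and the drm-kmod package installed
kld_list="i915kms"
```

or, equivalently, `sysrc kld_list+=i915kms` from a shell. Trivial for us, invisible to a newcomer.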
The Linux (Ubuntu, etc) install experience leads to a usable desktop. Heck, the installer disc boots to a usable desktop.
Also, no unsophisticated user even knows the name of their favorite DE. Or what a DE is.
Requiring a text login and a shell command, even one as simple as "pkg install KDE" is a big ask for a casual user these days. Also, that command line will probably fail. :)
I write these things as a very big fan of FreeBSD! I think not catering to casual users keeps FreeBSD in a better technical place overall, but Linux is obviously much more popular. This carries risks too.
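And to be concrete about "that command line will probably fail": the manual path is short but unfriendly. A sketch of the typical sequence, assuming the KDE Plasma metapackage (names vary by release, e.g. kde5 for Plasma 5):

```
# Illustrative post-install steps for a KDE desktop on FreeBSD
pkg install xorg kde5 sddm      # X server, Plasma metapackage, display manager
sysrc dbus_enable=YES           # Plasma requires D-Bus
sysrc sddm_enable=YES           # start the graphical login at boot
```

Which is exactly the kind of thing a casual user will never type.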
But in FreeBSD 15 it will be part of the installer. However, even an installer is too much to ask of today's mainstream users. I don't want FreeBSD to become mainstream, though, especially because what mainstream users want (everything decided by a vendor) is completely contrary to what FreeBSD stands for and what I want.
I'm not saying Make Everything Easy. If there are real reasons not to have easy X11 onboarding, if FreeBSD really is intended to be an OS for experts (and I get that it may well be, for a variety of historical reasons), then fine.
That does not mean they do not work, mind you. GNOME succeeds in dumbing things down so far that even a 60-year-old grandma could use it (until she misclicks and is presented with 20 windows all laid out side by side). And KDE gives a lot of flexibility in tweaking it how you may want it (if we ignore Nate's donation widget). But it is still waaaaaaay too complicated and convoluted to use.

I am better off just describing my system in .yml files and having ruby autogenerate any configuration value than struggling through annoying widgets to find some semi-random, semi-new setting (or no such setting existing, as is the case in GNOME).

I wish we could liberate these DEs from upstream developers and their dictatorship. I mean, we can, e.g. patch out the code that shouldn't exist (like Nate's Robin Hood widget), but I mean on a global basis. We as users should be in full control of EVERYTHING: every widget, everything these widgets do, and everything they don't do right now but should. Like in evince, I hate that I can't have tabs. That annoys me. I am aware that libpapers changes this, but boy... just try to discuss this with GNOMEy devs. That's just a waste of time. I want to decide on everything here; upstream must not be able to cripple my system or influence it in any way I don't approve of.
It's probably not for a grandma but I don't care. It doesn't have to be. For me the more software is suitable for the mainstream, the less suitable it is to me.
Not sure what you mean by donation widget, I use KDE on FreeBSD as daily driver (and on the latest version) and I've never seen it. I donate monthly to KDE but it doesn't have any way of knowing that.
No clue what he is babbling about. LFS/BLFS is active; FreeBSD doesn't have that. I am sorry, but Linux is the better tinker-toy. I understand this upsets the BSD folks, but it is simply how it is. Granted, systemd and the corporatification took a huge toll on the Linux ecosystem, and even now, with it in some ruins (KDE devs recently decreed that Xorg will die and that they will aid in the process of killing it off by forcing everyone onto Wayland), it is still much more active as a tinker-toy. That's simply how it is.
I recall that many years ago, on the NetBSD mailing list, someone pointed out that Linux now runs on more toasters than NetBSD. That is simply the power of tinkerification.
> Please keep FreeBSD the kind of place where thoughtful engineering is welcome without ego battles
K - for the three or four users worldwide.
> There’s also the practical side: keep the doors open with hardware vendors like Dell and HPE, so FreeBSD remains a first-class citizen.
Except that Linux supports more hardware. I am sorry, FreeBSD people, but that is the reality. We can't offset and ignore it.
> My hope is simple: that you stay different. Not in the way that shouts for attention, but in the way that earns trust.
TempleOS also exists.
I think it is much more different than any of the BSDs.
> If someone wants hype or the latest shiny thing every month, they have Linux.
Right - and you don't have to go that route either. Imagine: there is choice on Linux. I can run Linux without systemd; there is no problem with that. I don't need GNOME or KDE devs, begging for donations, killing off Xorg either. (Admittedly GTK and Qt seem to be the only really surviving old-school desktop GUI toolkits, and GTK is really unusable nowadays.)
> the way the best of Unix always did, they should know they can find it here.
Yeah ok ... 500 out of 500 supercomputers running Linux ...
> And maybe, one day, someone will walk past a rack of servers, hear the steady, unhurried rhythm of a FreeBSD system still running
I used FreeBSD for a while until a certain event made me go back to Linux: my computer was shut off when I returned home. When I left, it was still turned on. It ran FreeBSD. This is of course anecdotal, but I never had that problem with Linux.
I think FreeBSD folks need to realise that Linux did some things better.
For some reason, every time FreeBSD is put in a positive light, there are always a lot of Linux(-only) users who have to put FreeBSD down.
I don't understand why. It would be interesting to understand this psychological phenomenon.
For 30 years I have been using permanently both FreeBSD and Linux, because they both have strengths and weaknesses.
I am using Linux on laptops and desktops, where I may need support for some hardware devices not supported by FreeBSD or I need software compatibility with certain applications that are not easily ported to FreeBSD.
I also use Linux on some computational servers where I need compatibility with software not available on FreeBSD, e.g. NVIDIA CUDA. (While CUDA is not available for FreeBSD, NVIDIA GPUs are still the right choice for FreeBSD computers when needing a graphic display, because NVIDIA provides drivers for FreeBSD, while AMD does not.)
I use FreeBSD on various servers with networking or storage functions, where what I value most is the highest reliability and the simplest administration.
Linux users aren't interested in seeing potential adopters go for BSD instead of joining their ranks.
On every forum, every discussion, there is at least one guy saying he runs that game on Linux, or that other OS is somehow inferior to Linux, or this problem would never happen on Linux...
It's all about marketing, if you will...
There is fanboyism on every OS, pretty much like soccer fans or religious zealots.
So what? Big whoop.