Last year I had an aha moment when I learned about SAS HBA cards.
My final build was:
- Jonsbo N3 case
- ITX PSU
- ASRock B550M/ITX
- Ryzen 5500GT
- LSI 9207-8i
- 16GB DDR4
Result: for ~$1k I have an 8-bay NAS that's also a strong HTPC and a console capable of running games like Silksong without breaking a sweat. Before I learned about SAS cards, my biggest limiter was the number of SATA ports. If not for that card I would be looking at $600+ niche mobos.
But the biggest winner was the 5500GT. I was looking at an N300 for lower power consumption, but the Ryzen draws 1W at idle and has an APU capable of serious gaming (not 4K Elden Ring, but c'mon).
I love AliExpress deals for some of my hobby purchases, but a NAS motherboard is not one of them.
The contrast between the fashionable case, boutique Noctua fans, and then an AliExpress motherboard doesn't inspire confidence in the priorities for this build. When it comes to a NAS, I prioritize core components from well-known manufacturers and vendors. With everything from hobby gear to test equipment, AliExpress has been a gamble for me on whether you're getting the real deal, a ghost-shift knockoff, or a QA reject. It's the last place I'd be shopping for core NAS components.
To be fair, the Jonsbo case is probably both cheaper and easier to buy from AliExpress than from Amazon. It's also a Chinese brand.
This listing should be as close to a sure thing as you can get on AliExpress. But even then, dealing with AliExpress when things go sideways isn't always all that great.
I purchased this motherboard from this link; I'd do it again in a heartbeat if I needed to, and I wouldn't fault anyone for avoiding AliExpress either.
If somebody wants to buy from a different vendor, it's sometimes possible to find resellers on Amazon or eBay (myself included). The prices might be a bit higher, but some folks might think it's worth it.
I talk about it in the blog; it's my understanding that there are shortages of the Intel CPUs, which I would expect to drive up the pricing.
See those NVMe drives he stuck in there? They're running at 1/8th their rated link speed. Yeah... 12.5% (PCIe 4.0 x4 vs PCIe 3.0 x1). This board is one of the better ones on PCIe layout [0], but 9 Gen 3 lanes is thin no matter how you look at it, so all those boards have to cut corners somewhere.
I decided I'm better off with an eBay AM4 build: way better PCIe situation, ECC, a way more powerful CPU, more SATA, cheaper, 6x NVMe all with x4 lanes, and standard fan compatibility. The main downsides are no Quick Sync, higher power consumption, and that fast ECC UDIMMs are scarce. That was for a Proxmox/NAS hybrid build, though, so there was more emphasis on performance.
So if you crunch the numbers, you'll see storage (especially flash) is much faster than the network, meaning a NAS can cut corners on storage. If you're running something like a VM that accesses the storage directly, then suddenly storage speed matters.
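To crunch those numbers roughly (theoretical line rates only, ignoring protocol overhead; the figures below are my own assumptions):

    # Rough throughput comparison: network links vs. storage interfaces (MB/s).
    # Theoretical line rates, ignoring protocol overhead -- ballpark only.
    rates_mb_s = {
        "1GbE":         1_000 / 8,   # ~125 MB/s
        "10GbE":       10_000 / 8,   # ~1250 MB/s
        "SATA HDD":       250,       # optimistic sequential speed, single 3.5" drive
        "SATA SSD":       550,       # SATA III practical ceiling
        "PCIe 3.0 x1":    985,       # what an NVMe drive gets on one Gen3 lane
        "PCIe 4.0 x4":  7_880,       # what that same drive is rated for
    }
    for name, rate in rates_mb_s.items():
        print(f"{name:12s} ~{rate:6.0f} MB/s")

Even a single hard drive can keep a 1GbE link busy, and a modest flash array saturates 10GbE, which is why pure file serving tolerates the corner-cutting.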
Realistically, for your DIY stuff you're talking about a different beast to these NASes. For bang for your buck, you'd be attaching the bulk 3.5" storage in an external second-hand JBOD enclosure, while the main device handles the faster storage and has an HBA to connect to it.
[1] Edit: as Havoc pointed out (I need my coffee), it should be 2gb/s, which does change the point.
A better question is how the article is getting 1.2 on its fio tests. That must be a testing artifact.
For reference, the UNAS Pro comes with 10G networking, and will deliver roughly 500MB/s from a 4 HDD RAID5 array, and close to 1GB/s from the SSDs (which it never gets a chance to do, as I use them for photos/documents).
My entire "network stack", including firewall, switch, everything PoE, Hue bridge, Tado bridge, Homey Pro, UPS, and whatever else, consumes 96W in total, and does pretty much all my family and I need, at reasonable speeds. Our main storage is in the cloud though, so YMMV.
I know that's not true. I say this as someone who measures the power consumption of individual components, and even individual rails with a clamp meter. The OP measures an idle power of 67W. He has 6 x 8TB HDDs. These typically consume 5W idling (not spun down). So the OP's NAS without drives is probably around 37W.
A UNAS Pro reportedly consumes 20W with no drives. Adding 4 x 8TB at 5W per drive means your UNAS Pro config with drives probably idles at 40W (again, drives not spun down). That means you are 17W under his NAS idle power. So you claim your remaining hardware (Mac mini, 4 APs, 4 cameras) runs in under 17W... Yeah, that's not possible. 17W is peanuts; it's half the power of a phone's fast charger (~30W).
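For what it's worth, that drive arithmetic as a script (the 5W-per-idle-drive figure and the 20W no-drive UNAS Pro baseline are the assumptions stated above):

    # Idle power budget using the figures from the comment above (all assumptions).
    idle_per_hdd_w = 5                    # typical 3.5" HDD idling, not spun down

    op_base_w = 67 - 6 * idle_per_hdd_w   # OP's 67W wall idle minus 6 x 8TB drives
    print(f"OP's NAS without drives: ~{op_base_w} W")    # ~37 W

    unas_w = 20 + 4 * idle_per_hdd_w      # reported 20W baseline plus 4 x 8TB drives
    print(f"UNAS Pro with 4 drives:  ~{unas_w} W")       # ~40 W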
PS: for the OP, an easy way to further reduce power consumption is to replace your 500W PSU with a smaller one, like 250-300W, which is still amply over-specced for your build, because the typical efficiency curve of a PSU drops sharply at very low loads. For example, at idle, when your NAS pulls 67W from the wall, it's very probable it supplies only ~50W to the internal components, so it's running at 10% load and is only 50/67 = 75% efficient. The smallest load for which the 80 Plus Gold standard requires a minimum efficiency is 20%. If you downgrade to a 250W PSU, you are enforcing a minimum 20% load, for which the 80 Plus Gold standard requires 87% efficiency. The draw at the wall would thus drop to 50/0.87 = 57W, saving you 10W.
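The same PSU argument as a quick calculation (the ~50W DC load and the 75%/87% efficiencies are the estimates from the comment above):

    # Wall draw at idle for two PSU sizes, assuming ~50W of DC load.
    dc_load_w = 50

    for rating_w, efficiency in [(500, 0.75), (250, 0.87)]:
        load = dc_load_w / rating_w
        wall_w = dc_load_w / efficiency
        print(f"{rating_w}W PSU: {load:4.0%} load -> ~{wall_w:.0f} W at the wall")
    # 500W PSU:  10% load -> ~67 W at the wall
    # 250W PSU:  20% load -> ~57 W at the wall (roughly 10 W saved)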
So ~75W in total for everything PoE, Mac Mini and UNAS Pro. I was 8.5W over, so remove the Mac Mini from the equation.
The rest of the consumption (21W) is made up of a UDM Pro with a 4TB WD Red, USW Pro Max 16 POE, Hue Bridge, Tado Bridge, Homey Pro, and a Unifi UPS Tower.
and yes, that's at idle (drives spinning). It does rise to 120-130W when everything is doing "something".
About a 10x difference in CPU performance, 4x in RAM, ZFS vs btrfs, Quick Sync, Kubernetes/Docker, etc.
Doesn't make the UniFi an inferior machine; it just reflects a narrower, specialized focus on serving files... and yes, it does so with lower idle draw.
Are your drives spun up? 70W is a pretty low bar. The NAS by itself is probably 40W with drives, the Mac mini is another 7-10W (especially at the wall), and now we are at 50W, so that leaves 20W for the 4 APs and the cameras.
It's part of the 3-2-1 backup setup, but where other people have their "offsite backup" in the cloud, I keep my working copy there, and have backups at home.
I outsourced operations of it though. I have self hosted for decades, and for the first time in 15-20 years, I'm able to take a vacation and not bring my laptop in case something breaks.
As for main storage, as was probably evident from my comment, I don't have 30TB of cloud storage. We have our important stuff in the cloud, and "everything else" at home, but nothing at home is accessible from the internet unless you're on a VPN.
I've answered the "You build one every year?" question quite a few times over the years.
These blogs have a shelf life. After about a year, newer hardware is available that's a better value. And after 2-3 years, it starts to get difficult to even find many of the parts.
I don't replace my NAS every year, but every now and then I do keep the year's build for my own purposes; 2026 won't be one of those years.
> How does one establish the reliability of the hardware...?
One guy on the Internet is--and always will be--an anecdote. I could use this NAS until its hardware was obsolete and I'd still be unable to establish any kind of actual reliability for the hardware. Unfortunately, I don't have the time or money required to elevate beyond being a piece of anecdotal data.
However, there's a sizable audience out there who have realized that they need some kind of storage beyond what they already have, but haven't implemented a solution yet. I think if you put yourself in their shoes, you'll realize that anything is better than nothing in regards to their important data. My hope is that these blogs move that audience along towards finding that solution.
That's true of course. The problem, in my view, is that this is how everyone on the internet acts, especially the "reviewers", "builders", or "DIYers". It's not just you, so don't take this as a personal attack.
Almost all articles and videos about tech (and other things now too) do the equivalent of an "unboxing review". When it's not strictly an unboxing, it's usually "I've had this phone/laptop/GPU/backpack/sprinkling system/etc for a month, and here is my review".
I stopped putting much weight on online reviews and guides because of that. Almost everyone who does them uses whatever they are advertising for _maybe_ a month and moves on to the next thing. Even if I'm looking for an older thing, all the reviews are from the month (or even day) it was released, and there is very little to none from a year or two after, because understandably those don't get views/clicks. Even when there are later reviews, they are in the bucket of "This thing is 3 years old now. Is it still worth it in 2025? I bought a new one to review and used it for a month".
Not to mention that when reviewers DO face a problem, they contact the company, get a replacement, and just carry on, assuming everyone will be in the same position. From their perspective, it's understandable. They can't make a review saying "Welp, we got a defective one, nothing to see here". On the other hand, if half the reviewers faced problems and documented them, then maybe the pattern would be clearer.
Yes, every reviewer is "one guy on the internet" and "is--and always will be--an anecdote". No one is asking every reviewer to become Consumer Reports and test hundreds of models and collect user feedback to establish reliability scores. But at the same time, if each of them did even a little of that, it would be a lot more useful than what they do now.
I'll give you a concrete example off the top of my head: a Thermapen from ThermoWorks.
When I was looking for "the best kitchen thermometer" the Thermapen was the top result/review everywhere. Its accuracy, speed and build quality were all things every review outlined. It was a couple of years old by then and all the reviews were from 2 years ago. I got one and 6-8 months later, it started developing cracks all over the body. A search online then showed that this is actually a very common issue with Thermapens. You can contact them and they might send you another one of the older models if they still have them (they didn't in my case) but it'll also crack again. Maybe you can buy the new one?
It may sound petty to put that one example in the spotlight, but a very similar thing happened to me with a Pixel 4, a ThinkPad P2, a pair of Sony wireless headphones, a Bose speaker, and many more that I'm forgetting. All had stellar "3-week-use reviews". After 6 months to a year, they all broke down in various ways. Then it becomes very easy to know what to search for, and the problems are "yeah, that just always happens with this thing".
These DIY NAS build blogs have a bit of a formula: here are my criteria, here are the parts that I chose to meet those criteria, and here's what I think after I've built and tested it to the best of my ability.
If I had my choice, my blog would inspire people to understand their own criteria and give them the confidence to go build something unique that meets that same criteria. This absolutely happens, but it's the exception rather than the rule. The rule is that people choose to replicate these DIY NAS builds part-for-part.
I'm as confident in this DIY NAS as I've been for the ones I created in the past. The times there were issues with these builds (eg: the defective C255X/C275X CPUs from Intel), I've updated those blogs with all the details I can muster about those issues.
It seems like you have a specific person in mind as the audience member (yourself?), but the piece could benefit from a wider view. Given your hardware choices, reliability seems not to be a factor at all, but rather near-term cost (at the expense of long-term cost).
I'm not expecting you to personally test your hardware choices, but to make choices based on the aggregate accounts of others, like the rest of us do.
I would imagine the mean audience member would want to buy something they can set and forget, which would necessitate the reliable choice. That is wholly different from a person who is excited about tinkering with a brand-new NAS each year.
The audience does want something that they can set and forget, and that's what they've been getting for about as long as I've been writing these blogs. The few recipients of the actual DIY NAS builds (via raffles, giveaways, auctions, etc.) have used them for years. People have also been telling me for years how they used one of my previous DIY NAS builds as inspiration and how those builds have held up for them. I expect that in the not-too-distant future, someone will be telling me the same about this particular NAS. Despite disbelief and insistence otherwise, I'm insanely confident that this DIY NAS will be reliable for years to come.
They're tagged for the post and year, so it must be worth it to go to that trouble rather than using a generic tag for the whole blog.
tag=diyans2024-20, tag=diynas2025-20,tag=diynas2026-20
It doesn't increase the price or impact your buyer experience in any way, so why do you care? If this blog post introduced you to a product you wanted to buy, why should you have a problem with the author getting a finders fee from the seller? Just seems mean-spirited.
These two statements have a very different impact:
1. I love product X and I won't get paid if you buy it too.
2. I love product X and I will get paid if you buy it too.
Money motivates people to claim they love a product or that a product is good, even if not true. It's a problem that has plagued the internet for decades.
Influence and power are far more intoxicating currencies than affiliate revenue.
And if someone complained "you're just publishing this helpful thing to become more influential in [community]," well, at some point we need to acknowledge incentives drive all behavior in one way or another.
Refusing the incentive doesn't make one per se virtuous.
> undisclosed affiliate links.
That's quite controversial, compared to disclosed affiliate links. IMO for good reason.
I am no longer an Amazon affiliate. I no longer make any money with that program. I still give people Amazon links when I want to introduce an example of a particular product where I think it will be helpful.
Nothing has changed about the way I write or recommend gear, except for the present-day absence of an affiliate link. It was the same before I was an affiliate, it was the same while I was an affiliate, and it remains the same now that I am no longer an affiliate.
---
"So why Amazon links when you could just link to the manufacturer's page instead," you may be asking?
That answer is simple: Because Amazon is consistent, accessible, and includes pricing.
Manufacturers' web pages too often have a profound tendency to be absolutely awful: It's a spectacle of moving images and flashing lights, noisemakers, pop-ups and fucking "SPIN THE WHEEL FOR A PRIZE!!!!" bullshit instead of -- you know -- information.
But all a person really needs as a jumping-off point is basic information. A description, some photos, and a realistic price is a good start.
That latter set is really all that Amazon provides. And that kind of simplicity is useful to me.
My ultimate motivation when I link a product is to be helpful to others. Affiliate or not, linking to Amazon furthers that goal of mine in ways that sending clicks to some Web analog of the Vegas Strip cannot ever accomplish.
---
"So if you're so [euphemism], then why aren't you an affiliate anymore," you may wish to ask next.
That answer is also easy: Several years ago, Amazon demanded that I submit all of my social media information in order to maintain participation in the program. I was not OK with doing this, so I ignored that demand. They subsequently kicked me out.
Isn't that assumed nowadays that every link to a marketplace is an affiliate link?
Yes.
> Isn't that assumed nowadays that every link to a marketplace is an affiliate link?
Other people doing something wrong is seldom a good reason to do it wrong yourself.
My own personal pettiness: If an article declares the existence of affiliate links, I'll use those links more often than not. If they don't, I'll make an effort to revisit the links without the affiliate IDs. If an article presents both affiliate and non-affiliated links, I will generally use the former, and I'll trust the writer's opinions a little more than otherwise. I actually keep a separate browser for buying things once they have been researched, to slightly inconvenience the tracking of me generally, so I won't be linked by “last affiliate” tracking unless fairly decent profiling is in action (which it won't be: sellers won't make that much effort just to pay money out to affiliates), only if I copy over the affiliate-ID-decorated link (or open the original source article and click the link in that environment).
For some people, building a NAS, or a fuller home-lab, is a hobby in itself. Posts like this are generally written by one of those people for an audience of those sorts of people. Nothing wrong with that. I was someone like that myself, some time ago.
On a more cynical note: if the blog is popular enough, those affiliate links might be worth more than a few pennies, and a post about previous years' builds, with links to those years' choices of tech, will not see anything like the same traction. It wouldn't get attention on HN, for a start (at least not until a few more years' time, when it might be part of an interesting then-and-now comparison).
Edit: reading comprehension fail - they bought drives earlier, at an unspecified price, but they weren't from the old NAS. I agree: when lifetimes of drives are measured in decades and huge amounts of TBW, it seems pretty silly to buy new ones every time.
When building a device primarily used for storing personal things, I'd much prefer to save money on the motherboard and risk that failing than to skimp on the drives themselves.
How do I know? I've had two drives and one MB fail in quick succession thanks to a silently failing power supply.
Motherboards have fried connected hardware before: poor grounding/ESD protection, firmware bugs combined with aggressive power management, wiring weirdness, and power-related faults have all broken people's drives before.
What I've never heard about is a drive breaking something else in a system, but broken motherboards have taken friends with them more than once.
I've experienced many drive failures over the years, but never lost data, thanks to RAID. A failing MB or PSU, on the other hand, has wiped out my entire system.
The case can actually fit a low-profile discrete GPU; there's about a half height's worth of space.
I got a new network switch that runs somewhat hot (TP-Link) and it's behaving the same way: the built-in fan runs either not at all or at 100% (and noisily at that). I installed OpenWrt on it briefly, before discovering the 10GbE NIC didn't work with OpenWrt, and it had much better fan control. Why is it so hard to just apply a basic curve to the fan based on the hardware temperature? All the sensors and controllers are apparently there; it's just a software thing...
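A basic curve really is just a handful of lines once the OS exposes the sensor and the PWM control. Here's a minimal sketch of the idea, assuming a Linux hwmon device (the hwmon paths and the breakpoints below are made up for illustration and differ per device):

    import time

    # Hypothetical hwmon paths -- find the right hwmonN/pwmN for your hardware.
    TEMP_PATH = "/sys/class/hwmon/hwmon0/temp1_input"   # millidegrees C
    PWM_PATH  = "/sys/class/hwmon/hwmon0/pwm1"          # 0-255 fan duty
    # Many controllers also need pwm1_enable set to manual mode first.

    CURVE = [(40, 0), (50, 80), (60, 140), (70, 255)]   # (deg C, duty) -- illustrative

    def duty_for(temp_c):
        duty = CURVE[0][1]
        for threshold, value in CURVE:
            if temp_c >= threshold:
                duty = value
        return duty

    while True:
        with open(TEMP_PATH) as f:
            temp_c = int(f.read()) / 1000
        with open(PWM_PATH, "w") as f:
            f.write(str(duty_for(temp_c)))
        time.sleep(5)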
I think it's technically possible to make a modem which consumes less power and uses passive cooling, but I don't think they (the ISP and the device manufacturer) care.
Incidentally I actually found Truenas to be a solid upgrade from my old vanilla FreeBSD install; the tuned performance defaults made things a lot better for me after I recovered my volume (the USB stick I was using for the OS died).
So it's sad to be recommending something else, or obviating it entirely.
It's great for people who just want storage and don't want the heavy features that came with TrueNAS' move to Linux (Kubernetes, etc.) or who want full control over vfs_fruit options for serving Macs.
YMMV but for my use case I have my NAS hooked up to a TV in my living room that also works as HTPC, gaming console, and occasionally a spare PC for visitors.
But I'd be a bit worried about the availability of drivers for much of the hardware found on this particular motherboard.
"Just skip owning a car, just buy a nice pair of Adidas, they're easy to clean and don't cost much."
???
Why do you think using FreeBSD as a NAS complicates your life?
With a FreeBSD or Linux machine, even step one requires considerable thought. Are there web UI packages that I can use? Which one do I pick? Where do I install it from? How do I ensure that it runs on boot? Do I have to mess with the network configuration to ensure that http://mynas.local is accessible? How do I configure SMB? What's the deal with security updates? And so on, dozens of times over.
It's great if you're already in the depths of sysadmindom and know what you're doing, but man, I just want to put my files on some LAN drives and call it a day.
If you are looking to replicate the exact same thing (why would a NAS need a web UI?), of course there is considerable work ahead of you.
I believe you are overstating how difficult it is to configure a FreeBSD box as a NAS. Configuring samba for example is a breeze.
The questions you are asking are good questions that are answered by the manual. Keeping FreeBSD updated requires the use of two commands, both well documented in the handbook: freebsd-update and pkg.
By the way, it's interesting to see that OP has no qualms about buying cheap Chinese motherboards, but splurged on an expensive Noctua fan when the Thermalright TL-B12 performs just as well for a lot less (although the Thermalright could be slightly louder and perhaps have a slightly more annoying noise spectrum).
Also, it is mildly sad that there aren't many cheap low power (< 500 W) power supplies for SFX form factor. The SilverStone Technology SX500-G 500W SFX that was mentioned retails for the same price as 750 W and 850 W SFX PSUs on Amazon! I heard good things about getting Delta flex 400 W PSUs from Chinese websites --- some companies (e.g. YTC) mod them to be fully modular, and they are supposedly quite efficient (80 Plus Gold/Platinum) and quiet, but I haven't tested them out yet. On Taobao, those are like $30.
[1] https://www.newegg.com/seagate-barracuda-st24000dm001-24tb-f...
[2] https://www.seagate.com/content/dam/seagate/en/content-fragm...
I built the case from MakerBeam and printed panels, with an old Corsair SF600 and a 4-year-old ITX system with one of SilverStone's backplanes (they make ones that take up to 5 drives in a 3x 5.25" bay form factor). It's a little overpowered (a 5950X), but I also use it as a generic server at home and run a shared ZFS pool with 2x mirrored vdevs. Even with the inefficient space usage it's more than I need. I put in a 1080 Ti for transcoding or odd jobs that need a little CUDA (like photo tagging); it runs ResNet50-class models easily enough. I also wondered about treating it as a single-node SLURM server.
How much RAM did you install? Did you follow the 1GB per 1TB recommendation for ZFS? (i.e. 96GB of RAM)
For normal use, 2GB of RAM for that setup would be fine. But more RAM is more readily available cache, so more is better. It is certainly not even close to a requirement.
There is a lot of old, often repeated ZFS lore which has a kernel of truth but misleads people into thinking it's a requirement.
ECC is better, but not required. More RAM is better, not a requirement. L2ARC is better, not required.
But yes, there's almost no instance where home users should enable it. Even the traditional 5GB/1TB rule can fall over completely on systems with a lot of small files.
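Dedup is the special case because the dedup table (DDT) effectively has to sit in RAM, and its size scales with the number of unique blocks rather than raw capacity. A rough sizing sketch (the ~320 bytes per entry is the commonly quoted ballpark, not a hard spec):

    # Rough ZFS dedup table (DDT) RAM estimate -- ballpark only.
    BYTES_PER_DDT_ENTRY = 320            # commonly quoted approximation per unique block

    def ddt_ram_gib(pool_tib, avg_block_kib):
        blocks = pool_tib * 2**40 / (avg_block_kib * 2**10)
        return blocks * BYTES_PER_DDT_ENTRY / 2**30

    for avg_block_kib in (128, 16):      # big media files vs. lots of small files
        print(f"1 TiB of unique data at {avg_block_kib:3d} KiB blocks: "
              f"~{ddt_ram_gib(1, avg_block_kib):.1f} GiB of DDT")
    # ~2.5 GiB/TiB with large records, ~20 GiB/TiB with small files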
Some ZFS discussions suggest that an L2ARC vdev can cache the DDT. Do you know if this is correct?
I'm not sure about whether an L2ARC vdev can offload the DDT, but my guess is no given the built-in logic warning against mismatched replication levels.
I'm way too bothered by how long it would take to resilver disks of that size.
That's a remarkably good price. If I had $1.5k handy I'd be sorely tempted (even tho it's Seagate).
It felt like an unnecessary purchase at the time (I'm still waiting to CAD a CPU cooler mounting solution for the build in a new case that has room for the drives). But it seems like that deal is going to be the high water mark for a few years, at least.
The developer, Hardkernel, also publishes all the relevant info, such as board schematics.
I do not know whether the 8-core version (H4 Ultra) also enables in-band ECC; Intel does not list embedded uses for that CPU, so they may disable its ECC support at the factory.
See e.g.:
https://www.cnx-software.com/2024/05/26/odroid-h4-plus-revie...
However, looking right now at:
https://forum.odroid.com/viewtopic.php?f=171&t=48377
I see that someone has successfully enabled in-band ECC on the 8-core ODROID H4 Ultra and has run benchmarks with ECC disabled/enabled. Therefore it appears that in-band ECC support exists on all models.
The results of benchmarks with in-band ECC disabled/enabled may not be representative of real workloads. In-band ECC relies on caching the ECC bits in a dedicated ECC cache, in order to avoid excessive memory accesses. The effectiveness of the ECC cache can be very different for the benchmark and for the real workload, leading to misleading results. Usually the cache hit rate is likely to be higher for the real workload, so the performance drop with in-band ECC enabled will be less conspicuous.
It becomes possible to do this only when they expose the feature in the Linux EDAC drivers. In the past, Intel maintained its Linux EDAC drivers well, but AMD frequently had long delays between the launch of a CPU and the update of the drivers. After the many layoffs at Intel, it is unknown whether their Linux support will remain as good as in the past.
I like the extensive benchmarks from Hardkernel; the only issue is that any ARM-based product is very tricky to boot, and the only savior is Armbian.
The rated power supply spec is the maximum it can provide, not the actual consumption of the device.
Before that I had a full-size NAS with an efficient Fujitsu motherboard, pico-PSU, 12V adaptor, and spinning HDDs. That required so much extra work for so little power-efficiency gain vs the Odroid.
If you get an enterprise-grade ITX board that has an x16 PCIe slot which can be bifurcated into 4 M.2 form-factor PCIe x4 connections, it really opens up options for storage:
* A 6x SATA card in M.2 form factor from ASMedia or others will let you fill all the drive slots even if the logic board only has 2/4/6 ports on it.
* The other ports can be used for conventional M.2 NVMe drives.
It's very well made, not as tight on space as I expected either.
The only issue is as you noted, you have to be really careful with your motherboard choice if you want to use all 8 bays for a storage array.
Another gotcha was making sure to get a CPU with integrated graphics; otherwise you will have to waste your PCIe slot on a graphics card and have no room for the extra SATA ports.
Integrating a dust filter (not necessarily HEPA, but MERV 11) and the required fan upgrades would be wonderful.
The only downside is slightly higher power consumption. But I just bought a 32-core 3rd-gen Xeon CPU + motherboard and 128GB RAM, and it idles at 75W without disks, which isn't terrible. And you can build a more powerful NAS for a third of the price of a high-end Synology. It's unlikely that the additional 20-30W of idle power consumption will cost you more than that difference.
Do you have any source for this claim? Why would the firmware be so different? Software is cheap; I don't think they would be that different.
I mean, a used enterprise disk gets sold after it has been running under heavy load for a long time. Any consumer HDD will have a lot less runtime than enterprise disks.
75W probably needs active cooling. 4W does not.
Anyway, you can probably do many more things with that 75W server.
What exactly is failing in Germany, and why is it important in this context?
The hardware has a different form factor (19" rack), two power supplies, and is very loud and very power hungry.
There are so many good combinations of old and still functional hardware for consumers.
My main PC 6 years ago had a powerful CPU and an idle load of 15 watts, thanks to the combination of mainboard and the number of components in it (one RAM stick instead of two, and so on).
And often enough, if you can buy enterprise hardware, it is so outdated that a current consumer system would beat it without even trying.
If you then need to replace something, it's hard to find or it's different, like the power supply.
That's perfectly fine, if your NAS has redundancy and you can recover from 1 - 2 disk failures, and you're buying the drives from a reputable reseller.
I'm not sure what the benefit would be since all it's doing is moving information from the drives over to the network.
I'm running a TrueNAS box with 3x cheap shucked Seagate drives.*
The TrueNAS box has 48GB RAM, is using ZFS and is sharing the drives as a Time Machine destination to a couple of Macs in my office.
I can un-confidently say that it feels like the fastest TM device I've ever used!
TrueNAS with ZFS feels faster than Open Media Vault(OMV) did on the same hardware.
I originally set up OMV on this old gaming PC, as OMV is easy. OMV was reliable, but felt slow compared to how I remembered TrueNAS and ZFS feeling the last time I set up a NAS.
So I scrubbed OMV and installed TrueNAS, and purely based on seat-of-pants metrics, ZFS felt faster.
And I can confirm that it soaks up most of the 48GB of RAM!
TrueNAS reports ZFS Cache currently at 36.4 GiB.
I don't know why or how it works, and it's only a Time Machine destination, but there we are: those are my metrics and that's what I know, LOL.
* I don't recommend this. They seem unreliable and report errors all the time. But it's just what I had sitting around :-) I'd hoped by now to be able to afford to stick 3x 4TB/8TB SSDs of some sort in the case, but prices are tracking up on SSDs...
Haven't used them yet myself but seems like a nice use case for things like minor metadata changes to media files. The bulk of the file is shared and only the delta between the two are saved.
I recall reading about someone running it on a 512MB system, but that was a while ago, so I'm not sure you can still go that low.
Performance can suffer though, for example low memory will limit the size of the transaction groups. So for decent performance you will want 8GB or more depending on workloads.
[1]: https://openzfs.github.io/openzfs-docs/Project%20and%20Commu...
I have some workloads where I have to go through a lot of files multiple times and the extra RAM cache makes a huge difference. You can tell when the NAS is pulling from cache or when it has a cache miss.
If you're running a NAS for a company that has many users and multi-disk access at the same time, sure. But then you're probably not buying HDDs to shuck and cheap components off eBay.
1) A refurbished Dell Wyse 5070 (8GB of RAM) with a cheap 64GB SSD from 2013
2) An 8-bay USB-C hard drive enclosure
3) 4 used 12TB hard drives from eBay, plus 4 3TB drives from 2010 that still somehow haven’t died
4) Headless Debian with various Linux ISO trackers/downloaders in Docker, plus Plex (the CPU, though slow, has decent hardware encoding)
5) No RAID, but an rsync script for important Linux ISOs and important data that runs weekly across different drives. I also have cold-storage backups by purchasing “Lot of X number” 500GB hard drives on eBay from time to time, which store things like photos, music, etc over two drives each.
The whole setup didn’t cost me much, and is more than enough for what I need it to do.
Honestly it's not that needed, but if you would really use the 10Gbit+ networking, then 1 second of transfer is ~1.25GBytes. So depending on your usage you might never see more than 15% utilization, or you might use almost all of it if you constantly run something on it, i.e. torrents, or use it as a SAN/NAS for VMs on some other machine.
But for rare, occasional home usage, neither 32GB of RAM nor this monstrosity and its complexity makes sense; just buy some 1-2 bay Synology and forget about it.
Well by default the timeout is 5 seconds[1], so not that long.
[1]: https://openzfs.github.io/openzfs-docs/Performance%20and%20T...
Obligatory copypasta: "16GB of RAM is mandatory, no ifs and buts. ECC is not mandatory, but ZFS is designed for it. If data is read near the beach and something somehow gets into RAM, an actually intact file on the disk could get 'corrected' with an error. So yes to ECC. The problem with ECC is not the ECC memory itself, which costs only a little more than conventional memory; it's the motherboards that support ECC. Watch out with AMD: it often says ECC is supported, but what's meant is that ECC memory will run while the ECC function isn't actually used. LOL. Most boards with ECC are server boards. If you don't mind used hardware, you can get a bargain with, e.g., an old socket 1155 Xeon on an Asus board. Otherwise the ASRock Rack line is recommended: expensive, but power-efficient. A general downside of server boards: booting takes an eternity. Consumer boards spoil you with short boot times; servers often need 2 minutes before the actual boot process even begins. So Bernd's server consists of an old Xeon, an Asus board, 16GB of 1333MHz ECC RAM, and 6x 2TB disks in a RAIDZ2 (RAID6). 6TB of that is usable. I somehow like old hardware. I like to push hardware until it won't go any more. The disks are already 5 years old but give no trouble. Speed is great, 80-100MB/s over Samba and FTP. By the way, I don't leave the server running; I switch it off when I don't need it. What else? Compression is great. Although I mainly store data that can't be compressed any further (music, videos), the built-in compression gained me 1% of storage space. At 4TB that's about 40GB saved. The Xeon still gets a bit bored. As a test I tried gzip-9 compression, and that did make it sweat."
My NAS is around 100W (6-year old parts: i3 9100 and C246M) which comes to $25/£18 per month (electricity is expensive), but I can justify it as I use many services on the machine and it has been super reliable (running 24/7 for nearly 6 years).
I will try to see if I can build a more performant/efficient NAS from a mix of spare parts and new parts this coming month (still only Zen 3: 5950X and X570), but it is more of a fun project than a replacement.
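As a rough cross-check of the £18/month figure above, here's the arithmetic with an assumed tariff (the £0.25/kWh rate is my guess; substitute your own):

    # Running cost of an always-on ~100W box.
    idle_w = 100
    price_per_kwh = 0.25                      # assumed tariff in GBP

    kwh_per_month = idle_w / 1000 * 24 * 30
    cost = kwh_per_month * price_per_kwh
    print(f"{kwh_per_month:.0f} kWh/month -> ~£{cost:.0f}/month")   # 72 kWh -> ~£18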
But I do have to acknowledge that the US has relatively low power costs, and my state in particular has lower costs than that even, so the equation is necessarily different for other people.
The remote KVM options from HP and Dell and whatnot are usually so useless they might as well not exist, except for remote power up/down, so I don't really care about that.
There is no third-party firmware available, but at least it runs Linux, so I wrote an autorun.sh script that kills 99% of the processes and phones home using ssh+rsync instead of depending on QNAP's cloud: https://github.com/pmarks-net/qnap-minlin
But it was always annoying having to 'eject' them before unplugging the laptop from the dock. Or sometimes overnight they would disconnect themselves and fill up my screen with dozens of "you forgot to eject" notifications. Yes I'm on macOS.
Do NASes avoid this issue? Or do you still have to mount/unmount?
Why does there seem to be much more market for NAS than for direct attached external HDD?
Eventually I got a new laptop with bigger SSD, started using BackBlaze for backups, and mostly stopped using the external HDDs.
I always assumed NAS would be slower and even more cumbersome to use. Is that not the case?
The main risk with directly attached storage is that most kernels will do "buffered writes" where the data is written to memory before it's committed to disk. Yanking the drive before writes are synced properly will obviously cause data loss, so ejecting is always a good idea.
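For the curious, this is roughly what ejecting buys you: the application flushes its own buffers and the kernel is asked to push the page cache out to the device before you pull the plug. A minimal Python sketch (the path is just a placeholder):

    import os

    # Write a file so that it is actually on the disk, not just in the page cache.
    with open("/mnt/external/important.dat", "wb") as f:
        f.write(b"data worth keeping")
        f.flush()                  # push the userspace buffer down to the kernel
        os.fsync(f.fileno())       # ask the kernel to commit the pages to the device

    # Ejecting/unmounting does this (and more) for every dirty page on the volume;
    # yanking the cable before that point is how files end up silently truncated.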
Generally, NAS is a bit safer for this type of storage because the protocols are built with the assumption that the network can and will be interrupted. As a result, things are a bit slower since you're dealing with network overhead. So, like everything, there are some trade-offs to be made.
I can access my NAS from anywhere in the world, but you can only access your direct-attached drives when sitting at your desk.
I can hide my NAS in a closet, but your direct attach drives are wasting valuable desk space and causing noise in your workspace.
My NAS has a software raid (raidz2) so any two of my drives could die without losing a single bit of data. Technically this is possible with direct attached drives too, but usually people aren't attaching multiple external drives to their computer at the same time.
Multiple people/computers/phones can access my NAS simultaneously, but your direct attach drives are only usable by a single computer at a time.
I can use my NAS from any device/operating system without worrying about filesystem compatibility. With direct attach drives, you need to pick a filesystem that will be supported by the devices you want to plug in to it.
The downside is a NAS is running 24/7 which will consume more electricity than drives you only plug in on-demand, and file transfers will be slower over a network than directly plugged in to your computer, but 99% of the time the speed difference does not matter to me. (It really only impacts me when doing full-disk backup/restore since I'd be transferring hundreds of gigabytes.)
defaults write /Library/Preferences/SystemConfiguration/com.apple.DiskArbitration.diskarbitrationd.plist DADisableEjectNotification -bool YES
I have two NAS servers (both Synology-based). But I need something where I can back things up and forget about them until I want to restore the stuff. I am looking at a workflow of, say, weekly backup to tape: update the index, and whenever I want to restore a directory or file, I search the index, find the tape, and load it for retrieval.
A NAS can be used for continuous backup (e.g. Time Machine and Timeshift), and for archival at a weekly level.
At least with drives you can run regular health checks and corruption scans. Tape is good for large scale, but you must have automation that keeps checking the tapes.
However, there is little need to check the tapes, because the likelihood of them developing defects during storage is far less than for HDDs.
Much more important than checking the tapes from time to time is to make multiple copies, i.e. to use at least duplicate tapes that are stored in different places.
Periodic reading is strictly necessary only for SSDs, and it is useful for HDDs, because in both cases their controllers will relocate any corrupted blocks. For tapes it is much less useful. There is more risk to damage the tape during an unnecessary reading, e.g. if the mechanism of the tape drive happens to become defective at exactly that moment, than for the tape to become defective during storage.
The LTO cartridges are quite robust and they are guaranteed for 30 years of storage after you write some data on them.
In the past there have existed badly designed tape cartridges, e.g. the quarter-inch cartridges, where the tape itself did not become defective during storage, but certain parts of the cartridge, i.e. a rubber belt that was necessary to move the tape, disintegrated after several years of storage. Those disappeared many years ago.
I've got a HP StorageWorks Ultrium 3000 drive (It's LTO-5 format) connected to one (LSI SAS SAS9300-4i), in my NAS/file server (HP Z420 workstation chassis). Don't go lower than LTO-5 as you will want LTFS support.
About £150 all in for the card and drive (including SFF-8643 to SFF-8482 cables etc..) on EBay
Tapes are 1.5TB uncompressed, and about £10/each on Ebay, you'll also want to pick up a cleaning cartridge.
I use this and RDX (1TB cartridges are 2-4 times the price, but drives are a lot cheaper, and SATA/USB3, and you can use them like a disk) for offline backup of stuff at home.
However, are there no open formats? The whole LTO ecosystem of course reeks of enterprise, and I'd have expected by now that at least one hardware hacker had pieced together some off-the-shelf components to build something that is a magnitude cheaper to acquire, maintain, and upgrade.
The only problem is that the LTO tape drives are very expensive. If you want to use 18 TB LTO-9 tapes, the cost per TB is much lower than for HDDs, but you need to store at least a few hundred TB in order to recover the cost of the tape drive.
There is no chance of seeing less expensive tape drives, because there is no competition, and it would be extremely difficult for anyone to become a competitor, as it is hard to become able to design and manufacture the mechanical parts of the drive and the read/write magnetic heads.
Tape is really complicated and physically challenging, and there is no incentive for people to invest insane amounts of time in something that has almost no fan base. See the blog post from some time ago about why you don't want tape.
Edit: https://blog.benjojo.co.uk/post/lto-tape-backups-for-linux-n...
Like that has stopped anyone before? :p Probably explains why we haven't seen anything FOSS in that ecosystem yet, though.
The problem is that while the tapes are at least 3 times cheaper than HDDs, and you have other additional advantages, e.g. much higher sequential reading/writing speed and much longer storage lifetime of the tape, the tape drives are extremely expensive, at a few thousand $, usually above $3k.
You can find tape drives for obsolete standards at a lower price, but that is not recommended, because in the future you may have a big tape collection and after your drive dies you will no longer find any other compatible drive.
Because the tapes are cheap, there will be a threshold in the amount of data that you store where the huge initial cost of the tape drive will be covered by the savings from buying cheap tapes.
That threshold is currently at a few hundred TB of stored data.
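A simple break-even sketch with assumed prices (drive and media prices move around a lot; these are just round numbers for illustration):

    # Where does an LTO-9 drive pay for itself vs. just buying more HDDs?
    drive_cost  = 4750      # LTO-9 tape drive (assumed)
    tape_per_tb = 5         # ~$90 for an 18 TB LTO-9 cartridge (assumed)
    hdd_per_tb  = 25        # large NAS-class HDD bought new (assumed)

    break_even_tb = drive_cost / (hdd_per_tb - tape_per_tb)
    print(f"Break-even at ~{break_even_tb:.0f} TB of stored data")   # ~238 TB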
I use an LTO tape drive and I have recovered its cost a long time ago, but I have more than 500 TB of data.
However, only a third of that is actually useful data, because I make 2 copies of each tape, which are stored in different locations. I am so paranoid because it is data that I intend to keep forever, and I have destroyed all the other media on which it was stored, e.g. the books that I have scanned, for lack of storage space. An important purpose of the digitization has been to reduce the need for storage space, besides reducing the access time.
I keep on my PC a database with the content of all tapes, i.e. with all the relevant metadata of all the files that are contained inside the archive files stored on the tapes.
When I need something, I search the database which will give me the location of the desired files as something like "tape 47 file 89" (where "file 89" is a big archive file, typically with a size of many tens of GB). I insert the appropriate tape in the drive and I have a script that will retrieve and expand the corresponding archive file. The access time to a file averages around 1 minute, but then the sequential copying speed is many times higher than with a HDD. Therefore, for something like retrieving a big movie, the tape may be faster overall than a HDD, despite its slow access time.
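A workflow like that needs surprisingly little tooling. A minimal sketch of the retrieval side, assuming GNU mt and tar and a small index mapping archive names to (tape, file number); the device path, index format, and file-numbering convention here are all hypothetical:

    import subprocess

    TAPE_DEV = "/dev/nst0"   # non-rewinding tape device (assumed)

    # Hypothetical index: archive name -> (tape label, file number on that tape).
    index = {"books-2019-batch07.tar": ("tape 47", 89)}

    def restore(archive, dest="."):
        tape, file_no = index[archive]
        input(f"Insert {tape}, then press Enter")
        subprocess.run(["mt", "-f", TAPE_DEV, "rewind"], check=True)
        # Skip forward to the recorded file; adjust by one if your numbering is 1-based.
        subprocess.run(["mt", "-f", TAPE_DEV, "fsf", str(file_no)], check=True)
        subprocess.run(["tar", "-xvf", TAPE_DEV, "-C", dest], check=True)

    restore("books-2019-batch07.tar")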
There are programs that simulate a file system over the tape, allowing you to use your standard file manager to copy or move files between a tape and your SSD. However, I do not use such applications, because they greatly reduce the performance that can be achieved by the tape drive. I frequently handle large amounts of data, i.e. the archive files in which I store data on the tapes are typically around 50 GB, so the reduced performance would not be acceptable.
Today I would strongly recommend against buying an LTO-7 drive, as it is obsolete and you risk having a tape collection that will become unreadable in the future for lack of compatible drives. An LTO drive can read 2 previous generations of tapes, e.g. an LTO-9 drive can read LTO-7 and LTO-8 tapes. LTO-10 drives, when they appear in a few years, will no longer be able to read LTO-7 tapes.
The current standard is LTO-9 (18-TB tapes). If you write today LTO-9 tapes, they will remain readable by LTO-11 drives, whenever those will appear.
Unfortunately, LTO-9 is a rather new standard and the tape drives, at least for now, are even more expensive.
For instance, looking right now on Newegg, I see a HPE LTO-9 tape drive for $4750.
Perhaps it could be found somewhat cheaper elsewhere, but I doubt that it is possible to find a LTO-9 tape drive anywhere for less than $4500.
If you need to store at least 200 TB of data, you may recover the cost of the tape drive from the difference in price between LTO-9 cartridges and HDDs.
Otherwise, you may choose to use a tape drive for improved peace of mind, because the chances for your data that is in cold storage on tapes to become corrupt are far less than if it were stored on HDDs.
I have stored data for many years on HDDs, but the only thing that has kept me from losing that data was that I always duplicated the HDDs (and I had content hashes for all files, for corruption detection, as the HDD controller did not always report errors for corrupted blocks). After many years, almost all HDDs had some corrupted blocks, but the corrupted blocks were not in the same positions on the duplicated HDDs, allowing the recovery of the data.
Do you have to use that particular wall enclosure thing? A 1U chassis at 1.7” of height fits 4 drives (and a 2U at ~3.45” fits 12), and something like a QNAP is low-enough power to not need to worry about cooling too much. If you’re willing to DIY it would not be hard at all to rig up a mounting mechanism to a stud, and then it’s just a matter of designing some kind of nice-looking cover panel (wood? glass in a laser-cut metal door? lots of possibilities).
I guess my main question is, what/who is this for? I can’t picture any environment that you have literally 0 available space to put a NAS other than inside a wall. A 2-bay synology/qnap/etc is small enough to sit underneath a router/AP combo for instance.
It's already there in the wall. All the Cat5e cabling in the house terminates there, so all the network equipment lives in there, which makes me kind of want to also put the NAS in there.
It adds enough depth to fit plenty of regular-sized components.
I feel like the mini ITX market for motherboards is just too niche. If you want something small, buy an off the shelf NAS. If size is not an issue, buy a case that can hold a full sized motherboard and lots of disks.
You need a newish Linux kernel (6.12 maybe? Don't remember exactly) for proper support of the N150 iGPU and the Realtek NICs it uses.
Either way, it's a great time for home NAS.
- Finding a low-wattage/high-efficiency ATX/SFX PSU is the hardest part. To achieve the advertised efficiency, your Gold-rated PSU needs at least 20% load. I.e. 100W for a 500W PSU. If you are building for low-power, you will need much lower wattage for the PSU to operate at optimal conditions/efficiency. Good luck finding anything under 450W these days.
- Do your math before choosing RAIDX, where X != 1. E.g. the disk cost for a 2*16TB RAID1 array is pretty close to the cost of a 3*8TB RAID5 array of the same usable capacity (see the cost sketch after this list). But future upgrades with RAID1 are much easier and less costly, given that your NAS box will probably have only 4-5 slots. RAIDX makes sense if you want to go wild (target NAS capacity >> maximum available single-disk capacity).
- If you have not jumped into the "homelab" rabbithole and you only want a NAS and some services, NAS operating systems like TrueNAS are a PITA. Your hardware will be "owned" by the NAS OS, and you will need to jump through hoops to get any other software running. Most of them encourage you to not run anything else on them, except from prepackaged apps from their "store". So, you may want to stick with something more prosaic. E.g. vanilla debian.
- If you are thinking of ZFS/TrueNAS because of the scrubbing functionality, RAID1 + BTRFS also have scrubbing.
- Motherboards from AliExpress save you time. If I could procure a motherboard with 6 SATA ports, at least 2 2.5GbE ports, and an N series CPU from a mainstream vendor, I probably would. But there aren't any such models. If you try to add these features on top of a standard motherboard, you need another round of researching components. Plus, if it is a mini-ITX mobo, you may run out of PCIe slots.
- Motherboards from AliExpress are just fine. I'm not sure why people nag about "reliability" without even anecdotal evidence. If your mobo dies, too bad. But mobos are pretty low in the list of components affecting the safety of your data, with PSU, disks and software being more important.
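To illustrate the RAID cost point from the list above, a quick comparison with made-up per-drive prices (purely illustrative; check current prices):

    # Cost per usable TB for a few layouts, with assumed drive prices.
    configs = [
        # (name, drive count, TB per drive, assumed price per drive in $)
        ("2 x 16TB RAID1", 2, 16, 280),
        ("3 x 8TB RAID5",  3,  8, 160),
        ("4 x 8TB RAID5",  4,  8, 160),
    ]

    for name, n, tb, price in configs:
        usable = tb if "RAID1" in name else (n - 1) * tb   # one parity drive for RAID5
        total = n * price
        print(f"{name:15s} usable {usable:2d} TB, ${total}, ${total / usable:.0f}/usable TB")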
In my experience, BTRFS is to be avoided. It's the only FS that I've lost data to.
I have one btrfs going at the moment. Nothing important, naturally. Just a Silverblue test installation. Will see how that fares.
I was not interested in maintaining an extensive homelab (so that I have separate storage and computing nodes), or buying into a new "software ecosystem" (I would consider buying e.g. a Synology/QNAP box if I did), so I ended up with vanilla Debian. Debian 13 (trixie) got released right on time, so I will be on the latest for a couple of years.
From what I tried (TrueNAS, OpenMediaVault, Unraid), Unraid seemed the most appealing. TrueNAS was very unfriendly towards even the idea of opening a shell [1], and IIRC you couldn't even install Debian packages out of the box. OpenMediaVault had problems booting on my hardware, plus it lags behind mainline Debian (the Debian 12 version of OMV got released around 2 months before Debian 13).
Unraid also had limitations regarding what you could run, but the community seemed to be the most robust. Also, it is the only one that stores its parity data externally. This gives you the most flexibility with disk configurations. Also, IIRC you can pull out a disk and the data on it would be readable, so migrating your data to something else would be relatively painless.
So, if I had to choose a NAS OS, it would probably be Unraid. The downside is that you need to buy a license. But hey! Black Friday to the rescue!
[1] https://www.truenas.com/docs/scale/scaletutorials/systemsett...
It's meant to be an appliance, so it makes sense in that setting. That said, it does support hosting Docker images, so you don't really need much in the way of installing packages IME.
I'm still a bit torn on whether I made the right call getting the 804, or whether the 304 would've been enough for a significantly smaller footprint and -2 bays. Hard to tell without seeing them in person, lol.
Are you satisfied with it? Any issues that came up since building?
ref: https://blog.briancmoses.com/2024/07/migrating-my-diy-nas-in...
- low idle power consumption since your NAS will be sitting doing nothing most of the time - pretty much any desktop MB will do
- fast networking: 1GbE means ~100MB/s transfers, so it's nicer to have 10GbE. Limited benefit beyond 10GbE in practice.
- enough PCIe lanes to connect enough drives. HDDs of course, but it's also nice to have a separate fast SSD array plus SSD caching. You might also want a SAS HBA if you are looking at enterprise drives or SSDs (even for SATA SSDs you will get better performance via an HBA than through the motherboard). Some people also want a graphics card for video transcoding.
- ECC memory
- IPMI - once you start using it it becomes hard to give up. Allows you to manage the server remotely even when switched off, and access the BIOS via a web interface. Allows you to keep the server headless (i.e. not have to go plug a screen to understand why the server is taking so long to reboot).
I'd say a good candidate for a NAS motherboard would be something like a Supermicro X11SSH-LN4F; you can find used ones pretty cheap on eBay.
The main benefits of this board were:
* it's not from an obscure Chinese company
* integrated power supply – just plug in DC jack, and you're good to go
* passive cooling
Really hope they make an Intel N150 version.
For me personally, there are two things I am concerned about:
1. Issues that can only be resolved via BIOS update. Almost all obscure Chinese SBCs won't get any updates, so you're stuck with whatever issues you encounter.
2. In case of hardware failures, there's a 0% chance for RMA. You are not getting a replacement or your money back.
In a DC environment, sure. In a home NAS, not so much. I'm on Unraid and just throw WD recertified drives of varying sizes at it (plus some shucked external drives when I find them on offer); that's one of its strengths and makes it much cheaper to run.
How can the total average Wattage be lower than any of the lines it consists of?
Total average power is 66.49W, yet average _Idle_ power is noted as 66.67W.
Out of 108h, he did an 18h burn-in.
All of the merchant links are affiliate links, which he (illegally) does not disclose.[0] He's effectively acting as a sales rep for these brands, but he's presenting himself as an unbiased consumer.
The affiliate relationship incentivizes Brian to recommend more expensive equipment and push readers to the vendors that pay Brian the most rather than the vendors that are the best for consumers.
I recognize that it's an unfortunate truth that affiliate links are one of the few ways to make money writing non-AI content about computer hardware. I'm fine with affiliate links, but the author should disclose the conflict of interest at the top of the post before getting into the recommendations.
In the interest of full disclosure, I also write about NAS builds on my blog, so I somewhat compete with Brian's posts, but I stopped using affiliate links five years ago because of the conflict of interest.
If you're not familiar with how affiliate relationships create dangerous incentives, I recommend reading the article, "The War To Sell You A Mattress Is An Internet Nightmare."[1] tl;dr - All the top mattress-in-a-box reviewers were just giving favorable reviews to the company that paid the best affiliate rates, even going so far as to retroactively update old reviews if the payout rates changed.
[0] https://www.ftc.gov/business-guidance/resources/ftcs-endorse...
[1] https://www.fastcompany.com/3065928/sleepopolis-casper-blogg...
That aside, as someone who has been building computers for nearly 3 decades, and NAS's for a decade plus, I dislike almost everything about this build.
Spending a lot on the PSU is a good move, but the motherboard is a bad choice for the price when a much more capable socketed board + CPU could be had for around the same price, and the use of no-name SSD and NVMe is an absolute no-no for me.
The impression I got from so many linked mentions of Topton this and Topton that, is that this was mostly done to push that particular brand for a sponsorship or affiliate program.
YouTube has long since become untrustworthy for advice on this sort of thing due to sponsorships and affiliates, etc. Perhaps I should blog my advice and experience, which nobody pays to influence, in a more generic sense for those who actually need guidance on where to focus when building hardware like PCs and NASes.
I'm not going to suggest the hardware I chose for my "NAS", as it would be universally bad advice for most people, but there is some generic knowledge to be shared here.
Sometimes it feels just like telling my kids to "learn from my mistakes", does anyone actually want to hear it?
I’ve run multiple Synology NAS at home, business, etc. and you can literally forget that it’s not someone else’s cloud. It auto updates on Sundays, always comes online again, and you can go for years (in one case, nearly a decade) without even logging into the admin and it just hums along and works.
does everything and more I need it to (backups, photos, storage, jellyfin, various media servers, torrents etc.)
Plus, DSM has a spectacular web interface.
Most quality hardware will easily last decades. I have servers in my homelab from 2012 that are still humming along just fine. Some might need a change of fans, but every other component is built to last.
In this economy?
https://www.ugreen.com/blogs/news/ugreen-makes-strategic-ent...
UGREEN has apparently inked deals to drop their DXP2800s into (some) Walmarts, which also included bringing in some 10/12TB Toshiba N300 Pro drives as well to go with them on the shelves. Being a super-rural American, I was a bit surprised to see this on my local shelf as a nearly turnkey solution in an area where there's nothing remotely close to a Best Buy, even.
Even more surprisingly: they've been sold by Walmart below UGREEN's minimum advertised prices a few times already...
The CPU would immediately hit 100C with even the slightest whiff of load.
The entire thing was also unstable and would regularly just lock up without any kernel panic or other error message available; I couldn't even get kdump to gather anything (I'd binned their dodgy NAS OS and installed Debian).
It also seemed to amplify the noise of the hard drives within. Every thunk of a drive head moving around would be audible from a different room. Not sure how they managed to do that, but it's an acoustic nightmare.
I haven't paid attention to temps or noise - it's been suitable in my bedroom corner and it's been running for a month+ now like a trouper.
I think for home use with MDADM or raid z2 on zfs it's just gucci. It's cost effective.
If one of your drives fails under RAID5, before you even order a new disk, you should do an incremental backup, so that your backup is up to date. Then it doesn't really matter that the rebuild times take long. And if you have more data coming in, just do more incremental backups during the rebuild time.
I think it's still fine for casual home setups. Depending on data and backup strategy.
This seems awfully wasteful. One of the main reasons I built my own home server was to reduce resource usage. One could probably argue that the carbon footprint of keeping your photos in the cloud and running services there is lower than building your own little datacentre copy locally (and where would we be if everyone built their own server?). Still, I think that paying Google/Apple/Oracle/whoever money so that they continue their activities has a bigger carbon footprint than me picking up old used parts and running them on a solar/wind-only electricity plan. I also think I'm going a bit overboard with this, and I'm not suggesting you vote with your wallet, because that doesn't work. If you want real change, it needs to come from the government. You not buying a motherboard won't stop a corporation from making another 10 million.
Anyway, except for the hard drives, all components were picked up used. I like to joke it's my little Frankenstein's monster, pieced together from discarded parts no one wanted or had any use for. I've also gone down the rabbit hole to build the "perfect" machine, but I guess I was thinking too highly of myself and the actual use case. The reason I'm posting this is to help someone who might not build a new machine because they don't have ECC and without ECC ZFS is useless and you need Enterprise drives and you want 128 GB of RAM in the machine and you could also pick up used enterprise hardware and you could etc...
If you wish to play around with this, the best way is to just get into it. The same way Google started with consumer-level hardware, so can you. Pick up a used motherboard, some used RAM, and a used CPU, throw them into a case, and let it rip. Initially you'll learn so much, and that alone is worth every penny. When I built my first machine, I wasn't finding any decent used former-desktop HP/Lenovo/Dell, so I found a used i5-8500T for about $20, 8GB of RAM for about $5, a used motherboard for $40, a case for $20, and a PSU for $30. All in all the system was $115, and for storage I used an old 2.5-inch SSD as a boot drive and 2 new NAS hard drives (which I still have, btw!). This was amazing.

Not having ECC, not having a server motherboard/system, not worrying about all that stuff allowed me to get started. The entry bar is even lower now, so just get started, don't worry. People talk about flipped bits as if it happens all day, every day. If you are THAT worried, then yeah, look for a used server barebone, or even a used server with support for ECC, and do use ZFS. But I want to ask: how comfortable are you making the switch 100% overnight, without ever having spent any time configuring even the most basic server that NEEDS to run for days/weeks/months? Old/used hardware can bridge this gap, and when you're ready it's not like you have to throw out the baby with the bathwater. You now have another node in a Proxmox cluster. Congrats! The old machine can run LXCs and VMs, it could be a firewall, it could do anything, and when it fails, no biggie.
Current setup for those interested:
i7 9700t
64 GB DDR4 (2x32)
8, 10, 12, 12, 14 TB HDDs (snapraid setup and 14 TB HDD is holding parity info)
X550 T2 10Gbps network card
Fractal Design Node 804
Seasonic Gold 550watts
LSI 9305 16i
There's always more you can do. I'd rather enjoy my life, and not tell others how to enjoy theirs, unless it's impacting mine. Especially considering that the impact of a single middle-class individual pales in comparison to the impact of corporations and absurdly wealthy individuals. Your rant would be better served to representatives in government than tech nerds.
How uncouth, even just as rhetoric.
It also isn't useful to reduce the conversation and assume that a critique directed at the idea of necessarily going out and buying new hardware is a critique of technology or ownership, but, myself included, we do seem to read what we want. You also missed the point I made when I clearly said voting with your wallet doesn't work. And you didn't address the other, more salient point I was trying to get across (though I obviously failed to do so): when starting out, don't worry too much, just get whatever and start learning. Questions will be easier to answer when you already have some hardware.
Anyway, enjoy your day
on this website?!
with their money
in this economy?!